Disadvantages of excessive indexes

Guys,
I am writing a paper that outlines the disadvantages of excessive or unnecessary indexes. I found these points.
Do you think I've missed any?
Indexes consume disk space
Excessive use of indexes can incur a serious performance penalty on inserts, updates and deletes
Queries can pick the wrong index, leading to poor execution plans and slow queries
Anything else you think could be added?
Cheers

Thanks to those who replied. The idea is to write a technical paper and a supporting tool that lists all unused indexes on an Oracle database, as well as the indexes that were used and, if so, how and when they were used. I am basically working on a project.
Referencing papers isn't difficult; I can always do that. But as DBAs we all run into one issue or another in day-to-day work. If I can reflect some of those real-world problems and try to address them in my tool, it would be beneficial for everyone.
So that was the motive. Also, it's not only for Oracle; I am trying to write a tool that can possibly integrate with other databases such as SQL Server and Sybase.
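For what it's worth, a minimal sketch of how such a tool might check index usage in Oracle using index monitoring. The schema and index names below are made up for illustration, and note that V$OBJECT_USAGE typically only shows indexes owned by the connecting schema:

-- Enable monitoring on a candidate index (hypothetical names):
ALTER INDEX app_owner.emp_name_idx MONITORING USAGE;

-- ... let a representative workload run, then check whether the index was used:
SELECT index_name, table_name, used, start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE index_name = 'EMP_NAME_IDX';

-- Switch monitoring off again when done:
ALTER INDEX app_owner.emp_name_idx NOMONITORING USAGE;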
Thanks
G

Similar Messages

  • What are the disadvantages of secondary index

    Hi all,
    Can anybody tell me what the disadvantages of creating a secondary index are?
    Also, what precautions should be taken when creating a secondary index?

    Hi Vinil,
    You can search on SCN for the same subject... you will get lots of threads discussing it...
    Still, for your reference...
    Secondary indexes use disk space, not memory; they are only loaded into memory when they are used. The disk space required is roughly the index row width times the number of records, which can run to gigabytes.
    Main disadvantages are:
    + Indexes are updated whenever the table is changed, i.e. the additional index makes insert, update and delete operations slower. You must weigh the importance of your access path against the table's standard usage; if it is much lower, you should not create a secondary index.
    + Secondary indexes can confuse the database optimizer, especially if you use fields which also appear in other indexes. The database estimates the usefulness of different indexes using certain assumptions (check other sources for details); if two indexes are similar, those assumptions can lead to wrong decisions. Your new index can then be used for other statements even where it is not optimal. As a result, your index causes problems somewhere else! (See the sketch below.)
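    To make the overlap point concrete, a small hedged sketch; the table and index names are invented for illustration:

    -- Two indexes whose leading column overlaps (hypothetical names):
    CREATE INDEX z_orders_cust1 ON orders (customer_id, order_date);
    CREATE INDEX z_orders_cust2 ON orders (customer_id, status);

    -- For this predicate both indexes look similar on customer_id alone,
    -- so the optimizer may pick the one that is less selective for this query:
    SELECT *
      FROM orders
     WHERE customer_id = 4711
       AND order_date >= DATE '2024-01-01';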
    Hope it will solve your problem..
    Thanks & Regards
    ilesh 24x7
    ilesh Nandaniya

  • Regarding secondary index

    How and when do we create secondary indexes, and what are their advantages and disadvantages?

    Hi
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes can be created for the most frequently accessed fields of the table.
    Structure of an Index
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
    When creating indexes, please note that:
    An index can only be used up to the last consecutively specified field in the selection. The fields that are specified in the WHERE clause of a large number of selections should therefore come first in the index (see the sketch after these notes).
    Only fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, the index sorting must be adjusted as well. Tables whose contents are frequently changed should therefore not have too many indexes.
    Make sure that the indexes on a table are as disjoint as possible.
    (That is, they should have as few fields in common as possible. If two indexes on a table share a large number of fields, it can be harder for the optimizer to choose the most selective index.)
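    A short hedged illustration of the leading-field rule; the table and index names are only examples:

    -- Hypothetical composite index:
    CREATE INDEX zflight_idx ON zflight (carrid, connid, fldate);

    -- All leading fields specified: the index can be used fully.
    SELECT * FROM zflight
     WHERE carrid = 'LH' AND connid = '0400' AND fldate = DATE '2024-05-01';

    -- connid missing: the index is only usable up to carrid, and the
    -- fldate condition can no longer narrow the index range.
    SELECT * FROM zflight
     WHERE carrid = 'LH' AND fldate = DATE '2024-05-01';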
    Accessing tables using Indexes
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields already uniquely identify each record of the table, the index can be defined as a unique index. This ensures that there are no duplicate key values in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only give a performance gain on certain database systems, so you can specify a list of database systems when you define an index; the index is then only created on those database systems when it is activated.
    Regards

  • Index vs views in performance

    HI,
    Can creating a view help to improve performance? I have to read from table RBKP but I don't have the index fields. Can I create a view to get faster access, and will it help more than a secondary index?
    Regards,
    Karthik.k

    hi,
    If you are comparing views vs. secondary indexes, then secondary indexes are better. A view will improve performance only a little, but with secondary indexes the overall performance will be better.
    The disadvantage of secondary indexes is that they occupy space and slow down insert, update and delete operations.
    Jogdand M B

  • Clustered indexes and deadlocks

    Hi,
    I have run into some problems with clustered indexes and deadlocks. I have found some breadcrumbs about this on the web but didn't really understand everything. I am a DBA by accident; mainly I am a BI and DWH developer. The article most relevant to the problem seems to be the following:
    SQL Server Deadlocks Caused By Clustered Index Scan.
    The database is running with the READ COMMITTED SNAPSHOT transaction isolation level. The second query seems to be the problematic one. First a row is inserted into table SubjectRevisionEntity. Second, a row is inserted into table Partner, which has a foreign key on SubjectRevisionEntity. This foreign key is validated using a Clustered Index Seek.
    Having done some research on the topic using the internet my hypothesis is as follows:
    - The insert from Query 1 in Session 1 locks a page in table SubjectRevisionEntity.
    - Now a new session (Session 2) is started. The insert from Query 1 in Session 2 probably locks the same page in table SubjectRevisionEntity.
    - The insert from Query 2 in Session 1 locks a page in table Partner. A lock on the page in table SubjectRevisionEntity is needed to do the Clustered Index Seek, but that page is already locked by Session 2. Session 2 in turn needs to lock the page in table Partner, which is already locked by Session 1 --> a deadlock occurs.
    Does this make any sense? At the moment I am not having the means to test the hypothesis but I will look after that.
    I am just thinking about countermeasures to take. What about configuring the index to disallow page locks? All other queries seem to be fine, I suppose, as they operate on only one table. My colleagues from software engineering favour replacing clustered indexes with nonclustered indexes, as they have already done in the past. However, I think the disadvantages of leaving the table as a heap regarding storage (forwarded records) and query performance are much bigger than any benefit for problems like these.
    Regarding the article, I did not understand the author's point that two simultaneous table scans on one table by two sessions won't work. I thought that this is no problem, as the sessions would use shared locks on the table.
    Thank you very much for sharing your expertise in advance!
    Martin

    As you describe it, that cannot be the cause of your deadlock.
    After session 1 executes query 1, it will have an IX (Intent Exclusive) lock on the clustered index of table SubjectRevisionEntity (note that a lock on a clustered index is a lock on the table since the table is contained in the clustered index), also an
    IX lock on the page in the index where the new row will be inserted and an X (exclusive) lock on the key that you just inserted.
    When session 2 executes it also needs an IX lock on the clustered index of table SubjectRevisionEntity, this is allowed because multiple sessions can have IX locks on the same resource at the same time, also an IX lock on the page in the index where the
    new row will be inserted (also allowed even if this entry is in the same page as the row inserted by session 1), and an X lock on the key that session 2 inserted.  This is also allowed UNLESS session 2 and session 1 are trying to insert a row with the
    SAME primary key value.  From your description, I gather that session 1 and session 2 are trying to insert different keys.
    Then session 1 attempts to insert a row in Partner, which has a foreign key reference to the row in SubjectRevisionEntity.  That means it must check for the existence of the row that session 1 inserted.  It can do this because all it needs is
    an S (shared) lock on the SubjectRevisionEntity table and an S lock on the page.  It can get those even though session 2 has an IX lock on those resources, because S locks and IX locks are compatible.  It also needs an S lock on the row in SubjectRevisionEntity. 
    That is no problem unless it is trying to reference the row which session 2 just entered.  (Once again, I assume this is not the case in your situation?)  It then inserts the row in Partner, getting an IX lock on the Partner table, an IX lock on the
    page and an X lock on the new key in Partner.
    Then session 2 attempts to insert a row in Partner.  That will work unless either it is inserting a row in Partner with the same primary key as the row inserted by session 1 or it is trying to reference the same row in SubjectRevisionEntity that session
    1 inserted.
    So this cannot be the cause of your deadlock unless both sessions are entering the same key values.  The fact that they may both be entering keys on the same page should not be causing you deadlock problems.
    Regarding your question about two simultaneous scans of the entire table (or of the entire clustered index, if the table has a clustered index): you are correct, the scans do get shared locks and there is no deadlock problem UNLESS both sessions are holding locks
    that are incompatible with S locks.  For example, if you were in read committed mode and session 1 had inserted a row with clustered index key = 47 and session 2 had inserted a row with clustered index key = 23, and both sessions then attempt a complete
    clustered index scan (for example, by doing something like SELECT <blah blah> FROM <table> WHERE <some nonindexed column> = 0), then both sessions will try to get S locks on every row, so session 1 will be stopped at key = 23 and session
    2 will be stopped at key = 47, and that is a deadlock (sketched below).  BUT:
    you are using READ COMMITTED SNAPSHOT, not READ COMMITTED.  In READ COMMITTED SNAPSHOT, writes do not block reads.  So the situation above does not apply to you, since neither of the sessions would be blocked by attempting to read a
    row which was locked by an update from another session.
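    A rough, hypothetical T-SQL sketch of the read-committed scan deadlock described above; the table and column names are invented, and this does not apply under READ COMMITTED SNAPSHOT:

    -- Assumed setup: CREATE TABLE dbo.Demo (Id INT PRIMARY KEY CLUSTERED, SomeNonIndexedCol INT);

    -- Session 1:
    BEGIN TRAN;
    INSERT INTO dbo.Demo (Id, SomeNonIndexedCol) VALUES (47, 1);  -- X key lock on key 47

    -- Session 2:
    BEGIN TRAN;
    INSERT INTO dbo.Demo (Id, SomeNonIndexedCol) VALUES (23, 1);  -- X key lock on key 23

    -- Session 1: full scan, blocks waiting for an S lock on key 23.
    SELECT * FROM dbo.Demo WHERE SomeNonIndexedCol = 0;

    -- Session 2: full scan, blocks waiting for an S lock on key 47;
    -- each session now waits on the other and one is chosen as the deadlock victim.
    SELECT * FROM dbo.Demo WHERE SomeNonIndexedCol = 0;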
    Tom

  • Required information about Indexes

    Hello All,
    Please tell me the advantages and disadvantages of putting indexes on a table,
    and how the number of indexes affects DML and SELECT operations on the table.
    Thanks

    Hi,
    Have a look at this link:
    http://momendba.blogspot.com/2008/03/how-much-expensive-are-indexes.html
    and to get a good understanding on indexes visit Richard Foote's blog:
    http://richardfoote.wordpress.com
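    As a rough, hedged illustration of the DML side (the names below are made up): every index on a table has to be maintained on each INSERT, UPDATE or DELETE, while a given SELECT typically benefits from at most one of them.

    CREATE TABLE t (id NUMBER PRIMARY KEY, a NUMBER, b NUMBER, c NUMBER);
    CREATE INDEX t_a_idx ON t (a);
    CREATE INDEX t_b_idx ON t (b);
    CREATE INDEX t_c_idx ON t (c);

    -- One insert maintains the table plus all four index structures
    -- (the primary key index plus the three secondary indexes).
    INSERT INTO t (id, a, b, c) VALUES (1, 10, 20, 30);

    -- A query like this can use at most one of them (e.g. t_a_idx).
    SELECT * FROM t WHERE a = 10;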
    Regards

  • Dead Locks

    Gurus,
    Please clarify me the three questions which I am posting below
    1) What is a deadlock situation? How does Oracle treat a deadlock situation?
    2) What are the disadvantages of having an index?
    3) I have two tables A and B. In table A, I have two columns (say col1, col2); col1 is the primary key column. In table B, I have two columns (say col3, col4); col3 is the primary key column. Col2 of A has referential integrity to col3 of B, and col4 of B has referential integrity to col2 of A. Now if I insert values into table A it shows the error "parent value doesn't exist", and likewise, if I insert values into table B, the same error comes up.
    How can I overcome this error?
    Please advice
    Regards

    Hi.
    1) A deadlock is a situation where two or more sessions acquire locks which then prevent each other from moving on, i.e. session one updates row aaa in a table and session two updates row bbb (no commits). Session one then attempts to update row bbb and session two attempts to update row aaa, and both wait for the locks to clear (the default behaviour). Oracle monitors for these situations and will automatically detect the deadlock and roll back one of the statements, allowing the other session to complete.
    2) Indexes are used to speed up access to data in the database and, if associated with a primary or unique key, to enforce uniqueness. Their disadvantages are that they take up space and slow down updates and inserts.
    3) This is not a deadlock; it is a circular reference (see the sketch below). You cannot insert into one table because the other table is expected to already hold the parent value, and vice versa. From a data modelling point of view a circular reference is unsupportable and meaningless, like trying to be your father's son and your father's father at the same time.
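    A small hedged DDL sketch of the circular reference described in question 3 (hypothetical names; col2 is given a unique constraint so the second foreign key can reference it):

    CREATE TABLE a (col1 NUMBER PRIMARY KEY, col2 NUMBER UNIQUE);
    CREATE TABLE b (col3 NUMBER PRIMARY KEY, col4 NUMBER);

    -- The two foreign keys point at each other's tables.
    ALTER TABLE a ADD CONSTRAINT a_fk FOREIGN KEY (col2) REFERENCES b (col3);
    ALTER TABLE b ADD CONSTRAINT b_fk FOREIGN KEY (col4) REFERENCES a (col2);

    -- Neither table can receive its first row with these values: each insert
    -- needs a parent row that can only exist in the other, still empty, table.
    INSERT INTO a (col1, col2) VALUES (1, 10);   -- fails: no parent row in b
    INSERT INTO b (col3, col4) VALUES (10, 1);   -- fails: no parent row in a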
    Regards
    Andre

  • Performance of WAD query

    Hi All,
    We maintain documents through a BI query in the portal. Just to give some background… the query is developed in Web Application Designer and the “Single document” web item is used for entering notes. Notes are saved as documents in BI.
    Initially, when a document was created and saved it was fast, but as users started creating more documents, saving a document takes a long time.
    I read about RSODADMIN transaction where indexing for metadata is possible. But I am not clear if it affects just documents or all metadata (like Cubes, DSO etc).
    Did anyone use this transaction RSODADMIN to setup indexing? And will indexing improve the document saving time? Or is there any other way of improving the performance.
    Please let me know if anyone knows the advantages and disadvantages of using indexing via RSODADMIN.
    Thank you in advance!
    Sonali.

    refer:
    Web template creation for beginners
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/403c789a-aafb-2910-e5a4-835645f0721b
    Embedding Custom Images Inside Tables in Web Templates
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/403c789a-aafb-2910-e5a4-835645f0721b

  • Transaction RSODADMIN

    Hi All,
    We maintain documents through a BI query in the portal. Just to give some background… the query is developed in Web Application Designer and the “Single document” web item is used for entering notes. Notes are saved as documents in BI.
    Initially, when a document was created and saved it was fast, but as users started creating more documents, saving a document takes a long time.
    I read about RSODADMIN transaction where indexing for metadata is possible. But I am not clear if it affects just documents or all metadata (like Cubes, DSO etc).
    Did anyone use this transaction RSODADMIN to setup indexing? And will indexing improve the document saving time? Or is there any other way of improving the performance.
    Please let me know if anyone knows the advantages and disadvantages of using indexing via RSODADMIN.
    Thank you in advance!
    Sonali.

    Hi Santosh,
    Thank you for the reply!
    I have read the documentation, but I wanted to know if anyone has actually configured RSODADMIN to do indexing. I want to know the effect on the already entered documents after I configure the indexing, and whether there will be any effect on other queries, cubes etc.
    Thanks,
    Sonali.

  • High redo log space wait time

    Hello,
    Our DB is having very high redo log space wait time :
    redo log space requests 867527
    redo log space wait time 67752674
    The LOG_BUFFER is 14 MB, and we have 6 redo log groups with a log file size of 500 MB each.
    Also, the amount of redo generated per hour :
    START_DATE START NUM_LOGS MBYTES DBNAME
    2008-07-03 10:00 2 1000 TKL
    2008-07-03 11:00 4 2000 TKL
    2008-07-03 12:00 3 1500 TKL
    Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
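    For reference, a quick way to re-check the redo statistics quoted above (a hedged sketch; these are standard V$SYSSTAT statistic names):

    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('redo log space requests', 'redo log space wait time');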
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided the following information could be helpful:
    1. You are currently targeting approx. 6GB of memory with this single instance, and the report shows that physical memory is 8GB. According to the advisories it looks like you could decrease your memory allocation without impairing performance.
    In particular the large_pool_size setting seems to be quite high although you're using shared servers.
    Since you're using 10.2.0.4 it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the shared pool components within the given target dynamically.
    2. You are currently using a couple of underscore parameters. In particular, the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It could be worth checking why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
    3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters, which favor index-access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing excessive index access (see the query sketched after this list). Also, most of the rows have been fetched by rowid (table fetch by rowid), which could be another indicator of excessive index access / nested loop usage.
    4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5GB per second. Check the top SQLs; a few of them are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures are worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
    5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the default value for 10g. This will also convert the redo format to 10g, I think, which could lead to less redo being generated. But be aware that this is a one-way operation: once compatible has been set to 10.x, you can only go back to 9i via a restore.
    6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth to check if this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor the setting given the current Undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Turning off Spotlight

    I need to turn off Spotlight in order to defragment an external drive: Terminal is showing two open files (both named "Spotlight-V100/.store.db") on my external, so my optimization software won't unmount it. I have tried unmounting the drive through the Finder, turning it off, back on and mounting it, as well as re-starting my hard drive, but nothing works. I have tried dragging the drive to the "Privacy" window in the Spotlight preferences, but all that does is open up a window to ask which folders on the drive should be private. I don't use Spotlight anyway; don't like it. (I find it much clunkier than performing a "find" in the Finder.) So how can I get rid of it?

    Here's more detail on Spotlight's indexing, including details on how to remove indexing files and the possible disadvantages of disabling indexing (such as those indicated by the other poster).
    [X-lab Spotlight tips|http://www.thexlab.com/faqs/stopspotlightindex.html#Anchor-Stopping-47857]

  • Finder Cannot Find

    I cannot find (Command F) files on my external drives with Finder.
    I have 1000's of jpeg files on a drive and Finder returns 0 files when I search for file names that contain JPG. I have tried this on FireWire 800 and USB2 and finder refuses to find any files.
    I have my external drives excluded from Spotlight to avoid excessive indexing so I can't use that either.

    [Link to EasyFind download|http://www.macupdate.com/info.php/id/11076/easyfind]
    [FindAnyFile|http://apps.tempel.org/FindAnyFile] - "This is a free program for Mac OS 10.4 and later that lets you search for files on your disks, primarily on HFS formatted ones." Read the [FAF FAQ|http://apps.tempel.org/FindAnyFile/#faq] for information about fast searching capability.

  • Questions about Indexing and Using an Indexing POA

    Although I have only about 50 users, at least 15 of them have in excess of 100,000 messages in their accounts and the POA (version 7.0.2) is regularly slowing to a crawl. (I just know that plans for revolution are fomenting!) I have embarked on a campaign to reduce these accounts by archiving everything off to get mail accounts down to 3000 or fewer pieces. I have achieved user buy-in, but have worked on only a few users so far.
    In another closely related thread, it was suggested to me that the PO speed issues relate to broken indexes. And I suspect that given so many messages, the indexes were never getting fully rebuilt with the default QF POA settings. I am trying to fix that situation in addition to reducing mail account sizes. So, I have set up a second POA on another server and dedicated it to the indexing task. The /qfinterval is set for 1 hour, other /qf switches at default. The POA-QF does no mail delivery, but it does do nightly user upkeep.
    The POA-QF seems to be steadily working away and making progress at reducing the number of unindexed messages. However, I have questions about what I am seeing and what more I can do:
    1. Is the progress I am seeing real progress? For example I have a user with over 100,000 messages to be indexed and every time I check the logs, the count drops by about 500 messages per hourly QF cycle. I assume that if I just let it keep running, it will eventually get caught up and fixed. Not only with this user, but with all the others as well. Will my patience (and theirs) be rewarded? Are there any gotchas I need to prepare for?
    2. One user has recently had virtually all of her messages successfully moved to archive. I can see them in the Archive, and do not see them in the online account. However, now over a week later, QF still shows >130,000 items still left to index for that user. The POA-QF is making slow, steady progress reducing that number, but why is this user's QF count still so high? Does it just need more time, or is there something amiss for this user?
    3. I may want to rebuild indexes for single users from scratch. I have seen the TID 3105742 which tells how to do this: Essentially you turn off mail delivery functions, and make some other switch changes to dedicate the POA to indexing for just a single user, and then you let the POA rebuild the indexes. The implication of that scenario is that the POA is now enjoying exclusive access to the user's databases.
    If I want to use my secondary POA-QF to rebuild a user's index from scratch, does the main POA have to be offline and the user out of GWise? That is, Does the QF process require exclusive access in order to rebuild indexes from scratch?
    Thanks for any thoughts or suggestions.
    Peter Smick

    pgsmick wrote:
    > 1. Is the progress I am seeing real progress? For example I have a user with
    > over 100,000 messages to be indexed and every time I check the logs, the count
    > drops by about 500 messages per hourly QF cycle. I assume that if I just let
    > it keep running, it will eventually get caught up and fixed. Not only with
    > this user, but with all the others as well. Will my patience (and theirs) be
    > rewarded? Are there any gotchas I need to prepare for?
    Set this switch for this indexing POA - /qflevel=999 - this will index
    everything in one run. It will take a long time, but with no qflevel switch you
    are indeed only indexing 500 messages at a time, and if the user has that much
    mail, it might never really catch up.
    >
    > 2. One user has recently had virtually all of her messages successfully moved
    > to archive. I can see them in the Archive, and do not see them in the online
    > account. However, now over a week later, QF still shows >130,000 items still
    > left to index for that user. The POA-QF is making slow, steady progress
    > reducing that number, but why is this user's QF count still so high? Does it
    > just need more time, or is there something amiss for this user?
    >
    This is odd, because really the index count should drop to nothing, but with the
    above switch this might get resolved as well.
    > 3. I may want to rebuild indexes for single users from scratch. I have seen
    > the TID 3105742 which tells how to do this: Essentially you turn off mail
    > delivery functions, and make some other switch changes to dedicate the POA to
    > indexing for just a single user, and then you let the POA rebuild the indexes.
    > The implication of that scenario is that the POA is now enjoying exclusive
    > access to the user's databases.
    Not really - the POA is not enjoying exclusive access to the user's database,
    the indexer is just avoiding an attempt to index anything else.
    > If I want to use my secondary POA-QF to rebuild a user's index from scratch,
    > does the main POA have to be offline and the user out of GWise? That is, Does
    > the QF process require exclusive access in order to rebuild indexes from
    > scratch?
    No - QF never requires exclusive access. That said, you may find that an
    extremely vigorous QF can cause slowdowns for the user.
    Danita
    Novell Knowledge Partner
    Moving GroupWise to Linux?
    http://www.caledonia.net/gwmove.html

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB of RAM in my system, and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • Disadvantages in prepared statement

    Hi,
    I could not answer the following question in my interview. Can you please tell me the answer?
    What is one of the disadvantages of using a prepared statement in JDBC?

    "I have no idea what WorkForFood is talking about." Sorry, perhaps an example will help clarify.
    Using a PreparedStatement with two replacement parameters:
    String sql = "SELECT COL1 FROM MYTABLE WHERE COL1 = ? AND COL2 = ?";
    ps.setString(1, "WORK");
    ps.setString(2, "FORFOOD");
    Contrasted against a Statement using literals:
    "SELECT COL1 FROM MYTABLE WHERE COL1 = 'WORK' AND COL2 = 'FORFOOD'";
    When you execute these two queries, they can generate different explain plans; for example, the Statement may use an index while the PreparedStatement may resort to a full table scan. The optimizer chooses a more efficient plan for the Statement because it has more information on which to base its decision (the literal values). A full table scan will be significantly slower than using an index to return the same results. In some databases you can provide hints, but in some cases the only way to get the query to use an index is to provide the actual values WORK and FORFOOD in the query itself. When using literals rather than replacement parameters it can be more efficient to use a Statement than a PreparedStatement (see jschell's list, #1). So, in these cases, you should use a Statement rather than a PreparedStatement to get the best performance from your query. And again, this issue comes up very infrequently, but it can be quite debilitating to an application's performance when it does occur.

Maybe you are looking for

  • Replication of Item Categories from R/3 to CRM

    Hi, How can we replicate Item Categories and Item Category Determination from R/3 to CRM. Could you please let me know detail steps as I am new to middleware. Thanks in advance. Sai

  • How to find LOV Query in Oracle Web Form

    Hi All, I am currently use oracle ebs R12.1.2 and i just want to know the query which is attached in lov for Oracle web pages kindly help. Regards

  • MRP run for header material with bom usage-3

    Dear All, We have header material with bom usage-3.We maintain the stock at child components only ,not at header I have created the planned independent requirement for header material.I have created a sales order but sales order is not shown in md04.

  • Hyperlinks to folder locations don't work

    Hello, I'm having trouble creating functioning hyperlinks to folder locations on local or network drives. I've tried this both in InDesign CS3 and InDesign CS1 on Windows XP. I select the text, create new hyperlink, choose 'URL' as Type, then paste t

  • What are the steps involed in Mvt types

    Dear friends I have 3 Questions 1.    How to create movement types? What are the steps involved? Can someone explain in details? 2. What is the difference between the stock transfer between 2 plants of same company code, and stock transfer between 2