Index hit rate
Hi Brother,
We are using 10g. After running statspack, we found the index hit rate is very low (below 60%). How can I find out the problem? Rebuild the index? Add PGA/SGA memory?
Thanks
Could you please share with us the statspack report section for hit rates?
It seems you're talking about the buffer cache, which is shared by table and index data... we need to pinpoint the source.
As others said here, you need to focus on SQL that is scanning data intensively, hence the buffer cache churn.
Best Regards
Ignacio
http://oracledisect.blogspot.com
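As a starting point, the ratio statspack reports can be recomputed and broken down from the v$ views; a sketch for 10g (a low ratio by itself is not a problem - correlate it with the top physical-read SQL in the report):

```sql
-- Overall buffer cache hit ratio from v$sysstat (Oracle 10g statistic names).
SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_hit_ratio
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads cache'
AND    cur.name = 'db block gets from cache'
AND    con.name = 'consistent gets from cache';

-- Which segments (tables vs. indexes) actually drive the physical reads:
SELECT owner, object_name, object_type, value AS physical_reads
FROM   v$segment_statistics
WHERE  statistic_name = 'physical reads'
ORDER  BY value DESC;
```

If the top segments are indexes used by full scans, fixing the SQL will usually do more than rebuilding indexes or adding memory.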
Similar Messages
-
How to monitor page hit rates?
I see a lot of references in docs and white papers to cache hit rates for the Parallel Page Engine, but I've not found any documentation on how to monitor this system-wide (only how to turn on portlet statistics and get a manual page-by-page look at cache success/failure).
Is there any way to determine the cache hit rate for the PPE?
Stephen,
I'm sorry you seem to be experiencing some problems with the support and documentation of the portal software.
The link above was posted on May 1st 2002, and was posted as an absolute link rather than a dynamic product link. As the document name is revved the link changes, which is why an old link from the forums no longer works.
The document you seek is here. I assume that by referring to webcache you mean that when you add the XLF logging format options webcache complains about x-ecid (for example). This indicates that you have not applied the core iAS patchset which is required to allow webcache to understand these extended log parameters. I believe that the document refers to this as a requirement to get 'full functionality' - perhaps that is a little esoteric, I'll change it in the next rev.
If you continue to be unsuccessful in deploying this note, please let me know what errors you're seeing and I'll see what I can do. I'm working on an end-to-end performance kit at the moment that will supersede this document, so I'll take your comments on board when revving this area.
Regards
Jason Pepper
Principal Product Manager
Oracle9iAS Portal -
Catalog Cache Hitrate - Low hit rate in the catalog cache (76.170000% < 80.000000%)
Hi...
In transaction LC10, in the Alert Display of the CCMS monitoring architecture, we get this error:
Context : MaxDB Monitoring: LCA
Caches : Caches
Short Name : Catalog Cache Hitrate
Alert Text : Low hit rate in the catalog cache (76.170000% < 80.000000%)
Rgds
PR
http://maxdb.sap.com/doc/7_7/45/748f0f12e14022e10000000a1553f6/content.htm
Hello,
1. I recommend creating an SAP message to get SAP support.
2. It would be helpful to know the SAP Basis SP level on your system and the liveCache version you are currently using,
and also which liveCache applications were running at the time this alert was reported.
3. In general the information is available at MAXDB library:
http://maxdb.sap.com/doc/7_7/default.htm
Tools
Database Analyzer
Database Analyzer Messages
Catalog Cache Hit Rate
Regards, Natalia Khlopina -
How to find out Tcode Used and their hit rate over a given period of time.
Hi All,
We want to know which tcodes are used, and how many times, over a period of time.
Can you please suggest ways to find this out?
I have heard of ST03N but am not sure how to use it and then download the information.
Regards,
Vidya
Hi,
Check this
[How to get list of frequently used TCodes] -
Active Directory - DS Name Cache Hit Rate threshold
Hi Team,
I would like some information on the DS Name Cache Hit Rate threshold. What is the threshold limit, so that I can tell when there is an issue?
Our monitoring system is throwing alerts that the DS Name Cache Hit Rate is low (<80%) on domain controllers. I have also checked all domain controllers: the DS Name Cache Hit Rate changes from one second to the next, from 0 to 85 to 100, and is not
constant.
Memory usage is also normal.
Can someone explain how I should analyze this in more detail, so I can confirm whether the alert is valid?
The monitoring system's alerts come and clear automatically.
Could you please help me analyze this issue.
D.K Konar, NMS
Did you take a look at this? http://social.technet.microsoft.com/Forums/en-US/9d7b4fa2-ba0f-412e-981a-dbfafa0e55d2/ds-name-cache-hit-rate?forum=winserverDS
This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. Central to our data monitoring system is an Oracle database running Oracle Standard One licensing (please don't laugh... I understand it is comical).
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now what we are observing is that the inserts into this table
- Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceid (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand it that oracle must inspect more branches of the index for uniqueness, and more different physical blocks will be used to store the new index data. There are about 2 million unique sourceId across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding is that Oracle is attempting to hold the leaf blocks of these indexes in the buffer cache perpetually. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10GB of extra RAM per quarter to six months; we're already at about 50GB of RAM just for Oracle.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million-dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back the 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level product has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct-path insert /*+ append */ suggested by one of the contributors to this thread. A direct-path load will not re-use any free space made by the deletes. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; this gives you what we call a right-hand index. In other words, the scattering of the index keys per leaf block is likely catastrophic (there is an Oracle internal function named sys_op_lbid that will let you verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be investigated as a regular task (perhaps after each 80% delete). You seem to have several sourceids for one timestamp. If so, you should think about compressing this index:
create index indexname (sourceid, timestamp) compress;
or
alter index indexname rebuild compress;
You will do this only once. Your index will have a smaller size and may be more efficient than it is now. Index compression adds extra CPU work during inserts, but it might help improve the overall insert process.
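For reference, the partition-based purge weighed in the question might be sketched roughly like this (table and column names are made up, and it requires Enterprise Edition with the Partitioning option, which is exactly the licensing cost under discussion):

```sql
-- Hypothetical sketch: range-partition the raw table by month of ts.
CREATE TABLE raw_data (
  note        VARCHAR2(100),
  sourceid    NUMBER    NOT NULL,
  val         NUMBER,
  ts          TIMESTAMP NOT NULL,
  create_time TIMESTAMP NOT NULL
)
PARTITION BY RANGE (ts) (
  PARTITION p2013_01 VALUES LESS THAN (TIMESTAMP '2013-02-01 00:00:00'),
  PARTITION p2013_02 VALUES LESS THAN (TIMESTAMP '2013-03-01 00:00:00'),
  PARTITION pmax     VALUES LESS THAN (MAXVALUE)
);

-- The unique key can be enforced by a LOCAL index because ts, the
-- partitioning column, is part of the key.
CREATE UNIQUE INDEX raw_ux ON raw_data (sourceid, ts) LOCAL;
CREATE INDEX raw_ct_ix ON raw_data (create_time);

-- Aging out a month (ignoring the 20% "special" data, which would have
-- to be copied out first) becomes a dictionary operation, not deletes.
-- UPDATE INDEXES keeps the global create_time index usable afterwards.
ALTER TABLE raw_data DROP PARTITION p2013_01 UPDATE INDEXES;
```

Dropping a partition avoids both the delete workload and the left-hand index holes the coalesce advice above is meant to repair.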
Best Regards
Mohamed Houri -
Redo log tuning - improving insert rate
Dear experts!
We have an OLTP system which produces a large amount of data. After each record is written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit every 10th record).
So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
Furthermore, I have heard about tuning the redo latches parameter. Does anyone have information about this approach?
I would be grateful for any information!
Thanks
Markus
> We have an OLTP system which produces a large amount of data. After each record is written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit every 10th record).
Doing a commit after each insert (or other DML command) doesn't mean that the DB writer process actually writes this data immediately to the data files.
The DB writer process uses an internal algorithm to decide when to apply changes to the data files. You can adjust the write frequency into the data files by using the "fast_start_mttr_target" parameter.
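Since a per-row commit makes every insert wait on a redo write rather than on the DB writer, it may be worth confirming where the waits actually are before tuning anything. A sketch (standard v$ views; the 11g asynchronous-commit parameters trade durability of the last few commits for speed, so treat them as an option to evaluate, not a recommendation):

```sql
-- Compare commit waits against the underlying redo writes.
-- 'log file sync' much larger than 'log file parallel write'
-- points at commit frequency rather than slow redo disks.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');

-- 11g asynchronous commit: sessions no longer wait for LGWR.
-- Only acceptable if losing the last few commits in a crash is OK.
ALTER SYSTEM SET commit_wait = 'NOWAIT';
ALTER SYSTEM SET commit_logging = 'BATCH';
```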
> So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
Placing the redo log files on SSD disks is indeed a good move. You can also check the buffer cache hit rate and size. Striping of the filesystems where the redo files reside should also be taken into account.
> Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
It's an extremely bad idea. The NOLOGGING option for a tablespace will lead to an unrecoverable tablespace and, as I stated in my first sentence, will not increase the insert speed.
> Furthermore, I have heard about tuning the redo latches parameter. Does anyone have information about this approach?
I don't think you need this.
Better to check the indexes on the tables where you insert data. Are they analyzed regularly? Are all of them actually used? (Many indexes are created for particular queries but are later left unused, yet every DML statement still has to maintain all of them.) -
P13n.ddl probs with UDB: field "value" too long to index
We're trying to deploy the p13n.ddl on a UDB DB2 6.1.
However, we encountered some problems:
In UDB 6.1, an indexed column can be max 255 bytes long. So, the index
WEBLOGIC_USER_ID_INDEX cannot be created, as the column "value" is one
byte too long.
CREATE TABLE WEBLOGIC_USER (userid int, property varchar(100), value
varchar(256));
CREATE INDEX WEBLOGIC_USER_ID_INDEX ON WEBLOGIC_USER (userid, property,
value)
How hard will omitting the "value" column from the index hit performance?
All the columns in the table WEBLOGIC_USER has been indexed in one index.
Was that done to prevent the DBMS from looking into the actual table at
all (by looking only in the index)?
Would it be possible to use the column as a varchar(255) or must it be
256 chars wide? (Taking into consideration that the values at present are
far from 256 chars.)
Anders B. Jensen
Consultant, Research & Development
[email protected]
LEC AS
Denmark
Remove the SPAMLESS to mail me.
What you are trying to do shouldn't be a problem.
There is no problem creating weblogic_user.value as a varchar(255).
I am not sure about omitting the "value" column and the related performance
impact, but I don't think it will be significant.
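A minimal sketch of the workaround discussed above, assuming a 255-character value column is acceptable for the data:

```sql
-- Shrink "value" by one byte so it fits UDB 6.1's 255-byte limit
-- for indexed columns; the index DDL is otherwise unchanged.
CREATE TABLE WEBLOGIC_USER (userid int, property varchar(100),
                            value varchar(255));
CREATE INDEX WEBLOGIC_USER_ID_INDEX ON WEBLOGIC_USER (userid, property,
                                                      value);
```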
"Anders B. Jensen" wrote:
We're trying to deploy the p13n.ddl on a UDB DB2 6.1.
However, we encountered some problems:
In UDB 6.1, an indexed column can be max 255 bytes long. So, the index
WEBLOGIC_USER_ID_INDEX cannot be created, as the column "value" is one
byte too long.
CREATE TABLE WEBLOGIC_USER (userid int, property varchar(100), value
varchar(256));
CREATE INDEX WEBLOGIC_USER_ID_INDEX ON WEBLOGIC_USER (userid, property,
value)
How hard will omitting the "value" column from the index hit performance?
All the columns in the table WEBLOGIC_USER has been indexed in one index.
Was that done to prevent the DBMS from looking into the actual table at
all (by looking only in the index)?
Would it be possible to use the column as a varchar(255) or must it be
256 chars wide? (Taking into consideration that the values at present are
far from 256 chars.)
Anders B. Jensen
Consultant, Research & Development
[email protected]
LEC AS
Denmark
Remove the SPAMLESS to mail me. -
Database growth following index key compression in Oracle 11g
Hi,
We have recently implemented index key compression in our SAP R3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
What I mean by this is that while the indexes compressed about 3-fold on average (over the entire DB), we are not seeing this in the DB growth going forward.
I.e., we were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
Our trial with ACO compression seemed to yield a reduction in table growth rates that corresponded to the compression ratio (i.e., table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
Does anyone know whether a rebuild with index key compression will compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what's there already?
Cheers
Theo
Hello Theo,
Does anyone know if a rebuild with index key compression will it compress any future records inserted into the tables once compression is enabled (as I assumed) or does it only compress whats there already?
I wrote a blog about index key compression internals a long time ago ([Oracle] Index key compression), but I have now noticed that one important statement is missing. Yes, future entries are compressed too - index key compression is a "live compression" feature.
We were experiencing ~15GB/month growth in our database prior to compression, but this figure doesnt seem to have changed much in the 2-3months that we have implemented in our production environments.
Do you mean that your DB size still increases ~15GB per month overall, or just the index segments? Depending on which segment types are growing, indexes may be only a small part of your system.
If you have enabled compression and performed a reorg, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on how the data is inserted/updated and which indexes are compressed.
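As a quick check of what is (and is not) compressed, the dictionary can be queried, and a one-off rebuild enables compression for existing and future keys alike; a sketch (the schema and index names are placeholders):

```sql
-- Which indexes have key compression enabled, and with what prefix length.
SELECT index_name, compression, prefix_length
FROM   dba_indexes
WHERE  owner = 'SAPSR3'     -- hypothetical schema, substitute your own
AND    compression = 'ENABLED';

-- One-off rebuild: compresses existing keys and all future inserts.
ALTER INDEX sapsr3.some_index REBUILD COMPRESS 1;
```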
Regards
Stefan -
How should I increase the overall buffer hit ratio?
Hi all,
As shown below, the buffer quality of my DB2 database is low. How can I increase the overall buffer hit ratio?
Please advise on any SAP standard notes or procedures.
Number: 1
Total Size: 80,000 KB
Physical Reads: 6.65 ms
Physical Writes: 0.00 ms
Overall Buffer Quality: 86.05 %
Data Hit Ratio: 85.79 %
Index Hit Ratio: 87.50 %
No Victim Buffers: 259,079,295
-- Rahul
One of the options is simply to increase the bufferpool size using the following command:
db2 alter bufferpool <bufferpool name> immediate size <new bufferpool size>
However, this only affects the hit ratio of that particular bufferpool. If you have more than one bufferpool, you need to identify the bufferpool(s) with the worst hit ratio. In the SAP DBA Cockpit, check under
Performance -> Bufferpool
The victim buffer information is only useful if you use alternate page cleaning.
Note that there are other options to fight a bad bufferpool hit ratio - however, with your small bufferpool size (80 MB), increasing the size is probably the appropriate first step.
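Outside the DBA Cockpit, the per-bufferpool ratios can also be pulled directly from DB2's administrative views; a sketch (SYSIBMADM.BP_HITRATIO is available from DB2 9 onward):

```sql
-- Per-bufferpool hit ratios, worst first.
SELECT bp_name,
       total_hit_ratio_percent,
       data_hit_ratio_percent,
       index_hit_ratio_percent
FROM   sysibmadm.bp_hitratio
ORDER  BY total_hit_ratio_percent ASC;
```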
Malte -
Longer Boot Times correspond to low Cache Hit % in Readyboost log.
Hi, my boot times vary regardless of any change. The Windows Readyboost log, event id 1015, shows the value for Cache Hit Percentage. When this is low, for example 0.4, the boot time is almost double that of a 0.9 Cache Hit Percentage.
Does anyone know why this may be and what I can do to stabilise my Cache Hit Percentage?
Windows 7 64bit Lenovo L420 Laptop.
Hi,
ReadyBoost can speed up your computer by using storage space on most USB flash drives and flash memory cards.
First, I'd like to share the following links with you to better understand ReadyBoost:
http://windows.microsoft.com/en-HK/windows7/products/features/readyboost
http://technet.microsoft.com/en-us/magazine/ff356869.aspx
The space you defined can affect the performance of ReadyBoost. To further determine what affects the Cache Hit Percentage, I suggest you use Performance Monitor (Computer Management\System Tools\Performance\Monitoring Tools\Performance Monitor) to monitor the effectiveness of ReadyBoost.
Monitor the ReadyBoost Cache performance counters. Bytes Cached is the amount of data stored in the cache; more bytes in the cache improve the chances of a higher hit rate. The effectiveness of the cache is measured by the hit rate, which you can derive from the counters Cache Read Bytes/sec and Total Read Bytes/sec.
Yolanda Zhu
TechNet Community Support -
No swaps anywhere, yet hit ratio around 4% for Initial Record Buffer
Dear Gurus
I am using an ABAP NW 7.0 system.
All buffers in the system have a hit ratio of 99+%,
but the initial record buffer hit ratio is below 10%.
There are no swaps at all, yet the hit ratio is 4%.
The current parameters are:
rsdb/ntab/irbdsize --->6000
rsdb/ntab/entrycount---> 20000
HITRATIO % ---> 4
HITS ---> 64
REQUESTS ---> 1.729
DB access quality % ---> 4
DB access ---> 1.656
DB access saved ---> 64
Reorgs ---> 0
Allocated KB ---> 6.625
Available KB ---> 6.000
Used KB ---> 1.099
Free KB ---> 4.901
Available ---> 5.000
Used ---> 1.656
Free ---> 3.344
Objects swapped ---> 0
Frames swapped ---> 0
Total ---> 0
Please suggest how I can get the hit ratio higher.
Thanks in advance
Hello,
Unfortunately we cannot tell exactly why the value is low;
however, it is not necessarily an incorrect value.
The quality of a buffer and how often it is accessed is measured by the
'%Hit Ratio'. This value will indicate if the information stored in the
buffers, such as table entries, programs and screens, is being hit
directly from the buffer itself or, on the other hand, if the system
needs to bring that data from the database since it was not found in the
buffer.
To find out the buffers with poor quality, first check the system
startup time. When the system is started, all buffers (except the
program buffer which has a pre-load) are initially empty. Therefore,
all objects that are accessed for the first time have to be read from
the database and then loaded into the buffers.
If objects are not yet in the buffer, the hit ratio for the buffer will
be low. The hit ratio increases from the time objects are loaded into
the buffers. The rate of the increase depends on the workload in the
system and is different for each buffer. So it is worth noting how often you restart the system, since a restart means objects are loaded again, which causes the hit rate to be low.
Poor buffer quality is not always due to a real problem. For example,
transports into a system can reduce buffer quality. Keep in mind though
that a lower value does not always show that you have a problem.
A more pressing concern would be if we saw swaps on the system. As you
can see, there are no swaps.
Swapping occurs when the buffer is full, and the SAP System has to
load additional objects into the buffer. Objects in the buffer that
were used the least recently are removed. In this context, the term
"swap" means the objects removed from the buffer are lost and cannot
be replaced until a new database access is performed (replacing what
was lost).
There are two possible reasons for swapping
1 There is no space left in the buffer data area
The buffer is too small. You should increase the buffer size.
2 There are no directory entries left.
Therefore, to conclude, although the hitratio appears low it does
not mean that there are any performance issues. The fact that there is
sufficient free space and there are no swaps confirm this.
You can try increasing the size of the initial record buffer (in steps), as the current setting seems small.
I hope this helps.
Regards,
Archana -
Only "average" Experience Index for new fully loaded T430s?
I have a new t430s. Windows 7 64 bit, Intel i7, 16 Gigs Crucial Ram, 256GB mSATA for O/S & applications, 6300 Wifi, Intel HD4000 + Nvidia NVS 5200M. I ran Windows "Experience Index" to rate my computer. With no real applications installed, I only rated a 6.6. Processor: 7.2, Memory: 7.6, Graphics: 6.6, Gaming Graphics 6.6: Primary Hard Disk: 7.8.
I'm surprised that my processor and graphics rated relatively poorly.
Don't get me wrong. I LOVE my new setup (cost a pretty penny). 16 GB RAM was overkill and in retrospect I should have gotten 8 GB. And I changed my mind about Nvidia (I'm not a gamer) but kept it anyway, since Lenovo customer service was making it quite painful to make alterations to my computer after ordering.
That said, 6.6 is kind of disappointing. Why did my graphics and processor score so low? I have the BEST processor available at time of purchase. 7.2 is nothing to trifle at, but out of 7.9 I only scored 7.2 with the latest and greatest? Can a laptop not score 7.9?
I'm happy the primary disk (mSATA) and memory (crucial 16GB) scored near-perfect scores, however.
Solved!
Go to Solution.
Hi realtanu,
It is normal for the T430s to have such WEI results. In fact, an overall score of 6.6 is considered high, on a scale ranging from 1.0 to 7.9. If you want a full 7.9 WEI, you should consider a high-end gaming laptop with a quad-core i7 and a GTX / Quadro (not NVS) GPU, which is usually thick, heavy, and has less battery life.
If I am not wrong, you changed to two 8GB DDR3-1333 DIMMs, so your memory score is 7.6. If you change to two 4GB or 8GB DDR3-1600 DIMMs, your memory score will go up to 7.8. By default, the ##30-series ThinkPads come with 1600MHz memory.
With the T430s, you can at most get a dual-core i7 and a midrange NVS card, due to its thinness and cooling limitations.
Have a nice day!
Peter
W520 (4284-A99)
Did someone's post help you? Give them kudos as a reward, and they will do even better | Mark it as solved if the solution works for you, so it can be a reference for others in the future
=====================================
Sound Enthusiast and Enhancement (Post comments, share mixes, etc.)
Dolby Home Theater v4 ; IdeaPad Slate Tablet -
For my OLTP system, I got about 0.38 for the index hit ratio. My question is how to improve it in general? Thank you in advance.
SELECT NAMESPACE, GETHITRATIO FROM V$LIBRARYCACHE WHERE NAMESPACE = 'INDEX'
indrabudiantho wrote:
Isn't it true that for an OLTP system all the hit ratios should be > 90%?
Here is one example; please check this link:
http://www.orafaq.com/wiki/Improving_Buffer_Cache_Hit_Ratio
Also think about what you would actually gain if you increased it.
The library cache hit ratio is relevant for the SQLAREA namespace. Your queries might not be using an index because Oracle found an execution plan more effective than index usage, and we couldn't say whether that is good or bad. As Manik said, your question is relative, IMHO.
Ramin Hashimzade -
Spotlight mds bringing down system, constantly leaking RAM & disk access
Spotlight is going rampant here.
The mds (metadata server, aka Spotlight) process is constantly accessing the disk and writing data to it, filling it until there is no space left on the device. I cannot find out what file it is writing to. I've tried with fseventer, which sadly didn't help.
mds is also eating up all the CPU cycles it can get leaving my system fully loaded at all times.
The mds process is also taking up increasingly huge amounts of memory, causing permanent swapping and bringing the system to a grinding halt. (Cache hits go below 1%; about 250.000 pageins happen per minute.) Its memory footprint goes up to several hundred MB of real RAM.
$ vm_stat
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free: 53166.
Pages active: 275736.
Pages inactive: 135633.
Pages wired down: 59698.
"Translation faults": 10217862.
Pages copy-on-write: 136166.
Pages zero filled: 5983710.
Pages reactivated: 96772.
Pageins: 1236330.
Pageouts: 77113.
Object cache: 24391 hits of 1183451 lookups (2% hit rate)
(This is 95 minutes after reboot. Only running: Finder, Terminal, LaunchBar, Safari, Mail)
This behaviour goes on until the disk is full and I get a "low disk space" warning. Then mds crashes, the system frees up the (temporary?) files written by the process and also frees the RAM taken up by it, resulting in even more swapping as the system tries to page in other processes. Subsequently the system launches a new mds process and the game starts over.
Things I've tried to remedy:
Rebuilt the Spotlight index several times, including totally deleting the .Spotlight-V100 folder on the local volumes. Doesn't help, and it takes about 16 to 20 hours to rebuild on my machine, so it is not something one wants to just try.
Reinstalled the 10.5.5 Combo Update; doesn't help. Privileges are repaired.
Turned off Spotlight completely: this does indeed stop the madness, but then I cannot find anything on my system, so it is not a solution at all.
I've tried to keep the index small by excluding more and more stuff in the privacy settings of the Spotlight PreferencePane. This doesn't solve the problem either.
My System:
PowerBook G4 1.5GHz
2GB RAM (Maximum) (Tested with ASD ok, Rember ok. Different RAM modules, also ok.)
160GB HD (5GB free) (SMART ok, Blockscan ok, Filesystem ok, File privileges ok)
Mac OS X 10.5.5 (9F33) including Security Update 2008/007 (no further updates available)
Any help or suggestions are welcome. I'm happy to provide any further details you may need.
Best regards
Pepi
J D McIninch wrote:
This probably isn't Spotlight so much as one of the index plugins (mdimporters). Spotlight itself is very robust, however, all it takes is a single bad indexing plugin. MDS, basically, identifies a file type and loads in a list of plugins that are registered for that file type and executes a routine in the plugin to grab the metadata. If the plugin is improperly written, it can go haywire.
Thanks for the suggestion, I already had checked that.
I don't have any custom (read non-apple provided) Spotlight or QuickLook Plug-Ins on my system. So it is unlikely that it is related to a custom .mdimporter Plug-In. (Might be a buggy Apple one going rampant though.)
I've personally found Spotlight to be quite a solid technology as well. Just the user interface provided by Apple is terrible and especially useless in the Finder (Taking away functionality, or making it horrific to use since Tiger… but that is another story.)
What was the last application that you installed prior to it going crazy? Perhaps one of the last 2-3...
Failing that, you can wipe out the metadata database for a volume like so (from Terminal.app):
*sudo mdutil -E /Volumes/yourvolumehere*
... or '/' in place of '/Volumes/...' for the local disk. You can turn off metadata indexing for a volume entirely like so:
As I already said, rebuilding the index from scratch doesn't help. Completely deleting it and rebuilding doesn't help either.
*sudo lsof | grep mdimporter*
... which might help you understand what's being used to index, and this:
/System/Library/Spotlight/Chat.mdimporter/Contents/MacOS/Chat
/System/Library/Spotlight/iCal.mdimporter/Contents/MacOS/iCal /System/Library/Spotlight/Mail.mdimporter/Contents/MacOS/Mail
Nothing fancy here.
I ran a diff against my daily installation report and it gives me these apps, which have been modified (installed or updated) in the last 31 days:
/Applications/Address Book.app
/Applications/Adium.app
/Applications/AppleScript/Script Editor.app
/Applications/Automator.app
/Applications/Carbon Copy Cloner.app
/Applications/Dashboard.app
/Applications/Dictionary.app
/Applications/DVD Player.app
/Applications/Expose.app
/Applications/Flip4Mac/WMV Player.app
/Applications/iCal.app
/Applications/iChat.app
/Applications/iPhoto.app
/Applications/iSync.app
/Applications/iTunes.app
/Applications/JollysFastVNC.app
/Applications/Mail.app
/Applications/OmniFocus.app
/Applications/Photo Booth.app
/Applications/Preview.app
/Applications/Safari.app
/Applications/Skitch.app
/Applications/Spaces.app
/Applications/StuffIt/StuffIt Expander.app
/Applications/StuffIt 12/StuffIt Expander.app
/Applications/Time Machine.app
/Applications/Utilities/Activity Monitor.app
/Applications/Utilities/Bluetooth File Exchange.app
/Applications/Utilities/ColorSync Utility.app
/Applications/Utilities/Directory Utility.app
/Applications/Utilities/Directory.app
/Applications/Utilities/Disk Utility.app
/Applications/Utilities/Keychain Access.app
/Applications/Utilities/Migration Assistant.app
/Applications/Utilities/Network Utility.app
/Applications/Utilities/Podcast Capture.app
/Applications/Utilities/RAID Utility.app
/Applications/Utilities/Remote Install Mac OS X.app
/Applications/Utilities/Terminal.app
/Applications/Utilities/VoiceOver Utility.app
/Applications/Utilities/X11.app
/System/Library/CoreServices/Apple80211Agent.app
/System/Library/CoreServices/AppleFileServer.app
/System/Library/CoreServices/Automator Launcher.app
/System/Library/CoreServices/Automator Runner.app
/System/Library/CoreServices/AVRCPAgent.app
/System/Library/CoreServices/Bluetooth Setup Assistant.app
/System/Library/CoreServices/BluetoothAudioAgent.app
/System/Library/CoreServices/BluetoothUIServer.app
/System/Library/CoreServices/CCacheServer.app
/System/Library/CoreServices/CoreServicesUIAgent.app
/System/Library/CoreServices/DiskImageMounter.app
/System/Library/CoreServices/Dock.app
/System/Library/CoreServices/File Sync.app
/System/Library/CoreServices/FileSyncAgent.app
/System/Library/CoreServices/Finder.app
/System/Library/CoreServices/Help Viewer.app
/System/Library/CoreServices/Installer.app
/System/Library/CoreServices/Kerberos.app
/System/Library/CoreServices/KerberosAgent.app
/System/Library/CoreServices/loginwindow.app
/System/Library/CoreServices/ManagedClient.app
/System/Library/CoreServices/NetAuthAgent.app
/System/Library/CoreServices/Network Diagnostics.app
/System/Library/CoreServices/Network Setup Assistant.app
/System/Library/CoreServices/OBEXAgent.app
/System/Library/CoreServices/ODSAgent.app
/System/Library/CoreServices/PreferenceSyncClient.app
/System/Library/CoreServices/Problem Reporter.app
/System/Library/CoreServices/RemoteManagement/ARDAgent.app
/System/Library/CoreServices/Screen Sharing.app
/System/Library/CoreServices/Spotlight.app
/System/Library/CoreServices/SystemUIServer.app
/System/Library/CoreServices/VerifiedDownloadAgent.app
As you can see, there is nothing special in there either: just Apple apps, and a few non-suspicious ones from my point of view.
Tom in London wrote:
rather than mess around with all of this and maybe fix it until the next Spotlight problem manifests itself, why not just disable Spotlight with Spotless, and use EasyFind instead? Easyfind finds everything, has more options, is reliable, and doesn't cause problems .... and doesn't eat up any space on your hard drive.
...and doesn't find what I need. Thanks for the suggestions though. As I said, totally disabling Spotlight is not an option. And I'll stay far away from mysterious tools; I know what I am doing in a Terminal.
Best regards
Pepi