Disk Index File Issue

A friend of mine who has an iMac flat panel recently asked me about an issue he was having with files involuntarily being moved to different locations on his hard drive and then being renamed. He had consulted someone else who told him he had a "bad block" on his HD and needed to repair the index file that keeps track of files on the hard drive. I suggested he verify his disk permissions with Disk Utility but didn't know what else to suggest. How do you fix problems with your index file and/or repair "bad blocks"?
Sam

Repairing permissions has nothing to do with this problem. Repairing the drive, however, does. Here's what to do:
Repairing the Hard Drive
Boot from your OS X Installer disc. After the installer loads, select your language and click the Continue button. When the menu bar appears, select Disk Utility from the Installer menu (Utilities menu for Tiger and Leopard). After DU loads, select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or has failed. (SMART status is not reported on external FireWire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (the sub-entry below the drive entry), click the First Aid tab, then click the Repair Disk button. If DU reports any errors that have been fixed, re-run Repair Disk until no errors are reported. If no errors are reported, quit DU, return to the installer, and restart normally.
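If you prefer the command line, roughly the same checks can be run with diskutil from Terminal. This is only a sketch; "Macintosh HD" and disk0 below are placeholders for your own volume name and disk identifier:
diskutil info disk0 | grep SMART                # check the drive's SMART status (internal drives only)
diskutil verifyVolume "/Volumes/Macintosh HD"   # same check as Disk Utility's Verify Disk
diskutil repairVolume "/Volumes/Macintosh HD"   # same as the Repair Disk button; repeat until no errors are reported
As with Disk Utility, you cannot repair the volume you are currently booted from, which is why you boot from the installer disc first.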
If DU reports errors it cannot fix, then you will need Disk Warrior (4.0 for Tiger, and 4.1 for Leopard) and/or TechTool Pro (4.6.2 for Leopard) to repair the drive. If you don't have either of them or if neither of them can fix the drive, then you will need to reformat the drive and reinstall OS X.
There is only one way to repair bad blocks, if indeed that is the problem: you must reformat the drive using the Zero Data security option. This is a destructive process and all data on the drive will be permanently destroyed, so be sure to back up first.
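The zeroing pass can also be done with diskutil from Terminal. Treat this as a sketch only, and note that it erases the entire disk, so double-check the identifier with diskutil list first; disk1 below is just an example:
diskutil list                   # find the identifier of the drive to be erased
diskutil secureErase 0 disk1    # level 0 = single-pass zero fill of the whole disk
(Some OS X versions also offer a zeroDisk verb that does the same thing.) After the zero pass, repartition the drive and reinstall OS X.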
A "bad block" is a completely different problem from a corrupted disk directory (index.) A bad block rarely if ever occurs where the directory is located on the drive. If a bad block did occur there then the drive would be essentially unusable.

Similar Messages

  • Problems with indexing files from my hard disk

    Hi everybody,
    I'm a newbie to Oracle and I'm trying to index files from my hard disk with Oracle Text. After I created a simple text file (path: 'c:\tmp\test.txt') and filled it with some short text, I executed the following lines without an error message:
    grant connect, resource, ctxapp to myuser;
    create table myuser.testtab(id number primary key, text BFILE);
    create or replace directory test_dir as 'c:\tmp';
    grant read on directory test_dir to myuser;
    insert into myuser.testtab (id, text) values (1, BFILENAME('test_dir','test.txt'));
    create index myuser.idx_test on myuser.testtab(text) indextype is ctxsys.context;
    The record in the table testtab is stored, but the index table is empty. When I then tried to get information about the BFILE with Java, I got an SQLException with the error code ORA-22285 after executing the following lines:
    BFILE file = null;
    try {
        // stmt is an existing java.sql.Statement; rset is declared earlier
        rset = stmt.executeQuery("select text from testtab where id = 1");
        if (rset.next()) {
            file = ((OracleResultSet) rset).getBFILE(1);
            // This is the line where the exception is thrown
            System.out.println("Result from fileExists: " + file.fileExists());
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    I would be obliged for any information on this problem.
    Thanks in advance,
    Chris J.
    PS: I'm using Oracle 11.2 on Windows 7

    I see your problem. Drop the directory, table and index, log in as "myuser", and do everything from that account. See the demo below. Don't use the myuser.object_name prefix when executing the commands as "myuser".
    SQL> conn sys@xe as sysdba
    Enter password: ******
    Connected.
    SQL> create table hr.testtab(id number primary key, text BFILE);
    Table created.
    SQL> create or replace directory test_dir as 'c:\';
    Directory created.
    SQL> grant read on directory test_dir to hr;
    Grant succeeded.
    SQL> insert into hr.testtab (id, text) values (1, BFILENAME('test_dir','test.txt'));
    1 row created.
    SQL> ed
    Wrote file afiedt.buf
      1* create index hr.idx_test on hr.testtab(text) indextype is ctxsys.cont
    SQL> /
    Index created.
    SQL> select * from ctxsys.CTX_INDEX_ERRORS;
    ERR_INDEX_OWNER                ERR_INDEX_NAME                 ERR_TIMES
    ERR_TEXTKEY
    ERR_TEXT
    HR                             IDX_TEST                       31-AUG-10
    AAAEjaAABAAAKsKAAA
    DRG-50857: oracle error in drstldef
    ORA-22285: non-existent directory or file for FILEEXISTS operation
    SQL> conn hr@xe
    Enter password: **
    Connected.
    SQL> select * from CTX_USER_INDEX_ERRORS;
    ERR_INDEX_NAME                 ERR_TIMES ERR_TEXTKEY
    ERR_TEXT
    IDX_TEST                       31-AUG-10 AAAEjaAABAAAKsKAAA
    DRG-50857: oracle error in drstldef
    ORA-22285: non-existent directory or file for FILEEXISTS operation
    SQL> conn sys@xe as sysdba
    Enter password: ******
    Connected.
    SQL> drop directory test_dir;
    Directory dropped.
    SQL> conn hr@xe
    Enter password: **
    Connected.
    SQL> create or replace directory test_dir as 'c:\';
    Directory created.
    SQL> drop table testtab;
    Table dropped.
    SQL> create table testtab(id number primary key, text BFILE);
    Table created.
    SQL> set serverout on
    SQL> DECLARE
      2   v_file BFILE := BFILENAME ('TEST_DIR', 'test.txt');
      3   BEGIN
      4   IF DBMS_LOB.FILEEXISTS (v_file) = 1 THEN
      5  DBMS_OUTPUT.PUT_LINE ('File exists.');
      6   ELSIF DBMS_LOB.FILEEXISTS (v_file) = 0 THEN
      7  DBMS_OUTPUT.PUT_LINE ('File does not exist');
      8  ELSE
      9   DBMS_OUTPUT.PUT_LINE ('Unable to test existence');
    10   END IF;
    11   END;
    12  /
    File exists.
    SQL> insert into testtab values (1,BFILENAME ('TEST_DIR', 'test.txt'));
    1 row created.
    SQL> create index idx_test on testtab(text) indextype is ctxsys.context;
    Index created.
    SQL> select * from CTX_USER_INDEX_ERRORS;
    no rows selected

  • Content searching fails, indexing yields empty or nonexistent index files

    If I remove the content index by issuing these commands in succession: "sudo mdutil -i off /" and "sudo mdutil -E /", then turn indexing back on with "sudo mdutil -i on /", the needed index files in /.Spotlight-V100 are either empty or nonexistent (sometimes an empty ContentIndex.db file is created), and Spotlight searching by content yields no results.
    I posted a similar message previously, but it was at the end of another topic, and the problem remains unsolved with the 10.4.5 update. I'm still looking for an answer. One partly effective workaround is to force updating of the index with this command:
    sudo mdimport -f /
    That doesn't allow index updating to occur incrementally and it doesn't prevent the indexing attempt from occurring again after each login, though.
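    For completeness, here is the exact sequence I am using, plus a status check (mdutil -s just reports whether indexing is enabled on the volume):
    sudo mdutil -i off /     # turn indexing off
    sudo mdutil -E /         # erase the existing index
    sudo mdutil -i on /      # turn indexing back on so it should rebuild
    sudo mdutil -s /         # confirm indexing is enabled
    sudo mdimport -f /       # the forced-import workaround mentioned above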
    I've already tried reinstalling the 10.4.4 and 10.4.5 combo updates after performing an archive and install of 10.4.3. I did retain users and network settings when installing. I've tried everything I've been able to find in the Spotlight topic on these forums as well. I can turn off indexing with the "sudo mdutil -i off /" command on each volume and delete the "/.Spotlight-V100" folder and its contents using "rm -ri" on each volume. After restarting and turning indexing back on, each time I start up again or switch users the Spotlight menu contents change from the search field to an indeterminate progress bar, often displaying other volumes but always finally displaying my internal boot drive as the target of the indexing process with the phrase "Calculating time" underneath. If I watch the progress bar long enough, the contents of the window change directly from the indeterminate progress bar to the normal Spotlight search entry field. No crash is reported. Here's a snippet from the Terminal showing the listing for the pertinent folder on the internal boot volume as well as on an external volume, taken after the indexing process gave up.
    Thin:~ strap$ sudo ls -la /Volumes/MacHD/.Spotlight-V100
    Password:
    total 480
    drw------- 7 root admin 238 Feb 17 11:20 .
    drwxrwxr-t 48 root admin 1666 Feb 17 11:18 ..
    -rw------- 1 root admin 0 Feb 17 11:20 .journalHistoryLog
    -rw------- 1 root admin 151552 Feb 17 11:21 .store.db
    -rw------- 1 root admin 238 Feb 17 11:20 _IndexPolicy.plist
    -rw------- 1 root admin 378 Feb 17 11:20 _rules.plist
    -rw------- 1 root admin 86016 Feb 17 11:20 store.db
    Thin:~ strap$ sudo ls -la /.Spotlight-V100
    Password:
    total 480
    drw------- 7 root admin 238 Feb 17 11:08 .
    drwxrwxr-t 57 root admin 2040 Feb 17 11:00 ..
    -rw------- 1 root admin 0 Feb 17 11:08 .journalHistoryLog
    -rw------- 1 root admin 151552 Feb 17 11:08 .store.db
    -rw------- 1 root admin 238 Feb 17 11:08 _IndexPolicy.plist
    -rw------- 1 root admin 378 Feb 17 01:08 _rules.plist
    -rw------- 1 root admin 86016 Feb 17 11:08 store.db
    For both disks I've repaired permissions while booted both internally and externally, and repaired the disk while booted externally. What could be wrong with Spotlight or its settings that prevents the files from being created and/or populated? Thanks.

    PARTIAL ANSWER
    The problem was a file in /Library/Preferences called com.apple.metadata.mdserver.plist
    Its entire contents consisted of a single boolean in the root dictionary: EnableSniffing: No
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>EnableSniffing</key>
    <false/>
    </dict>
    </plist>
    I'm assuming a user defaults command issued by some application resulted in the generation of this file, but I need to do more research.
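    If anyone wants to try removing it, something like this should work; this is a guess on my part rather than a documented fix (the defaults domain is just the plist path without the .plist extension):
    sudo defaults delete /Library/Preferences/com.apple.metadata.mdserver EnableSniffing
    # or simply move the file out of the way entirely
    sudo mv /Library/Preferences/com.apple.metadata.mdserver.plist ~/Desktop/
    sudo mdutil -E /    # then erase the index so it rebuilds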

  • Index File group on same drive as data files

    I've just found a file group used for indexes on the same drive as the data files.
    Am I correct in saying there is little benefit to this? Should the index filegroup be on its own spindle?
    Mr Shaw... One day I might know a thing or two about SQL Server!

    There will definitely be a performance gain, provided you are querying for related data that references indexes on those index filegroups.
    It helps with parallel processing: having data and indexes on multiple disk heads lets the data be read in parallel. For more information you can refer to the link below.
    https://technet.microsoft.com/en-us/library/ms190433%28v=sql.105%29.aspx
    --Prashanth

  • Edit index file for multiple websites on iWeb (on a non-MobileMe server)

    Hi, I try to make this as simple as I can:
    A) I own three different domain names, 1.com, 2.com and 3.com
    B) I have bought hosting space with a company that offers 10 free independent domains in the package. Meaning, all domains I upload should go into the folder /html/
    C) I use iWeb 3.0.1 built 9833)
    Now, whenever I upload 2 or more sites to the server, iWeb rewrites the root index file - meaning the index file inside the /html/ folder. The other index files inside the folders for the three domains - /html/1, /html/2 and /html/3 - obviously remain intact.
    Result: if I upload all three domains (with www.3.com being the last one) and then open www.1.com, it shows... right, www.3.com
    I have tried to create subdirectories in the /html/ folder, but of course this does not resolve the issue.
    WHO KNOWS A SOLUTION??? WHAT DO I DO???
    I am not a techie nor am I able to write html, but hey, one has to learn: so if anybody has a solution that is not simply drag-and-drop and might involve the use of, say, Dreamweaver, please share it. I will try to do my best.
    Thanks!!!

    This is a question for your host's tech support.
    What your hosting service should be saying is 10 dedicated IP addresses.
    If this is the case you should be able to create a root folder for each site on the server and upload the contents of the folder produced by iWeb to it. Notice that I said contents and not the folder itself and the external index.html file.
    This is how my host - Host Excellence - works and why I use them.
    Other services have different arrangements. If you are asked to upload several sites - each contained inside the folder produced by iWeb - to a folder named public_html or something like that then obviously you can't have several index.html files coexisting in this folder. The domain name for each site needs to be directed to the index.html file inside the folder containing the website files. Get tech support to explain how this is done or, better still, do it for you.

  • URGENT - Error "Unable to open file because it isn't a valid Keynote document" - and there is no index file (so the usually suggested solution doesn't work)

    Hi there,
    As you see in the heading, I am getting the error "Unable to open file because it isn't a valid Keynote document". There have been a number of threads on this, and there seems to be a usual workaround that works in many cases: changing the file extension to .zip and then looking for the index file and making some more extension changes... Unfortunately, in my case (and it also happened to others), there is no index file, so the usually suggested solution doesn't work... Can someone please help? I am working on a tight deadline and would like to try and recover the file.
    Thanks a lot in advance.
    Best,
    Just a regular apple user
    PS: any other presentation opens fine in Keynote (09)

    Have you tried to create a new Keynote Presentation? Do you have another previously saved Keynote file you can try to open? These will make sure it is a problem with this specific presentation and not the whole program.
    Try to delete the Keynote Preferences. They are located in the folder Macintosh HD>Users>your username>Library>Preferences and titled com.apple.iWork.Keynote.plist.
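    From Terminal that amounts to something like this (quit Keynote first; moving the file rather than deleting it lets you put it back if it makes no difference):
    mv ~/Library/Preferences/com.apple.iWork.Keynote.plist ~/Desktop/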
    Your profile shows that you are still on Mac OS X 10.6.6, is that true? You might try to update to 10.6.7, as I believe there was a font issue fixed in that update (I don't know for sure since I am still on 10.5.8).
    Try those and report back and we will see what we can come up with.

  • Index file increase with no corresponding increase in block numbers or Pag file size

    Hi All,
    Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
    I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
    The cube is in its infancy, but already contains 24M blocks, with a PAG file size of 12GB.  We expect this to grow fairly rapidly over the next 12 months or so.
    After performing a simple Agg of aggregating sparse dimensions, the Index file sits at 1.6GB.
    When I then perform a dense restructure, the index file reduces to 0.6GB.  The PAG file remains around 12GB (a minor reduction of 0.4GB occurs).  The number of blocks remains exactly the same.
    If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
    If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
    Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
    Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
    I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
    I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it.  At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
    After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that was fixed well before 11.1.2.1.
    I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
    http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
    However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right?  But I am getting more than 160% growth (1.6GB / 0.6GB).
    And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (ie only 1 Scenario & Version)
    The Index file growth in itself is not a problem.  But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st.  And with the expected growth of the model, this will likely get much worse.
    Anyone have any explanation as to what is occurring, and how to prevent it...?
    Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.

    alan.d wrote:
    The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
    Thanks for reading.
    I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue that you have, but it would spawn tons of .pag files in the past. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it does the same thing.
    Sabrina

  • Table files and Index files 2GB on Windows 2003 Server SP2 32-bit

    I'm new to Oracle and I've run into the problem where my table files and index files are > 2GB. I have an Oracle instance running version 10.2.0.3.0. I have a number of table files and index files that have a current file size of 1.99GB. My Oracle crashes about three times a week because of a "Write Fault/Failure". I've determined that the RDBMS is trying to write an index or table file > 2GB. When this occurs it crashes.
    I've been reading the Oracle knowledge base, and it suggests that there is a fix or release of Oracle 10g to resolve this problem. However, I've been unable to locate any fix or release to address my issue. Does such a fix or release exist? How do I address this issue? I'm from the world of MS SQL and IBM DB2 and we don't have this issue. I am running an NTFS file system. Could this issue be related to a Windows fix?
    Surely Oracle can handle databases > 2GB.
    Thanks in advance for any help.

    After reading your response it appears that my real problem has to do with checkpointing. I've included below a copy of the error message:
    Oracle process number: 8
    Windows thread id: 3768, image: ORACLE.EXE (CKPT)
    *** 2008-07-27 16:50:13.569
    *** SERVICE NAME:(SYS$BACKGROUND) 2008-07-27 16:50:13.569
    *** SESSION ID:(219.1) 2008-07-27 16:50:13.569
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    error 221 detected in background process
    ORA-00221: Message 221 not found; No message file for product=RDBMS, facility=ORA
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    Can you tell me why I'm having issues with checkpointing and the control file?
    Can I rebuild the control file if it is corrupt?
    The problem has been going on since April 2008. I'm taking over the system.
    Thanks

  • "Always Open In..." View Options for Root Level of a Disk Image File

    I ran into a problem/bug tonight that I can't find listed anywhere and was wondering if anyone else has encountered it. I am using Snow Leopard (now 10.6.1), upgraded from the last revision of Leopard.
    I create DVD-R sized disk images with Disk Utility (using "Mac OS Extended" only w/o Journaling) for saving downloaded files. Because of the number of files on this disk, I create single-letter folders (for alphabetical filing) and set up these disk images to "Always Open In List View". For the individual folders, I can set these to "Always Open In List View" without a problem. However, for the "root level" of these disk images, I can only access "Always Open In Icon View" no matter which view option is selected.
    For previously created disk images that were already set to "Always Open In List View", these show the previously set "List View" at the disk image "root level" as expected. But if I uncheck the box, it immediately goes to "Always Open In Icon View". And like before, no matter which view option is selected, I can never get back to "Always Open In List View". Note also that I can set this option as expected for "root levels" of real disks - this only seems to be a problem with the disk image files.
    I had no problems at all with this under Tiger or Leopard. I've only run into this now under Snow Leopard. The upgrade tonight to 10.6.1 doesn't seem to have affected this problem.
    So should I be doing something differently now to get this option to reappear? Is there a possible conflict with something on my computer, or is this a bug with Snow Leopard? Can anyone else duplicate this issue?

    Additional info, in case anyone is running into this issue...
    If I do the following with the "root level" of the disk image, I can set the "Always Open in List View" option.
    1. Open the "root level" view of the disk image - for me, it always opens in icon view. Then, select "Show View Options" from the View menu.
    2. Check the "Always Open in Icon View" box. While leaving this dialog open, select "as List" from the View menu.
    3. Uncheck the "Always Open in Icon View" box. It will instantly turn into "Always Open In List View". Recheck this box immediately, then close dialog. This will make the setting stick.
    I've been able to repeat this situation several times. I might be wrong, but it sure acts like a small bug to me. Hope this helps anyone else who might have encountered this issue.

  • Can't publish my iWeb site - has index file something to do with it?

    Hi, I've been trying to publish my iWeb site (not on a .Mac account but through ftp upload) but it hasn't worked out so far... I phoned my webspace provider's support service and the guy I spoke to was convinced that it's due to the index file not being an .html file (he thought that the problem lies with the index.xml.gz file). Searching the discussions on this site, I figure that this isn't what's causing the problem after all...
    To continue working on the site while using another computer, I copied the 'Domain' over. Now that I've brought it back to the original computer, it seems to be causing these problems. Any ideas what I could do or where the problem lies? I'd be really grateful for any hints as it's driving me 'round the bend! Cheers, A

    Did you name any of your site's pages "index"?  That can cause problems.
    Are you able to publish your site to a folder on your hard drive and open it locally with your browser?  If you haven't tried, do so and see if you can.  If you can, and the site works as you designed it, then the issue is in the uploading of the files.
    In that case you might try using Cyberduck to upload your site folder and index.html file to your server.
    OT

  • Oracle 9i(9.2.0.5.0) - Oracle Text - Indexing files on FTP Server

    I am using Oracle 9i(9.2.0.5.0) and I am unable to upgrade to a newer version of Oracle DB.
    I am new to this technology and I have not tried it yet myself.
    I was reading some articles, documents and references about the Oracle Text technology and I have found out that Oracle Text should be able to create a context index over files which reside on an FTP server.
    I have also found out, that for this purpose an "URL_DATASTORE" should be used.
    I would be pleased if someone can answer my question before I decide to start using this technology:
    - Is there any limitation which I should be aware of when creating a context index over files which reside on an FTP server? (file size limit, supported file type limitations)
    - During the index creation process, are the indexed files downloaded and copied to the Oracle database permanently, or only temporarily until the index is created?
    - Is any incremental indexing possible (i.e. when I add new files to the datastore I do not have to rebuild the whole index)?
    - Is there any formula relating context index disk size to indexed files disk size?
    Regards,
    Michal

    - Is there any limitation which I should be aware of when creating a context index over files which reside on an FTP server? (file size limit, supported file type limitations)
    Max file size is configurable up to 2GB. There is no limitation on the file type from the datastore itself, but if you want to process binary files the normal list of supported filter file formats will apply (see the appendix in the admin guide).
    - During the index creation process, are the indexed files downloaded and copied to the Oracle database permanently, or only temporarily until the index is created?
    Only temporarily
    - Is any incremental indexing possible (i.e. when I add new files to the datastore I do not have to rebuild the whole index)?
    From the question, I suspect you're seeing this as a crawler - you expect to provide the address of an FTP site and have it fetch all the documents. That's not how it works. Rather, you must put all the URLs into a table, and Text will index those URLs (and only those URLs)
    If new files are added, you must arrange somehow to have the new rows added to your table. Then Text will do an incremental update; it won't have to rebuild the whole index.
    - Is there any formula relating context index disk size to indexed files disk size?
    It varies quite a lot depending on types of data and indexing options chosen, but a typical result is that the index will be 40% of the total file size. However, if the documents are formatted (eg Word, PDF) the percentage will be much smaller.

  • Store Critical: Unable to read index file for user/mailtest: System I/O err

    more imap
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&V4NXPnux-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&XfJT0ZAB-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&g0l6Pw-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/test: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:52 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:54 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:54 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&V4NXPnux-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:54 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&XfJT0ZAB-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:54 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&g0l6Pw-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:54 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/test: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:36:58 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:37:00 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:37:00 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&V4NXPnux-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:37:00 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&XfJT0ZAB-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:37:00 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/&g0l6Pw-: System I/O error. Administrator, check server log for details.
    [27/Nov/2007:13:37:00 +0800] e69-1-c imapd[5984]: Store Critical: Unable to read index file for user/mailtest/test: System I/O error. Administrator, check server log for details.

    whr25 wrote:
    root@e69-1-c # ./imsimta version
    Sun Java(tm) System Messaging Server 6.3-0.15 (built Feb 9 2007)
    libimta.so 6.3-0.15 (built 19:27:56, Feb 9 2007)
    SunOS e69-1-c 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Fire
    This is an old release of 6.3; you should be planning to upgrade to proactively prevent known bugs.
    prstat
    2255 mailsrv 407M 310M sleep 59 0 0:00:44 0.0% imapd/3
    Not 3GB. Of course, if you had just restarted Messaging Server as you noted below, then that isn't unexpected.
    I restarted Messaging Server; that is not the problem. The issue appears about two days after it has been running.
    When the problem does occur, what is the prstat output? The size of the imapd processes will increase over time depending on the number of people accessing the store via IMAP and the size of the mailboxes (store.idx files) they are accessing.
    Regards,
    Shane.

  • In HTTP log:  Store Critical: Unable to read index file for user/ uid

    All:
    Sun Java(tm) System Messaging Server 6.2-7.05 (built Sep 5 2006)
    libimta.so 6.2-7.05 (built 12:18:44, Sep 5 2006)
    We recently have started to see the following errors in our http logs:
    [01/Mar/2007:13:03:43 -0500] httpd[5174]: Store Critical: Unable to read index file for user/<uid>: System I/O error. Administrator, check server log for details.
    It's occurring a couple of different times during the day to certain users. Then it won't happen for days to anyone, but then starts up again. I saw a similar thread to this re: IMAP and I'm curious if http could be having the same problem. We increased the number of http processes (from 2 to 4) a few months ago but kept the same maxsessions (6000), so maybe I need to change the maxsessions to something lower? We only started to see the I/O error two weeks ago. We're not seeing the error in the imap logs. Also, there are no errors in the default log related to the users that receive this in http.
    I'm planning on running a reconstruct -m in the meantime to see if that helps. There have been no changes to the server or application for quite some time. Any thoughts?

    Yes, http can have the same issue. Yes, lowering the maxsessions from 6000 is the answer, IF it's the same problem. Likely, but not guaranteed.
    If you actually look at the store.idx for that particular user, what do you see? Is it near 2 gig? If so, then the user needs to either delete some messages or move some to another folder, as 2 gig is the limit for the store.idx file.
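    If you want to check before lowering anything, something along these lines should do it; the option name is from memory, so verify it with configutil on your own system before changing anything (the store path is just a placeholder):
    ls -lh /path/to/user/mailbox/store.idx            # how big is the affected user's index file?
    configutil -o service.http.maxsessions            # show the current value
    configutil -o service.http.maxsessions -v 4000    # example: lower it from 6000
    stop-msg http ; start-msg http                    # restart the http service to pick it up, then run the reconstruct -m you already planned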
    jay

  • [warn] mod_bonjour: Cannot read template index file '/System/Library/User Template/English.lproj/Sites/index.html'.

    Operating System: Lion 10.7.5
    I was getting this warning in the logs:
    [warn] mod_bonjour: Cannot read template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
    and looking in the System directory at
    System/Library/User Template
    User Template was locked and owned by the System.
    I went to the terminal and typed:
    sudo mkdir "/System/Library/User Template/English.lproj/Sites/"
    sudo touch "/System/Library/User Template/English.lproj/Sites/index.html"
    restarted Apache
    The warning went away gracefully.

    I am adding here that this seems to be a permissions bug, since the "User Template" is owned by the system and no one else has access to it. The warning went away only temporarily because the permissions are still wrong in that directory. I changed the permissions on the User Template directory to read so I could see what is inside, and it mirrors the structure of a user account. Most of the directories in that structure are locked, leaving only the Public and Sites directories with the correct permissions. Inside the Sites folder there is a blank index.html file with read access.
    So I am not sure if what I have done so far will resolve the warning.
    What I did was to get info on the User Template directory, authenticate as root and change the permission for the admin to read only. That is harmless since not even the admin can change its content. The warning seems to have gone away for now. However, the point here is to find out if the permissions should be read and write for the admin instead of read only, or some other configuration. More later!
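    For anyone who prefers Terminal over Get Info, the equivalent inspection looks like this; the chmod values at the end are only a guess at a sane setting, so compare against a clean install before applying them:
    sudo ls -le "/System/Library/User Template/English.lproj"         # ownership, modes and ACLs of the template
    sudo ls -le "/System/Library/User Template/English.lproj/Sites"   # and of the Sites folder created above
    # hypothetical fix: make the directory and index file readable, then restart Apache
    sudo chmod 755 "/System/Library/User Template/English.lproj/Sites"
    sudo chmod 644 "/System/Library/User Template/English.lproj/Sites/index.html"
    sudo apachectl restart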

  • Convert from index file keywords to topic keywords

    I'd like to store keywords in the topics instead of having
    them in a central index file.
    For legacy help systems, is there a way of moving the
    keywords from the .HHK file into the .HTM topics?
    Thanks,
    Salan

    Hi Salan,
    I'm afraid I have some bad news for you: There is no
    automatic way to do this in RoboHelp. You have to do it by hand. I
    had a discussion about this with eHelp (back in Oct. 2003...). I
    could forward the email to you, if you wish. A quote:
    "We do have request 6390 submitted to our R&D department
    asking to address this issue. I created an inquiry under your
    record and tied it to this request. This way, in the event that this
    is addressed in a service release or next version, you will be
    notified."
    I guess this was with X3, but I never heard anything after
    that, and I doubt very much that the various new owners would have
    prioritized this issue.
    Anyway, if anyone else knows of an efficient way to do this,
    please post it, because this is STILL an issue for my team!
    Regards,
    Eileen
