Panic - File System Broken in Tiger?

When I first purchased my PowerBook G4 (6 months or so ago), it came with 10.3/Panther and TechTool Deluxe (version 1.4). Yesterday I used TechTool Deluxe, but later found out that it doesn't work with 10.4/Tiger, which I have since upgraded to. Someone told me that TechTool Deluxe will break Tiger's file system.
Is my system doomed? Why doesn't TechTool Deluxe warn users about this - especially those who got it free with Panther/10.3?
I'd really appreciate any help or input on this. Thanks.

Hi, Wave39.
See "Mac OS X 10.4: Don't use Tech Tool Deluxe 3.0.3's Volume Structure repair."
FYI: Micromat, the makers of TechTool, offer an inexpensive upgrade from TechTool Deluxe to the latest edition, TechTool Pro 4.x, which offers more tests.
Good luck!
Dr. Smoke
Author: Troubleshooting Mac® OS X

Similar Messages

  • Root file system broken

    My root filesystem seems to be broken after I forced my laptop to restart by powering it off.
    Now I am always dropped into an emergency shell. The log tells me that mount got a segmentation fault when it tried to mount the root partition. I'm using btrfs for the root partition (silly me!).
    I already tried mounting the root filesystem from a live CD and got the same segmentation fault.
    Can anyone shed some light on this? Is there any tool similar to fsck for btrfs?
    If the log is required, I will attach it later.
    Thanks in advance.
    EDIT:
    I tried running
    btrfsck --repair /dev/sda7
    but also failed with
    enabling repair mode
    checking extents
    checking fs roots
    checking root refs
    btrfsck: extent-tree.c:2549: btrfs_reserve_extent: Assertion `!(ret)` failed.
    [1] 385 abort (core dumped) btrfsck --repair /dev/sda7
    EDIT 2:
    The problem seems to be solved by following the sections "Clearing the BTRFS journal" and "Using btrfsck" at
    http://www.funtoo.org/wiki/BTRFS_Fun
    Last edited by qiuwei (2013-05-17 15:22:14)

    falconindy wrote:
    https://plus.google.com/108087225644395 … kMy11j1yW7
    shorter version:
    # btrfs-zero-log /dev/sda7
    Many thanks. It indeed solved the problem.
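    For anyone who lands here with the same symptom, a minimal sketch of the recovery sequence from a live CD, assuming the device is /dev/sda7 as in the posts above (adjust to your layout):
        # run from a live CD, with the broken volume NOT mounted
        btrfs-zero-log /dev/sda7      # discard the corrupt log tree that makes mount segfault
        btrfsck /dev/sda7             # read-only check first; use --repair only as a last resort
        mount -o ro /dev/sda7 /mnt    # confirm the volume mounts again before going read-write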

  • Trashed File System

    The file system got scrambled enough on my Tiger installation that I couldn't get the Startup Disk control panel to load on boot. I've bought a new, bigger disk, installed Leopard, and would now like to salvage the file system on my Tiger installation so I can move things. (I have a clone that's a few weeks old - just hoping to preserve more recent information.)
    Verifying with Disk Utility failed. The log is short:
    = Verifying volume “disk1s10”
    = Checking Journaled HFS Plus volume.
    = Invalid B-tree node size
    = Volume check failed.
    =
    = Error: Filesystem verify or repair failed.
    I only tried verification - haven't gone any farther yet.
    I haven't had to recover data in quite a while, so I'm not sure about the latest tech. A few years ago, my first try would've been with DiskWarrior. Then Apple Disk Utility, then Micromat. I don't have a current copy of DiskWarrior, but do have a shiny copy of Disk Utility handy.
    Any thoughts welcome regarding where best to start.
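    For reference, a hedged first check from Terminal, assuming the volume identifier disk1s10 from the log above (an "Invalid B-tree node size" error often survives fsck_hfs, which is why DiskWarrior keeps coming up for this):
        diskutil verifyVolume disk1s10
        sudo fsck_hfs -fy /dev/rdisk1s10    # repair pass; the volume must be unmounted first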

    Many thanks for the reply - which was more or less what I expected. Still, good to have a second opinion for those of us who don't have to do this kinda thing very often.
    The disk appears to be producing a lot of physical R/W errors. The file system was pretty mangled. Bad enough that it wouldn't try to rescue the original disk. I had to copy everything out of the preview to a new location. Stuff is missing. *sigh*
    Better situation than before, though, when the partition wouldn't even mount.

  • Mounting the Root File System into RAM

    Hi,
    I had been wondering recently how one can copy the entire root hierarchy, or the wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to non-volatile storage. This synchronization could also be performed manually before shutting down.
    I have now succeeded, at least it seems, in performing such a task. There are still some issues.
    For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
    A custom kernel hook is used to (overall):
    Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
    Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
    Copy the non-volatile content into the ramdisk.
    Remount each of these two mountpoints, by binding, inside the new root, so that once the root is switched we still have access to both volumes from within the new ramdisk root itself, and can synchronize any modified RAM content back to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
    A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
    To integrate this hook into an initramfs, a preset is needed.
    I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
    The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
    When shutting down, the ramdisk contents are synchronized back with the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
    I chose to use unison to synchronize between the volatile and the non-volatile mediums. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), you do not execute any other command before the synchronization has completed, because ~/.bash_history, for example, would be updated and unison would not pick that change up.
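    For concreteness, a hedged sketch of what that synchronization step amounts to, using the bind-mounted roots named above (the exact flags used by the provided script may differ):
        # propagate the RAM copy back onto the on-disk root; -force makes the RAM side win
        unison /rootfs/rootfs_ram /rootfs/rootfs_source -batch -fastcheck default -force /rootfs/rootfs_ram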
    Some prerequisites exist (by default):
        Packages: unison (and cp), find, cpio, rsync and, of course, any other packages you can mount your root file system (type) with. I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g and the NFS-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
        Referencing paths:
            The variables:
                source=
                temporary=
            ...should have the same value in all of these files:
                "/etc/initcpio/hooks/ram"
                "/root/Documents/rootfs/unmount-root-fs.sh"
                "/root/.rsync/exclude.txt"    -- Should correspond.
            This is needed to sync the RAM disk back to the hard disk.
        I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
    Here are all the accepted and used parameters:
        root -- allowed: UUID=+, /dev/disk/by-*/* (mount's default device notation); considered: any string; default: none. The source root.
        rootfstype -- allowed: whatever "mount -t <types>" accepts; considered: any string; default: "auto". The FS type of the source root.
        rootflags -- allowed: whatever "mount -o <options>" accepts; considered: any string; default: none. Options used when mounting the source root.
        ram -- allowed: any string; considered: "1"; default: none. Whether this hook should be run.
        ramfstype -- allowed: whatever "mount -t <types>" accepts; considered: any string; default: "auto". The FS type of the RAM disk.
        ramflags -- allowed: whatever "mount -o <options>" accepts; considered: any string; default: "size=50%". Options used when mounting the RAM disk.
        ramcleanup -- allowed: any string; considered: "0"; default: none. Whether any left-overs should be cleaned up.
        ramcleanup_source -- allowed: any string; considered: "1"; default: none. Whether the source root should be unmounted.
        ram_transfer_tool -- allowed: cp, find, cpio, rsync, unison; considered: cp, find, cpio, rsync; default: unison. Which tool to use to transfer the root into RAM.
        ram_unison_fastcheck -- allowed: true, false, default, yes, no, auto; default: "default". Argument to unison's "fastcheck" option; relevant if ram_transfer_tool=unison.
        ramdisk_cache_use -- allowed: 0, 1; considered: 0; default: none. Whether unison should use any available cache; relevant if ram_transfer_tool=unison.
        ramdisk_cache_update -- allowed: 0, 1; considered: 0; default: none. Whether unison should copy the cache to the RAM disk; relevant if ram_transfer_tool=unison.
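    For illustration, a hedged example kernel command line combining several of these parameters (the UUID, size and tool choice are placeholders, not a recommendation):
        root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rootfstype=ext4 ram=1 ramfstype=tmpfs ramflags=size=75% ram_transfer_tool=rsync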
    This is the basic setup.
    Optionally:
        I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
        I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
    Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
    I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
    Of course, use what you need from the provided files.
    Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
        find+cpio: 2 min 10 s (2 min 12 s on slower hardware)
        unison:    3 min 10 s - 4 min
        cp:        4 min (31 min on slower hardware)
        rsync:     4 min 40 s (55 min on slower hardware)
        Beware that the find/cpio option is currently broken; it can be selected, but it will not work when used.
    These are the remaining issues:
        find+cpio option does not create any destination files.
        (On some older hardware) When booting up, the source disk is not always detected.
        When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
    Inconveniences:
        Unison needs to perform an update detection at each startup.
        initramfs's ash does not expand wildcard characters, so they cannot be used with "cp".
    That's about what I can think of for now.
    I will gladly try to answer any questions.
    I don't consider myself a UNIX expert, so I would like to hear your suggestions for improvement, especially from those who do consider themselves so.
    Last edited by AGT (2014-05-20 23:21:45)

    How did you use/test unison? In my case, unison, of course, is used in the cpio image, where there are no cache files, because unison has not been run yet in the initcpio image, before it had a chance to be used during boot time, to generate them; and during start up is when it is used; when it creates the archives. ...a circular dependency. Yet, files changed by the user would still need to be traversed to detect changes. So, I think that even providing pre-made cache files would not guarantee that they would be valid at start up, for all configurations of installation. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM), after they have been created, and used next time by copying them in the initcpio image during each start up. I think $HOME would need to be set.
    Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
    Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed to be faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U 1.9 GHz CPU with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 2 GHz CPU with 2 cores/2 threads and 4 GB of RAM.
    I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
    Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
    I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
    Last edited by AGT (2014-05-20 16:49:35)

  • PS CS5 The operation could not be completed. A file system I/O error has occurred

    Howdy...
    I'm using Photoshop CS5 and when I go to use a handful of filters such as Liquify or Lens Correction I am suddenly getting an error message saying:
    "The operation could not be completed. A file system I/O error has occurred"
    Doesn't matter if it's a massive file or a small one. Sometimes the Liquify box will actually come up, but after I make one or two strokes, the beachball comes up and moments later I get the error message again. It doesn't actually crash PS, it just prevents me from using those filters. Each time I click, the error message comes up again. I've also tried it with my PS CS3 on the same Mac and it does the same thing. However on my 4 year old MacBook Pro, it's absolutely fine. Grrrrr!
    I have my PS set as default on 64 bit. I have a scratch disk with nearly 500 GB free (which is not my start-up disk). I am running a Mac Pro 2.8 GHz 8-core Intel with 16 GB of RAM, so it's not a memory problem. Running Mac OS 10.6.7, but it did the same on 10.6.6 too.
    I have repaired the disk permissions on the start-up disk. I have reloaded a fresh copy of PS (a few times). I even did a hard uninstall with maccs3clean after trying a regular uninstall, with no success.
    I had a 1.5-hour conversation with an Adobe tech guy earlier today and he's scratching his head, so I thought I'd throw it open to you guys, hoping someone has had the same problem and can save the rest of my hair from being pulled out.
    HELP!

    Hi Guys,
    I know this is an old post but I need to bump it.
    I work for an image-processing company and we all sit on top-spec Mac Pros with 16-32 GB of RAM, top graphics cards and SSD drives.
    All of my colleagues' computers are working fine when it comes to using Liquify and all of the filters that come with CS5, but it sure doesn't work on my Mac at all.
    We have a tech guy who has re-installed OS X on a new SSD drive twice now, and we have started from scratch with installing CS5, but it still will not work. I keep getting "the operation could not be completed..." every time I try
    to liquify or do any other filter work.
    I am 100% sure it doesn't have anything to do with disk permissions or broken drives in my case.
    Could there be an issue with the graphics card or any other hardware component, since it's clearly not a software-related issue?
    Any help would be much appreciated because we are very much in the dark here.
    Best regards,
    Anders

  • Disk Utility can't format my external HD - File System Formatter fails

    Hi. I have a Freecom 1.5 TB external hard drive that is connected by FireWire to my Intel iMac. The drive worked fine for a year, and then suddenly the "error 36" code appeared and nothing else could be saved to the drive.
    Disk Utility told me that it could not repair the disk and that I should format it. At this stage the problems start!
    DU fails to format the drive with the error "File System Formatter Failed."
    I have read various conversations on this error on these forums and followed the instructions but nothing has worked. I have already
    1) Partitioned the disk, making sure it is "GUID" - then formatted, and it still fails
    2) Used MS-DOS format instead, but that also fails.
    I have noticed that the SMART status is "not recognised".
    Is there anything else that I can do, or am I just doing this wrong? I am thinking that I might be able to do something using the start-up disks, but I don't know how to do that and don't want to jeopardise my iMac, which is otherwise working fine!
    Any help appreciated.
    Rob

    I went crazy last night with the same issue trying to format my Toshiba external drive. I could format it as MS-DOS but not Mac OS Extended (Journaled). I tried the GUID partition scheme etc... no luck. Then I read somewhere that Leopard has an issue with the partitioning schemes. I hooked it up to my MacBook on Tiger and voilà, it formatted no problem. Then I was able to hook it up to Time Machine and let it reformat on my Snow Leopard version of OS X on the iMac. I'm on OS X 10.6.8.
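    If the Disk Utility GUI keeps failing, a hedged command-line attempt is worth a try (the identifier disk2 is a placeholder; check it with "diskutil list" first, and note that a FireWire bridge hiding SMART can explain the "not recognised" status, though a genuinely failing drive will refuse to format either way):
        diskutil list                                        # find the external drive's identifier, e.g. disk2
        diskutil partitionDisk disk2 GPT JHFS+ Backup 100%   # repartition as GUID + Mac OS Extended (Journaled)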

  • File system chaos; how to backup and restore one's home folder??

    My young son managed to drag the Desktop icon from the hard disk to the Trash. I dragged the Desktop out of the Trash, but the file system on the hard disk seems to be whacked: iPhoto and iTunes were unable to find their libraries (I solved this, though), and other apps seem to suffer the same problem as well. I'm guessing my best option is to reformat the disk and reinstall OS 10.4.7, but I want to make sure that I get all documents, files, etc. from my home folder. Can I simply drag that folder to an external drive, perform the reformat/reinstall, and then drag the home folder from the external drive back? What is the best way to proceed?

    There are some articles by Apple on Disk Utility, and you do not want to use the Install Disc to repair your drive if its version of the tools is earlier than what is on your system. You'd be better off with fsck in single-user mode.
    You can repair permissions from any boot drive running Tiger, as it looks in the /Receipts regardless (unlike 10.2, which had a messed-up method).
    As for finding all the files to back up - I would use Disk Utility's Restore, then back up the whole volume to a blank partition on your FireWire drive. And while at it, set up an emergency boot volume for OS X that you can use for repairs in the future.
    And maybe create separate accounts, then back up those accounts, and limit what others can or cannot do.
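    A hedged sketch of copying a home folder to an external drive while preserving ownership and HFS metadata (paths and the account name are placeholders; on Tiger and later, Apple's bundled rsync accepts -E for resource forks/extended attributes):
        sudo rsync -avE /Users/yourname/ "/Volumes/External/yourname-backup/"
        # after the reformat/reinstall, copy it back the same way, then fix ownership if the new account's UID differs:
        sudo chown -R yourname /Users/yourname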

  • Crashes and read-only file systems

    Notice: I apologize for the long post; I've tried to be as thorough as possible. I have searched everywhere for possible solutions, but the things I've found end up being temporary workarounds or don't apply to my situation. Any help, even as simple as "have you checked out XYZ log, it's hidden here", would be greatly appreciated. Thanks.
    I'm not sure what exactly caused the issues below, but they did start to happen within a day of running pacman -Syu.  I hadn't run that since I first installed Arch on December 2nd of this year.
    Setup:
    Thinkpad 2436CTO
    UEFI/GPT
    SSD drive
    Partitions: UEFISYS, Boot, LVM
    The LVM is encrypted and is broken up as: /root, /var, /usr, /tmp, /home
    All LVM file systems are EXT4 (used to have /var and /tmp as ReiserFS)
    The first sign that something was wrong was gnome freezing.  Gnome would then crash and I'd get booted back to the shell with all filesystems mounted as read-only.  I started having the same issues as this OP:
    https://bbs.archlinux.org/viewtopic.php?id=150704
    At the time, I had /var and /tmp as ReiserFS, and would also get reiserfs_read_locked_inode errors.
    When shutting down (even during non-crashed sessions) I would notice this during shutdown:
    Failed unmounting /var
    Failed unmounting /usr
    Followed by a ton of these:
    device-mapper: remove ioctl on <my LVM group> failed: Device or resource busy
    Neither of these errors had ever appeared before.
    After hours of looking for solutions (and not finding any that worked) I was convinced (without any proof) that my Reiser file systems were corrupt, so I reformatted my entire SSD and started anew - not the Arch way, I know. I set all logical volumes to EXT4.
    After starting anew, I noticed
    device-mapper: remove ioctl on LVM_SysGroup failed: Device or resource busy
    was still showing up, even with just a stock Arch setup (maybe even when powering off via Arch install ISO, don't remember).  After a lot of searching, I found that most people judged it a harmless error, so I ignored it and continued setting up Arch.
    I set up Gnome and a basic LAMP server, and everything seemed to work for a couple of hours.  Soon after, I got the same old issues back.  The System-journald issue came back and per the workaround on https://bbs.archlinux.org/viewtopic.php?id=150704 and a couple other places, I rotated the journals and stopped journald from saving to storage.  That seemed to stop THOSE errors from at least overwhelming the shell, but I would still get screen freezes, crashes, and read-only file systems.
    I had to force the laptop to power off, since poweroff/reboot/halt commands weren't working (would get errors regarding the filesystems mounted as read-only).
    I utilized all disk-checking functions possible, from running the tests (SMART test included) that came as part of my laptop's BIOS to full-blown fsck. All tests showed the drive was working fine, and fsck would show everything was either clean, or:
    Clearing orphaned inode ## (uid=89, gid=89, mode=0100600, size=###
    Free blocks count wrong (###, counted=###)
    Which I would opt to fix.  Nothing serious, though.
    I could safely boot back into Arch and use the system fine until the system decides to freeze/crash and do the above all over again.
    The sure way of recreating this for me is to run a cron job on a local site I'm developing. After a brief screen freeze (mouse still movable but everything is otherwise unresponsive) I'll run systemctl status mysqld.service and notice that mysqld went down.
    It seems that it's at this point my file systems are mounted as read only, as trying to do virtually anything results in:
    unable to open /var/db/sudo/...: Read-only file system
    After some time, X/Gnome crashes and I get sent back to shell with
    ERROR: file_stream_metrics.cc(37)
    RecordFileError() err = 30 source = 1 record = 0
    Server terminated successfully (0)
    Closing log file.or_delegate.h(30)] sqlite erro1, errno 0: SQL logic error or missing database[1157:1179
    rm: cannot remove '/tmp/serverauth.teuroEBhtl': Read-only file system
    Before all this happened, I was using Arch just fine for a few weeks.  I wiped the drives and started anew, and this still happens with just the minimal number of packages installed.
    I've searched for solutions to each individual problem, but come across a hack that doesn't solve anything (like turning off storing logs for journal), or the solution doesn't apply to my case.
    At this point, I'm so overwhelmed I'm not even sure where exactly to pick up figuring this issue out.
    Thanks in advance for any help
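    Not an answer, but a hedged place to look next: when the kernel flips an ext4 volume to read-only it logs the reason, so capturing that before the next freeze may tell you whether it is the SSD, the dm/LVM layer, or the filesystem (smartctl comes from the smartmontools package):
        dmesg | grep -iE 'remount|read-only|ext4.*error|i/o error'
        smartctl -a /dev/sda    # check reallocated/pending sector counts and the device error log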

    Did this occur when you booted from the live/install media?
    What is your current set up? That is, partitions, filesystems etc. I take it you have not yet reinstalled X but are in the default CLI following installation?
    If turning off log storage didn't help, reenable it so that you may at least stand a chance of finding something useful.
    What services, if any, are you running? What non-default daemons etc.?
    Does it happen if you keep the machine off line?
    Have you done pacman -Syu since installation and dealt with any *.pacnew files?
    Last edited by cfr (2012-12-26 22:17:57)

  • Lucreate - „Cannot make file systems for boot environment“

    Hello!
    I'm trying to use Live Upgrade to upgrade one of "my" SPARC servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in Infodoc 72099 (http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1) and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 SPARC DVD ISO. I then did:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207  -m /:/dev/md/dsk/d200:ufs
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <d100> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Searching /dev for possible boot environment filesystem devices
    Updating system configuration files.
    The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <S10U6_20081207>.
    Source boot environment is <d100>.
    Creating boot environment <S10U6_20081207>.
    Creating file systems on boot environment <S10U6_20081207>.
    Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
    Mounting file systems for boot environment <S10U6_20081207>.
    Calculating required sizes of file systems for boot environment <S10U6_20081207>.
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    So the problem is:
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    Well - why's that?
    I can do a "newfs /dev/md/dsk/d200" just fine.
    When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
    /bin/nawk: can't open file /etc/lu/ICF.2
    Quellcodezeilennummer 1
    Boot environment <S10U6_20081207> deleted.
    I get this error consistently (I have run the lucreate many times now).
    lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
    Would anyone maybe have an idea about what's broken there?
    --($ ~)-- LC_ALL=C metastat
    d200: Mirror
        Submirror 0: d20
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d20: Submirror of d200
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s0          0     No            Okay   Yes
    d100: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d10: Submirror of d100
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d201: Mirror
        Submirror 0: d21
          State: Okay        
        Submirror 1: d11
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 2097414 blocks (1.0 GB)
    d21: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s1          0     No            Okay   Yes
    d11: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s1          0     No            Okay   Yes
    hsp001: is empty
    Device Relocation Information:
    Device   Reloc  Device ID
    c1t1d0   Yes    id1,sd@THITACHI_DK32EJ-36NC_____434N5641
    c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
    --($ ~)-- /bin/df -k | grep md
    /dev/md/dsk/d100     15490539 10772770 4562864    71%    /
    Thanks,
    Michael

    Hello.
    (sys01)root# devfsadm -Cv
    (sys01)root#
    To be on the safe side, I even rebooted after having run devfsadm.
    --($ ~)-- sudo env LC_ALL=C LANG=C lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    d100                       yes      yes    yes       no     -        
    --($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
                   boot environment name: d100
                   This boot environment is currently active.
                   This boot environment will be active on next system boot.
    Filesystem              fstype    device size Mounted on          Mount Options
    /dev/md/dsk/d100        ufs       16106660352 /                   logging
    /dev/md/dsk/d201        swap       1073875968 -                   -
    In the rebooted system, I re-did the original lucreate:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
    Copying.
    Excellent! It now works!
    Thanks a lot,
    Michael

  • MacBook Pro restarts twice a day - screen goes black - panic file listed below

    My Macbook Pro is really acting up. I can't even delete my trash or it will crash.
    Here is from the panic file any help would be appreciated:
    Anonymous UUID:       D02C9AD5-8587-A528-F1C8-D5FD08E29527
    Mon Mar 24 17:18:16 2014
    panic(cpu 0 caller 0xffffff801aadbe2e): Kernel trap at 0xffffff7f9cf8e6b9, type 14=page fault, registers:
    CR0: 0x0000000080010033, CR2: 0x0000000000000040, CR3: 0x0000000134c2502c, CR4: 0x00000000000206e0
    RAX: 0xffffff80d337d12c, RBX: 0x0000000000000000, RCX: 0x0000000000000000, RDX: 0xffffff80434a2a04
    RSP: 0xffffff8096efb7e0, RBP: 0xffffff8096efb830, RSI: 0xffffff80d337d11c, RDI: 0xffffff803f342f04
    R8:  0xffffff803f4b6f64, R9:  0xffffff803f4b6f64, R10: 0xffffff7f9baaf0f0, R11: 0x0000000000000000
    R12: 0xffffff803f342704, R13: 0xffffff803f342f04, R14: 0x0000000000000000, R15: 0xffffff80434a2a04
    RFL: 0x0000000000010297, RIP: 0xffffff7f9cf8e6b9, CS:  0x0000000000000008, SS:  0x0000000000000010
    Fault CR2: 0x0000000000000040, Error code: 0x0000000000000000, Fault CPU: 0x0
    Backtrace (CPU 0), Frame : Return Address
    0xffffff8096efb470 : 0xffffff801aa22fa9
    0xffffff8096efb4f0 : 0xffffff801aadbe2e
    0xffffff8096efb6c0 : 0xffffff801aaf3326
    0xffffff8096efb6e0 : 0xffffff7f9cf8e6b9
    0xffffff8096efb830 : 0xffffff7f9cf8ae4e
    0xffffff8096efb880 : 0xffffff7f9cf7f2e3
    0xffffff8096efbb10 : 0xffffff7f9cf83265
    0xffffff8096efbb50 : 0xffffff801abff397
    0xffffff8096efbbc0 : 0xffffff801abec531
    0xffffff8096efbf50 : 0xffffff801ae3e363
    0xffffff8096efbfb0 : 0xffffff801aaf3b26
          Kernel Extensions in backtrace:
             com.paragon-software.filesystems.ntfs(82.0)[980A61F9-49E4-399F-92B2-EDBBCC061FDC]@0xffffff7f9cf7c000->0xffffff7f9cfb1fff
    BSD process name corresponding to current thread: Locum
    Mac OS version:
    13C64
    Kernel version:
    Darwin Kernel Version 13.1.0: Thu Jan 16 19:40:37 PST 2014; root:xnu-2422.90.20~2/RELEASE_X86_64
    Kernel UUID: 9FEA8EDC-B629-3ED2-A1A3-6521A1885953
    Kernel slide:     0x000000001a800000
    Kernel text base: 0xffffff801aa00000
    System model name: MacBookPro6,2 (Mac-F22586C8)
    System uptime in nanoseconds: 2542431549909
    last loaded kext at 1065784439422: com.paragon-software.filesystems.ntfs          82 (addr 0xffffff7f9cf7c000, size 282624)
    last unloaded kext at 1141170873327: com.apple.driver.AppleUSBCDC          4.2.1b5 (addr 0xffffff7f9cf78000, size 16384)
    loaded kexts:
    com.paragon-software.filesystems.ntfs          82
    org.virtualbox.kext.VBoxUSB          4.2.4
    org.virtualbox.kext.VBoxDrv          4.2.4
    com.apple.filesystems.smbfs          2.0.1
    com.apple.driver.AudioAUUC          1.60
    com.apple.driver.AppleHWSensor          1.9.5d0
    com.apple.driver.AGPM          100.14.15
    com.apple.filesystems.autofs          3.0
    com.apple.iokit.IOBluetoothSerialManager          4.2.3f10
    com.apple.driver.AppleMikeyHIDDriver          124
    com.apple.driver.AppleUpstreamUserClient          3.5.13
    com.apple.GeForceTesla          8.2.4
    com.apple.driver.AppleHDA          2.6.0f1
    com.apple.driver.AppleMikeyDriver          2.6.0f1
    com.apple.driver.AppleIntelHDGraphics          8.2.4
    com.apple.iokit.IOUserEthernet          1.0.0d1
    com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport          4.2.3f10
    com.apple.Dont_Steal_Mac_OS_X          7.0.0
    com.apple.driver.AppleHWAccess          1
    com.apple.driver.AppleSMCPDRC          1.0.0
    com.apple.driver.AppleIntelHDGraphicsFB          8.2.4
    com.apple.driver.AppleSMCLMU          2.0.4d1
    com.apple.driver.AppleMuxControl          3.4.35
    com.apple.driver.ACPI_SMC_PlatformPlugin          1.0.0
    com.apple.driver.AppleLPC          1.7.0
    com.apple.driver.AppleMCCSControl          1.1.12
    com.apple.driver.SMCMotionSensor          3.0.4d1
    com.apple.driver.AppleUSBTCButtons          240.2
    com.apple.AppleFSCompression.AppleFSCompressionTypeDataless          1.0.0d1
    com.apple.AppleFSCompression.AppleFSCompressionTypeZlib          1.0.0d1
    com.apple.BootCache          35
    com.apple.driver.AppleUSBTCKeyboard          240.2
    com.apple.driver.AppleIRController          325.7
    com.apple.driver.AppleUSBCardReader          3.4.1
    com.apple.iokit.SCSITaskUserClient          3.6.6
    com.apple.driver.XsanFilter          404
    com.apple.iokit.IOAHCIBlockStorage          2.5.1
    com.apple.driver.AppleUSBHub          666.4.0
    com.apple.driver.AirPort.Brcm4331          700.20.22
    com.apple.iokit.AppleBCM5701Ethernet          3.8.1b2
    com.apple.driver.AppleFWOHCI          4.9.9
    com.apple.driver.AppleAHCIPort          3.0.0
    com.apple.driver.AppleUSBEHCI          660.4.0
    com.apple.driver.AppleSmartBatteryManager          161.0.0
    com.apple.driver.AppleACPIButtons          2.0
    com.apple.driver.AppleRTC          2.0
    com.apple.driver.AppleHPET          1.8
    com.apple.driver.AppleSMBIOS          2.1
    com.apple.driver.AppleACPIEC          2.0
    com.apple.driver.AppleAPIC          1.7
    com.apple.driver.AppleIntelCPUPowerManagementClient          216.0.0
    com.apple.nke.applicationfirewall          153
    com.apple.security.quarantine          3
    com.apple.driver.AppleIntelCPUPowerManagement          216.0.0
    com.apple.AppleGraphicsDeviceControl          3.4.35
    com.apple.kext.triggers          1.0
    com.apple.iokit.IOSerialFamily          10.0.7
    com.apple.driver.DspFuncLib          2.6.0f1
    com.apple.vecLib.kext          1.0.0
    com.apple.iokit.IOAudioFamily          1.9.5fc2
    com.apple.kext.OSvKernDSPLib          1.14
    com.apple.nvidia.classic.NVDANV50HalTesla          8.2.4
    com.apple.nvidia.classic.NVDAResmanTesla          8.2.4
    com.apple.iokit.IOBluetoothHostControllerUSBTransport          4.2.3f10
    com.apple.iokit.IOSurface          91
    com.apple.iokit.IOBluetoothFamily          4.2.3f10
    com.apple.driver.AppleHDAController          2.6.0f1
    com.apple.iokit.IOHDAFamily          2.6.0f1
    com.apple.iokit.IOFireWireIP          2.2.6
    com.apple.driver.AppleSMBusPCI          1.0.12d1
    com.apple.driver.AppleBacklightExpert          1.0.4
    com.apple.iokit.IONDRVSupport          2.4.1
    com.apple.driver.AppleGraphicsControl          3.4.35
    com.apple.driver.IOPlatformPluginLegacy          1.0.0
    com.apple.driver.IOPlatformPluginFamily          5.7.0d10
    com.apple.driver.AppleSMBusController          1.0.11d1
    com.apple.iokit.IOGraphicsFamily          2.4.1
    com.apple.driver.AppleSMC          3.1.8
    com.apple.driver.AppleUSBMultitouch          240.9
    com.apple.iokit.IOUSBHIDDriver          660.4.0
    com.apple.iokit.IOSCSIBlockCommandsDevice          3.6.6
    com.apple.iokit.IOUSBMassStorageClass          3.6.0
    com.apple.driver.AppleUSBMergeNub          650.4.0
    com.apple.driver.AppleUSBComposite          656.4.1
    com.apple.iokit.IOSCSIMultimediaCommandsDevice          3.6.6
    com.apple.iokit.IOBDStorageFamily          1.7
    com.apple.iokit.IODVDStorageFamily          1.7.1
    com.apple.iokit.IOCDStorageFamily          1.7.1
    com.apple.iokit.IOAHCISerialATAPI          2.6.1
    com.apple.iokit.IOSCSIArchitectureModelFamily          3.6.6
    com.apple.iokit.IOUSBUserClient          660.4.2
    com.apple.iokit.IO80211Family          630.35
    com.apple.iokit.IOEthernetAVBController          1.0.3b4
    com.apple.driver.mDNSOffloadUserClient          1.0.1b5
    com.apple.iokit.IONetworkingFamily          3.2
    com.apple.iokit.IOFireWireFamily          4.5.5
    com.apple.iokit.IOAHCIFamily          2.6.5
    com.apple.iokit.IOUSBFamily          675.4.0
    com.apple.driver.AppleEFINVRAM          2.0
    com.apple.driver.AppleEFIRuntime          2.0
    com.apple.iokit.IOHIDFamily          2.0.0
    com.apple.iokit.IOSMBusFamily          1.1
    com.apple.security.sandbox          278.11
    com.apple.kext.AppleMatch          1.0.0d1
    com.apple.security.TMSafetyNet          7
    com.apple.driver.AppleKeyStore          2
    com.apple.driver.DiskImages          371.1
    com.apple.iokit.IOStorageFamily          1.9
    com.apple.iokit.IOReportFamily          23
    com.apple.driver.AppleFDEKeyStore          28.30
    com.apple.driver.AppleACPIPlatform          2.0
    com.apple.iokit.IOPCIFamily          2.9
    com.apple.iokit.IOACPIFamily          1.4
    com.apple.kec.pthread          1
    com.apple.kec.corecrypto          1.0

    Sam,
    OS X can read and write FAT formatted drives, which is the typical format of a USB thumb drive. NTFS is the native format of Windows, which is what the Paragon software was for. OS X can read NTFS files, but not write to them. I've used the Paragon stuff in the past. My guess is you had an old version that didn't play well with your version of OS X.
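    If it helps anyone else reading this, a hedged way to confirm and remove the third-party driver named in the panic (the bundle ID is taken from the backtrace above):
        kextstat | grep -v com.apple                                  # list non-Apple kernel extensions
        sudo kextunload -b com.paragon-software.filesystems.ntfs     # unload it for the current session
        # for a permanent fix, run Paragon's uninstaller (or update to a version that supports this OS X release) and reboot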

  • SOLVED: kernel loads, but doesn't have a root file system

    Hi,
    The system is an Asus X202E. It does UEFI and has a GPT partition system. I've gotten through that part. And it is clear to me that the kernel loads.
    It's the next step that's giving me grief. I've tried this with two bootloaders: gummiboot and rEFInd.
    With gummiboot, the kernel panics because it can't mount the root file system. With rEFInd, it gets to the initial ramdisk and then drops me to a shell, apparently because the root file system is set to null, and it obviously can't mount that as "real root".
    Here is what I posted on the Arch mailing list, documenting that I have indeed specified the correct root (I'm copying this from the email, eliding the unfortunate line wraps):
    bridge-live# cat /boot/loader/entries/arch.conf
    Title Arch Linux
    linux /vmlinuz-linux
    initrc /initramfs-linux.img
    options root=PARTUUID=d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    bridge-live# ls -l /dev/disk/by-partuuid
    total 0
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 0ab4d458-cd09-4bfb-a447-5f5fa66332e2 -> ../../sda6
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 3e12caeb-1424-451c-898e-a4ff05eab48d -> ../../sda7
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 432a977b-f26d-4e75-b9ee-bf610ee6f4a4 -> ../../sda3
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 95a1d2c2-393a-4150-bbd2-d8e7179e7f8a -> ../../sda2
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 a4b797d9-0868-4bd1-a92d-f244639039f5 -> ../../sda4
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 ed04135b-bd79-4c7c-b3b5-b0f9c2fe6826 -> ../../sda1
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 f64f82a7-8f2b-4748-88b1-7b0c61e71c70 -> ../../sda5
    The root partition is supposed to be /dev/sda8, that is:
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
    So the correct PARTUUID followed by the one I have specified in
    arch.conf is:
    d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    I'm guessing that this is really the same problem with both gummiboot and with rEFInd, but don't really know. It's clear to me that the initrd is not being correctly constructed. So I removed /etc/mkinitcpio.conf and did, as per the Arch wiki,
    pacman -Syyu mkinitcpio linux udev
    No joy.
    I don't even know which way to go at this point. If I even knew how to tell it where the real disk is in the initial ram disk shell, that would help. Better of course, would be actually solving the problem.
    Thanks!
    Last edited by n4rky (2013-04-17 21:41:36)

    I have made extremely limited progress on this issue.
    My previous attempt to specify the root partition in mkinitcpio.conf was insufficient. Furthermore, this is no place--despite the documentation--for the orthodoxy about using UUIDs rather than the straight /dev/sdx. In my case:
    root=/dev/sda8
    and run
    mkinitcpio -p linux
    It still drops me into the shell at boot. I can do
    mount /dev/sda8 /new_root/
    and exit the shell. It still won't believe it has the root device and drops me back in. I just exit.
    At this point, for a very brief moment, things look promising. It appears to be starting normally. Then, gdm.service, NetworkManager.service, and dbus.service all fail to start. There may be others but the screen goes by too quickly. At this point, it hangs trying to initialize the pacman keyring and all I can do is CTRL-ALT-DEL.
    It occurred to me that this might extend to the rEFInd configuration and so I modified it to also use /dev/sda8 rather than the UUID, but this made no difference. Trying to boot via gummiboot still yields the previously specified kernel panic.
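    Two hedged things worth checking from the rescue shell, or via arch-chroot from the install media: first, the loader entry quoted above reads "initrc" where gummiboot expects the key "initrd", which by itself would leave the kernel booting without an initramfs; second, that the hooks providing block devices are present before regenerating the image:
        grep '^HOOKS' /etc/mkinitcpio.conf
        # typically something like: HOOKS="base udev autodetect modconf block filesystems keyboard fsck"
        mkinitcpio -p linux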

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company. And we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...

  • Zerofree: Shrinking ARCH guest VMDK--'remount the root file-system'?

    Hi!
    [using ZEROFREE]
    Getting great results with an extra Arch install running as a VMDK in Workstation.
    REALLY need tips on shrinking the VMDK. Obviously I have deleted unneeded files,
    and now rather urgently need to learn what's eluding me so far.
    1) zerofree is installed IN the virtual machine (VMDK); Workstation is running on Windows 8.
    2) Here's the instructions for zerofree:
           filesystem has to be unmounted or mounted  read-only  for  zerofree  to
           work.  It  will exit with an error message if the filesystem is mounted
           writable.
           To remount the  root  file-system  readonly,  you  can  first
           switch to single user runlevel (telinit 1) then use mount -o remount,ro
           filesystem.
    As it is a VMDK and it's running, would the only/best option be to "remount the root file-system readonly"?
    OR, could I add the VMDK to another running Arch system that I do have and NOT mount the virtual machine, thereby
    allowing zerofree to run even better on that?
    Are both methods JUST as effective at shrinking? My guess would be that remounting the root file-system read-only
    would NOT be as effective at shrinking.
    I could really use a brief walk-through on this, as all attempts have failed so far.
    I boot the Arch virtual machine and then do what, may I ask?
    Last edited by tweed (2012-06-05 07:43:41)
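    A hedged walk-through of the in-guest part, assuming the root partition inside the VMDK is /dev/sda1 (adjust to your layout):
        telinit 1                 # drop to single-user so nothing is writing to the root filesystem
        mount -o remount,ro /     # zerofree refuses to run on a writable filesystem
        zerofree -v /dev/sda1     # overwrite the free blocks with zeros
        # then shut the guest down and shrink from the host, e.g. with VMware's vmware-vdiskmanager -k guest.vmdk
    Attaching the VMDK to another running Arch system and running zerofree there without mounting it should work just as well; either way the filesystem only has to be unmounted or read-only, so neither method should shrink better than the other.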


  • SAPDB in status STOPPED INCORRECTLY due to file system failure

    Hi community,
    we have a problem with our SAPDB server.
    The file system on which the whole database is installed disappeared from the list of mounted filesystems because of hardware problems, and the database instances crashed. Now the problem should be technically solved: the filesystem is mounted again and the files should not be corrupted.
    The first problem was starting the x server. If I tried to start it, it replied with the following error:
    en950_GetProgramExecPath failed:
    OS_ERROR  0: No system errortext for ERRNO 0
    RTE_ERROR 1: Open Registry:No such file or directoryIndepPrograms
    I resolved the problem by looking at the files in the directory /var/spool/sql/ini and changing the extensions of two files:
    SAP_DBTech.ini from Registry_dcom.ini.cnt01
    SAP_DBTech.ini from SAP_DBTech.ini.cnt01
    Now the x server starts correctly, but the state of the 2 SAPDB instances appears as STOPPED INCORRECTLY
    My questions:
    - Why did the configuration files have that (wrong) extension?
    - What is the correct procedure to try to start the instances in this case?
    Regards, Valerio

    > The file system on which the whole database is installed disappeared form the list of filesystem mounted because of hardware problems,  the database instances crashed. Now the problem should be technically solved, the filesystem is mounted again and files should be not corrupted
    SHOULD is the keyword of the last sentence!
    > The first problem was to start the x server, If I tried to start it replied with the following error:
    >
    >
    en950_GetProgramExecPath failed:
    > OS_ERROR  0: No system errortext for ERRNO 0
    > RTE_ERROR 1: Open Registry:No such file or directoryIndepPrograms
    >
    > I resolved the problem looking at the file in directory /var/spool/sql/ini and changing the extensions of two files:
    >
    > SAP_DBTech.ini from Registry_dcom.ini.cnt01
    > SAP_DBTech.ini from SAP_DBTech.ini.cnt01
    Hmm... one thing is for sure: the MaxDB software does not rename these files!
    > Now the x server starts correctly, but the state of the 2 SAPDB instances appears as STOPPED INCORRECTLY
    Did you have a look into the KNLDIAG files?
    > My questions:
    > - Why the configuration files had that (wrong) extension ?
    No idea? Storage/Filesystem issue?
    Bad user?
    > - What is the correct procedure to try to start the instances in this case?
    Depending on how much is broken here... reinstall the software from scratch and either re-register the instances or perform a restore and recovery of them.
    regards,
    Lars

  • Force-installing Xsan 1.4.2 File System Update on Leopard

    If you didn't read the README and upgraded to Leopard prior to installing the Xsan 1.4.2 update, you're in a pickle. Xsan prior to 1.4.2 won't work on Leopard, and once you've upgraded to Leopard, you can't install the 1.4.2 update. Catch-22. Fixable, though.
    NOTE: This is a hack. It may seriously screw up your system. Worst case, though, you'll have to reinstall Tiger anyway, so what the hey.
    First, update all the other Xsan servers & clients to 1.4.2 and restart everything. Follow the upgrade instructions at:
    http://docs.info.apple.com/article.html?artnum=305035
    Now, on the Leopard-upgraded workstation(s) where Xsan no longer works, do the following:
    1. Download the Xsan 1.4.2 File System Update:
    http://www.apple.com/support/downloads/xsan142filesystemupdate.html
    And, if you need the admin update:
    http://www.apple.com/support/downloads/xsan142adminupdate.html
    2. Mount the Xsan File System Update 1.4.2 disk image.
    3. Right-click the XsanFilesystemUpdate.mpkg file and choose "Show Package Contents"
    4. Browse to Contents:Installers, right-click the XsanFilesystem.pkg file, and choose "Show Package Contents"
    5. In Contents, drag the Archive.pax.gz file to your desktop.
    6. Open Terminal.app and type "sudo su -" at the prompt. Enter your password to authenticate the sudo session. You are now logged in as root, so don't do anything stupid, because you can completely hose your system. You have been warned.
    7. Type the following commands, exactly as written. Note the spaces etc. (copy the text into a monospace font if you're not sure):
    # cd /
    # mv /Users/<your username>/Desktop/Archive.pax.gz .
    # gunzip Archive.pax.gz
    # pax -rp e -f Archive.pax
    # exit
    8. Reboot.
    9. Install the Xsan File System Update 1.4.2 from the disk image, as well as the admin update if you need it. Both should install successfully.
    *What did you just do?* When you run the Xsan updater, it checks for various software versions, namely Mac OS X and Xsan. If it finds the wrong versions, it won't run the update. The above procedure "force-installs" the basic files from the 1.4.2 update--which are contained in the Archive.pax.gz archive--so that when you run the updater in Step 9, it finds an acceptable version of Xsan and proceeds with the installation.

    Let me clarify the second line of the terminal commands in Step 7:
    mv /Users/(your username)/Desktop/Archive.pax.gz .
    Replace (your username) with your account's short name; e.g. if your account is "bubba" then it would read:
    mv /Users/bubba/Desktop/Archive.pax.gz .
    Or, "mv(space)/Users/bubba/Desktop/Archive.pax.gz(space)."
