UFS file system mount options

I'm doing some performance tuning on a database server. In mounting a particular UFS file system, I need to enable the "forcedirectio" option. However, the "logging" option is already specified. Is there any problem mounting this file system with BOTH "logging" and "forcedirectio" at the same time? I can do it and the system boots just fine but I'm not sure if it's a good idea or not. Anybody know?

Direct IO bypasses the page cache. Hence the name "direct".
Thus, for large-block streaming operations that do not access the same data more than once, direct IO will improve performance while reducing memory usage - often significantly.
IO operations that access data that could otherwise be cached can go MUCH slower with direct IO, especially small ones.
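For what it's worth, mount_ufs accepts both options together: "logging" journals metadata updates, while "forcedirectio" only changes how file data is transferred, so they operate at different layers. A sketch of a vfstab entry for a database slice (the device names here are placeholders):
/dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /u01  ufs  2  yes  logging,forcedirectio
The same combination should also be accepted on a live file system with something like:
# mount -F ufs -o remount,logging,forcedirectio /dev/dsk/c1t0d0s6 /u01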

Similar Messages

  • Not UFS file system

    After deleting all the partitions on my Intel Pentium III computer,
    I booted from the first (1 of 2) CD to start the Solaris 8 installation. But
    I got the following message:
    not UFS file system
    Then the computer halted.
    Please help me overcome this problem.
    Thank you!
    Michael

    Ever figure this out? I just downloaded the CDs and get the same thing; I even created a 2 GB DOS partition, with the same results!

  • More than 1 million files on multi-terabyte UFS file systems

    How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.

    Thanks. You are right on. According to Sun official channels:
    Paula Van Wie wrote:
    Hi Ron,
    This is what I've found out.
    No, there is no way around the limitation. I would suggest an alternate
    file system if possible; ZFS would give them the most space
    available, since fixed inode tables are no longer used.
    As the customer noted, if the inode limit were increased significantly
    and an fsck were required, there is the possibility that the fsck could
    take days or weeks to complete. So, in order to avoid angry customers
    having to wait a day or two for fsck to finish, the limit was imposed.
    And so far I've heard that there should not be corruption using ZFS with
    RAID.
    Paula
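    For context, UFS inode density is fixed at newfs time via the nbpi (bytes per inode) parameter and cannot be changed afterwards. On a sub-terabyte file system you can raise the file count by lowering nbpi, e.g. (the device name is a placeholder):
    # newfs -i 8192 /dev/rdsk/c2t0d0s0
    # mount /dev/dsk/c2t0d0s0 /mnt
    # df -o i /mnt
    On a multiterabyte UFS file system, however, newfs enforces a much larger minimum nbpi (as I understand it, about 1 Mbyte per inode), which is the limitation referred to above; lowering -i there has no effect.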

  • ASM vs ext3 File system(mount point)

    Please suggest which one is better for small databases:
    ASM or an ext3 file system (mount point)?
    Any Metalink note?

    ASM is better if you do not want to play with I/O tuning (if you tune the ext3 file system, the two would be about the same from a performance point of view),
    but it is more complicated to administer database files in ASM than in an ordinary file system.
    Oracle recommends using ASM for database storage.
    I would think that if you have a development database and need a lot of cloning and moving of datafiles, it's better to use an ordinary file system,
    so you can use OS copy commands; it's not so complicated.
    If you need striping, mirroring, or snapshots from ext3, you can use LVM on Unix/Linux.
    I am not sure, but I think striping and mirroring are better in ASM than in LVM, because ASM does them better for database I/O.
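    To make the LVM suggestion concrete, here is a rough sketch with LVM2 (the disk, volume group, and volume names are made up; sizes are illustrative):
    # pvcreate /dev/sdb /dev/sdc
    # vgcreate vg_db /dev/sdb /dev/sdc
    # lvcreate -m 1 -L 20G -n lv_data vg_db        a mirrored volume for datafiles
    # lvcreate -i 2 -I 64 -L 20G -n lv_redo vg_db  a volume striped across both disks
    # mkfs -t ext3 /dev/vg_db/lv_data
    Snapshots work the same way with lvcreate -s against an existing volume, provided free space remains in the volume group.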

  • New zone and inherited file system mount point error

    Hi - would anyone be able to help with the following error please. I've tried to create a new zone that has the following inherited file system:
    inherit-pkg-dir:
    dir: /usr/local/var/lib/sudo
    But when I try to install it fails with:
    root@tdukunxtest03:~ 532$ zoneadm -z tdukwbprepz01 install
    A ZFS file system has been created for this zone.
    Preparing to install zone <tdukwbprepz01>.
    ERROR: cannot create zone <tdukwbprepz01> inherited file system mount point </export/zones/tdukwbprepz01/root/usr/local/var/lib>
    ERROR: cannot setup zone <tdukwbprepz01> inherited and configured file systems
    ERROR: cannot setup zone <tdukwbprepz01> file systems inherited and configured from the global zone
    ERROR: cannot create zone boot environment <tdukwbprepz01>
    I added this because, unknown to me, when I installed sudo from Sunfreeware in the global zone, it required access to /usr/local/var/lib/sudo - sudo itself installs in /usr/local. And when I tried to run any sudo command in the new zone, it gave this:
    sudo ls
    Password:
    sudo: Can't open /usr/local/var/lib/sudo/tdgrunj/8: Read-only file system
    Thanks - Julian.

    Think I've just found the answer to my problem: I'd already inherited /usr, and as sudo from Sunfreeware installs in /usr/local, I guess this is never going to work. I can only think to try the sudo version from the Solaris Companion DVD, or whatever it's called.
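    One more idea that might work (an untested sketch; the zone name and paths are from the post above, and the lofs approach is my assumption): drop the extra inherit-pkg-dir and instead loop-back mount a writable directory from the global zone over sudo's state directory:
    # zonecfg -z tdukwbprepz01
    zonecfg:tdukwbprepz01> remove inherit-pkg-dir dir=/usr/local/var/lib/sudo
    zonecfg:tdukwbprepz01> add fs
    zonecfg:tdukwbprepz01:fs> set dir=/usr/local/var/lib/sudo
    zonecfg:tdukwbprepz01:fs> set special=/export/zones/tdukwbprepz01/sudo-var
    zonecfg:tdukwbprepz01:fs> set type=lofs
    zonecfg:tdukwbprepz01:fs> end
    zonecfg:tdukwbprepz01> commit
    Whether a writable lofs mount can sit underneath an inherited read-only /usr is the part I am not sure about, so treat this as a starting point only.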

  • Mounting a UFS file system using  NFS on Solaris 10

    Quick question: If I have a Solaris 10 server running ZFS can I just mount and read a UFS partition without any issues?

    NFS is filesystem agnostic.
    You're not mounting a UFS filesystem; you're mounting an NFS filesystem.
    So the answer is yes.
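    As an illustration (the hostnames and paths are invented): the server with the UFS file system shares it, and the ZFS-rooted Solaris 10 client mounts it over NFS:
    # share -F nfs -o ro /export/ufsdata           on the UFS server
    # mount -F nfs server1:/export/ufsdata /mnt    on the Solaris 10 client
    The client never touches UFS on-disk structures; it only speaks the NFS protocol.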

  • Question about a file system storage option for RAC on 10g

    Hello everyone,
    I am in the beginning of connecting our storage and switches, and building RAC on them but there is a little argument between our specialists.
    We have two database servers (10g with OEL 5) to be clustered and two disk groups visible to each of those nodes. So the question is: can we choose only one disk group as the shared storage, leaving the other one as a redundant copy, during the database creation window while installing the database? Because some of us argue that the Oracle database has a built-in capability to decide at what level of RAID we store our data.
    Thank you for your help.

    "some of us argue that oracle database has a built-in capability to decide on what level of RAID we store our data". 
    That statement is not true.  Oracle has optional multiplexing for control files, redo logs, and archive logs but this is not enabled by default and Oracle will not automatically enable it.  If you want redundancy of tables, indexes, temp, and undo you must provide this because Oracle does not offer it standard or as an option.  You can achieve redundancy with RAID at the array level, or host based mirroring (like ASM redundancy groups or Linux mdadm).  This can also depend on your file system because, I think, OCFS2 does not support host based mirroring (so you cannot use mdadm or lvm to mirror the storage if you are using OCFS2).
    Redundancy is not required, but it is recommended if you are using hard disks because they are prone to failures.  You can configure RAID 10 across all disks in the array and present this as one big LUN to the database server.  If you have two storage arrays and you want to mirror the data across the two arrays, then present all of the devices as JBOD and use Linux mdadm to create your RAID group.
    RAC requires shared storage.  Maybe you have a NAS or SAN device, and you will present LUNs to the Oracle database servers.  That is no problem.  The problem is making those LUNs usable by Oracle RAC.  When I used Oracle 10g RAC, I used the Linux raw device facility to manage these LUNs and make them ready for Oracle RAC.  However, raw has been desupported.  Today I would use either ASM or OCFS2.  This has nothing to do with redundancy, this is just because you are using RAC.
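    To make the mdadm part concrete (device names invented; a sketch, not a tested recipe): with one LUN presented from each storage array, a host-based RAID 1 mirror across the two arrays could be built like this:
    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # mdadm --detail /dev/md0
    The resulting /dev/md0 device is what you would then hand to ASM (not OCFS2, per the caveat above about host-based mirroring).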

  • ZFS file system mount in solaris 11

    Create a ZFS file system for the package repository in the root pool:
    # zfs create rpool/export/repoSolaris11
    # zfs list
    The atime property controls whether the access time for files is updated when the files are read.
    Turning this property off avoids producing write traffic when reading files.
    # zfs set atime=off rpool/export/repoSolaris11
    Create the required pkg repository infrastructure so that you can copy the repository
    # pkgrepo create /export/repoSolaris11
    # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > \
    sol-11-1111-repo-full.iso
    # mount -F hsfs /export/repoSolaris11/sol-11-1111-repo-full.iso /mnt
    # ls /mnt
    # df -k /mnt
    Using the tar command as shown in the following example can be a faster way to move the
    repository from the mounted file system to the repository ZFS file system.
    # cd /mnt/repo; tar cf - . | (cd /export/repoSolaris11; tar xfp -)
    # cd /export/repoSolaris11
    # ls /export/repoSolaris11
       pkg5.repository README
       publisher sol-11-1111-repo-full.iso
    # df -k /export/repoSolaris11
    # umount /mnt
    # pkgrepo -s /export/repoSolaris11 refresh
    =============================================
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    ==============================================
    I am trying to reconfigure the package repository with the above steps. When I reached the step below:
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    it created the mount point but did not mount it, giving the error message:
    cannot mount, directory not empty
    When I restarted the box, it threw the service administration screen with the error:
    not able to mount all points
    Please advise, and thanks in advance.

    Hi.
    Don't confuse the contents of the directory that serves as the mountpoint with what you see after the file system has been mounted. The mountpoint directory itself must be empty; once the ZFS file system is mounted on top of it, what you see is the content of that ZFS file system.
    As a check, you can unmount any other ZFS file system and see that its mountpoint directory is empty.
    Regards.
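    In other words, the error means /export/repoSolaris11 already contained files (the concatenated ISO and the copied repository) when ZFS tried to mount over it. A possible cleanup sequence (a sketch; the .old name is invented):
    # mv /export/repoSolaris11 /export/repoSolaris11.old
    # zfs mount rpool/repoSolaris11
    # df -k /export/repoSolaris11
    Once the dataset mounts cleanly, move the data from /export/repoSolaris11.old back into it.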

  • File system mount prob

    After starting my server I found that some of the partitions (where Oracle data files are located) were not mounted. What should I do to mount that file system properly?

    Hi,
    Are you using Linux? You need to add entries for these file systems to /etc/fstab. Suppose you have a hard disk partition /dev/sdc1 which is mounted on the /u01 file system; add the following line at the bottom of your /etc/fstab file (assuming you are using the ext3 file system):
    /dev/sdc1    /u01    ext3    defaults    1 2
    This will auto-mount your file system on every startup.
    Salman
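    To apply the new entry without a reboot and confirm it is correct (a quick check, not part of the original reply):
    # mount -a
    # df -h /u01
    mount -a mounts everything listed in /etc/fstab that is not already mounted, so a typo in the new line shows up immediately.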

  • How to Stop Finder WEBDAVFS from requesting .hidden, ._Directory, ._FileName files after a Remote File System Mount has happened?

    Hi,
    After I have mounted a remote directory in the Finder, the Finder requests all file types, including .hidden, ._Directory, and ._FileName files, and all these requests are creating havoc for the Finder's performance.
    Case 1: In one directory I have 500 files and directories. When the Finder tries to fetch the directory contents, it sends requests even for the .hidden, ._Directory, and ._FileName files, so the request count grows dramatically, causing the Finder to break down. It takes 10 minutes to load the directory. The same directory request, when issued by Transmit (by Panic), loads within 30 seconds; Transmit does not send requests for ._* files/directories.
    I am rejecting every hidden-file request, but they keep coming in the thousands.
    Case 2: The Finder tries to refresh the file listing whenever the Finder window is brought to focus.
    Any help is appreciated. When I try to use find to locate such files for deletion, I see no listing!
    Used: ls -1aR /Volumes/InquiraWebDAV
    To demonstrate the problem, please check this example log:
    Response XML1:
    </D:multistatus>
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/.DS_Store
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/.DS_Store
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 3
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 3
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 2
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 2
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/
    inside doProfind() path=/CUSTOMER_RELATIONS_VIEW/untitled folder/
    req.getHeader('Depth')=1
    Request XMl:<?xml version="1.0" encoding="utf-8" standalone="no"?><D:propfind xmlns:D="DAV:">
    <D:prop>
    <D:getlastmodified/>
    <D:getcontentlength/>
    <D:creationdate/>
    <D:resourcetype/>
    </D:prop>
    </D:propfind>
    Element Node=#text |
    Element Node=D:prop | null
    Element Node=#text |
    {281950648=[AOS_VIEW, APPLECARE_ALLGEOS, CUSTOMER_RELATIONS_VIEW, EXECUTIVE_RELATIONS_VIEW]}281950648
    temp=/AOS_VIEW/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=false
    temp=/APPLECARE_ALLGEOS/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=false
    temp=/CUSTOMER_RELATIONS_VIEW/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=true
    href/InquiraWebDAV
    rewriteUrl(href)=/InquiraWebDAV/CUSTOMER_RELATIONS_VIEW/untitled%20folder/
    resourceName=untitled folder  type=0
    properties=java.util.Vector$1@2aa05bc3
    property=getlastmodified
    property=getcontentlength
    property=creationdate
    property=resourcetype
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder 2
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder 3
    Response XML1:
    <?xml version="1.0" encoding="utf-8" ?>
    <D:multistatus xmlns:D="DAV:"><D:response><D:href>/InquiraWebDAV/CUSTOMER_RELATIONS_VIEW/untitled%20folder/</D:href>
    <D:propstat><D:prop><D:creationdate>2012-03-06T22:56:52Z</D:creationdate>
    <D:resourcetype><D:collection/></D:resourcetype>
    </D:prop>
    <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
    <D:propstat><D:prop><D:getlastmodified/><D:getcontentlength/></D:prop>
    <D:status>HTTP/1.1 404 Not Found</D:status>
    </D:propstat>
    </D:response>

    Here is a related discussion, but no solution:
    https://discussions.apple.com/message/8216700#8216700
    Does anyone know if Apple Support has a solution for this problem, or how I can get expert help from them?
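    One commonly suggested mitigation, which only addresses the .DS_Store traffic and not the ._* AppleDouble requests, is to tell the client Macs not to write .DS_Store files on network volumes (log out and back in afterwards):
    $ defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true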

  • Can ZFS storage pools share a physical drive w/ the root (UFS) file system?

    I wonder if I'm missing something here, because I was under the impression that ZFS offered ultimate flexibility, until I encountered the following fine print 50 pages into the ZFS Administration Guide:
    "Before creating a storage pool, you must determine which devices will store your data. These devices must be disks of at least 128 Mbytes in size, and _they must not be in use by other parts of the operating system_. The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."
    I thought it was frustrating that ZFS couldn't be used as a boot disk, but the fact that I can't even use the rest of the space on the boot drive for ZFS is aggravating. Or am I missing something? The following text appears elsewhere in the guide, and suggests that I can use the 7th slice:
    "A storage device can be a whole disk (c0t0d0) or _an individual slice_ (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted."
    Currently, I've just installed Solaris 10 (6/11) on an Ultra 10. I removed the slice for /export/users (c0t0d0s7) from the default layout during the installation. So there's approx 6 GB in UFS space, and 1/2 GB in swap space. I want to make the 70GB of unused HDD space a ZFS pool.
    Suggestions? I read somewhere that the other slices must be unmounted before creating a pool. How do I unmount the root partition, then use the ZFS tools that reside in that unmounted space to create a pool?
    Edited by: MindFuq on Oct 20, 2007 8:12 PM

    It's not convenient for me to post that right now, because my Ultra 10 is offline (for some reason the DNS never got set up properly, and creating an /etc/resolv.conf file isn't enough to get it going).
    Anyway, you're correct; I can see that there is overlap in the cylinders.
    During installation, I removed slice 7 from the table. However, under the covers the installer created a 'backup' partition (slice 2), which used the rest of the space (~74.5 GB), so the installer didn't leave the space unused as I had expected. Strangely, the backup partition overlapped: it started at zero, as the swap partition did, and it ended ~3000 cylinders beyond the root partition. I trusted the installer to be correct about things, and simply figured it was acceptable for multiple partitions to share a cylinder. So I deleted slice 2 and created slice 7 using the same boundaries as slice 2.
    So next I'll have to remove the ZFS pool and shrink slice 7 so it goes from cylinder 258 to ~35425.
    [UPDATE] It worked. Thanks Alex! When I ran zpool create tank c0t0d0s7, there was no error.
    Edited by: MindFuq on Oct 22, 2007 8:15 PM
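    For anyone hitting the same thing: slice 2 is conventionally the "backup" slice that maps the entire disk, which is why it overlapped everything. The slice boundaries can be checked with prtvtoc before creating the pool (device name as used above):
    # prtvtoc /dev/rdsk/c0t0d0s2
    Compare the first and last sectors of each slice in the output; once slice 7 covers only cylinders outside root and swap, zpool create tank c0t0d0s7 succeeds without complaint.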

  • /globaldevices on different file system mounts

    IHAC (I have a customer) with SC 3.2 on a pair of V890s running Solaris 9 U5, with Veritas Foundation Suite 5 for the boot disks.
    I have noticed that the paths to /globaldevices are different:
    /dev/vx/dsk/bootdg/rootdg_16vol 487863 5037 434040 2% /global/.devices/node@1
    /dev/vx/dsk/bootdg/node@2 487863 5035 434042 2% /global/.devices/node@2
    Can I just go by the first name and rename the volume under /dev/vx/dsk/bootdg/ and /dev/vx/rdsk/bootdg/ from rootdg_16vol to node@1, or is there a different method?

    Yes, you can just rename it as a normal Veritas volume using vxedit. Make sure you modify the /etc/vfstab file for node 1:
    1. umount /global/.devices/node@1
    2. rename the volume
    3. modify vfstab
    4. mount /global/.devices/node@1
    If possible, do a test reboot to verify.
    Actually, it does not make any difference if you leave it the way it is.
    -LP
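    A sketch of those steps with actual commands (disk group and volume names taken from the df output above; verify them with vxprint first):
    # umount /global/.devices/node@1
    # vxedit -g bootdg rename rootdg_16vol node@1
    # vi /etc/vfstab     update the /dev/vx/dsk/bootdg/... entry to the new volume name
    # mount /global/.devices/node@1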

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file system to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting the hard disk off the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.

  • Lucreate - "Cannot make file systems for boot environment"

    Hello!
    I'm trying to use Live Upgrade to upgrade one of "my" SPARC servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in Infodoc 72099 (http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1) and then installed SUNWlucfg, SUNWlur, and SUNWlu from the S10U6 SPARC DVD ISO. I then did:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207  -m /:/dev/md/dsk/d200:ufs
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <d100> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Searching /dev for possible boot environment filesystem devices
    Updating system configuration files.
    The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <S10U6_20081207>.
    Source boot environment is <d100>.
    Creating boot environment <S10U6_20081207>.
    Creating file systems on boot environment <S10U6_20081207>.
    Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
    Mounting file systems for boot environment <S10U6_20081207>.
    Calculating required sizes of file systems              for boot environment <S10U6_20081207>.
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    So the problem is:
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    Well - why's that?
    I can do a "newfs /dev/md/dsk/d200" just fine.
    When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
    /bin/nawk: can't open file /etc/lu/ICF.2
    source code line number 1
    Boot environment <S10U6_20081207> deleted.
    I get this error consistently (I have run the lucreate many times now).
    lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
    Would anyone maybe have an idea about what's broken there?
    --($ ~)-- LC_ALL=C metastat
    d200: Mirror
        Submirror 0: d20
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d20: Submirror of d200
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s0          0     No            Okay   Yes
    d100: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d10: Submirror of d100
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d201: Mirror
        Submirror 0: d21
          State: Okay        
        Submirror 1: d11
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 2097414 blocks (1.0 GB)
    d21: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s1          0     No            Okay   Yes
    d11: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s1          0     No            Okay   Yes
    hsp001: is empty
    Device Relocation Information:
    Device   Reloc  Device ID
    c1t1d0   Yes    id1,sd@THITACHI_DK32EJ-36NC_____434N5641
    c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
    --($ ~)-- /bin/df -k | grep md
    /dev/md/dsk/d100     15490539 10772770 4562864    71%    /
    Thanks,
    Michael
    Michael

    Hello.
    (sys01)root# devfsadm -Cv
    (sys01)root#
    To be on the safe side, I even rebooted after having run devfsadm.
    --($ ~)-- sudo env LC_ALL=C LANG=C lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    d100                       yes      yes    yes       no     -        
    --($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
                   boot environment name: d100
                   This boot environment is currently active.
                   This boot environment will be active on next system boot.
    Filesystem              fstype    device size Mounted on          Mount Options
    /dev/md/dsk/d100        ufs       16106660352 /                   logging
    /dev/md/dsk/d201        swap       1073875968 -                   -
    In the rebooted system, I re-did the original lucreate:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
    Copying.
    Excellent! It now works!
    Thanks a lot,
    Michael

  • Solaris 7/8 - journaled file systems?

    Do I need to do anything to enable this feature? I don't see any references to file system journaling in any man pages. Can I convert existing UFS file systems? etc., etc.

    Yes, you have to mount the filesystems with the option 'logging'; see the
    mount_ufs(1M) man page. Using the 'remount' option in combination
    with 'logging' or 'nologging', you can turn it on or off on a mounted
    filesystem. To enable it across a reboot, add the 'logging' option
    to the last (= mount options) column of the relevant line in
    /etc/vfstab.
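    For example (the device and mount point names here are placeholders):
    # mount -F ufs -o remount,logging /dev/dsk/c0t0d0s7 /export/home
    And the matching /etc/vfstab line to make it persistent across reboots:
    /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home  ufs  2  yes  logging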
