Problems mounting global file system

Hello all.
I have set up a cluster using two Ultra 10 machines called medusa and ultra10 (not very original, I know), running Sun Cluster 3.1 with a cluster patch bundle installed.
When one of the Ultra 10 machines boots, it complains about being unable to mount the global file system and, for some reason, tries to mount the node@1 file system when it is actually node 2.
On booting, I receive this message on the machine ultra10:
Type control-d to proceed with normal startup,
(or give root password for system maintenance): resuming boot
If I press Control-D to continue, the following happens:
ultra10:
ultra10:/ $ cat /etc/cluster/nodeid
2
ultra10:/ $ grep global /etc/vfstab
/dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
ultra10:/ $ df -k | grep global
/dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
medusa:
medusa:/ $ cat /etc/cluster/nodeid
1
medusa:/ $ grep global /etc/vfstab
/dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@1 ufs 2 no global
medusa:/ $ df -k | grep global
/dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
Does anyone have any idea why the machine called ultra10, with node ID 2, is trying to mount the node ID 1 global file system when the correct entry is in its /etc/vfstab file?
Many thanks for any assistance.
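For what it's worth, one quick way to spot this kind of mismatch is to compare /etc/cluster/nodeid against the node@N directory that df actually reports. A minimal sketch, using sample values standing in for the live commands (on a real node you would read the nodeid file and parse real df output):

```shell
# Sample values standing in for the live system (hypothetical; on a real
# node: nodeid=$(cat /etc/cluster/nodeid) and df_line from `df -k`)
nodeid=2
df_line="/dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1"

# Extract the node number actually mounted under /global/.devices
mounted_node=${df_line##*node@}

if [ "$mounted_node" = "$nodeid" ]; then
    echo "OK: node@$mounted_node matches node ID $nodeid"
else
    echo "MISMATCH: node ID is $nodeid but node@$mounted_node is mounted"
fi
```

With the values shown in the post, this reports a mismatch, which matches the df output above.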

Hmm, so for argument's sake, if I tried to mount both /dev/md/dsk/d50 devices at the same mount point on both nodes, it would mount OK?
I assumed the problem arose because the devices have the same name, which confuses the Solaris OS when both nodes try to mount them. Maybe some examples will help...
My cluster consists of two nodes, Helene and Dione. There is fibre-attached storage used for quorum, and website content. The output from scdidadm -L is:
1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
This allows me to have identical entries in both hosts' /etc/vfstab files. There are also shared devices under /dev/global that can be accessed by both nodes. But the RAID devices are not referenced by anything in these directories (i.e. there's no /dev/global/md/dsk/d50). I just thought it would make sense to have the option of global metadevices, but maybe that's just me!
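To make the sharing explicit: a DID instance that scdidadm -L lists for more than one host is a multi-ported (shared) device. A small sketch over the output above, pasted here as sample input:

```shell
# scdidadm -L output from above, used as sample input
scdidadm_out='1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5'

# A DID device listed under more than one host path is shared
shared=$(printf '%s\n' "$scdidadm_out" |
    awk '{count[$3]++} END {for (d in count) if (count[d] > 1) print d}')
echo "shared DID devices: $shared"
```

Only /dev/did/rdsk/d3 (the fibre-attached array) appears twice, which is why it can back identical vfstab entries on both hosts while the local boot disks cannot.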
Thanks again Tim! :D
Pete

Similar Messages

  • Unable to remount the global file system

    Hello All,
    I am facing a problem when remounting the global file system on one of the nodes in the cluster.
    Here are my system details:
    OS: SunOS sf44buce02 5.10 Generic_141414-01 sun4u sparc SUNW,Sun-Fire-V440
    Sun Cluster version: 3.2
    The problem details:
    I have the following entry in my /etc/vfstab file:
    dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging
    Now I wanted to add the "nosuid" option to the global file system. I used the following command, but it did not succeed:
    # mount -o nosuid,remount /global/TspFt
    I get the following error:
    mount: Operation not supported
    mount: Cannot mount /dev/md/cfsdg/dsk/d10
    Can anyone tell me how to remount the global file system without a reboot?
    Thanks in advance.
    Regards,
    Rajeshwar

    Hi,
    Thank you very much for the reply. Please see below the details that you asked for:
    -> The volume manager I am using is *"SUN"*.
    -> In my previous post I missed a leading "*/*" while pasting the vfstab entry. Please have a look at the vfstab entry below:
    */dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging,nosuid,noxattr*
    - Output of ls -al /dev/md/
    root@sf44buce02> ls -al /dev/md/
    total 34
    drwxr-xr-x 4 root root 512 Jun 24 16:37 .
    drwxr-xr-x 21 root sys 7168 Jun 24 16:38 ..
    lrwxrwxrwx 1 root root 31 Jun 3 20:19 admin -> ../../devices/pseudo/md@0:admin
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 arch1dg -> shared/2
    lrwxrwxrwx 1 root other 8 Jun 3 22:26 arch2dg -> shared/4
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 cfsdg -> shared/1
    drwxr-xr-x 2 root root 1024 Jun 3 22:41 dsk
    lrwxrwxrwx 1 root other 8 Jun 3 22:27 oradg -> shared/5
    drwxr-xr-x 2 root root 1024 Jun 3 22:41 rdsk
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 redodg -> shared/3
    lrwxrwxrwx 1 root root 42 Jun 3 22:02 shared -> ../../global/.devices/node@2/dev/md/shared
    - Output of ls -al /dev/md/cfsdg/
    root@sf44buce02> ls -al /dev/md/cfsdg/
    total 8
    drwxr-xr-x 4 root root 512 Jun 3 22:29 .
    drwxrwxr-x 7 root root 512 Jun 3 22:29 ..
    drwxr-xr-x 2 root root 512 Jun 24 16:37 dsk
    drwxr-xr-x 2 root root 512 Jun 24 16:37 rdsk
    - Output of ls -al /dev/md/cfsdg/dsk
    root@sf44buce02> ls -al /dev/md/cfsdg/dsk
    total 16
    drwxr-xr-x 2 root root 512 Jun 24 16:37 .
    drwxr-xr-x 4 root root 512 Jun 3 22:29 ..
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d0 -> ../../../../../devices/pseudo/md@0:1,0,blk
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d1 -> ../../../../../devices/pseudo/md@0:1,1,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d10 -> ../../../../../devices/pseudo/md@0:1,10,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d11 -> ../../../../../devices/pseudo/md@0:1,11,blk
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d2 -> ../../../../../devices/pseudo/md@0:1,2,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d20 -> ../../../../../devices/pseudo/md@0:1,20,blk

  • Sun Cluster 3.2 - Global File Systems

    Sun Cluster has a global file system (GFS) that supports read-only access throughout the cluster; however, only one node has write access.
    In Linux, a GFS file system can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
    From the documentation that I have read,
    "The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
    The GFS is also known as the Cluster File System or Proxy File System.
    Our client believes that they can have their application "scaled" and that all nodes in the cluster will be able to write to the globally mounted file system. My belief was that the only way this can occur is after the application has failed over, and the "write" would then occur from the "primary" node that is mastering the application at that time. Any input or clarification would be greatly appreciated. Thanks in advance.
    Ryan

    Thank you very much, this helped :)
    And how seamless is the remounting of the block-device LUN if one server dies?
    Should some clustered services (FS clients such as app servers) be restarted
    when the master node changes due to a failover? Or is it truly seamless,
    with just a bit of latency added while the block device is mounted on another
    node, and no fatal interruptions sent to the clients?
    And is it true that this solution is gratis, i.e. may legally be used for free
    unless the customer wants support from Sun (or authorized partners)? ;)
    //Jim
    Edited by: JimKlimov on Aug 19, 2009 4:16 PM

  • Problems with the File System Repository & User Mapping!

    Hi All
    I am having a problem with a file system repository and with setting up user mapping for that repository.
    I have done the following:
    Created a File System Repository
    Created a Network Path
    Created a System (Including the alias)
    Now when I go into User Administration and select my user, there are no user-mapped systems to select.
    All this system is doing is connecting to a folder on our file system.
    Any help would be great as this is really frustrating!
    Thanks
    Phil

    I am using EP7 Stack 11 and unfortunately the only options I have are:
    user
    admin
    admin,user
    It is currently set to "admin,user" and does not seem to work!
    Phil
    Message was edited by:
            Phil Wade

  • Installing TREX global file system standalone

    I am installing TREX 7.1.23 as a distributed system with central storage (SAN). However, according to multiple SAP documents, 7.1 does not yet allow you to install the global file system separately from the TREX system installation. Refer to TREX 7.1 central note 1003900 [https://service.sap.com/sap/support/notes/1003900].
    SAP does indicate there is an interim solution with note 1258694 "TREX 7.1:Install TREX with Global File System (Windows). " [https://service.sap.com/sap/support/notes/1258694].
    In this note it says to execute "install.cmd --action=install_cfs --target=<UNC path_to_NAS> --sid=<SAPSID>", but I have downloaded the standalone TREX 7.1 installation and there is no "install.cmd" file to be found!?
    Has anyone seen this note and know how this is done or where this file can be found?
    thanks!
    John

    Will be redesigning this installation, so closing the thread.

  • Can't mount OCFS2 file system after public IP change

    Guys,
    We have an environment of two nodes with a RAC database, version 10.2.0.1. We needed to modify the public IP and the VIP of Oracle CRS, so we did the following steps:
    - Alter the VIP
    srvctl modify nodeapps -n node1 -A 192.168.1.101/255.255.255.0/eth0
    srvctl modify nodeapps -n node2 -A 192.168.1.102/255.255.255.0/eth0
    - Alter the Public IP
    oifcfg delif -global eth0
    oifcfg setif -global eth0/192.168.1.0:public
    - Alter the IPs of the network interfaces
    - Update the /etc/hosts
    When we start Oracle CRS, the components start OK. But when we reboot the second node, the OCFS2 file system doesn't mount. The following errors occur:
    SCSI device sde: 4194304 512-byte hdwr sectors (2147 MB)
    sde: cache data unavailable
    sde: assuming drive cache: write through
    sde: sde1
    parport0: PC-style at 0x378 [PCSPP,TRISTATE]
    lp0: using parport0 (polling).
    lp0: console ready
    mtrr: your processor doesn't support write-combining
    (2746,0):o2net_start_connect:1389 ERROR: bind failed with -99 at address 192.168.2.132
    (2746,0):o2net_start_connect:1420 connect attempt to node rac1 (num 0) at 192.168.2.131:7777 failed with errno -99
    (2746,0):o2net_connect_expired:1444 ERROR: no connection established with node 0 after 10 seconds, giving up and returning errors.
    (5457,0):dlm_request_join:786 ERROR: status = -107
    (5457,0):dlm_try_to_join_domain:934 ERROR: status = -107
    (5457,0):dlm_join_domain:1186 ERROR: status = -107
    (5457,0):dlm_register_domain:1379 ERROR: status = -107
    (5457,0):ocfs2_dlm_init:2007 ERROR: status = -107
    (5457,0):ocfs2_mount_volume:1062 ERROR: status = -107
    ocfs2: Unmounting device (8,17) on (node 1)
    When we force the mount with the following command, we get these errors:
    # mount -a
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /ocfs2. Check 'dmesg' for more information on this error.
    What is happening is that OCFS2 is trying to connect using the old public IP. My question is: how do I change the public IP in OCFS2?
    regards,
    Eduardo P Niel
    OCP Oracle

    Hi, that is correct. You may want to check the /etc/ocfs2/cluster.conf file; the configuration there may be wrong. You can also check the /etc/hosts file to verify that the host names are defined correctly.
    Good luck and have a good day.
    Regards,
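For context (an assumption on my part, based on how o2cb is typically configured): OCFS2 takes its node addresses from /etc/ocfs2/cluster.conf rather than from /etc/hosts, so the old IPs survive there until the file is edited and the o2cb service is restarted on each node. The node names and addresses below are illustrative only, not taken from the thread:

```text
# /etc/ocfs2/cluster.conf (illustrative values)
node:
        ip_port = 7777
        ip_address = 192.168.1.101   # <- update from the old public IP
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102   # <- update from the old public IP
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```

After editing the file identically on both nodes, something along the lines of `/etc/init.d/o2cb restart` (with the OCFS2 file systems unmounted) would typically be needed before the mounts succeed.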

  • Problem of Network File System repository

    Hi all:
    I want to set up a File system repository on a remote server.
    The steps are here:
    1.Define a Network Path.
    2.Set up a File system repository.
    This works when the file system and the portal server are in the same address range, like 10.XXX.5.XX and 10.XXX.5.XXX, but when the repository references another file system, whose IP is 10.XXX.10.XX, it can never start up!
    The message in Component Monitoring is "The localroot does not exist:
    10.XXX.10.XX"
    How can I solve this problem?

    Hi Rani,
    1. I think I have done all the steps for setting up a file system repository. When the server and the file system are in the same IP segment, the repository works.
    Security Manager: AclSecurityManager
    ACL Manager Cache: not set
    Windows Landscape System: not set
    2. When I use the server name as the path, the message in monitoring still shows the IP address.
    3. I can't see the folder.
    4. I can open the path from the server in Explorer.

  • RPC Problem while mounting NFS File System

    Hi,
    We have two servers, ca1 and bench.
    We recently moved bench across the firewall into the DMZ, so ca1 and bench are now on opposite sides of the firewall. We have made sure that ports 111, 2049 and 4045, along with the default telnet and FTP ports, are opened on the firewall to provide access.
    We can telnet and FTP from bench to ca1.
    When we try to NFS mount a ca1 filesystem at bench using following command :
    bench:>mount ca1:/u07/export /ca1/u07/export
    we get the following error:
    nfs mount: ca1: : RPC: Timed out
    This is happening since we moved the bench server across the firewall.
    What could be the problem?
    Cheers,
    MS

    Hi,
    Thanks for the help.
    It seems that the NFS service is already started on the server.
    Here are the results :
    # ps -ef | grep nfsd
    root 366 1 0 Jul 12 ? 0:00 /usr/lib/nfs/nfsd -a 16
    root 25791 25789 0 16:16:45 pts/5 0:00 grep nfsd
    # ps -ef | grep mountd
    root 218 1 0 Jul 12 ? 0:05 /usr/lib/autofs/automountd
    root 364 1 0 Jul 12 ? 0:00 /usr/lib/nfs/mountd
    root 25793 25789 0 16:16:53 pts/5 0:00 grep mountd
    # ps -ef | grep statd
    root 201 1 0 Jul 12 ? 0:00 /usr/lib/nfs/statd
    root 25797 25789 0 16:16:59 pts/5 0:00 grep statd
    # ps -ef | grep lockd
    root 203 1 0 Jul 12 ? 0:00 /usr/lib/nfs/lockd
    root 25799 25789 0 16:17:07 pts/5 0:00 grep lockd
    Please let me know what else can be done. This problem started after we moved these machines to different sides of the firewall.
    Cheers,
    MS
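One thing worth noting (a general NFS-through-firewall observation, not something stated in the thread): on Solaris, nfsd sits on 2049 and lockd on 4045, but mountd and statd normally register on dynamic ports with rpcbind, so opening only 111/2049/4045 is often not enough; `rpcinfo -p ca1` run from bench shows the current assignments. A sketch over sample rpcinfo output (the port numbers are hypothetical):

```shell
# Hypothetical `rpcinfo -p ca1` output; mountd/status ports can change
# per boot, which a statically configured firewall cannot follow
rpcinfo_out='100003 3 udp 2049 nfs
100005 1 udp 32781 mountd
100021 4 udp 4045 nlockmgr
100024 1 udp 32777 status'

# List service/port pairs the firewall would need to pass
printf '%s\n' "$rpcinfo_out" | awk '{print $5, $4}'

mountd_port=$(printf '%s\n' "$rpcinfo_out" | awk '$5 == "mountd" {print $4}')
echo "mountd currently on port $mountd_port (dynamic by default)"
```

If mountd's dynamic port is blocked, the mount request itself never reaches ca1, which would show up exactly as "RPC: Timed out".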

  • After boot can't mount SDS file systems on 3300 mount: cannot mount /dev/md

    Hi
    I have a SUNW,Netra-440 with two SE 3300 SCSI arrays, each with two controllers. The server was working fine, but after a reboot it cannot see the metadevices stored on the 3300s.
    I have tried restarting the SE 3300s and a boot -r from the ok prompt, but I still can't see those disks or partitions.
    Metaset Import
    metaset: log01: setname "appvol": no such set
    mount all SVM filesystems for group appvol
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d101
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d102
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d103
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d104
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d105
    mount: No such device
    mount: cannot mount /dev/md/appvol/dsk/d106
    metaset: log01: setname "dbvol": no such set
    mount all SVM filesystems for group dbvol
    mount: No such device
    mount: cannot mount /dev/md/dbvol/dsk/d107
    mount: No such device
    mount: cannot mount /dev/md/dbvol/dsk/d108
    I can see the devices in /dev/md/appvol, but I can't mount them.
    Any ideas what this could be? metaset shows nothing, and metastat -s dbvol hangs.
    Am I missing some patch, or am I missing something here? Is the SE 3300 damaged? Or is it a problem with SDS (Solstice DiskSuite)? I have Solaris 9.
    The system was working fine until we had to boot and power off to perform power tests in the rack.
    thanks
    Ryck
    This is the output of prtdiag:
    root@log01 >prtdiag
    System Configuration: Sun Microsystems sun4u Netra 440
    System clock frequency: 177 MHZ
    Memory size: 8GB
    ==================================== CPUs ====================================
    E$ CPU CPU Temperature
    CPU Freq Size Implementation Mask Die Amb. Status Location
    0 1593 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 - - online -
    1 1593 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 - - online -
    2 1593 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 - - online -
    3 1593 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 - - online -
    ================================= IO Devices =================================
    Bus Freq Slot + Name +
    Type MHz Status Path Model
    pci 66 PCI5 scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1c,600000/scsi@1
    pci 66 PCI5 scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1c,600000/scsi@1,1
    pci 66 MB pci108e,abba (network) SUNW,pci-ce
    okay /pci@1c,600000/network@2
    pci 66 PCI4 scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1d,700000/scsi@1
    pci 66 PCI4 scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1d,700000/scsi@1,1
    pci 66 PCI2 pci100b,35 (network) SUNW,pci-x-qge
    okay /pci@1d,700000/pci@2/network@0
    pci 66 PCI2 pci100b,35 (network) SUNW,pci-x-qge
    okay /pci@1d,700000/pci@2/network@1
    pci 66 PCI2 pci100b,35 (network) SUNW,pci-x-qge
    okay /pci@1d,700000/pci@2/network@2
    pci 66 PCI2 pci100b,35 (network) SUNW,pci-x-qge
    okay /pci@1d,700000/pci@2/network@3
    pci 33 MB isa/su (serial)
    okay /pci@1e,600000/isa@7/serial@0,3f8
    pci 33 MB isa/su (serial)
    okay /pci@1e,600000/isa@7/serial@0,2e8
    pci 33 MB isa/rmc-comm-rmc_comm (seria+
    okay /pci@1e,600000/isa@7/rmc-comm@0,3e8
    pci 33 PCI0 SUNW,XVR-100 (display) SUNW,375-3290
    okay /pci@1e,600000/SUNW,XVR-100
    pci 33 MB pciclass,0c0310 (usb)
    okay /pci@1e,600000/usb@a
    pci 33 MB pciclass,0c0310 (usb)
    okay /pci@1e,600000/usb@b
    pci 33 MB pci10b9,5229 (ide)
    okay /pci@1e,600000/ide@d
    pci 66 MB pci108e,abba (network) SUNW,pci-ce
    okay /pci@1f,700000/network@1
    pci 66 MB scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1f,700000/scsi@2
    pci 66 MB scsi-pci1000,30 (scsi-2) LSI,1030
    okay /pci@1f,700000/scsi@2,1
    ============================ Memory Configuration ============================
    Segment Table:
    Base Address Size Interleave Factor Contains
    0x0 2GB 4 BankIDs 0,1,2,3
    0x1000000000 2GB 4 BankIDs 16,17,18,19
    0x2000000000 2GB 4 BankIDs 32,33,34,35
    0x3000000000 2GB 4 BankIDs 48,49,50,51
    Bank Table:
    Physical Location
    ID ControllerID GroupID Size Interleave Way
    0 0 0 512MB 0,1,2,3
    1 0 1 512MB
    2 0 1 512MB
    3 0 0 512MB
    16 1 0 512MB 0,1,2,3
    17 1 1 512MB
    18 1 1 512MB
    19 1 0 512MB
    32 2 0 512MB 0,1,2,3
    33 2 1 512MB
    34 2 1 512MB
    35 2 0 512MB
    48 3 0 512MB 0,1,2,3
    49 3 1 512MB
    50 3 1 512MB
    51 3 0 512MB
    Memory Module Groups:
    ControllerID GroupID Labels Status
    0 0 C0/P0/B0/D0
    0 0 C0/P0/B0/D1
    0 1 C0/P0/B1/D0
    0 1 C0/P0/B1/D1
    1 0 C1/P0/B0/D0
    1 0 C1/P0/B0/D1
    1 1 C1/P0/B1/D0
    1 1 C1/P0/B1/D1
    2 0 C2/P0/B0/D0
    2 0 C2/P0/B0/D1
    2 1 C2/P0/B1/D0
    2 1 C2/P0/B1/D1
    3 0 C3/P0/B0/D0
    3 0 C3/P0/B0/D1
    3 1 C3/P0/B1/D0
    3 1 C3/P0/B1/D1
    root@log01 >

    Hi,
    thanks for your answer.
    Yes, format displays the physical devices, but metadb -i only shows the internal disks of the Netra 440, not the ones in the 3320. metadb -s dbvol -i doesn't show anything.
    metaset, which is supposed to show me the metadevices, says the program is not registered.
    cat /etc/lvm/md.tab has the configuration, but since the sets can't be seen, the devices aren't mounted.
    Any suggestions?
    Cheers
    Oscar
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> bootdisk
    /pci@1f,700000/scsi@2/sd@0,0
    1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> bootmirr
    /pci@1f,700000/scsi@2/sd@1,0
    2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> spare1
    /pci@1f,700000/scsi@2/sd@2,0
    3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> spare2
    /pci@1f,700000/scsi@2/sd@3,0
    4. c2t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> appvol
    /pci@1c,600000/scsi@1/sd@8,0
    5. c2t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol01
    /pci@1c,600000/scsi@1/sd@9,0
    6. c2t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol02
    /pci@1c,600000/scsi@1/sd@a,0
    7. c2t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol03
    /pci@1c,600000/scsi@1/sd@b,0
    8. c2t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol04
    /pci@1c,600000/scsi@1/sd@c,0
    9. c2t13d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol05
    /pci@1c,600000/scsi@1/sd@d,0
    10. c3t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> dbvol06
    /pci@1c,600000/scsi@1,1/sd@8,0
    11. c3t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free1
    /pci@1c,600000/scsi@1,1/sd@9,0
    12. c3t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free2
    /pci@1c,600000/scsi@1,1/sd@a,0
    13. c3t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free3
    /pci@1c,600000/scsi@1,1/sd@b,0
    14. c3t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> hotAPP
    /pci@1c,600000/scsi@1,1/sd@c,0
    15. c3t13d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> hotDB
    /pci@1c,600000/scsi@1,1/sd@d,0
    16. c4t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> APPVOLm
    /pci@1d,700000/scsi@1/sd@8,0
    17. c4t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1/sd@9,0
    18. c4t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1/sd@a,0
    19. c4t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1/sd@b,0
    20. c4t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1/sd@c,0
    21. c4t13d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1/sd@d,0
    22. c5t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> d6Mirr
    /pci@1d,700000/scsi@1,1/sd@8,0
    23. c5t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free1M
    /pci@1d,700000/scsi@1,1/sd@9,0
    24. c5t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free2M
    /pci@1d,700000/scsi@1,1/sd@a,0
    25. c5t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> free3M
    /pci@1d,700000/scsi@1,1/sd@b,0
    26. c5t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/scsi@1,1/sd@c,0
    27. c5t13d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> hotDBm
    /pci@1d,700000/scsi@1,1/sd@d,0
    root@log01 >metadb -i
    flags first blk block count
    a m p luo 16 8192 /dev/dsk/c1t0d0s7
    a p luo 8208 8192 /dev/dsk/c1t0d0s7
    a p luo 16400 8192 /dev/dsk/c1t0d0s7
    a p luo 16 8192 /dev/dsk/c1t1d0s7
    a p luo 8208 8192 /dev/dsk/c1t1d0s7
    a p luo 16400 8192 /dev/dsk/c1t1d0s7
    r - replica does not have device relocation information
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    root@log01 >
    root@log01 >metadb -s appvol -i
    metadb: log01: setname "appvol": no such set
    root@log01 >metaset
    metaset: log01: metad client create: RPC: Program not registered
    root@log01 >cat /etc/lvm/md.tab
    # Copyright 2002 Sun Microsystems, Inc. All rights reserved.
    # Use is subject to license terms.
    # ident "@(#)md.tab 2.4 02/01/29 SMI"
    # md.tab
    # metainit utility input file.
    dbvol/d57 1 1 c2t12d0s0
    dbvol/d58 1 1 c4t12d0s0
    dbvol/d4 -m dbvol/d57 dbvol/d58 1
    dbvol/d55 1 1 c2t11d0s0
    dbvol/d56 1 1 c4t11d0s0
    dbvol/d3 -m dbvol/d55 dbvol/d56 1
    dbvol/d61 1 1 c3t8d0s0
    dbvol/d62 1 1 c5t8d0s0
    dbvol/d6 -m dbvol/d61 dbvol/d62 1
    dbvol/d59 1 1 c2t13d0s0
    dbvol/d60 1 1 c4t13d0s0
    dbvol/d5 -m dbvol/d59 dbvol/d60 1
    dbvol/d53 1 1 c2t10d0s0
    dbvol/d54 1 1 c4t10d0s0
    dbvol/d2 -m dbvol/d53 dbvol/d54 1
    ###dbvol/d51 1 1 c2t9d0s0
    dbvol/d52 1 1 c4t9d0s0
    #dbvol/d1 -m dbvol/d51 dbvol/d52 1
    dbvol/d1 -m dbvol/d52 1
    # SOFT Partitions
    dbvol/d129 -p dbvol/d5 -o 19933376 -b 526336
    dbvol/d128 -p dbvol/d5 -o 19407008 -b 526336
    dbvol/d127 -p dbvol/d5 -o 18880640 -b 526336
    dbvol/d126 -p dbvol/d5 -o 10489952 -b 8390656
    dbvol/d125 -p dbvol/d5 -o 2099264 -b 8390656
    dbvol/d123 -p dbvol/d4 -o 4724864 -b 133120
    dbvol/d122 -p dbvol/d4 -o 4460640 -b 264192
    dbvol/d121 -p dbvol/d4 -o 4196416 -b 264192
    dbvol/d120 -p dbvol/d4 -o 32 -b 4196352
    dbvol/d119 -p dbvol/d3 -o 30972032 -b 526336
    dbvol/d118 -p dbvol/d3 -o 10489952 -b 20482048
    dbvol/d117 -p dbvol/d3 -o 4196416 -b 6293504
    dbvol/d116 -p dbvol/d3 -o 32 -b 4196352
    dbvol/d115 -p dbvol/d2 -o 8925408 -b 526336
    dbvol/d114 -p dbvol/d2 -o 8399040 -b 526336
    dbvol/d113 -p dbvol/d2 -o 7872672 -b 526336
    dbvol/d112 -p dbvol/d2 -o 7346304 -b 526336
    dbvol/d111 -p dbvol/d2 -o 5247072 -b 2099200
    dbvol/d110 -p dbvol/d2 -o 1050688 -b 4196352
    dbvol/d109 -p dbvol/d2 -o 32 -b 1050624
    dbvol/d108 -p dbvol/d1 -o 33554496 -b 1572864
    dbvol/d107 -p dbvol/d1 -o 32 -b 33554432
    dbvol/d124 -p dbvol/d5 -o 32 -b 2099200
    dbvol/d138 -p dbvol/d4 -o 38416608 -b 133120
    dbvol/d137 -p dbvol/d3 -o 31498400 -b 264192
    dbvol/d136 -p dbvol/d4 -o 21637312 -b 16779264
    dbvol/d135 -p dbvol/d4 -o 4858016 -b 16779264
    dbvol/d134 -p dbvol/d6 -o 69212288 -b 23070720
    dbvol/d133 -p dbvol/d6 -o 46141536 -b 23070720
    dbvol/d132 -p dbvol/d6 -o 23070784 -b 23070720
    dbvol/d131 -p dbvol/d6 -o 32 -b 23070720
    dbvol/d130 -p dbvol/d5 -o 20459744 -b 526336
    root@log01 >metadb -s dbvol -i
    metadb: log01: setname "dbvol": no such set
    root@log01 >
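The "metad client create: RPC: Program not registered" line above is the telling one: metaset talks to rpc.metad, which on Solaris 9 is started from inetd, so if its entry is missing (or inetd hasn't re-read the file) every metaset command fails this way before it ever looks at the disks. A sketch of the check, using sample inetd.conf lines (the exact lines shown are an assumption, not copied from this system):

```shell
# Sample lines standing in for /etc/inetd.conf on Solaris 9 (illustrative)
inetd_conf='100229/1 tli rpc/tcp wait root /usr/sbin/rpc.metad rpc.metad
100230/1 tli rpc/tcp wait root /usr/sbin/rpc.metamhd rpc.metamhd'

# metaset fails with "RPC: Program not registered" when rpc.metad is absent
metad_present=$(printf '%s\n' "$inetd_conf" | grep -c 'rpc\.metad ')
if [ "$metad_present" -ge 1 ]; then
    echo "rpc.metad entry present"
else
    echo "rpc.metad entry missing"
fi
```

On the live box, the equivalent would be `grep metad /etc/inetd.conf`, followed by `pkill -HUP inetd` if the file had to be changed.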

  • Problem: Accessing the file system with servlets???

    Hi...
    I have a strange problem with my servlets, which run on Win2000 with Apache and two Tomcat instances.
    I cannot open files through the servlets, whereas exactly the same lines of code work in a local standalone Java program.
    It seems to be something like a permissions problem... but I don't know what to do.
    thanks for any help
    Here are my configuration files for Apache and Tomcat:
    Apache: *******************************************************
    ### Section 1: Global Environment
    ServerRoot "D:/Webserver_and_Applications/Apache2"
    PidFile logs/httpd.pid
    Timeout 300
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15
    <IfModule mpm_winnt.c>
    ThreadsPerChild 250
    MaxRequestsPerChild 0
    </IfModule>
    Listen 80
    LoadModule jk_module modules/mod_jk.dll
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    JkLogLevel info
    LoadModule access_module modules/mod_access.so
    LoadModule actions_module modules/mod_actions.so
    LoadModule alias_module modules/mod_alias.so
    LoadModule asis_module modules/mod_asis.so
    LoadModule auth_module modules/mod_auth.so
    LoadModule autoindex_module modules/mod_autoindex.so
    LoadModule cgi_module modules/mod_cgi.so
    LoadModule dir_module modules/mod_dir.so
    LoadModule env_module modules/mod_env.so
    LoadModule imap_module modules/mod_imap.so
    LoadModule include_module modules/mod_include.so
    LoadModule isapi_module modules/mod_isapi.so
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule mime_module modules/mod_mime.so
    LoadModule negotiation_module modules/mod_negotiation.so
    LoadModule setenvif_module modules/mod_setenvif.so
    LoadModule userdir_module modules/mod_userdir.so
    ### Section 2: 'Main' server configuration
    ServerAdmin [email protected]
    ServerName www.testnet.com:80
    UseCanonicalName Off
    DocumentRoot "D:/Webserver_and_Applications/root"
    JkMount /*.jsp loadbalancer
    JkMount /servlet/* loadbalancer
    <Directory />
    Options FollowSymLinks
    AllowOverride None
    </Directory>
    <Directory "D:/Webserver_and_Applications/root">
    Order allow,deny
    Allow from all
    </Directory>
    UserDir "My Documents/My Website"
    DirectoryIndex index.html index.html.var
    AccessFileName .htaccess
    <Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    </Files>
    TypesConfig conf/mime.types
    DefaultType text/plain
    <IfModule mod_mime_magic.c>
    MIMEMagicFile conf/magic
    </IfModule>
    HostnameLookups Off
    ErrorLog logs/error.log
    LogLevel warn
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    LogFormat "%{Referer}i -> %U" referer
    LogFormat "%{User-agent}i" agent
    CustomLog logs/access.log common
    ServerTokens Full
    ServerSignature On
    Alias /icons/ "D:/Webserver_and_Applications/Apache2/icons/"
    <Directory "D:/Webserver_and_Applications/Apache2/icons">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
    </Directory>
    Alias /manual "D:/Webserver_and_Applications/Apache2/manual"
    <Directory "D:/Webserver_and_Applications/Apache2/manual">
    Options Indexes FollowSymLinks MultiViews IncludesNoExec
    AddOutputFilter Includes html
    AllowOverride None
    Order allow,deny
    Allow from all
    </Directory>
    ScriptAlias /cgi-bin/ "d:/webserver_and_applications/root/cgi-bin/"
    <Directory "D:/Webserver_and_Applications/root/cgi-bin/">
    AllowOverride None
    Options Indexes FollowSymLinks MultiViews
    Order allow,deny
    Allow from all
    </Directory>
    IndexOptions FancyIndexing VersionSort
    AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
    AddIconByType (TXT,/icons/text.gif) text/*
    AddIconByType (IMG,/icons/image2.gif) image/*
    AddIconByType (SND,/icons/sound2.gif) audio/*
    AddIconByType (VID,/icons/movie.gif) video/*
    AddIcon /icons/binary.gif .bin .exe
    AddIcon /icons/binhex.gif .hqx
    AddIcon /icons/tar.gif .tar
    AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
    AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
    AddIcon /icons/a.gif .ps .ai .eps
    AddIcon /icons/layout.gif .html .shtml .htm .pdf
    AddIcon /icons/text.gif .txt
    AddIcon /icons/c.gif .c
    AddIcon /icons/p.gif .pl .py
    AddIcon /icons/f.gif .for
    AddIcon /icons/dvi.gif .dvi
    AddIcon /icons/uuencoded.gif .uu
    AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
    AddIcon /icons/tex.gif .tex
    AddIcon /icons/bomb.gif core
    AddIcon /icons/back.gif ..
    AddIcon /icons/hand.right.gif README
    AddIcon /icons/folder.gif ^^DIRECTORY^^
    AddIcon /icons/blank.gif ^^BLANKICON^^
    DefaultIcon /icons/unknown.gif
    IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t
    AddEncoding x-compress Z
    AddEncoding x-gzip gz tgz
    AddLanguage da .dk
    AddLanguage nl .nl
    AddLanguage en .en
    AddLanguage et .et
    AddLanguage fr .fr
    AddLanguage de .de
    AddLanguage he .he
    AddLanguage el .el
    AddLanguage it .it
    AddLanguage ja .ja
    AddLanguage pl .po
    AddLanguage ko .ko
    AddLanguage pt .pt
    AddLanguage nn .nn
    AddLanguage no .no
    AddLanguage pt-br .pt-br
    AddLanguage ltz .ltz
    AddLanguage ca .ca
    AddLanguage es .es
    AddLanguage sv .se
    AddLanguage cz .cz
    AddLanguage ru .ru
    AddLanguage tw .tw
    AddLanguage zh-tw .tw
    AddLanguage hr .hr
    LanguagePriority en da nl et fr de el it ja ko no pl pt pt-br ltz ca es sv tw
    ForceLanguagePriority Prefer Fallback
    AddDefaultCharset ISO-8859-1
    AddCharset ISO-8859-1 .iso8859-1 .latin1
    AddCharset ISO-8859-2 .iso8859-2 .latin2 .cen
    AddCharset ISO-8859-3 .iso8859-3 .latin3
    AddCharset ISO-8859-4 .iso8859-4 .latin4
    AddCharset ISO-8859-5 .iso8859-5 .latin5 .cyr .iso-ru
    AddCharset ISO-8859-6 .iso8859-6 .latin6 .arb
    AddCharset ISO-8859-7 .iso8859-7 .latin7 .grk
    AddCharset ISO-8859-8 .iso8859-8 .latin8 .heb
    AddCharset ISO-8859-9 .iso8859-9 .latin9 .trk
    AddCharset ISO-2022-JP .iso2022-jp .jis
    AddCharset ISO-2022-KR .iso2022-kr .kis
    AddCharset ISO-2022-CN .iso2022-cn .cis
    AddCharset Big5 .Big5 .big5
    AddCharset WINDOWS-1251 .cp-1251 .win-1251
    AddCharset CP866 .cp866
    AddCharset KOI8-r .koi8-r .koi8-ru
    AddCharset KOI8-ru .koi8-uk .ua
    AddCharset ISO-10646-UCS-2 .ucs2
    AddCharset ISO-10646-UCS-4 .ucs4
    AddCharset UTF-8 .utf8
    AddCharset GB2312 .gb2312 .gb
    AddCharset utf-7 .utf7
    AddCharset utf-8 .utf8
    AddCharset big5 .big5 .b5
    AddCharset EUC-TW .euc-tw
    AddCharset EUC-JP .euc-jp
    AddCharset EUC-KR .euc-kr
    AddCharset shift_jis .sjis
    AddType application/x-tar .tgz
    AddType image/x-icon .ico
    AddHandler type-map var
    BrowserMatch "Mozilla/2" nokeepalive
    BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
    BrowserMatch "RealPlayer 4\.0" force-response-1.0
    BrowserMatch "Java/1\.0" force-response-1.0
    BrowserMatch "JDK/1\.0" force-response-1.0
    BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
    BrowserMatch "^WebDrive" redirect-carefully
    BrowserMatch "^WebDAVFS/1.[012]" redirect-carefully
    <IfModule mod_ssl.c>
    Include conf/ssl.conf
    </IfModule>
    ScriptAlias /php/ "d:/webserver_and_applications/php/"
    AddType application/x-httpd-php .php
    Action application/x-httpd-php "/php/php.exe"
    *** and here is my Tomcat server.xml : *******************************
    <Server port="11005" shutdown="SHUTDOWN" debug="0">
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Tomcat-Standalone">
    <!-- Define an AJP 1.3 Connector on port 11009 -->
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
    port="11009" minProcessors="5" maxProcessors="75"
    acceptCount="10" debug="0"/>
    <!-- Define the top level container in our container hierarchy -->
    <Engine jvmRoute="tomcat1" name="Standalone" defaultHost="localhost" debug="0">
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    <!-- Define the default virtual host -->
    <Host name="localhost" debug="0" appBase="webapps" unpackWARs="true">
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common"/>
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
         timestamp="true"/>
    <!-- Tomcat Root Context -->
    <Context path="" docBase="d:/webserver_and_applications/root" debug="0"/>
    <!-- Tomcat Manager Context -->
    <Context path="/manager" docBase="manager"
    debug="0" privileged="true"/>
    <Context path="/examples" docBase="examples" debug="0"
    reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_examples_log." suffix=".txt"
         timestamp="true"/>
    <Ejb name="ejb/EmplRecord" type="Entity"
    home="com.wombat.empl.EmployeeRecordHome"
    remote="com.wombat.empl.EmployeeRecord"/>
    <Environment name="maxExemptions" type="java.lang.Integer"
    value="15"/>
    <Parameter name="context.param.name" value="context.param.value"
    override="false"/>
    <Resource name="jdbc/EmployeeAppDb" auth="SERVLET"
    type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/EmployeeAppDb">
    <parameter><name>user</name><value>sa</value></parameter>
    <parameter><name>password</name><value></value></parameter>
    <parameter><name>driverClassName</name>
    <value>org.hsql.jdbcDriver</value></parameter>
    <parameter><name>driverName</name>
    <value>jdbc:HypersonicSQL:database</value></parameter>
    </ResourceParams>
    <Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"/>
    <ResourceParams name="mail/Session">
    <parameter>
    <name>mail.smtp.host</name>
    <value>localhost</value>
    </parameter>
    </ResourceParams>
    </Context>
    </Host>
    </Engine>
    </Service>
    <!-- Define an Apache-Connector Service -->
    <Service name="Tomcat-Apache">
    <Engine className="org.apache.catalina.connector.warp.WarpEngine"
    name="Apache" debug="0">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="apache_log." suffix=".txt"
    timestamp="true"/>
    </Engine>
    </Service>
    </Server>
    *** and here is my workers.properties : *******************************
    # workers.properties
    # In Unix, we use forward slashes:
    ps=/
    # list the workers by name
    worker.list=tomcat1, tomcat2, loadbalancer
    # First tomcat server
    worker.tomcat1.port=11009
    worker.tomcat1.host=localhost
    worker.tomcat1.type=ajp13
    # Specify the size of the open connection cache.
    #worker.tomcat1.cachesize
    # Specifies the load balance factor when used with
    # a load balancing worker.
    # Note:
    # ----> lbfactor must be > 0
    # ----> Low lbfactor means less work done by the worker.
    worker.tomcat1.lbfactor=100
    # Second tomcat server
    worker.tomcat2.port=12009
    worker.tomcat2.host=localhost
    worker.tomcat2.type=ajp13
    # Specify the size of the open connection cache.
    #worker.tomcat2.cachesize
    # Specifies the load balance factor when used with
    # a load balancing worker.
    # Note:
    # ----> lbfactor must be > 0
    # ----> Low lbfactor means less work done by the worker.
    worker.tomcat2.lbfactor=100
    # Load Balancer worker
    # The loadbalancer (type lb) worker performs weighted round-robin
    # load balancing with sticky sessions.
    # Note:
    # ----> If a worker dies, the load balancer will check its state
    # once in a while. Until then all work is redirected to peer
    # worker.
    worker.loadbalancer.type=lb
    worker.loadbalancer.balanced_workers=tomcat1, tomcat2
    # END workers.properties
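    For the workers above to receive any traffic, httpd.conf also has to load mod_jk, point it at this workers file, and map URLs to the loadbalancer worker. A minimal sketch (module path, log paths, and URL pattern are assumptions; adjust to your layout):

    ```apache
    # Load mod_jk and tell it where the workers file lives (paths are examples)
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    JkLogLevel info
    # Route servlet/JSP requests to the load-balancer worker defined above
    JkMount /examples/* loadbalancer
    ```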
    thanks again

    Hi joshman,
    no, I didn't get error messages because the relevant read/write lines were inside try blocks, but you were right: it was/is just a simple path problem.
    I expected a relative path to refer to the directory the servlet is in, but it does not!?
    Do you know if I can set this in Tomcat's setclasspath.bat?
    *** set JAVA_ENDORSED_DIRS=%BASEDIR%\bin;%BASEDIR%\common\lib ***
    thanks again
    Huma
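    Huma's surprise is a common one: a relative path resolves against the JVM's working directory (the user.dir system property, typically wherever Tomcat was started from), not the directory containing the servlet. A minimal standalone sketch (the file name data.txt is just an example):

    ```java
    import java.io.File;

    public class RelativePathDemo {
        public static void main(String[] args) {
            // A relative path resolves against the JVM's working directory
            // (user.dir), not the directory containing the class or servlet.
            File f = new File("data.txt");
            System.out.println("user.dir = " + System.getProperty("user.dir"));
            System.out.println("resolved = " + f.getAbsolutePath());
            // Inside a servlet, resolve against the web application root
            // instead, e.g. getServletContext().getRealPath("/data.txt").
        }
    }
    ```

    In other words, rather than changing Tomcat's startup scripts, it is usually more robust to build absolute paths in the servlet itself.
    
    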

  • Cluster with global file system

    Hi
    I set up Cluster 3.2 and everything is working fine.
    I followed the Sun doc for creating a global filesystem (1. newfs ... 2. mount under /global/foo, etc.). However, I cannot mount under /global:
    mount /dev/global/dsk/d3s1 /global/foo   (fails with "no such file or directory")
    But I can mount on, say, "/a".
    It can only be mounted one layer beneath /.
    Why?
    Appreciate any hints
    Thanks in advance
    Brian

    Well, assuming you are doing:
    # mount -g /dev/global/dsk/d3s1 /global/foo
    You'll need /global/foo to exist on all cluster nodes before you issue the mount command. The mount is an 'all or nothing' mount. It can't succeed on just a subset of cluster nodes, assuming they are all up. It must succeed on all.
    Tim
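    A minimal command sketch of Tim's point, for a two-node cluster (run the mkdir on every node; node names and device are from the post above):

    ```shell
    # On EACH cluster node, create the mount point first
    mkdir -p /global/foo

    # Then, from any one node, issue the global mount; it must
    # succeed on every node that is up, or it fails everywhere
    mount -g /dev/global/dsk/d3s1 /global/foo
    ```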
    ---

  • Please HELP!!!!!!!!   Mount a File System in Sun One Studio

    Hi,
    I've been trying to get my Java documentation (on my hard drive... v1.4.2) to work with Sun One Studio 5 Update 1. I have been completely unsuccessful at mounting the documentation so that I can get reference information for specific classes. Can somebody PLEASE tell me how to mount the Javadoc correctly so that it works and integrates with Sun One Studio 5?
    Thank you very much in advance!!

    The JDK documentation is automatically picked up for code completion. For explicit JavaDoc visibility in the Studio, select Tools | JavaDoc Manager and mount the api directory of the JDK documentation root.

  • Cannot mount NTFS file system on USB drive

    I plugged in an external disk via USB and attempted to mount the drive. I've downloaded several NTFS rpms.
    I believe the issue is that I do not yet have the correct NTFS rpm. Any help?
    [root@kclinux1 media]# mount -t ntfs /dev/sdc1 /media/usbdrive/
    mount: unknown filesystem type 'ntfs'
    [root@kclinux1 media]# cat /proc/filesystems |grep ntf
    [root@kclinux1 media]# uname -a
    Linux kclinux1 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:41:04 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
    [root@kclinux1 media]# rpm -qa|grep ntfs
    ntfsprogs-1.9.4-1.2.el4.rf
    kernel-module-ntfs-2.6.18-128.1.1.el5xen-2.1.27-0.rr.10.11
    kernel-module-ntfs-2.6.18-128.el5xen-2.1.27-0.rr.10.11
    [root@kclinux1 media]#

    try
    step 1)
    # Red Hat Enterprise Linux 5 / i386:
    rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
    # Red Hat Enterprise Linux 5 / x86_64:
    rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
    # Red Hat Enterprise Linux 4 / i386:
    rpm -Uhv http://apt.sw.be/redhat/el4/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el4.rf.i386.rpm
    # Red Hat Enterprise Linux 4 / x86_64:
    rpm -Uhv http://apt.sw.be/redhat/el4/en/x86_64/rpmforge/RPMS/rpmforge-release-0.3.6-1.el4.rf.x86_64.rpm
    step 2)
    apt-get update
    step 3)
    > yum install ntfs-3g
    > ntfs-3g /dev/sdxy /<mount directory>
    rgrds
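    Once ntfs-3g is installed, the mount can be made persistent with an /etc/fstab entry; a sketch using the device and mount point from the original post (the options shown are assumptions, adjust as needed):

    ```
    /dev/sdc1   /media/usbdrive   ntfs-3g   defaults   0 0
    ```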

  • UFS file system mount options

    I'm doing some performance tuning on a database server. In mounting a particular UFS file system, I need to enable the "forcedirectio" option. However, the "logging" option is already specified. Is there any problem mounting this file system with BOTH "logging" and "forcedirectio" at the same time? I can do it and the system boots just fine but I'm not sure if it's a good idea or not. Anybody know?

    Direct IO bypasses the page cache. Hence the name "direct".
    Thus, for large-block streaming operations that do not access the same data more than once, direct IO will improve performance while reducing memory usage - often significantly.
    IO operations that access data that could otherwise be cached can go MUCH slower with direct IO, especially small ones.
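    For reference, the two options are simply combined in the mount-options field of /etc/vfstab; a sketch with an assumed device and mount point:

    ```
    # device to mount   device to fsck       mount point  FS   fsck  at boot  options
    /dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6   /u01         ufs  2     yes      logging,forcedirectio
    ```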

  • Limitations: Mounting File systems

    Hi,
    We have a requirement to connect some 30 application systems through XI.
    Mostly they are file-to-SAP scenarios or vice versa.
    Instead of using the FTP transport protocol, we are planning to use NFS, as FTP is heavy on performance.
    Is there any limit on the maximum number of file systems that can be mounted on the XI (Unix) server?
    Also, please suggest the best practice (NFS or FTP) for these kinds of scenarios, where the volume of data and the number of interfaces are very high.
    Best Regards,
    Satish

    ..."I understand the use of /etc/fstab is now deprecated."...
    This is true - but they have been saying that since at least 10.3. Well, "/private/etc/fstab" is still working fine in 10.5.1, so I don't imagine there will be a problem continuing to use it for the time being.
    In Leopard, it looks like the information is automatically imported into "DirectoryService" under "/mounts" so it might also be possible to configure the mountpoints from there...
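    For illustration, a typical NFS entry in "/private/etc/fstab" looks like the following (server name and paths are assumptions):

    ```
    server.example.com:/export/data   /Volumes/data   nfs   rw,resvport   0 0
    ```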
