Mounting a UFS file system using NFS on Solaris 10
Quick question: If I have a Solaris 10 server running ZFS can I just mount and read a UFS partition without any issues?
NFS is filesystem agnostic.
You're not mounting a UFS filesystem; you're mounting an NFS filesystem.
So the answer is yes.
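The server does the UFS work; the client just speaks NFS. A minimal sketch (hypothetical host and path names), using Solaris 10 syntax:

```shell
# On the server that owns the UFS slice: export it over NFS
share -F nfs -o ro /export/ufsdata

# On the ZFS-based Solaris 10 client: mount it like any NFS share
mkdir -p /mnt/ufsdata
mount -F nfs server1:/export/ufsdata /mnt/ufsdata
```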
Similar Messages
-
I'm doing some performance tuning on a database server. In mounting a particular UFS file system, I need to enable the "forcedirectio" option. However, the "logging" option is already specified. Is there any problem mounting this file system with BOTH "logging" and "forcedirectio" at the same time? I can do it and the system boots just fine but I'm not sure if it's a good idea or not. Anybody know?
Direct IO bypasses the page cache. Hence the name "direct".
Thus, for large-block streaming operations that do not access the same data more than once, direct IO will improve performance while reducing memory usage - often significantly.
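Combining the two options is allowed: "logging" affects metadata journaling while "forcedirectio" affects the data path, so they do not conflict. A hedged sketch, with a hypothetical database slice:

```shell
# One-off mount with both options:
mount -F ufs -o logging,forcedirectio /dev/dsk/c0t1d0s6 /u01

# Persistent form in /etc/vfstab (last field is mount options):
# /dev/dsk/c0t1d0s6  /dev/rdsk/c0t1d0s6  /u01  ufs  2  yes  logging,forcedirectio
```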
IO operations that access data that could otherwise be cached can go MUCH slower with direct IO, especially small ones. -
Mounting the Root File System into RAM
Hi,
I had been wondering, recently, how can one copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
I have now succeeded, at least it seems, in performing such a task. There are still some issues.
For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
A custom kernel hook is used to (overall):
Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
Copy the non-volatile content into the ramdisk.
Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
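The four steps above can be sketched roughly as follows (device names, the tmpfs size, and the exact bind-mount paths are placeholders; the real hook resolves them from the kernel command line):

```shell
# 1. Mount the non-volatile root inside the initramfs
mount -t auto /dev/sda2 /root_source

# 2. Mount the volatile RAM disk
mount -t tmpfs -o size=50% ram_root /root_ram

# 3. Copy the on-disk root into the RAM disk
cp -a /root_source/. /root_ram/

# 4. Bind both mountpoints into the new root so they remain
#    reachable after the root switch, for syncing back at shutdown
mkdir -p /root_ram/rootfs/rootfs_source /root_ram/rootfs/rootfs_ram
mount --bind /root_source /root_ram/rootfs/rootfs_source
mount --bind /root_ram /root_ram/rootfs/rootfs_ram
```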
A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
To integrate this hook into an initramfs, a preset is needed.
I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
When shutting down, the ramdisk contents are synchronized back to the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
I chose to use unison to synchronize between the volatile and the non-volatile mediums. While synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), I recommend not executing any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
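The unison invocation in the sync script can be sketched like this (paths hypothetical; -batch and -auto suppress prompting, and the complaints mentioned above surface as a non-zero exit status):

```shell
# Propagate changes from the RAM root back to the on-disk root
unison /rootfs/rootfs_ram /rootfs/rootfs_source \
    -batch -auto -fastcheck default

# unison exits non-zero if some paths could not be synchronized,
# even though the rest were; check and log rather than abort
if [ $? -ne 0 ]; then
    echo "unison reported skipped or failed paths" >&2
fi
```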
Some prerequisites exist (by default):
Packages: unison(, cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g or the nfs-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
Referencing paths:
The variables:
source=
temporary=
...should have the same value in all of these files:
"/etc/initcpio/hooks/ram"
"/root/Documents/rootfs/unmount-root-fs.sh"
"/root/.rsync/exclude.txt" -- Should correspond.
This is needed to sync the RAM disk back to the hard disk.
From what I have noticed, it seems to be required that the old root and the new root mountpoints reside directly at the root (/) of the initramfs; for example, "/new_root" and "/old_root".
Here are all the accepted and used parameters:
Parameter | Allowed Values | Default Value | Considered Values | Description
root | Default (UUID=+,/dev/disk/by-*/*) | None | Any string | The source root
rootfstype | Default of "-t <types>" of "mount" | "auto" | Any string | The FS type of the source root.
rootflags | Default of "-o <options>" of "mount" | None | Any string | Options when mounting the source root.
ram | Any string | None | "1" | If this hook should be run.
ramfstype | Default of "-t <types>" of "mount" | "auto" | Any string | The FS type of the RAM disk.
ramflags | Default of "-o <options>" of "mount" | "size=50%" | Any string | Options when mounting the RAM disk.
ramcleanup | Any string | None | "0" | If any leftovers should be cleaned.
ramcleanup_source | Any string | None | "1" | If the source root should be unmounted.
ram_transfer_tool | cp,find,cpio,rsync,unison | unison | cp,find,cpio,rsync | What tool to use to transfer the root into RAM.
ram_unison_fastcheck | true,false,default,yes,no,auto | "default" | true,false,default,yes,no,auto | Argument to unison's "fastcheck" parameter. Relevant if ram_transfer_tool=unison.
ramdisk_cache_use | 0,1 | None | 0 | If unison should use any available cache. Relevant if ram_transfer_tool=unison.
ramdisk_cache_update | 0,1 | None | 0 | If unison should copy the cache to the RAM disk. Relevant if ram_transfer_tool=unison.
This is the basic setup.
Optionally:
I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- does Arch forums allow attachments?)
I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
Of course, use what you need from the provided files.
Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
find+cpio: 2:10s (2:12s on slower hardware)
unison: 3:10s - 4:00s
cp: 4 minutes (31 minutes on slower hardware)
rsync: 4:40s (55 minutes on slower hardware)
Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
These are the remaining issues:
find+cpio option does not create any destination files.
(On some older hardware) When booting up, the source disk is not always detected.
When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
Inconveniences:
Unison needs to perform an update detection at each startup.
The initramfs's ash does not expand wildcard characters for use with "cp".
That's about what I can think of for now.
I will gladly try to answer any questions.
I don't consider myself a UNIX expert, so I would like to hear your suggestions for improvement, especially from those who consider themselves experts.
Last edited by AGT (2014-05-20 23:21:45)
How did you use/test unison? In my case, unison is, of course, used in the cpio image, where there are no cache files, because unison has not yet been run in the initcpio image before being used at boot time; startup is when it is used, and when it creates the archives... a circular dependency. Yet files changed by the user would still need to be traversed to detect changes, so I think that even providing pre-made cache files would not guarantee that they would be valid at startup for all installation configurations. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM) after they have been created, and copied into the initcpio image at each startup to be used next time. I think $HOME would need to be set.
Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed to be faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U CPU at 1.9 GHz with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 CPU at 2 GHz with 2 cores/2 threads and 4 GB of RAM.
I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
Last edited by AGT (2014-05-20 16:49:35) -
More than 1 million files on multi-terabyte UFS file systems
How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.
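For context: the multi-terabyte UFS format (required beyond 1 TB) fixes inode density at roughly one inode per megabyte, which is where the ~1 million files per terabyte ceiling comes from. A hedged sketch for checking and creating such a file system (device and mountpoint are placeholders):

```shell
# Report used and free inodes (files) on an existing UFS file system
df -F ufs -o i /export

# Create a multi-terabyte UFS file system; -T selects the
# multi-terabyte format with its fixed inode density
newfs -T /dev/rdsk/c2t0d0s0
```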
Thanks. You are right on. According to Sun official channels:
Paula Van Wie wrote:
Hi Ron,
This is what I've found out.
No, there is no way around the limitation. I would suggest an alternate
file system if possible; ZFS would give the most space
available, as inodes are no longer used.
Like the customer noted, if the inode values were increased significantly
and an fsck were required, there is the possibility that the fsck could
take days or weeks to complete. So, in order to avoid angry customers
having to wait a day or two for fsck to finish, the limit was imposed.
And so far I've heard that there should not be corruption using zfs and
raid.
Paula -
Upload file in to file system using Apex
Hi...
I have tried UTL_FILE to upload a file into a directory. But first I have to upload the file into a table as a BLOB and then copy it to the file system using BFILE.
I have to store files in file system as per requirements.
I want to upload the file directly to the file system. Please suggest the right way...
Thank You in advance........
Edited by: user639262 on Aug 28, 2008 2:11 PM
Apex only supports upload into a table.
If you want to upload directly to the file system you should look into the possibilities of your application server. Maybe you could find a piece of code in Java or PHP that performs an upload.
good luck, DickDral -
After deleting all the partitions on my Intel Pentium III computer,
I booted from the first (1/2) CD to start the Solaris 8 installation. But
I got the following message:
not UFS file system
Then the computer halted.
Please help me to overcome the problem
Thank you!
Michael
Ever figure this out? I just downloaded the CDs and got the same thing; I even created a 2G DOS partition, with the same results!
-
Trying to view iPad file system using a computer running Windows 7?
I'm trying to view my iPad file system using my computer, but I can only see the DCIM folder. Why is that?
iDevices do not have an accessible file management system for security reasons.
-
File system used space monitoring rule
All, I'm trying to change the way OC monitors disk space usage. I don't want it to report on nfs filesystems, except from the server from which they are shared.
The monitored attribute is FileSystemUsages.name=*.usedSpacePercentage.
I'd like it to only report on ufs and zfs filesystems. I've tried to create new rules using the attributes:
FileSystemUsages.type=ufs.usedSpacePercentage
FileSystemUsages.type=UFS.usedSpacePercentage
FileSystemUsages.type=zfs.usedSpacePercentage
FileSystemUsages.type=ZFS.usedSpacePercentage
But I don't get any alerts generated on systems which I know violate the thresholds I've specified.
Has anybody successfully set up rules like these? Am I on the right track? Do ufs/UFS/zfs/ZFS need to be single or double quoted? Documentation with various examples is non-existent as far as I can tell.
Any help is greatly appreciated
Tim
Did you get any answers for this question? It seems like OEM12c has file system space usage monitoring set up for NFS-mounted file systems; however, I could not find a place to specify the threshold for those NFS-mounted file systems other than the root file system. Does anybody know how to set up thresholds for NFS-mounted file systems (NetApp storage)? Thank you very much.
-
Mounting FAT32 windows file system
Hello
I'm new to solaris.
I am using Solaris 10 on an x86 machine with two hard disks. I installed Solaris on one hard disk; my Windows FAT32 file system is installed on the other. Now I want to mount that FAT32 file system in Solaris. Can anyone help me use the Windows file system in Solaris?
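For the FAT32 part, Solaris x86 mounts DOS/Windows partitions through the pcfs driver; ":c" selects the first DOS partition on the disk. A sketch with a placeholder device name for the second disk:

```shell
# Mount the FAT32 partition from the second disk
mkdir -p /mnt/windows
mount -F pcfs /dev/dsk/c1d0p0:c /mnt/windows

# Persistent form in /etc/vfstab:
# /dev/dsk/c1d0p0:c  -  /mnt/windows  pcfs  -  yes  -
```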
I would also like to know how to configure internet settings in Solaris, i.e. where I can enter the IP address and access point name, because I access the internet through my PDA, which is connected to my PC. Please help me...
Thanks in Advance.
Prakash.M
PPP and PPPoE configuration are described in the [Network Administration Guide|http://docs.sun.com/app/docs/doc/816-4555/modemtm-1] .
A WWAN device should appear as a serial device in the */dev/term* directory. The [Wireless Wide Area Network|http://www.opensolaris.org/jive/forum.jspa?forumID=134] discussion group is probably the best place to get specific questions answered regarding your setup. -
My Backup.dmg is not mounting: No mountable file systems
I had Mac OS X Leopard running with Boot Camp (Windows Vista). I needed extra space for my Mac OS so I deleted the Windows partition using the Boot Camp Assistant. Then, I shut down the computer, but after I turned it on, there was the flashing question mark, which meant that the computer couldn't recognize the Macintosh operating system that was installed. Therefore, I put in the Mac OS Install Disk, and tried to repair the disk but it said invalid BS_jmpBoot in block 000000. Seeing this, I decided to reinstall Mac OS X, but backing up with Disk Utility: Create New Compressed .DMG Image before doing so. After a successful installation, I tried to mount the image, but it says No Mountable File System. I feel hopeless, because all of my precious data was backed up in that disk image. When I try to verify or repair that disk using Disk Utility, it says Unrecognized File System. Are all the files on that image corrupt and impossible to recover, or is it possible to recover those files (but of course, losing some files, since the image appears to be corrupt).
Again, if you cannot mount the image then there's no way to recover the files. If the image would mount then you could try repairing it with Disk Utility or try using recovery software to recover files from it. But if it doesn't mount then there's nothing you can do with it.
If you want to try you can see if recovery software can find files on it. See the following:
Basics of File Recovery
Files in Trash
If you simply put files in the Trash you can restore them by opening the Trash (left-click on the Trash icon) and drag the files from the Trash to your Desktop or other desired location. OS X also provides a short-cut to undo the last item moved to the Trash -press COMMAND-Z.
If you empty the Trash the files are gone. If a program does an immediate delete rather than moving files to the Trash, then the files are gone. Recovery is possible but you must not allow any additional writes to the hard drive - shut it down. When files are deleted, only the directory entries, not the files themselves, are modified. The space occupied by the files has been returned to the system as available for storage, but the files are still on the drive. Writing to the drive will then eventually overwrite the space once occupied by the deleted files, in which case the files are lost permanently. Also, if you save a file over an existing file of the same name, the old file is overwritten and cannot be recovered.
General File Recovery
If you stop using the drive it's possible to recover deleted files that have not been overwritten with recovery software such as Data Rescue II, File Salvage or TechTool Pro. Each of the preceding come on bootable CDs to enable usage without risk of writing more data to the hard drive.
The longer the hard drive remains in use and data are written to it, the greater the risk your deleted files will be overwritten.
Also visit The XLab FAQs and read the FAQ on Data Recovery.
You can download trial versions of the software. The trials can be used to see if the software can recover anything, but then you must purchase the software to actually do any file recovery. -
File to File scenario using NFS.
Hi All,
I am learning XI and got stuck on a relatively simple issue.
I am creating a scenario wherein a file from a directory on local machine say (D:/xi_input) is transferred to a directory on local machine only (say D:/xi_output).
I guess this scenario is feasible using NFS, but I am getting an error in the communication channel -
"Configured source directory 'D:/xi_input' does not exist"
After reading some documents and threads on SDN, I understood that we have to replace '\' with '/'. But even with this, the same error occurs. Can you please offer an insight into where the problem could be?
Thanks in advance,
Shreya
Hi Shreya,
PI will not pick up the file directly stored on your local machine.
You have to put this file on the PI server directory.
For this, you can use the transaction sxda_tools and put the file in the required directory of PI server,
and then start the communication channel.
-Supriya. -
How to mount a CIFS file system in fstab?[SOLVED]
I want a CIFS share to be auto mounted on linux startup.
If I add a line to /etc/rc.local, it works well:
mount -t cifs //192.168.0.10/alien /home/alien -o user=alien,password=alien,uid=1000,gid=1000
But, if I append a line to /etc/fstab, it doesn't work:
//192.168.0.10/alien /home/alien cifs _netdev,users,user=alien,password=alien,uid=1000,gid=1000 0 0
I think maybe fstab is processed before the network is available. But I am not sure, since I notice some other Linux systems have NFS entries in fstab that work well.
Any ideas?
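A likely cause is ordering: network mounts in fstab need something to mount them after the network is up (the netfs daemon on initscripts-era Arch, or systemd's automount units on newer systems). A hedged fstab sketch for the systemd case, also fixing the spelling of the password option:

```shell
# /etc/fstab -- mounted on first access, after the network is up
//192.168.0.10/alien  /home/alien  cifs  _netdev,x-systemd.automount,user=alien,password=alien,uid=1000,gid=1000  0  0
```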
Last edited by vistastar (2011-12-23 11:00:29)
Please mark solved thread as [SOLVED]. Thanks.
https://wiki.archlinux.org/index.php/Fo … ow_to_Post -
Ocfs2 can not mount the ocfs2 file system on RedHat AS v4 Update 1
Hi there,
I installed ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1.i686.rpm onto RedHat Linux AS v4 update 1. The installation looks OK. I then configured ocfs2 (at this stage I only added 1 node to the cluster), and loaded and started it accordingly. Then I partitioned the disk and ran mkfs.ocfs2 on the partition. Everything seems OK.
[root@node1 init.d]# ./o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking cluster ocfs2: Online
Checking heartbeat: Not active
But here you can check if the partition is there:
[root@node1 init.d]# fsck.ocfs2 /dev/hda12
Checking OCFS2 filesystem in /dev/hda12:
label: oracle
uuid: 27 74 a6 70 32 ad 4f 77 bf 55 8e 3a 87 78 ea cb
number of blocks: 612464
bytes per block: 4096
number of clusters: 76558
bytes per cluster: 32768
max slots: 2
/dev/hda12 is clean. It will be checked after 20 additional mounts.
However, mount -t ocfs2 /dev/hda12 just does not work.
[root@node1 oracle]# mount -t ocfs2 /dev/hda12 /oradata/m10g
mount.ocfs2: No such device while mounting /dev/hda12 on /oradata/m10g
[root@node1 oracle]# mount -L oracle
mount: no such partition found
Looks like mount just can not see the ocfs2 partition somehow.
I cannot find much info in metalink and anywhere else, does anyone here come across this issue before?
Regards,
Eric
I have been having a similar problem.
However, when I applied your fix I ended up with another problem:
(20765,0):ocfs2_initialize_osb:1179 max_slots for this device: 4
(20765,0):ocfs2_fill_local_node_info:851 I am node 0
(20765,0):dlm_request_join:756 ERROR: status = -107
(20765,0):dlm_try_to_join_domain:906 ERROR: status = -107
(20765,0):dlm_join_domain:1151 ERROR: status = -107
(20765,0):dlm_register_domain:1330 ERROR: status = -107
(20765,0):ocfs2_dlm_init:1771 ERROR: status = -12
(20765,0):ocfs2_mount_volume:912 ERROR: status = -12
ocfs2: Unmounting device (253,7) on (node 0)
Now the odd thing about this bit of log output (/var/log/messages)
is the fact that this is only a 2 node cluster and only one node has
currently mounted the file system in question. Now, I am running
the multipath drivers with my qla2xxx drivers under SLES9-R2.
However, at worst that should only double everything
(2 nodes x 2 paths through the SAN).
How can I get more low level information on what is consuming
the node slots in ocfs2? How can I force it to "disconnect" nodes
and recover/cleanup node slots? -
Years of data gone, backup DMG won't mount "No mountable file systems"
Could someone help me here, I have this problem.
The reason for this was that I ran into some complications with my boot camp windows install and wanted to start over. After returning the boot camp FAT partition space to Mac OS X it froze while attempting to create a new boot camp partition. "Boot Camp Assistant" suggested I backup my "Macintosh HD" and re-partition my HD because "some files could not be moved." Ok so, I backed-up my "Macintosh HD" volume to a DMG using Disk Utility selecting the volume and clicking new image icon, it is compressed, it is Leopard. I then booted with Leopard install DVD and began the restore process. Everything was working smoothly until the progress bar stopped animating, waited awhile and forced restart. When I restarted my internal HD had only an "Applications" folder which ended its contents with a half copied iDVD.app preceded by 73 other apps successfully restored. Now worst of all my Backup DMG produces the "No mountable file systems" error. Help, oh please help!
hdiutil imageinfo produces this:
Format: UDRW
Backing Store Information:
Name: Macintosh HD.dmg
URL: file://localhost/Volumes/Macintosh%20HD/Macintosh%20HD.dmg
Class Name: CBSDBackingStore
Format Description: raw read/write
Checksum Type: none
partitions:
appendable: false
partition-scheme: none
block-size: 512
burnable: false
partitions:
0:
partition-length: 92073985
partition-synthesized: true
partition-hint: unknown partition
partition-name: whole disk
partition-start: 0
Properties:
Partitioned: false
Software License Agreement: false
Compressed: no
Kernel Compatible: true
Encrypted: false
Checksummed: false
Checksum Value:
Size Information:
Total Bytes: 47141880320
Compressed Bytes: 47141880320
Total Non-Empty Bytes: 47141880320
Sector Count: 92073985
Total Empty Bytes: 0
Compressed Ratio: 1
Class Name: CRawDiskImage
Segments:
0: /Volumes/Macintosh HD/Macintosh HD.dmg
Resize limits (per hdiutil resize -limits):
92073985 92073985 92073985
Message was edited by: Marcus S
Ok, here is the output. It shows there is a problem with the resource fork XML, just like yours, Shad Guy. If only there were some specs available, I could try to fix it myself. An Apple engineer could most likely fix this in 20 minutes.
Also, thanks for your help man, having people tell me to give up is the worst.
hdiutil udifxmldet
"Macintosh HD.dmg" has 1365301792551680312 bytes of embedded XML data.
hdiutil: udifxmldet: unable to read XML data at offset 2841561349885781055 from "Macintosh HD.dmg": 29 (Illegal seek).
hdiutil: udifxmldet failed - Illegal seek
hdiutil udifderez
hdiutil: udifderez: could not get resource fork of "Macintosh HD.dmg": Function not implemented (78)
hdiutil: udifderez failed - Function not implemented
------------------------ -
Want to access network file system using JFileChooser
Hi all,
I want to access and display the drives of a network file system in the JFileChooser dialog. For example by specifying a machine name, I want to access the drives(C, E, F....) of that machine. Can anyone please guide me on how to go forward with this.
Thanks in advance.
Any links or guidance provided would be helpful.