Root file system full
Hi,
Thanks to all for their comments.
I am getting frequent "root file system full" messages. I have been deleting messages and pacct files from /var, but it still shows the same message. When I restart the system, usage comes back down to 85%. What could be the reason? Why does this happen, and where are the files getting created or added?
Thank you very much in anticipation.
sreerama
Also, if you are running with crash dumps enabled, check the /var/crash/<hostname> directory (it will only exist if crash dumps are enabled) and see if there are any big files in there (vmcore is a bugger); that's usually a good place to check too.
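For what it's worth, the "space only comes back after a restart" symptom usually means some process is still holding a deleted file open, so the blocks are never freed. Where lsof is available, lsof +L1 lists such files; on Linux they can also be found straight from /proc, as in this sketch (on Solaris, pfiles or lsof play the same role):

```shell
# A deleted file that a process still holds open keeps consuming
# disk space until the descriptor is closed (or the box reboots).
# On Linux these show up in /proc as fd symlinks ending "(deleted)".
for fd in /proc/[0-9]*/fd/*; do
    tgt=$(readlink "$fd" 2>/dev/null) || continue
    case "$tgt" in
        *' (deleted)') echo "$fd -> $tgt" ;;
    esac
done
```

Restarting (or signalling) the daemon that owns the descriptor releases the space without a full reboot; syslogd is a common culprit after log cleanups.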
Similar Messages
-
/dev/root file system full
Hello.
We can't log in to the system via telnet, ftp, rlogin, or the console, because we received:
messages msgcnt 142 vxfs: mesg 001: vx_nospace - /dev/root file system full (1 block extent)
Instances of Oracle and SAP are running and we are afraid to reboot the server.
We are running HP-UX.
Is there any solution for this problem?
regards
Denis
Hey Denis,
why don't you try to extend your /dev/root file system?
If your file system is already 100% full with zero space left, try moving some files to another location where space is available, then extend the file system; that will resolve your space issue.
But one thing I can tell you: there is no harm in deleting core files from /usr/sap/<SID>/<DEVMBG00>/work.
-- Murali. -
Root File system is reporting that it's full [SOLVED]
My root file system is reporting as full, and I'd like some ideas on how to track the problem. I've tried a number of things like searching for the largest directory, searching for the largest file, and all that jazz. I'm obviously missing something. /dev/sda3 should be at 50%.
One note. The computer started what seemed like normal today. I converted my second hard drive to ext4, rebooted, and started to notice that things that needed the /tmp directory couldn't start. I made some quick space to get operational by removing 56M of stuff from pacman's cache, but that's a quick hack. I don't know if this is related or not. I am running testing.
skottish wrote:
MoonSwan wrote:
You're a dork who solved this issue and will know better next time. How is this a bad thing? I'm sure someone around here has done worse Skottish, so don't feel too stupid. (Won't name names but I'm sure as well that I've done worse somewhere...)
In the meantime, while you're down...*bonks skottish with the dork-stick*
Thanks for the kind words MoonSwan.
This happened because of the way my system is set up. I have rsync making backups of /home and /etc to /backup on close. It turns out that rsync created the /backup directory instead of using the existing one. Why? Because /dev/sdb1 wasn't mounted when I restarted after the conversion. Doh!
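A small guard would have prevented this: refuse to run the backup unless the target really is a mountpoint. A sketch (the function name, target path, and rsync options here are illustrative, not taken from the actual setup described above):

```shell
# Only sync when the target directory is a real mountpoint, so an
# unmounted /backup doesn't silently fill the root filesystem.
backup_home() {
    target=$1
    if mountpoint -q "$target"; then
        rsync -a --delete /home /etc "$target"/
    else
        echo "skipping backup: $target is not mounted" >&2
        return 1
    fi
}
# e.g. backup_home /backup
```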
no shame in that. i totally freaked out once when i was still in school because i couldn't find a paper that was due. turned out i had /home unmounted when i saved the file, but had /home mounted when i went looking for it.
it was hiding under the mounted filesystem the whole time! -
Problem in Reducing the root file system space
Hi All,
The root file system has reached 86%. We have cleared 1 GB of data in /var, but the root file system still shows 86%. Please note that /var is not a separate file system.
I have furnished the df -h output below for your reference. Please provide a solution as soon as possible.
/dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 30G 1.0M 30G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 33G 3.5G 30G 11% /tmp
swap 30G 48K 30G 1% /var/run
/dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
/dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
Regards,
R. Rajesh Kannan.
I don't know if the root partition filling up was sudden, and thus due to the killing of an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
After I installed Sol 10 11/06, my 12GB root partition was 48% full. Now, about 2 months later, after applying available patches, it is 53% full. That is about 600 MB being taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option that would not back up old patch versions, so the superseded patches are building up in /var, wasting massive amounts of space.
Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
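One way to see how much of / the superseded patches are actually costing is to size each package's patch-backout directory. A sketch; /var/sadm/pkg/<PKG>/save is where Solaris 10 keeps backout data by default, but verify the layout on your own system (SADM is overridable purely for illustration):

```shell
# Report the size of saved patch-backout data per package,
# largest last. Defaults to the standard Solaris location.
SADM=${SADM:-/var/sadm}
du -sk "$SADM"/pkg/*/save 2>/dev/null | sort -n | tail
```

If those directories dominate, installing with patchadd -d going forward keeps them from growing further.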
Thank you. -
Root ( / ) file system increasing
root@sfms2 # df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d10 30257446 28379345 1575527 95% /
/dev/dsk/c2t0d0s3 8072333 1259615 6731995 16% /usr
/proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
/dev/dsk/c2t0d0s5 8072333 1108327 6883283 14% /var
swap 10070400 104 10070296 1% /var/run
swap 10076632 6336 10070296 1% /tmp
/dev/dsk/c2t0d0s4 8072333 1300420 6691190 17% /opt
/dev/did/dsk/d9s6 482775 4815 429683 2% /global/.devices/node@2
/dev/md/sfms-dg/dsk/d102
74340345 1284885 72312057 2% /oracle
/dev/md/sfms-dg/dsk/d101
132184872 44097490 86765534 34% /sfms_data1
In my root file system, ./proc keeps increasing at intervals and my root directory is heading toward full. Can you suggest a solution to this problem?
Uh, no. /proc can't increase in your root filesystem because /proc is not part of your root filesystem. 'du' descends and crosses filesystem boundaries by default.
Run this:
du -dk / | sort -n > /tmp/root_du.sort
The bottom few lines of that file will show the largest directories in the filesystem. You may find some sort of log file or some hidden directory you were unaware of. What are they?
Darren -
File system full.. swap space limit.
When I try to install Solaris 8 x86 I receive the following error:
warning: /tmp: file system full, swap space limit exceeded
Copying mini-root to local disk. Warning: /pci@0,0/pci-ide@7,1/ide@1 (ata):
timeout: abort request, target 0 lun 0
retrying command .. done
copying platform specific files .. done
I have a 46 GB IBM DTLA45 HD; the Solaris partition was set to 12 GB, swap to 1.2 GB.
After a while I receive "warning: /tmp: file system full, swap space limit exceeded". Why?
I have already used the 110202 patch for the hard drive.
How should I solve this?
Thanks
\DJ
Hi,
Are you installing using the Installation CD?
If so, try booting and installing with the Software 1 of 2 CD.
Hope that helps.
Ralph
SUN DTS -
SOLVED: kernel loads, but doesn't have a root file system
Hi,
The system is an Asus X202E. It does UEFI and has a GPT partition system. I've gotten through that part. And it is clear to me that the kernel loads.
It's the next step that's giving me grief. I've tried this with two bootloaders: gummiboot and rEFInd.
With gummiboot, the kernel panics because it can't mount the root file system. With rEFInd, it gets to the initial ramdisk and then drops me to a shell, apparently because the root file system is set to null, and it obviously can't mount that as "real root".
Here is what I posted on the Arch mailing list, documenting that I have indeed specified the correct root (I'm copying this from the email, eliding the unfortunate line wraps):
bridge-live# cat /boot/loader/entries/arch.conf
Title Arch Linux
linux /vmlinuz-linux
initrc /initramfs-linux.img
options root=PARTUUID=d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
bridge-live# ls -l /dev/disk/by-partuuid
total 0
lrwxrwxrwx 1 root root 10 Apr 15 19:26 0ab4d458-cd09-4bfb-a447-5f5fa66332e2 -> ../../sda6
lrwxrwxrwx 1 root root 10 Apr 15 19:26 3e12caeb-1424-451c-898e-a4ff05eab48d -> ../../sda7
lrwxrwxrwx 1 root root 10 Apr 15 19:26 432a977b-f26d-4e75-b9ee-bf610ee6f4a4 -> ../../sda3
lrwxrwxrwx 1 root root 10 Apr 15 19:26 95a1d2c2-393a-4150-bbd2-d8e7179e7f8a -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 15 19:26 a4b797d9-0868-4bd1-a92d-f244639039f5 -> ../../sda4
lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
lrwxrwxrwx 1 root root 10 Apr 15 19:26 ed04135b-bd79-4c7c-b3b5-b0f9c2fe6826 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 15 19:26 f64f82a7-8f2b-4748-88b1-7b0c61e71c70 -> ../../sda5
The root partition is supposed to be /dev/sda8, that is:
lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
So the correct PARTUUID followed by the one I have specified in
arch.conf is:
d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
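Comparing two long UUIDs by eye is error-prone; a tiny helper can pull the root= value straight out of a loader entry for diffing against the /dev/disk/by-partuuid listing. A sketch (the function name is made up; the sed pattern assumes the single-line options format shown above):

```shell
# Print the root= parameter from a gummiboot/systemd-boot loader
# entry so it can be compared against /dev/disk/by-partuuid output.
print_root_param() {
    sed -n 's/^options .*root=\([^ ]*\).*/\1/p' "$1"
}
# e.g. print_root_param /boot/loader/entries/arch.conf
```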
I'm guessing that this is really the same problem with both gummiboot and with rEFInd, but don't really know. It's clear to me that the initrd is not being correctly constructed. So I removed /etc/mkinitcpio.conf and did, as per the Arch wiki,
pacman -Syyu mkinitcpio linux udev
No joy.
I don't even know which way to go at this point. If I even knew how to tell it where the real disk is in the initial ram disk shell, that would help. Better of course, would be actually solving the problem.
Thanks!
Last edited by n4rky (2013-04-17 21:41:36)
I have made extremely limited progress on this issue.
My previous attempt to specify the root partition in mkinitcpio.conf was insufficient. Furthermore, despite the documentation, this is no place for the orthodoxy about using UUIDs rather than the straight /dev/sdX. In my case I set:
root=/dev/sda8
and run
mkinitcpio -p linux
It still drops me into the shell at boot. I can do
mount /dev/sda8 /new_root/
and exit the shell. It still won't believe it has the root device and drops me back in. I just exit.
At this point, for a very brief moment, things look promising. It appears to be starting normally. Then, gdm.service, NetworkManager.service, and dbus.service all fail to start. There may be others but the screen goes by too quickly. At this point, it hangs trying to initialize the pacman keyring and all I can do is CTRL-ALT-DEL.
It occurred to me that this might extend to the rEFInd configuration and so I modified it to also use /dev/sda8 rather than the UUID, but this made no difference. Trying to boot via gummiboot still yields the previously specified kernel panic. -
Change ZFS root dataset name for root file system
Hi all
A quick one.
I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
Can I change it to another name afterward without reinstalling the OS? For example,
zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
Thank you.
Renaming the root pool is not recommended.
-
Solaris 10: unable to mount a Solaris root file system
Hi All,
I am trying to install Solaris 10 x86 on a ProLiant DL385 server; it has a Smart Array 6i. I have downloaded the driver from the HP web site; on booting the installation CD 1 and adding the device driver, it sees the device but now says it can't mount it. Any clues what I need to do?
Screen Output:
Unable to mount a Solaris root file system from the device
DISK: Target 0, Bios primary drive - device 0x80
on Smart Array 6i Controller on Board PCI bus 2, at Dev 4
Error message from mount::
/pci@0,0/pci1022,7450@7/pcie11,4091@4/cmdk@0,0:a: can't open - no vtoc
Any assistance would be appreciated.
Hi,
I read Message 591 (Aug 2003) and the problem is quite the same. A brief description: I have an ASUS laptop with HDD1 (60 GB) and a USB storage HDD, below called HDD2 (100 GB). I installed Solaris 10 x86 on HDD2 (partition c2t0d0s0). At the end of the installation I removed the DVD and, using the BIOS features, switched the boot to HDD2. All OK; I got the Sun blue screen and chose the active Solaris option, but at the beginning of the boot I received the following error message:
Screen Output:
Unable to mount a Solaris root file system from the device
DISK: Target 0: IC25N060 ATMR04-0 on Board ....
Error message from mount::
/pci@0,0/pci-ide2,5/ide@1/cmdk@0,0:a: can't open
Any assistance would be appreciated.
Regards -
Zerofree: Shrinking ARCH guest VMDK--'remount the root file-system'?
Hi!
[using ZEROFREE]
Getting great results with an extra Arch install running as a VMDK in Workstation.
I REALLY need tips on shrinking the VMDK. I have obviously deleted unneeded files, and now rather urgently need to learn what's eluding me so far.
1) zerofree is installed IN the virtual machine (VMDK); Workstation is running on Windows 8.
2) Here are the instructions for zerofree:
The filesystem has to be unmounted or mounted read-only for zerofree to work. It will exit with an error message if the filesystem is mounted writable. To remount the root file-system read-only, you can first switch to the single-user runlevel (telinit 1), then use mount -o remount,ro <filesystem>.
As it's a VMDK and it's running, would the only/best option be to remount the root file-system read-only?
Or could I attach the VMDK to another running Arch system that I have and NOT mount it, thereby allowing zerofree to run even better on that?
Are both methods just as effective at shrinking? My guess is that the remount-read-only approach would NOT be as efficient at shrinking.
I could really use a brief walk-through on this as all attempts have failed so far.
I boot the Arch virtual machine and then do what, may I ask?
Last edited by tweed (2012-06-05 07:43:41)
How did you use/test unison? In my case unison is, of course, used in the cpio image, where there are no cache files, because unison has not yet been run in the initcpio image before boot time, when it would have had a chance to generate them; it is used during start-up, and that is when it creates the archives... a circular dependency. Yet files changed by the user would still need to be traversed to detect changes. So I think that even providing pre-made cache files would not guarantee that they would be valid at start-up for all configurations of installation.
I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM) after they have been created, and used next time by copying them into the initcpio image during each start-up. I think $HOME would need to be set.
Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any remaining cache files.
Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10). I have not measured the time for its subsequent runs, but it seemed faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U at 1.9 GHz (2 cores/4 threads) and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 at 2 GHz (2 cores/2 threads) and 4 GB of RAM.
I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
GRUB initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
Last edited by AGT (2014-05-20 16:49:35) -
Unbootable Solaris 10 x86 installed on ZFS root file system
Hi all,
I have an unbootable Solaris 10 x86 system installed on a ZFS root file system, on an IDE HDD.
The BIOS keeps showing the message:
DISK BOOT FAILURE , PLEASE INSERT SYSTEM BOOT DISK
Please note:
1- the HDD is connected properly and is recognized by the system
2- GRUB doesn't show any messages
Is there any guide to recovering the system, or a detailed procedure to boot it again?
Thanks.
It's not clear if this is a recently installed system that is refusing to boot OR if the system was working fine and crashed.
If it's the former, I would suggest you check the BIOS settings to make sure it's booting from the right hard disk. In any case, the Solaris 10 installation should have written the GRUB stage1 and stage2 blocks to the beginning of the disk.
If the system crashed and is refusing to boot, you can try to boot from a Solaris 10 installation DVD. Choose the single user shell option and see if it can find your system. You should be able to use format/devfsadm/etc to do the actual troubleshooting. If your disk is still responding, try a `zpool import` to see if there is any data that ZFS can recognize (it usually has many backup uberblocks and disk labels scattered around the disk). -
How to add more disk space into / root file system
Hi All,
Linux 2.6.18-128
Can anyone please let us know how to add more disk space to the "/" root file system?
I have added a new hard disk with 20 GB of space.
[root@rac2 shm]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda1 965M 767M 149M 84% /
/dev/hda7 1.9G 234M 1.6G 13% /var
/dev/hda6 2.9G 69M 2.7G 3% /tmp
/dev/hda3 7.6G 4.2G 3.0G 59% /usr
/dev/hda2 18G 12G 4.8G 71% /u01
LABLE=/ 2.0G 0 2.0G 0% /dev/shm
/dev/hdb2 8.9G 149M 8.3G 2% /vm
[root@rac2 shm]#
Dude! wrote:
I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks.
Simple stats. Buying 2 lottery tickets instead of one gives you 2 chances to win the lottery prize, not 1, even though the odds of winning per ticket remain unchanged.
2 disks buy you 2 tickets in the Drive-Failure lottery.
Back in the 90's, BT (British Telecom) had an 80+ node OPS cluster built with Pyramid MPP hardware. They had a dedicated store of SCSI disks for replacing failed ones, as there were disk failures fairly often due to the sheer number of disks. (A Pyramid MPP chassis looked like a Xmas tree with all the SCSI drive LEDs, and BT had several.)
In my experience one should rather expect a drive failure sooner than later, and have some kind of contingency plan in place to recover from the failure.
The use of symbolic links instead of striping the filesystem protects you from losing the whole enchilada if a volume member fails, but it does not reduce the risk of losing data.
I would rather buy a single ticket in the drive-failure lottery for a root drive than 2 tickets in this case. And using symbolic links to "offload" non-critical files to the 2nd drive means that the 2nd drive's lottery "prize" is not a non-bootable server due to a toasted root drive. -
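For reference, the symlink offload mentioned above can be sketched as a small helper (all paths here are illustrative; run it only while nothing is writing to the directory being moved):

```shell
# Relocate a large, non-critical directory onto another filesystem
# and leave a symlink at the original path.
offload() {
    src=$1 dest=$2
    mkdir -p "$dest" || return 1
    cp -a "$src" "$dest/" || return 1        # copy the tree over
    rm -rf "$src"                            # drop the original
    ln -s "$dest/$(basename "$src")" "$src"  # point old path at new home
}
# e.g. offload /opt/bigdata /vm/offload
```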
Programmatic interface to get zone's root file system
Hi,
I am a newcomer to Solaris zones. Is there any programmatic (C API) way to know the path to the root file system of a zone, given its name, from the global zone?
Thanks!
A truss of zoneadm list -cv shows a bunch of zone-related calls like:
zone_lookup()
zone_list()
zone_getattr()
Using the truss output as an example, and including /usr/include/sys/zones.h and linking to libzonecfg
(and maybe libzoneinfo), seems like a fairly straightforward path to getting the info you are looking for.
You could also parse /etc/zones/index
which is (on my s10_63 machine) a colon-separated flat file containing [zone:install state:root path] that looks like:
global:installed:/
demo1:installed:/zones/demo1
demo2:installed:/zones/demo2
demo3:installed:/zones/demo3
foo:installed:/zones/foo
ldap1:installed:/zones/ldap1
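Given that format, a zone's root path can be pulled out with a one-liner. A sketch (zone_root is a made-up helper; the second argument exists only so the index path can be overridden):

```shell
# Print the root path of a zone by parsing the colon-separated
# /etc/zones/index (fields: zone name, install state, root path).
zone_root() {
    awk -F: -v z="$1" '$1 == z { print $3 }' "${2:-/etc/zones/index}"
}
# e.g. zone_root demo1
```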
Neither of these methods is documented, so they are certainly subject to change or removal.
Good luck!
-William Hathaway -
Sol10 u8 installed on a ZFS Root File System have different swap needs?
Does Sol10 u8 installed on a ZFS Root File System have different swap needs/processes?
Information:
I've installed Solaris 10 (10/09 s10s_u8wos_08a SPARC, Assembled 16 September 2009) on a half dozen servers, and every one of them no longer mounts swap at boot.
The install program commented out the old swap entry and created this one:
# grep swap /etc/vfstab
swap - /tmp tmpfs - yes -
Everything works like a champ. I didn't discover the issue until I tried to install some patches and the install failed. It didn't fail because of lack of swap - it refused to run because it found "No swap devices configured".
Here are the symptoms:
# swap -s
total: 183216k bytes allocated + 23832k reserved = 207048k used, 13600032k available
# swap -l
No swap devices configured
# mount | grep swap
/etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=5ac0001 on Mon Apr 19 08:06:45 2010
/tmp on swap read/write/setuid/devices/xattr/dev=5ac0002 on Mon Apr 19 08:07:40 2010
/var/run on swap read/write/setuid/devices/xattr/dev=5ac0003 on Mon Apr 19 08:07:40 2010
#
Hi Nitabills,
I assume that you created a zfs entry for swap with the command zfs create -V $size.
Did you launch the command:
swap -a /dev/zvol/dsk/$ZPOOL/swap
Try this entry below in the vfstab:
/dev/zvol/dsk/$ZPOOL/swap - - swap - no - -
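For reference, /etc/vfstab entries have seven whitespace-separated fields; an annotated sketch of such a swap entry (rpool is an assumed pool name, substitute your own):

```
#device                   device   mount  FS    fsck  mount    mount
#to mount                 to fsck  point  type  pass  at boot  options
/dev/zvol/dsk/rpool/swap  -        -      swap  -     no       -
```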
Archive Repository - Content Server or Root File System?
Hi All,
We are in the process of evaluating a storage solution for archiving and I would like to hear your experiences and recommendations. I've ruled out 3rd-party solutions such as IXOS as overkill for our requirement. That leaves us with the i5/OS root file system or the SAP Content Server in either a Linux partition or on a Windows server. Has anyone done archiving with a similar setup? What issues did you face? I don't plan to replicate archive objects via MIMIX.
Is anyone running the SAP Content Server in a Linux partition? I'd like to know your experience with this even if you don't use the Content Server for archiving. We use the Content Server (currently on Windows) for attaching files to SAP documents (e.g., Sales Documents) via Generic Object Services (GOS). While I lean towards running separate instances of the Content Server for Archiving and GOS, I would like to run them both in the same Linux LPAR.
TIA,
Stan
Hi Stanley,
If you choose to store your data archive files at the file system level, is that a secure enough environment? A third party certified storage solution provides a secure system where the archive files cannot be altered and also provides a way to manage the files over the years until they have met their retention limit.
Another thing to consider: even if the end users no longer need access to the archived data, your company might need to be able to access it easily due to an audit or lawsuit.
I am a SAP customer whose job function is the technical lead for my company's SAP data archiving projects, not a 3rd-party storage solution provider, and I highly recommend a certified storage solution for compliance reasons.
Also, here is some information from the SAP Data Archiving web pages concerning using SAP Content Server for data archive files:
10. Is the SAP Content Server suitable for data archiving?
Up to and including SAP Content Server 6.20 the SAP CS is not designed to handle large files, which are common in data archiving. The new SAP CS 6.30 is designed to also handle large files and can therefore technically be used to store archive files. SAP CS does not support optical media. It is especially important to regularly run backups on the existing data!
Recommendation for using SAP CS for data archiving:
Store the files on SAP CS in a decompressed format (make settings at the repository)
Install SAP CS and SAP DB on one server
Use SAP CS for Unix (runtime tests to see how SAP CS for Windows behaves with large files still have to be carried out)
Best Regards,
Karin Tillotson