About tmpfs

hello,
I have SRSS 3.1 on Solaris 10 on an E280R with 2 GB of RAM.
I limit tmpfs with this entry in /etc/vfstab:
swap - /tmp tmpfs - yes size=512m
but sometimes I get messages like:
May 4 17:29:24 mombasa tmpfs: [ID 518458 kern.warning] WARNING: /tmp: File system full, swap space limit exceeded
When I ran lsof on /tmp, I noticed some files and entries like:
utdsd 771 root 5u FIFO 289,2 0t0 2268614 /tmp/utdsd.mbx
Xsun 2339 root 11r VREG 289,2 198 11047699 /tmp (swap)
thunderbi 29727 userA 7u VREG 289,2 361384 7949708 /tmp (swap)
What is the impact of the tmpfs limit on these processes?
Is it good practice to limit tmpfs on a Sun Ray server, or is it better to have no limit?
thanks in advance for help,
gerard

These are normal on a default installation and will be there from boot. If they don't need to do much, they will just stay empty and won't use many resources.
wwn wrote:tmpfs 3,9G 312K 3,9G 1% /dev/shm
This is a temp storage file system
wwn wrote:tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
https://wiki.archlinux.org/index.php/Li … filesystem
https://wiki.archlinux.org/index.php/cgroups
Last edited by Kartious (2014-03-07 08:29:26)
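Those tmpfs rows come straight from the kernel's mount table, so they can be listed without df. A minimal sketch (the mounts file is a parameter so the function can be exercised against a copy; on a live system pass /proc/mounts or nothing):

```shell
#!/bin/sh
# Print the mount point of every tmpfs in a mounts table.
# /proc/mounts fields: device mountpoint fstype options dump pass
list_tmpfs() {
    awk '$3 == "tmpfs" { print $2 }' "${1:-/proc/mounts}"
}

list_tmpfs
```

On a typical systemd machine this prints /dev/shm, /run and friends, matching the df output quoted above.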

Similar Messages

  • Special device does not exist, FSTAB Issue

    I set up this arch64 install last night, and till now I've managed all right except for one niggling issue. For some odd reason, when I installed everything I forgot to add my "sandbox" partition to fstab. I thought I could just generate a UUID and add a line to fstab later, and I did just that. However, as it's not part of my LVM, I'm wondering if that's why I cannot mount my sandbox partition from fstab. Ideas appreciated.
    My fdisk -l output:
    [root@acer ~]# fdisk -l
    Disk /dev/sda: 120.0 GB, 120034123776 bytes
    255 heads, 63 sectors/track, 14593 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x379c7acb
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6428 51528487+ 5 Extended
    /dev/sda3 6429 14593 65585362+ 8e Linux LVM
    /dev/sda5 14 6428 51528456 83 Linux
    Disk /dev/mmcblk0: 7969 MB, 7969177600 bytes
    221 heads, 20 sectors/track, 3521 cylinders
    Units = cylinders of 4420 * 512 = 2263040 bytes
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/mmcblk0p1 2 3522 7778304 b W95 FAT32
    Fstab as it stands now -
    /etc/fstab: static file system information
    (cut out irrelevant lines about tmpfs & commented out optical devices)
    /dev/mapper/vg0-lv_home /home ext3 defaults,noatime 0 1
    /dev/mapper/vg0-lv_root / ext3 defaults,noatime 0 1
    /dev/mapper/vg0-lv_swap swap swap defaults,noatime 0 0
    /dev/mapper/vg0-lv_var /var ext3 defaults,noatime 0 1
    UUID=a7d625c6-0fb9-41f5-bdba-6d306c90739a /boot ext2 defaults 0 1
    UUID=a802d8f4-f70e-4ed5-ab72-65bb0ebdca9b /media/sandbox ext3 defaults,users,noatime 1 2
    ## Memory card at /dev/mmcblk0p1 /media/memorycard
    UUID=238db34b-8bf3-4510-b8ee-1aa46e04f17d /media/memorycard vfat defaults,users,noauto,noatime 0 0
    The error I get when I issue a "mount -a" as root or just let the computer boot up with the "sandbox" in fstab -
    mount: special device UUID=a802d8f4-f70e-4ed5-ab72-65bb0ebdca9b does not exist
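One quick way to chase this kind of error is to compare what fstab expects against the /dev/disk/by-uuid links udev actually created; if a UUID has no link, the kernel never saw that filesystem (stale UUID, re-run mkfs, partition table not reread). A hedged sketch — the paths are parameters so it can be tried on copies; on the real system you would pass /etc/fstab and /dev/disk/by-uuid:

```shell
#!/bin/sh
# Report UUID= entries in an fstab that have no matching by-uuid link.
check_fstab_uuids() {
    fstab=$1
    byuuid=$2
    awk '$1 ~ /^UUID=/ { sub(/^UUID=/, "", $1); print $1 }' "$fstab" |
    while read -r uuid; do
        [ -e "$byuuid/$uuid" ] || echo "missing: $uuid"
    done
}
```

`check_fstab_uuids /etc/fstab /dev/disk/by-uuid` prints one "missing:" line per unmountable entry; blkid then shows what UUIDs the devices really carry.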

    Having the same problem with flash memory set as /dev/sdb1. It refuses to mount claiming the device doesn't exist.
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    none /dev/pts devpts defaults 0 0
    none /dev/shm tmpfs defaults 0 0
    /dev/sda2 / reiserfs defaults,noatime 0 1
    /dev/sda1 /boot ext2 defaults,noatime 0 1
    /dev/sda3 swap swap defaults 0 0
    /dev/sdb1 /media/flash auto ro,user,noauto,unhide 0 0
    /dev/cdrom /media/cdrom auto ro,user,noauto,unhide 0 0
    /dev/dvd /media/dvd auto ro,user,noauto,unhide 0 0
    P.S.
    Yuuhuuuuu post #600

  • Errors with installing catalyst 13.251 with yaourt

    First week with Arch, second with Linux OSs in general. Hi.
    I'm following this guide to installing the catalyst drivers for Arch (http://bbs.archbang.org/viewtopic.php?id=4630) and I've encountered an error that is preventing me from proceeding; I'd like help finding a solution.
    Using the command "yaourt catalyst-test", I follow the guide where it says to select no when asked if I want to edit the PKGBUILD and *.install. Everything goes without error until it says:
    ==> Starting package()...
    install: error writing '/tmp/yaort-tmp-root/aur-catalyst-test/pkg/catalyst-test/usr/lib/xorg/modules/dri/fglrx_dri.so' : No space left on device
    install: failed to extend '/tmp/yaort-tmp-root/aur-catalyst-test/pkg/catalyst-test/usr/lib/xorg/modules/dri/fglrx_dri.so' : No space left on device
    ==> ERROR: A failure occurred in package(),
           Aborting...
    ==> ERROR: Makepkg was unable to build catalyst-test.
    ==> Restart building catayst-test ? [y/N]
    ==> ---------------------------------------------------
    ==>
    I ran df -h to check to see if anything was full, and I got this:
    /dev/sda1           30G     1.2G     27G       5%   /
    dev                  801M          0   801M       0%   /dev
    run                  804M     436K   804M      1%   /run
    tmpfs               804M          0   804M       0%   /dev/shm
    tmpfs               804M          0   804M       0%   /sys/fs/cgroup
    tmpfs               804M    804M    4.0K    100%   /tmp
    /dev/sda3          197G     60M   187G        1%   /home
    I've got a 3GB Swap and 2GB of ram, as well.
    I looked this up, and I've seen that lots of people have tmpfs around 3GB. I don't know how they did that, and I can't find out how so that I can follow along.
    If any other information is needed, I can give it. Hopefully it's not something too long, because I can't copy and paste.
    Last edited by boucle infinie (2014-01-03 03:10:50)
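The 3GB /tmp mounts people report come from an explicit size= mount option; with no option, tmpfs defaults to half of RAM, which is exactly the 804M ceiling in the df output above. A hypothetical fstab override (3G is an example figure):

```
# /etc/fstab -- give /tmp an explicit ceiling instead of the RAM/2 default
tmpfs  /tmp  tmpfs  defaults,noatime,size=3G  0  0
```

As root, `mount -o remount,size=3G /tmp` applies the same change without a reboot. The size is only a ceiling; pages are allocated lazily as files appear.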

    Scimmia wrote:Build it manually, STOP BUILDING IN /tmp!
    I am learning to do that right now.
    karol wrote:Read the wiki about tmpfs and the yaourt man page for how to build someplace else, not /tmp.
    Doing that. [excuse] A reading disability makes it hard to read stuff in the formats most wikis are in, so I try to avoid them unless it's necessary (guiltily). Doing my best, though. [/excuse]
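Since yaourt drives makepkg underneath, "someplace else" can be as simple as pointing makepkg's BUILDDIR at a directory on real disk. BUILDDIR is a standard makepkg variable, though the path below is just an example:

```shell
#!/bin/sh
# Make makepkg (and therefore yaourt) build under $HOME instead of /tmp.
# The same variable can be set permanently in /etc/makepkg.conf.
export BUILDDIR="$HOME/build"
mkdir -p "$BUILDDIR"
echo "builds will now land in $BUILDDIR"
```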
    HalosGhost wrote:Welcome to the Arch BBS. I have two extra notes adding on to Scimmia and Karol's points:
    You're using ArchLinux, not ArchBang, right? Because, if you're using ArchLinux, then you shouldn't be using ArchBang tutorials; and if you're using ArchBang, then you shouldn't be asking here.
    Please, for the love of all that is holy, use code tags to post terminal output.
    All the best,
    -HG
    I was using it because it describes the way I would install it using pacman. I am definitely using ArchLinux (don't even have a GUI yet). The guide seems universal enough.
    SORRY GUYS I FORGOT ABOUT THAT
    Won't happen again.
    Last edited by boucle infinie (2014-01-03 03:55:35)

  • LVM2 installation with a 2.6 Kernel HowTO

    First, and very rough draft.  Feel free to criticise, correct mistakes and ask questions, maybe I'll even be able to answer them.
    <B>INTRODUCTION</B>
    Many linux users will eventually use multiple partitions for a variety of reasons.  Choosing good partition sizes is a task of trial, error and often luck.
    LVM (logical volume management) is a nice little method that makes looking after your hard drive space a breeze. To put it simply, LVM takes over the space you allocate to it and manages the partition data there for you. Using LVM you can dynamically change the size of your partitions (though it is easier to grow a partition than to shrink it).
    This document shall hopefully show you how to get a working LVM2 system with a 2.6 kernel. Things are different with LVM1 and/or pre-2.6 kernels; there is a lot of information out there, and I suggest you look elsewhere if you plan on using an older kernel.
    An example will give a clearer picture than a dry discussion. To start with we need a working AL system. For this example we have two SCSI discs, sda and sdb. Firstly 3 partitions were created on sdb: a 75MB ext2 partition for /boot, a 150MB reiserfs partition for /, and the rest of the drive was given over to an LVM partition (set the type to 8E in cfdisk). Swap was put on sda along with two reiserfs partitions, each 2GB, to use as /usr and /var until they can be moved to LVM partitions. It is intended that /usr, /var, /tmp, /home and /opt shall be managed by LVM.
    <B>INSTALLING LVM</B>
    <B><U>Firstly, back up your important data.  </U></B>
    AL was installed, and then upgraded to a 2.6 based system with the latest packages in current. The kernel was compiled by hand, ensuring that the device-mapper option was enabled. Then the system was rebooted and tested to make sure the upgrade was successful.
    Now the system is ready to be moved over to LVM.  Firstly install lvm using pacman (note we install lvm2, not lvm which is also available):
    #pacman -S lvm2
    This will install device-mapper and lvm2.
    <B>SETTING UP LVM</B>
    Now we should be ready to set our LVM partition up on sdb.  This is listed in /dev as sdb3.  Use ls -l to show what this is linked to:
    lr-xr-xr-x    1 root     root           34 Feb 17 00:13 /dev/sdb3 -> scsi/host0/bus0/target1/lun0/part3
    Now we can use the pvcreate command to initialise the partition as a
    physical volume:
    #pvcreate /dev/scsi/host0/bus0/target1/lun0/part3
    Physical volume "/dev/scsi/host0/bus0/target1/lun0/part3" successfully created
    Now we need to create lvm groups, the first group will be called vg1 and will be used to control the LVM partition on sdb.  Eventually a second group will be created, vg0, and used to control LVM on sda.  You can manage both drives under one group, but I like the extra control that this system gives me.  Use the vgcreate command to create groups:
    #vgcreate vg1 /dev/scsi/host0/bus0/target1/lun0/part3
    Volume group "vg1" successfully created
    To display information about a group, use the vgdisplay command:
    #vgdisplay
    --- Volume group ---
    VG Name vg1
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 1
    VG Access read/write
    VG Status resizable
    MAX LV 255
    Cur LV 0
    Open LV 0
    Max PV 255
    Cur PV 1
    Act PV 1
    VG Size 8.34 GB
    PE Size 4.00 MB
    Total PE 2135
    Alloc PE / Size 0 / 0
    Free PE / Size 2135 / 8.34 GB
    VG UUID PMOmxM-CrhW-UIMd-P1L6-NMbS-UwCy-5CSeje
    Now we can add logical volumes to the volume group using the lvcreate command:
    #lvcreate -L1G -nlvopt vg1
    Logical volume "lvopt" created
    This created a 1GB partition called lvopt in the vg1 group. lvdisplay will show you information about logical volumes:
    #lvdisplay
    --- Logical volume ---
    LV Name /dev/vg1/lvopt
    VG Name vg1
    LV UUID YsDZsm-goB3-BGGG-J4Sg-Z0rZ-Cobf-JcP2Py
    LV Write Access read/write
    LV Status available
    # open 0
    LV Size 1.00 GB
    Current LE 256
    Segments 1
    Allocation next free (default)
    Read ahead sectors 0
    Block device 254:0
    Using ls /dev/vg1 should show lvopt too.
    Now we need to create a filesystem on the partition.  This is exactly the same as for standard partitions.
    #mkreiserfs /dev/vg1/lvopt
    LVM partitions are mounted in the normal way, ie:
    #mount /dev/vg1/lvopt /opt
    will mount our new partition under /opt. The new /opt partition was entered into /etc/fstab:
    /dev/vg1/lvopt /opt reiserfs defaults 0 0
    If you have problems, you could try /dev/mapper/vg1-lvopt which is the raw entry for /dev/vg1/lvopt.
    Before trusting /usr or /var to LVM, we should check that the system will work properly.  Firstly we need to edit rc.sysinit.  Find the section for mounting partitions and change it to read:
    stat_busy "Mounting Local Filesystems"
    /bin/mount -n -o remount,rw /
    /bin/rm -f /etc/mtab*
    /bin/mount /proc
    /sbin/vgscan
    /sbin/vgchange -a y
    /bin/mount -a -t nonfs,nosmbfs,noncpfs
    stat_done
    This will start LVM up so that any LVM partitions in /etc/fstab will mount.
    Next we need to edit rc.shutdown so that LVM will stop properly.  Find the part that unmounts the filesystems and edit it to read:
    stat_busy "Unmounting Filesystems"
    /bin/umount -a
    /sbin/vgchange --ignorelockingfailure -a n
    stat_done
    LVM2 uses locks in /var; obviously, if /var is unmounted the lock won't be accessible, and the ignorelockingfailure option overcomes this.
    Now the system should be ready for using LVM. To be safe, run the commands <B>vgscan</B> and <B>vgchange -a y</B> before rebooting.
    If the system boots without errors, we can start moving our /usr and /var partitions over to LVM.  Using the lvcreate command, 2 partitions were created in the vg1 and then formatted with reiserfs, lvusr and lvvar.  They were mounted under /mnt/tmp and the data from /var and /usr copied over to them:
    #mkdir /mnt/tmp
    #mount /dev/vg1/lvvar /mnt/tmp
    #cp -av /var/* /mnt/tmp
    #umount /mnt/tmp
    #mount /dev/vg1/lvusr /mnt/tmp
    #cp -av /usr/* /mnt/tmp
    The entries for /usr and /var in /etc/fstab were suitably modified and the system rebooted to check that it worked. Once the system was working properly, the temporary partitions used for /var and /usr were deleted and the rest of sda partitioned for LVM. A new LVM group was created for managing this drive, vg0. Partitions were created for /home and /tmp and the rest of the system could be installed.
    <u>Note:</u> Since installation, / has stayed at around 50% usage; /tmp hasn't been moved to LVM yet, as Arch uses tmpfs.
    <B>OTHER IDEAS FOR INSTALLATION</B>
    If you don't have 2 hard drives, several other options may be available to you.
    A minimal AL install needs around 550MB for / (if compiling the kernel yourself). If you don't mind having a / this big, you could follow this method, ignoring the temporary /usr and /var partitions. After copying over the files and editing fstab, typing "init 1" into the shell should put you in single user mode, and you can safely delete the files in your old /usr and /var (make sure the LVM ones are unmounted).
    Alternatively, create temporary /usr and /var partitions in the space you want to use for LVM, and install LVM, but don't create any groups etc. Tarball /usr and /var and store them somewhere (use the -p option to preserve permissions), e.g. on a CD-RW. Set the system into single user mode and create the 8E partition. Reboot into single user mode and then create your LVM setup. Untar /usr and /var onto the volumes once they are ready.
    <B>Resizing LVM partitions</B>
    The commands lvextend and lvreduce are used to resize LVM volumes. After resizing, it is necessary to let the filesystem know. As an example, /dev/vg1/lvopt is initially at 1GB and a further 1GB is needed.
    #lvextend -L+1GB /dev/vg1/lvopt
    will add 1GB to the volume. Or you can give the new size directly
    #lvextend -L2GB /dev/vg1/lvopt
    Read the man page for more information.
    Afterwards you need to resize the filesystem. For reiserfs there is no need to unmount.
    #resize_reiserfs -f /dev/vg1/lvopt
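Worth adding the reverse operation here, because the order inverts: when shrinking, the filesystem must be reduced before the volume, or the end of the filesystem gets cut off. A hedged sketch in the same transcript style (not verified here; unlike growing, shrinking a reiserfs requires it to be unmounted first):

```
#umount /opt
#resize_reiserfs -s -1G /dev/vg1/lvopt
#lvreduce -L-1G /dev/vg1/lvopt
#mount /dev/vg1/lvopt /opt
```

Check that resize_reiserfs reports success before running the lvreduce.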

    anonycoward wrote:
    Notes on the draft:
    1.  *At this point*, the "make modules modules_install" doesn't appear to be required. If I figure out differently, I'll post asap.
    I was just being ultra safe.  I had a few problems with my initial installs, this is one thing that I found on a mailing list.  The problems were solved shortly after that so I kept the step.  I've removed it for now, if anyone has any problems and this sorts them out I'll put it back in.
    2.  Following these instructions blindly will, at the point where you suggest rebooting to ensure LVM is working, reboot to empty /dev/mapper/* partitions you'd previously suggested be changed in fstab. An obvious 3AM mistake; those need to be swapped so that the reboot to test LVM (I assume, to test the initscripts and the validity of the vg's) is clearly done before making any fstab changes. I caught it, and did the reboot as suggested, using my saved (original) fstab. The "test reboot" did as expected, the initscripts reported all was OK, and I had no problems.
    Strange that it worked for me, but then the /dev/mapper/ settings were put in when I was having problems and were left over as they worked. Did you run vgscan and vgchange first? I've edited that part slightly to try to make it clearer and changed the fstab entry.
    3.  It was originally stated that /tmp was going to be LVM-ized, then later it's stated that since / usage wasn't high, it wasn't done. Since Arch uses tmpfs, I (and likely others) don't know if making /tmp an LVM volume is possible and/or suggested. I also didn't move it over; I'll watch usage (I have 512MB + 390MB of swap).
    Maybe someone who knows about tmpfs can enlighten us.  I know very little about it.
    4. Note that there is no real need to have multiple drives to follow this procedure - just adequate space. I have a single, 120GB drive - and I imagine most have something larger than 8.4GB today...I made an empty slot (typed as LVM) right behind /, of almost 20GB. I made 2GB partitions for the initial /usr, /opt, and /var behind the LVM slot. Now I have that available (and an LVM partition that should last longer than I do...) .
    I completely agree.  It is most likely possible to reclaim those by resizing the LVM partition.  Not something I can try though.  Sdb is a 9GB scsi drive and sda is an 18GB with the first 10GB given over to windoze.  So for me getting that space back was necessary.
    FWIW, my / is 195 MB, it's reiserfs so 32 MB is eaten by the journal, and
    (this is base only) it's using 30MB (+ the journal) right now. 32%. I don't run a particularly 'lean' system, once filled out (but prior experience w/Arch says 200 MB s/be adequate). Will post later as to whether that's 'adequate', or not. Pour moi, anyway.
    My / is 150MB and only 50% full. I did use the method of putting everything over to a 500MB partition. With compiling the kernel by hand and cleaning /var as necessary, I found that it was very close; at points I was down to under 10MB. If I screwed up the kernel compile (forgot to add device-mapper at one point) I would run out of space on the next compile, despite make clean and make mrproper in /usr/src/linux.
    What about /tmp ?
    For me its still on /, I still haven't needed to change it. / has constantly sat at 50% as far as I can see.
    Thanks again for a fine document. I know Arch OK, but I'm certainly no expert, and I'd only used LVM for a short period when I had SuSE on my wife's machine. Didn't really get to know it (guess I'm gonna now! LMAO and looking forward to it). That I was able to get through both the BETA install, AND the LVM 'upgrade', successfully, on first try and in one sitting, impressed me, anyway. 
    K
    This is only the second (working) install of LVM I have ever had.  I mainly wrote the how-to (first ever) as everything I could find was on LVM1 and 2.4 kernels.  Things are quite different with LVM2 and 2.6, so I made a few mistakes and just thought I'd try to help other people avoid them.  I'm really happy with my LVM, resizing as and when I need it.  Hopefully I'll get some time soon to write a brief bit on lvm.conf.  There's a default one in the lvm2 tarball.
    Thanks for the comments.

  • How to speed up midori using tmpfs?

    Dear list,
    I run archlinux on an Acer Aspire One 110 with an 8GB SSD. Generally it runs smoothly enough to be my main computer after applying every optimisation I could find and after hooking it up to an external monitor, mouse and keyboard. I run it with XFCE and relatively lightweight software. The only thing nagging me sometimes is the Midori browser. With a few tabs open it seems to come to a standstill and I see the SSD writing a lot of data. The SSD is very slow at writing data.
    Midori is already much faster than firefox and I like it a lot. I'd rather not use another browser. Therefore I did some testing and found that after moving ~/.config/midori (about 152KB) to RAM using a tmpfs, things speed up considerably. However, after a reboot I of course will lose everything in it, like browsing history (which I need), bookmarks and everything else in ~/.config/midori.
    I am planning to write some sort of script to mount ~/.config/midori on a tmpfs when logging in and to periodically (every 5 min?) write the contents to some place on the disk so I have a backup of the data in case the computer crashes. Besides that, it will have to unmount the tmpfs and write the data in it to the same place on the SSD when logging out or shutting down.
    I have little scripting experience but am more than willing to learn. But before I begin it seems wise to first check if there are solutions available which I can use in this scenario, thereby saving me a lot of time. I haven't been able to find any and hope someone can point me in the right direction.
    Thanks for your attention,
    Rolf Deenen
    Last edited by rolfijn (2010-03-24 21:20:28)

    Thanks for the quick and accurate reply. It seems to be what I need and I have already started adapting the script to my needs. The script basically "rsyncs" a copy of the config on a tmpfs to a directory on disk. By doing it on a regular interval it "saves" the Midori configuration. I would however like to run this script also on log out. Otherwise a situation might occur that I bookmark a page, close Midori and shutdown the pc, before the script was scheduled to run and thereby "losing" my freshly created bookmark. By running it on log out this can be avoided. So, my followup question would be:
    How can I run the created script on log out?
    I run XFCE exclusively, perhaps that makes it easier. I did some quick searching for this information but couldn't find any.
    regards,
    Rolf Deenen
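For anyone finding this later, the core of such a sync script is small. A minimal sketch, assuming a hypothetical layout where the live (tmpfs-backed) copy and the on-disk backup are passed as arguments; it uses plain cp so it runs anywhere, though rsync -a --delete would be the tidier equivalent:

```shell
#!/bin/sh
# Mirror a tmpfs-backed config directory to a backup directory on disk.
sync_config() {
    src=$1
    dst=$2
    rm -rf "$dst"           # drop the old mirror so deletions propagate
    mkdir -p "$dst"
    cp -a "$src"/. "$dst"   # copy contents, preserving modes and times
}
```

Called from cron every few minutes and once more from a logout hook; with XFCE, hooking logout usually means a small wrapper around the session command or ~/.bash_logout, depending on how the session is started and ended.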

  • Best way to handle a full /tmp tmpfs from python

    Hi, I have this python program that writes several files per day to a hard drive. I check on it now and then, and noticed these errors at the end of an attempted write:
    IOError: [Errno 28] No space left on device
    also
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           72G  3.1G   66G   5% /
    udev             10M     0   10M   0% /dev
    /run             10M  136K  9.9M   2% /run
    /dev/hda2        72G  3.1G   66G   5% /
    shm             755M  488K  755M   1% /dev/shm
    tmpfs           755M  722M   34M  96% /tmp
    and finally
    # /etc/fstab: static file system information
    # <file system> <dir>   <type>  <options>       <dump>  <pass>
    tmpfs           /tmp    tmpfs   nodev,nosuid    0       0
    # DEVICE DETAILS: /dev/hda1 UUID=2296a6d7-0915-4660-a822-5aba972b8842 LABEL=swap
    # DEVICE DETAILS: /dev/hda2 UUID=ccfe994d-2a96-4d9d-8d6b-b8f93eb6b637 LABEL=/
    UUID=2296a6d7-0915-4660-a822-5aba972b8842 swap swap defaults 0 0
    UUID=ccfe994d-2a96-4d9d-8d6b-b8f93eb6b637 / ext3 user_xattr,defaults 0 1
    So I am gonna go out on a limb here and assume my program tried to write a file while /tmp was full, so there was no space to write it. Solution? It needs to be cleared. Question: how should I go about it? Should I just end up writing some catch for the exception that runs a script to remove all the files in the /tmp folder?
    sudo rm -fr /tmp/*
    Is there a way for the system to dump the tmp folder automatically once it reaches a certain size? If I were going to use the code above, how would I make a script that supplies my sudo password when it is asked for and completes the rest of the process? Any more elegant solutions? Thanks.
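One intermediate option between doing nothing and rm -fr /tmp/* is an age-based sweep with find, run from cron or from the program's exception handler; no sudo is needed if the files belong to the same account that created them. A minimal sketch (the 60-minute cutoff is an arbitrary example):

```shell
#!/bin/sh
# Delete regular files under a directory that are older than N minutes.
clean_old_files() {
    dir=$1
    minutes=${2:-60}    # example default: one hour
    find "$dir" -type f -mmin +"$minutes" -delete
}
```

For example, `clean_old_files /tmp 60` from a cron entry keeps anything younger than an hour and quietly drops the rest.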

    This is my df -h from when I wiped it out with rm -fr /tmp/*
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           72G  3.1G   66G   5% /
    udev             10M     0   10M   0% /dev
    /run             10M  136K  9.9M   2% /run
    /dev/hda2        72G  3.1G   66G   5% /
    shm             755M  476K  755M   1% /dev/shm
    tmpfs           755M   32K  755M   1% /tmp
    I started my program, it did what it does and started loading the system up with files it was made to create. After running for just this period of time this is the new result
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           72G  3.2G   66G   5% /
    udev             10M     0   10M   0% /dev
    /run             10M  136K  9.9M   2% /run
    /dev/hda2        72G  3.2G   66G   5% /
    shm             755M  476K  755M   1% /dev/shm
    tmpfs           755M   31M  725M   5% /tmp
    Again, nothing but the program is running here, so after so many writes to disk /tmp will fill and that IO error will come up again; making the /tmp folder bigger, or not having some automated way to dump /tmp, are band-aids and non-solutions. I am not sitting at this system; the idea is for it to do its work and for me to review the files from time to time. Shutting it down or rebooting are not viable options [even though that dumps /tmp].
    What I am looking for is an intrinsic way to handle /tmp capping out, or I am stuck making a bash script that will clear it for me when my program generates that IO error. The thing I don't know is, when you sudo in a script, how do you give your sudo-enabling password? If no one can tell me, I suppose I can find it out; just thought I would ask while I have you. Also, if you know of a way to just tell Linux to dump /tmp when it is full, that would be great. Thanks.
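On the sudo sub-question: scripts should not be fed the password; the usual route is a sudoers rule that exempts one specific command from prompting. A hypothetical entry (username and script path are made-up examples; always edit with visudo):

```
# /etc/sudoers.d/tmpclean -- allow one account to run one cleanup script
# without a password prompt
myuser ALL=(root) NOPASSWD: /usr/local/bin/clean-tmp.sh
```

Restricting the rule to a single fixed script keeps the exemption narrow.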

  • Mounting CDROM as normal user about to pull hair out....

    Hi everyone....
    I'm just about at my wits' end trying to allow my normal user to mount and unmount my CD-ROM drives...
    Here's my /etc/fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    none /proc proc defaults 0 0
    none /dev/pts devpts defaults 0 0
    none /dev/shm tmpfs defaults 0 0
    tmpfs /tmp tmpfs defaults 0 0
    sysfs /sys sysfs defaults 0 0
    usbdevfs /proc/bus/usb usbdevfs defaults 0 0
    /dev/cdroms/cdrom0 /mnt/cd1 iso9660 ro,user,noauto,unhide 0 0
    /dev/cdroms/cdrom1 /mnt/cd2 iso9660 ro,user,noauto,unhide 0 0
    /dev/cdroms/cdrom0 /mnt/dvd udf ro,user,noauto,unhide 0 0
    /dev/floppy/0 /mnt/fl vfat users,noauto,unhide 0 0
    /dev/discs/disc0/part2 swap swap defaults 0 0
    /dev/discs/disc0/part3 / reiserfs defaults 0 0
    /dev/discs/disc0/part1 /boot ext3 defaults 0 1
    I also added a new group called "mount" and added both root and my normal user to it.
    I then did
    chgrp mount /bin/*mount
    to change mount and umount to the mount group, then I did a
    chmod 774 /bin/*mount
    I also made sure that the 3 mount directories in fstab existed and gave them 777 permissions...
    Did a complete reboot...  and the normal user still can't use mount... I get this error...
    mount: must be superuser to use mount
    I'm at my wits' end. I believe fstab is correct. I believe my permissions are set correctly... Is there anything else I need to do?
    Thanks in advance for any replies...
    James

    longhornxtreme wrote:man chmod doesn't say what s does.... so I'm kind of at a loss of understanding...
    The man page says exactly what it does... read more carefully:
    the chmod manpage wrote:The letters `rwxXstugo' select the new  permissions  for  the  affected
           users:  read  (r),  write (w), execute (or access for directories) (x),
           execute only if the file is a directory or already has execute  permis-
           sion  for  some user (X), set user or group ID on execution (s), sticky
           (t), the permissions granted to the user who owns  the  file  (u),  the
           permissions  granted to other users who are members of the file's group
           (g), and the permissions granted to users that are in  neither  of  the
           two preceding categories (o).
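The s being hinted at is the setuid bit: mount normally ships as mode 4755 (setuid root), which is what lets ordinary users run it, and chmod 774 silently strips that bit. A small sketch of losing and restoring the bit on a throwaway file (on the real binary, as root, the fix would be chmod 4755 /bin/mount):

```shell
#!/bin/sh
# Demonstrate how a numeric chmod without the leading 4 drops setuid,
# and how to put it back, using a scratch file.
f=$(mktemp)
chmod 4755 "$f"      # rwsr-xr-x: setuid bit set
test -u "$f" && echo "setuid set"
chmod 774 "$f"       # rwxrwxr--: plain numeric mode, setuid gone
test -u "$f" || echo "setuid lost"
chmod u+s "$f"       # restore just the setuid bit
test -u "$f" && echo "setuid restored"
rm -f "$f"
```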

  • Using tmpfs as a ramdrive

    Is it a good idea or a really bad idea to mount a folder as tmpfs and allocate an Oracle TEMP tablespace in it, on a server with lots of RAM? Temp IO is a huge bottleneck for us, so having it backed by RAM should help.
    Obviously we'll have to deal with the fact that tmpfs goes away upon reboot; I am just questioning the idea for starters.
    Thanks !

    Your I/O scheduler post is very good. We already use the deadline scheduler - which is Oracle's recommendation for DW and offers superior IO for scenarios like ours. We don't have a SAN nor SSD - so we haven't tried noop yet.
    You are correct about buffer cache content being part of the SGA, but for practical purposes, if you have a 50-100TB DB, much of which is being constantly tablescanned, an SGA of a fraction of a TB will very likely not contain the "previous" SQL's data for use by the "next" SQL, defeating the primary value of the SGA: avoiding the trip to disk. Which is why investing much RAM in the SGA isn't that helpful to the cause. The PGA is the key buffer. Reading from disk is a foregone conclusion.
    As for the percentages, they are in fact how it works. Linux by default allows tmpfs up to 50% of RAM, and giving much of this to the DB shortchanges your IO quite a bit. Of the other 50%, given to AMM, the PGA/SGA split will typically hover around 50/50 or 60/40. Of the PGA, if you MONITOR your SQL, you will see that once a SQL needs > 50% of the available PGA, it will spill to Temp. Hence the claim that a given SQL is lucky to get 1/8 of total RAM. Yes, you can change it, but you will be shortchanging Linux, and forgoing AMM, which is indispensable in an environment where you don't know what the next dynamic SQL will bring to the table.
    Yes, a Ramdrive will be lost upon restart. Hopefully on Linux/Unix this doesn't happen often, and the mount / TEMP TBS creation can be automated at startup. This is challenge #2.
    Challenge #1 is whether OEL will permit Oracle to even create a TEMP TBS on a folder mapped to /dev/shm.
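For challenge #2's mount half, persistence is the ordinary fstab mechanism; a hypothetical entry (mount point, size and ownership are example values, and the TEMP tablespace creation on top of it would still need its own startup step):

```
# /etc/fstab -- dedicated tmpfs for Oracle TEMP datafiles (example values)
tmpfs  /u01/oratmp  tmpfs  rw,size=32g,uid=oracle,gid=dba,mode=0750  0  0
```

A dedicated mount with an explicit size= also avoids competing with /dev/shm for the default half-of-RAM allowance.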

  • [solved] Mounting /media as tmpfs causes breakage on bootup

    Hi,
    I have a little problem after updating to systemd 208-3 with /media as tmpfs: my system won't boot anymore and I end up in an emergency shell. Everything was and is fine with systemd 208-2 (a downgrade fixes everything, as does commenting out the line in my fstab).
    Related journal entries:
    Dez 21 15:15:33 mymachine systemd[1]: Job tmpfs.device/start timed out.
    Dez 21 15:15:33 mymachine systemd[1]: Timed out waiting for device tmpfs.device.
    Dez 21 15:15:33 mymachine systemd[1]: Dependency failed for File System Check on /tmpfs.
    Dez 21 15:15:33 mymachine systemd[1]: Dependency failed for /media.
    Dez 21 15:15:33 mymachine systemd[1]: Dependency failed for Local File Systems.
    I specify the /media tmpfs in my fstab as follows:
    #media
    tmpfs /media tmpfs rw,nodev,noexec,nosuid 0 2
    Any ideas why I end up in an emergency shell and how to solve it? I would preferably keep /media as a tmpfs.
    Thanks in advance!
    Last edited by Ovion (2013-12-22 22:18:28)

    Ovion wrote:#media
    tmpfs /media tmpfs rw,nodev,noexec,nosuid 0 2
    It's about that "2". You can't fsck a tmpfs.
    See also: https://bugs.archlinux.org/task/38210
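In other words, the same line works once the sixth (fsck pass) field is zeroed:

```
# /etc/fstab -- pass field must be 0: there is nothing to fsck on a tmpfs
tmpfs /media tmpfs rw,nodev,noexec,nosuid 0 0
```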

  • Tmpfs free space versus the "free" command

    I've always thought this possible, so today I tried it:
    I have 2 gigs of physical RAM, and no swap partitions, so my memory limit is a hard 2 gigs.
    I made 2 1-gig tmpfs mounts; nothing complained.
    I filled them up one at a time, and kept checking both free memory (with free) and free space on the tmpfs mounts; as I started to get near the limit of my system's RAM, free still showed plenty of RAM free in the +/- row.
    Neither free nor df gave me any indication that the system was dangerously low on memory. (I assume from a bit of Googling that "free" counts tmpfs contents as part of the "cached" column, as nothing else was big enough to contain them.)
    Anyhow, I added a few more files to the tmpfs and the system locked up for a while... After about 3 minutes of hard drive thrashing (which was a bit distressing, but since my files look OK I guess it was desperately running sync again and again to try to free RAM for the tmpfs), a couple of programs died and everything went back to normal.
    So, in an effort to have that not happen again (since I use tmpfs heavily; it's a nice way to screw around with big files without waiting on hard drives), I wrote this little shell script:
    #!/bin/bash
    # prints worst-case free memory, from /proc/meminfo
    total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    used=$(awk '/^Committed_AS:/ {print $2}' /proc/meminfo)
    echo "Free RAM in the worst case: $((total - used)) kB"
    and put this in my .bashrc:
    alias free='free; wcfree'
    alias df='df; wcfree'
    ~Felix.
    PS: If it's not possible to make stuff crash with only a single tmpfs, please let me know -- that would be good knowledge, and make me feel silly for writing this :P
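    The script's arithmetic can also be sketched in Python; the parser and the sample values below are illustrative, not real /proc/meminfo output:

```python
def worst_case_free(meminfo: str) -> int:
    """Return MemTotal - Committed_AS (both in kB) from /proc/meminfo text."""
    fields = {}
    for line in meminfo.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            # first token after the colon is the value in kB
            fields[key] = int(rest.split()[0])
    return fields["MemTotal"] - fields["Committed_AS"]

# made-up numbers for illustration
sample = "MemTotal:       2048000 kB\nCommitted_AS:   1500000 kB"
print(worst_case_free(sample))  # 548000
```

    Committed_AS is the kernel's worst-case commitment estimate, which is why it catches tmpfs usage that "free" hides in the cached column.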

    The advice I got was to shrink the database and re-expand it, and to reindex all tables because of the fragmentation.
    I don't know if it works, and I would be hesitant to try it, because I believe there is an issue with DBCC SHRINKFILE being awfully slow when there is LOB data.
    I like to believe that your odds would be better if you used varbinary(MAX), and given how much easier varbinary(MAX) is to work with, your life would be happier too. But I don't know how that would work with your merge-replication scheme. And if you use READTEXT/WRITETEXT/UPDATETEXT you need to change that code.
    You say you cannot create a new table and copy data over because of merge replication. Would it be possible to create a new column, copy the data over, drop the old column and rename? I have no experience with merge replication myself, so I don't know the repercussions. If you do this, you need to run DBCC CLEANTABLE to get rid of the space.
    For that matter, you could try DBCC CLEANTABLE in the state you have now. Not that it should help from what you have told me, but you never know...
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Anything-sync-daemon - keep ANYTHING in tmpfs and sync'ed

    Quite a few people like and use profile-sync-daemon.  I have received several requests now asking for a more general version that will allow the flexibility of syncing anything from tmpfs <--> hdd and back again.  I just uploaded the first public release of anything-sync-daemon in response to the requests.  I am currently working on the wiki page, but I have already written a full manpage that is included with the util.  As with anything, BACKUP the data you wish to sync before using it.
    What are some example directories you might want to sync to tmpfs?
    /srv/http
    /foo/bar
    /var/lib/monitorix
    AUR Package: https://aur.archlinux.org/packages.php?ID=58263
    Wiki Page: https://wiki.archlinux.org/index.php/An … ync-daemon
    Gitrepo if you just wanna browse the code (it is VERY simplistic): https://github.com/graysky2/anything-sync-daemon
    Last edited by graysky (2012-09-22 15:00:04)
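    The sync cycle itself is simple enough to sketch in shell. This is a minimal sketch, not asd's actual code: asd mirrors the WHATTOSYNC directories to a tmpfs location with rsync, while temp dirs and cp -a stand in here so the sketch runs anywhere without root:

```shell
# Illustrative one-shot sync cycle (asd repeats the resync on a timer).
src=$(mktemp -d)   # stands in for the on-disk target, e.g. /var/log
tmp=$(mktemp -d)   # stands in for the tmpfs copy under /dev/shm

echo "hello" > "$src/file.log"
cp -a "$src/." "$tmp/"            # disk -> tmpfs at daemon start
echo "more" >> "$tmp/file.log"    # applications write to the tmpfs copy
cp -a "$tmp/." "$src/"            # periodic tmpfs -> disk resync
cat "$src/file.log"
```

    The daemon's value is in doing the resync safely and on shutdown, which is exactly why the data should be backed up before trusting it.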

    I'm having some trouble: it doesn't seem to mount anything into /dev/shm...
    I told it to keep /var/log and my chromium config sync'ed (yes, I don't like to have two daemons running doing the same thing).
    So this is my asd.conf:
    # /etc/asd.conf
    # For documentation, see: https://wiki.archlinux.org/index.php/Anything-sync-daemon
    # Define where data will reside in tmpfs
    # Think hard about this if using utils like bleachbit as it has a nasty habit
    # of nuking files it identifies as junk in /tmp
    # A safer location for things is actually /dev/shm
    # This location must be mounted to tmpfs and MUST have permissions of 777
    # Use NO trailing backslash!
    TMPFS="/dev/shm"
    # Define the target(s) directories in the WHATTOSYNC array
    # Do NOT define a file! These MUST be directories!
    # Note that the target directories and all subdirs under them will be included
    # In other words, this is recursive
    # Below is an example to whet your appetite
    #WHATTOSYNC=('/var/log' '/srv/http' '/home/foo/bar')
    WHATTOSYNC=('/var/log' '/home/federico/.config/chromium')
    and this is my /etc/mtab:
    rootfs / rootfs rw 0 0
    /dev/root / ext4 rw,noatime,user_xattr,acl,barrier=1,nodelalloc,data=ordered 0 0
    devtmpfs /dev devtmpfs rw,relatime,size=1027028k,nr_inodes=218985,mode=755 0 0
    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    sys /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
    run /run tmpfs rw,nosuid,nodev,relatime,mode=755 0 0
    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
    shm /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
    tmpfs /tmp tmpfs rw,nosuid,nodev,noatime,size=614400k 0 0
    /var/tmp /var/tmp tmpfs rw,noatime,nodiratime 0 0
    gvfs-fuse-daemon /home/federico/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=100 0 0
    So, I don't know where the problem is... my /var/log folder is 65 MB, but if I run df -h, this is the output:
    Filesystem Size Used Avail Use% Mounted on
    rootfs 231G 31G 189G 14% /
    /dev/root 231G 31G 189G 14% /
    devtmpfs 1003M 0 1003M 0% /dev
    run 1004M 344K 1003M 1% /run
    shm 1004M 152K 1003M 1% /dev/shm
    tmpfs 600M 9,4M 591M 2% /tmp
    /var/tmp 1004M 84K 1004M 1% /var/tmp
    Thanks for the help!

  • HT201272 If I'm not happy with my download (i.e. my song stops short of its 4:51 duration by about 23 seconds), can I download it again to see if this corrects the problem?

    Can I delete my song purchase and try downloading it again to correct a problem with the song ending prematurely?

    Hey,
    Thanks for the replies.
    Shining:
    The "greedy" mode did make Firefox much faster. Thank you.
    Regarding the "bloated" feeling:
    I did all sorts of tweaking to my system, but it still doesn't feel as I expected.
    I expected a different feel because I have a desktop running Arch, and everything there is a lot more responsive: I don't have to wait 5-7 seconds for Firefox (or OpenOffice) to launch; it takes 1-2 seconds. As I mentioned, the glxgears fps is much higher there. Even hard disk performance is a lot better (I am adding the hdparm output; please tell me if you see anything special, maybe that's the main problem).
    And I am talking about a 5-year-old desktop in comparison to a 6-month-old laptop!
    Every extra tip you can give about speeding up my machine will be appreciated.
    Thanks a lot,
    fiod
    hdparm:
    [**@lg-tux fio]# hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 1562 MB in 2.00 seconds = 780.51 MB/sec
    Timing buffered disk reads: 116 MB in 3.05 seconds = 38.06 MB/sec
    [**@lg-tux fio]#
    fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    none /dev/pts devpts defaults 0 0
    none /dev/shm tmpfs defaults 0 0
    /dev/cdrom /mnt/cd iso9660 ro,user,noauto,unhide 0 0
    /dev/dvd /mnt/dvd udf ro,user,noauto,unhide 0 0
    /dev/fd0 /mnt/fl vfat user,noauto 0 0
    /dev/sda2 /mnt/windows ntfs-3g defaults,users 0 0
    /dev/sda5 swap swap defaults 0 0
    /dev/sda6 / ext3 defaults,noatime 0 0
    PS: Maybe some extra parameters can be added to fstab? Something other than noatime?
    Thanks again
    fiod

  • [SOLVED] Configure tmpfs

    How can I configure tmpfs in a way that it still uses 1/2 of my RAM but also my entire SWAP? For compiling larger packages such as the kernel this would come in handy...
    Last edited by macaco (2014-12-21 01:01:39)

    makepkg doesn't use /tmp unless you have specified it in /etc/makepkg.conf. I suspect you are talking about yaourt, which places the build files in /tmp. In that case, don't use yaourt for building heavy packages.
    tmpfs is already swap-backed, so raising its size= limit beyond half of RAM lets it spill into swap. But what would be the point? If your system started swapping during the build process it would practically come to a halt and probably crash. Forcing your system to swap is bad...
    Last edited by ugjka (2014-12-18 14:00:52)
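    For reference, the kernel documents tmpfs as swap-backed, so the size= cap is the only knob to turn. An fstab sketch that raises /tmp beyond half of RAM (6g is an arbitrary example figure for a box with 4 GiB RAM and 4 GiB swap, not a recommendation):

```
tmpfs /tmp tmpfs rw,nosuid,nodev,size=6g 0 0
```

    Whether letting a build spill into swap is a good idea is another matter, as the reply above points out.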

  • Question about 11gR2 Grid, RAC, /dev/shm and Automatic Memory Management

    Hello,
    I've recently installed the Grid and RDBMS software 11.2.0.2 on a two-node Oracle Linux cluster with 128 GB of RAM per node.
    I'm using ASM to store the data and the OCR, and I'm testing Automatic Memory Management.
    When I finished the Grid+RDBMS installation I saw that the /dev/shm size is 64 GB (half of my total RAM).
    I created a database with dbca, and when I was asked whether I wanted to use AMM I noticed that I could
    allocate only about 60 GB for Oracle. If I chose more than 90 GB I got an error saying:
    Using Automatic Memory Management requires 60gb available in my two nodes.
    The current available space in the two nodes is only 30gb and 30gb.
    If you want to use AMM you should either free up some space in /dev/shm
    or reduce the memory allocated to Oracle
    I was wondering when (during the installation or while setting the kernel parameters) I defined the size of /dev/shm.
    Since I have 128 GB of RAM, wouldn't it be better to use more than 64 GB for my /dev/shm tmpfs partition?
    Is there a limit, or a best-practice ratio between RAM and /dev/shm?
    thanks in advance.

    user9051299 wrote: Is the "half of the RAM size" a kernel's default value or Oracle's?
    It is the kernel's: a tmpfs mount defaults to half of physical RAM unless a size= option says otherwise. Beyond that, a number of factors determine the best memory size and fit for Oracle, including just how much memory is effectively available (i.e. how much is needed for other services and processes).
    user9051299 wrote: And from what I understand I don't "break" any of Oracle's best practices by increasing /dev/shm, right?
    Correct. (At least none that I'm aware of, and none that I have read in Oracle's RAC Starter Kit documentation.)
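    In practice /dev/shm is enlarged with a size= option in /etc/fstab. A sketch for a 128 GB node follows; 100g is an illustrative figure to leave headroom for the OS and other processes, not an Oracle-documented value:

```
tmpfs /dev/shm tmpfs defaults,size=100g 0 0
```

    After editing fstab, `mount -o remount /dev/shm` applies the new cap without a reboot.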

  • Linux 10g: large SGA on tmpfs and swap

    Hello.
    I'm trying to use a large SGA (more than 1.7 GB) with tmpfs. Oracle starts up and begins massive swap (in/out) activity (swap usage about 2 GB). What does Oracle place in swap immediately after startup?
    PS. SGA size about 2.5 GB, RAM 4 GB, swap 4 GB.
    wbr, Dmitry

    Okay. I rebooted the server and this is the result of the free command immediately after reboot:
    oxygen:~# free
    total used free shared buffers cached
    Mem: 3948680 270608 3678072 0 4088 214776
    -/+ buffers/cache: 51744 3896936
    Swap: 3212920 0 3212920
    As you can see, the server has more than 2.5 GB of free RAM for the SGA.
    Before buying RHAS, I'm trying to test Oracle on Debian. /proc/sys/vm/pagecache is not found in Debian's /proc/sys/vm :(
    X is not started on the server at all, nor is slocate.
