Regarding increasing root file system size

Hi folks,
I need urgent help with increasing the root file system size from 40 GB to 72 GB by decreasing swap from 64 GB to 32 GB, without any data loss.
Please find the current partition config:
Part   Tag         Flag   Cylinders        Size       Blocks
 0     root        wm     6595 - 10787     40.69GB    (4193/0/0)    85335936
 1     swap        wm        2 -  6594     64.00GB    (6595/0/0)   134221440
 2     backup      wm        0 - 14086    136.71GB   (14087/0/0)   286698624
 3     unassigned  wm        0                0          (0/0/0)           0
 4     unassigned  wm        0                0          (0/0/0)           0
 5     unassigned  wm    10788 - 14050     32.01GB    (3298/0/0)    67120896
 6     unassigned  wm        0                0          (0/0/0)           0
 7     unassigned  wm        0                0          (0/0/0)           0
I need to change this to the layout below.
Part   Tag         Flag   Cylinders        Size       Blocks
 0     root        wm     3299 - 10787     72.69GB    (4193/0/0)   152446656
 1     swap        wm        2 -  3298     32.00GB    (3298/0/0)    67110720
 2     backup      wm        0 - 14086    136.71GB   (14087/0/0)   286698624
 3     unassigned  wm        0                0          (0/0/0)           0
 4     unassigned  wm        0                0          (0/0/0)           0
 5     unassigned  wm    10788 - 14086     32.01GB    (3298/0/0)    67120896
 6     unassigned  wm        0                0          (0/0/0)           0
 7     unassigned  wm        0                0          (0/0/0)           0
Kindly help me on this.
Thanks in advance,
Prathap

Sorry, can't be done.
If it were any filesystem other than root, you could do it using SVM.
But root filesystems can't be stripes or concats, only simple mirrors.
So SVM isn't going to work.
If the spare space were after the partition, you might have been able to do it with the "dirty" method of manually growing the partition and then using
growfs to expand the filesystem. You would have had to do it while netbooted or CD-booted, since growfs can't be used on the currently mounted root.
But in any case, you can't do that here either, since the free space is before the partition, not after it.
So reinstalling is your only option.
Well, if you have a spare disk, you might be able to copy the filesystem over to a larger partition on the other disk, growfs it, and boot off that instead.
You'd want to try it on a test system first...
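A rough sketch of that spare-disk approach (device names are examples only, everything assumes UFS, and it should be rehearsed on a test system first):

# booted from CD/network into single user:
newfs /dev/rdsk/c0t1d0s0                      # large slice prepared on the spare disk
mount /dev/dsk/c0t1d0s0 /mnt
ufsdump 0f - /dev/rdsk/c0t0d0s0 | (cd /mnt; ufsrestore rf -)    # copy the current root
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0   # SPARC bootblock
# edit /mnt/etc/vfstab so / points at the new device, then boot from the spare disk.
# No growfs is needed this way, since newfs already built the filesystem at the full slice size.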

Similar Messages

  • Required "/" (root) file system size on UNIX for Solution Manager.

    Hello SAP Gurus,
       I am setting up SAP Solution Manager 3.2 on HP-UX. It is asking me for about 350 MB of free space on the "/" file system for the Central Instance installation and about 120 MB of free space on the "/" file system for the Database Instance installation.
       I am installing everything onto a shared disk mounted under /usr/sap. Why does it need free space in the "/" file system? Is there any workaround to get rid of this requirement, as I have very little free space on the "/" file system and I don't want to take the risks involved in increasing its size.
       Are there any SAP-recommended sizes for the "/" file system?
       I am stuck in the middle of setting up an SAP landscape on HP-UX (11.23). I searched through the installation documents but I couldn't find anything helpful in this regard. It is an urgent requirement to set this up, so please let me know any solution or workaround ASAP.
       Any help is greatly appreciated.
    Thanks in advance.
    Regards,
    cvr/

    Hi Vaibhav.
    Normally "canonical path not available for (folder name)" means:
    1. Wrong username/password. Please double-check your credentials.
    2. The resource cannot be linked from the portal server. Please be sure that you can connect to the following ports on the Windows server from the Unix server:
    a. NetBIOS Session Service, TCP 139. This port is used to connect to file shares, for example.
    b. TCP 445. The SMB (Server Message Block) protocol is used, among other things, for file sharing in Windows NT/2000/XP. In Windows NT it ran on top of NetBT (NetBIOS over TCP/IP), which used the famous ports 137, 138 (UDP) and 139 (TCP). In Windows 2000/XP/2003, Microsoft added the possibility to run SMB directly over TCP/IP, without the extra layer of NetBT. For this they use TCP port 445.
    I hope these things help somebody.
    Best Regards,
    Jheison A. Urzola H.
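    If it helps, a quick way to verify from the Unix side that those two ports are reachable (the hostname is a placeholder):

    telnet <windows-server> 139
    telnet <windows-server> 445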

  • Expanding root file system size

    Hi,
    I am using Solaris 10 in VMware.
    I have increased the virtual disk size in the VMware console from 10 GB to 18 GB. What commands should I run to reflect this change in my / file system?
    Below is the df -h output.
    bash-3.00# df -h
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0d1s0 6.4G 3.4G 3.0G 54% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.1G 960K 1.1G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    fd 0K 0K 0K 0% /dev/fd
    swap 1.1G 40K 1.1G 1% /tmp
    swap 1.1G 28K 1.1G 1% /var/run
    /dev/dsk/c0d1s7 2.8G 2.9M 2.8G 1% /export/home
    bash-3.00#
    Thanks in Advance
    Raja Challagulla

    Hi Bob,
    Thanks for giving reply.
    Below is the output.
    bash-3.00# prtvtoc /dev/rdsk/c0d1s2
    * /dev/rdsk/c0d1s2 partition map
    * Dimensions:
    * 512 bytes/sector
    * 63 sectors/track
    * 255 tracks/cylinder
    * 16065 sectors/cylinder
    * 1304 cylinders
    * 1302 accessible cylinders
    * Flags:
    * 1: unmountable
    * 10: read-only
    * Unallocated space:
    * First Sector Last
    * Sector Count Sector
    * 0 1124550 1124549
    * First Sector Last
    * Partition Tag Flags Sector Count Sector Mount Directory
    0 2 00 1124550 13687380 14811929 /
    1 3 01 48195 1076355 1124549
    2 5 00 0 20916630 20916629
    7 8 00 14811930 6088635 20900564 /export/home
    8 1 01 0 16065 16064
    9 9 01 16065 32130 48194
    bash-3.00#
    Please let me know if you need any details.
    Thanks
    Raja Challagulla
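    For the record, a rough outline of what growing / would involve here (hedged; device and slice names are from the output above). A UFS slice can only grow into free space that sits immediately after it, and in this layout /export/home (slice 7) starts right where / (slice 0) ends, so slice 0 cannot simply be enlarged in place; /export/home would have to be backed up and recreated further out first. Once the label covers the enlarged virtual disk, the sequence looks roughly like:

    # outline only -- back everything up and verify against your own label first
    format                        # x86: fdisk -> grow the Solaris partition, then relabel
    prtvtoc /dev/rdsk/c0d1s2      # confirm the new accessible cylinder count
    # enlarge the slice in the label (format -> partition), keeping its starting cylinder, then:
    growfs -M /export/home /dev/rdsk/c0d1s7   # a non-root slice can be grown while mounted
    growfs /dev/rdsk/c0d1s0                   # root must be grown from CD/network boot, unmounted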

  • How do increase the file system size

    Hi friends,
    My newly installed Solaris 10 system shows a "Filesystem full" error.
    By mistake, the space allocated for root is very small, and 98% of the space in root is consumed.
    Do I need to restructure the entire filesystem, or would increasing the space in root be sufficient?
    It's a 160 GB hard disk on an Intel Core 2 Duo. What is the best allocation for it?
    I am very new to Solaris and trying to learn how an open-source OS works...
    Please help me.
    Regards
    Bijoy

    If you have more drive space you can mount more space and allocate it to a mount point -- this may take care of your problem. But if you are out of space and you are not buying any more drives, then it's time for the install disks to come back out so you can configure more appropriately for your use.
    I have my work machine at home on an x64 AMD with 120 GB of drive. I gave 40 GB to Solaris just to be sure I could do basic installations and grow a little.
    Once your allocation is full you have two options: buy more and mount it, or reinstall... you choose.
    Well, there is a third: you could unmount one of the other allocations and split it, then mount some of it back to your OS and some back to what it was before, but you are going to be plagued with this for the entire lifetime of that install.
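    A minimal sketch of the "mount more space" option, assuming a spare slice is available (c0d0s5 is just an example name):

    newfs /dev/rdsk/c0d0s5                             # build a filesystem on the spare slice
    mount /dev/dsk/c0d0s5 /mnt
    cd /opt && find . -depth -print | cpio -pdm /mnt   # copy a large directory off /
    # after verifying the copy: empty the old /opt, add a vfstab entry for the slice,
    # and mount it at /opt so the space it used is freed from / and served from the new slice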

  • How to add more disk space into /   root file system

    Hi All,
    Linux  2.6.18-128
    Can anyone please let us know how to add more disk space to the "/" root file system?
    I have added a new hard disk with 20 GB of space.
    [root@rac2 shm]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             965M  767M  149M  84% /
    /dev/hda7             1.9G  234M  1.6G  13% /var
    /dev/hda6             2.9G   69M  2.7G   3% /tmp
    /dev/hda3             7.6G  4.2G  3.0G  59% /usr
    /dev/hda2              18G   12G  4.8G  71% /u01
    LABLE=/               2.0G     0  2.0G   0% /dev/shm
    /dev/hdb2             8.9G  149M  8.3G   2% /vm
    [root@rac2 shm]#

    Dude! wrote:
    I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks.
    Simple stats. Buying 2 lottery tickets instead of one gives you 2 chances to win the lottery prize, not 1, even though the odds of winning per ticket remain unchanged.
    2 disks buy you 2 tickets in the Drive-Failure lottery.
    Back in the 90's, BT (British Telecom) had an 80+ node OPS cluster built with Pyramid MPP hardware. They had a dedicated store of SCSI disks for replacing failed disks, as there were disk failures fairly often due to the sheer number of disks. (A Pyramid MPP chassis looked like a Christmas tree with all the SCSI drive LEDs, and BT had several.)
    In my experience, one should rather expect a drive failure sooner than later, and have some kind of contingency plan in place to recover from the failure.
    The use of symbolic links instead of striping the filesystem protects you from losing the whole enchilada if a volume member fails, but it does not reduce the risk of losing data.
    I would rather buy a single ticket in the drive-failure lottery for a root drive than 2 tickets in this case. And using symbolic links to "offload" non-critical files to the 2nd drive means that its lottery prize is not a non-bootable server due to a toasted root drive.
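    A minimal sketch of the offload-and-symlink approach, assuming a hypothetical unused partition /dev/hdb3 on the new disk (/dev/hdb2 is already mounted as /vm in the df output above); pick whichever directory actually consumes the space on /:

    mkfs.ext3 /dev/hdb3                      # build a filesystem on the new partition
    mkdir /space && mount /dev/hdb3 /space
    cp -a /home /space/ && rm -rf /home      # relocate a large directory off /
    ln -s /space/home /home                  # keep the old path working via a symlink
    echo "/dev/hdb3  /space  ext3  defaults  1 2" >> /etc/fstab    # mount it at boot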

  • Sol10 u8 installed on a ZFS Root File System have different swap needs?

    Does Sol10 u8 installed on a ZFS Root File System have different swap needs/processes?
    Information:
    I've installed Solaris 10 (10/09 s10s_u8wos_08a SPARC, Assembled 16 September 2009) on half a dozen servers and not one of them mounts swap at boot any more.
    The install program commented out the old swap entry and created this one:
    # grep swap /etc/vfstab
    swap - /tmp tmpfs - yes -
    Everything works like a champ. I didn't discover the issue until I tried to install some patches and the install failed. It didn't fail because of lack of swap - it refused to run because it found "No swap devices configured".
    Here are the symptoms:
    # swap -s
    total: 183216k bytes allocated + 23832k reserved = 207048k used, 13600032k available
    # swap -l
    No swap devices configured
    # mount | grep swap
    /etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=5ac0001 on Mon Apr 19 08:06:45 2010
    /tmp on swap read/write/setuid/devices/xattr/dev=5ac0002 on Mon Apr 19 08:07:40 2010
    /var/run on swap read/write/setuid/devices/xattr/dev=5ac0003 on Mon Apr 19 08:07:40 2010
    #

    Hi Nitabills,
    I assume that you created the ZFS volume for swap with the command zfs create -V $size.
    Did you run the command:
    swap -a /dev/zvol/dsk/$ZPOOL/swap
    Try this entry below in the vfstab:
    /dev/zvol/dsk/$ZPOOL/swap - - swap - no -
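    A worked example, assuming the root pool is called rpool and a 32 GB swap volume is wanted (pool name and size are illustrative only):

    zfs create -V 32g rpool/swap                 # create the swap zvol
    swap -a /dev/zvol/dsk/rpool/swap             # activate it now
    echo "/dev/zvol/dsk/rpool/swap - - swap - no -" >> /etc/vfstab   # activate it at boot
    swap -l                                      # should now list the device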

  • Archive Repository - Content Server or Root File System?

    Hi All,
    We are in the process of evaluating a storage solution for archiving and I would like to hear your experiences and recommendations. I've ruled out 3rd-party solutions such as IXOS as overkill for our requirement. That leaves us with the i5/OS root file system or the SAP Content Server in either a Linux partition or on a Windows server. Has anyone done archiving with a similar setup? What issues did you face? I don't plan to replicate archive objects via MIMIX.
    Is anyone running the SAP Content Server in a Linux partition?  I'd like to know your experience with this even if you don't use the Content Server for archiving.  We use the Content Server (currently on Windows) for attaching files to SAP documents (e.g., Sales Documents) via Generic Object Services (GOS).  While I lean towards running separate instances of the Content Server for Archiving and GOS, I would like to run them both in the same Linux LPAR.
    TIA,
    Stan

    Hi Stanley,
    If you choose to store your data archive files at the file system level, is that a secure enough environment?  A third party certified storage solution provides a secure system where the archive files cannot be altered and also provides a way to manage the files over the years until they have met their retention limit.
    Another thing to consider: even if the end users do not need access to the archived data, your company might need to be able to access the data easily due to an audit or lawsuit.
    I am a SAP customer whose job function is the technical lead for my company's SAP data archiving projects, not a 3rd party storage solution provider , and I highly recommend a certified storage solution for compliance reasons.
    Also, here is some information from the SAP Data Archiving web pages concerning using SAP Content Server for data archive files:
    10. Is the SAP Content Server suitable for data archiving?
    Up to and including SAP Content Server 6.20 the SAP CS is not designed to handle large files, which are common in data archiving. The new SAP CS 6.30 is designed to also handle large files and can therefore technically be used to store archive files. SAP CS does not support optical media. It is especially important to regularly run backups on the existing data!
    Recommendation for using SAP CS for data archiving:
        Store the files on SAP CS in a decompressed format (make settings at the repository)
        Install SAP CS and SAP DB on one server
        Use SAP CS for Unix (runtime tests to see how SAP CS for Windows behaves with large files still have to be carried out)
    Best Regards,
    Karin Tillotson

  • Problem in Reducing the root file system space

    Hi All ,
    The root file system has reached 86%. We have cleared 1 GB of data in the /var directory, but the root file system is still showing 86%. Please note that /var is not a separate file system.
    I have furnished the df -h output for your reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know if the root partition filling up was sudden, and thus due to the deletion of an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Sol 10 11/06, my 12 GB root partition was 48% full. Now, about 2 months later, after applying available patches, it is 53% full. That is about 600 MB being taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option that would skip backing up old patch versions, so the superseded patches are building up in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
    Thank you.
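    A couple of command sketches for the situation above (Solaris 10; paths are the usual ones, adjust as needed):

    du -dk / | sort -n | tail -20            # largest directories on the root filesystem only
    du -sk /var/sadm/pkg /var/sadm/patch     # space held by package/patch metadata and backout data
    # patches can be applied without saving backout data, at the cost of not being able
    # to back them out later:
    patchadd -d <patch-id>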

  • Mounting the Root File System into RAM

    Hi,
    I had been wondering, recently, how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
    I have now succeeded, at least it seems, in performing such a task. There are still some issues.
    For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
    A custom kernel hook is used to (overall):
    Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
    Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
    Copy the non-volatile content into the ramdisk.
    Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
    A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
    To integrate this hook into a initramfs, a preset is needed.
    I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
    The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
    When shutting down, the ramdisk contents are synchronized back to the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
    I chose to use unison to synchronize between the volatile and the non-volatile media. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated and unison would not update this file.
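    A minimal sketch of what such a shutdown sync script might look like (the mountpoints are the ones described above; the real unmount-root-fs.sh in the archive does more, such as mount checks):

    #!/bin/bash
    source=/rootfs/rootfs_source     # bind mount of the on-disk root
    temporary=/rootfs/rootfs_ram     # bind mount of the RAM copy
    # reconcile the RAM copy and the on-disk root non-interactively
    unison -batch -fastcheck default "$temporary" "$source"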
    Some prerequisites exist (by default):
        Packages: unison (and cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g, the nfs-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
        Referencing paths:
            The variables:
                source=
                temporary=
            ...should have the same value in all of these files:
                "/etc/initcpio/hooks/ram"
                "/root/Documents/rootfs/unmount-root-fs.sh"
                "/root/.rsync/exclude.txt"    -- Should correspond.
            This is needed to sync the RAM disk back to the hard disk.
        I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
    Here are all the accepted and used parameters:
        root -- the source root. Allowed: the usual root= values (UUID=*, /dev/disk/by-*/*). Default: none.
        rootfstype -- the FS type of the source root. Allowed: whatever "mount -t <types>" accepts. Default: "auto".
        rootflags -- options when mounting the source root. Allowed: whatever "mount -o <options>" accepts. Default: none.
        ram -- whether this hook should run. Allowed: any string; considered: "1". Default: none.
        ramfstype -- the FS type of the RAM disk. Allowed: whatever "mount -t <types>" accepts. Default: "auto".
        ramflags -- options when mounting the RAM disk. Allowed: whatever "mount -o <options>" accepts. Default: "size=50%".
        ramcleanup -- whether any left-overs should be cleaned. Allowed: any string; considered: "0". Default: none.
        ramcleanup_source -- whether the source root should be unmounted. Allowed: any string; considered: "1". Default: none.
        ram_transfer_tool -- which tool transfers the root into RAM. Allowed: cp, find, cpio, rsync, unison. Default: unison.
        ram_unison_fastcheck -- argument to unison's "fastcheck" option. Allowed: true, false, default, yes, no, auto. Default: "default". Relevant if ram_transfer_tool=unison.
        ramdisk_cache_use -- whether unison should use any available cache. Allowed: 0, 1; considered: 0. Default: none. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_update -- whether unison should copy the cache to the RAM disk. Allowed: 0, 1; considered: 0. Default: none. Relevant if ram_transfer_tool=unison.
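    An illustrative kernel command line using these parameters (values are examples only):
        root=/dev/sda2 rootfstype=ext4 ram=1 ramflags=size=75% ram_transfer_tool=rsync ramcleanup_source=1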
    This is the basic setup.
    Optionally:
        I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
        I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
    Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- does Arch forums allow attachments?)
    I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
    Of course, use what you need from the provided files.
    Here are the times spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
        find+cpio:  2:10s (2:12s on slower hardware)
        unison:      3:10s - 4:00s
        cp:             4 minutes (31 minutes on slower hardware)
        rsync:        4:40s (55 minutes on slower hardware)
        Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
    These are the remaining issues:
        find+cpio option does not create any destination files.
        (On some older hardware) When booting up, the source disk is not always detected.
        When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
    Inconveniences:
        Unison needs to perform an update detection at each startup.
        initramfs' ash does not expand wildcard characters when using "cp".
    That's about what I can think of for now.
    I will gladly try to answer any questions.
    I don't consider myself a UNIX expert, so I would like to know your suggestions for improvement, especially from those who consider themselves experts.
    Last edited by AGT (2014-05-20 23:21:45)

    How did you use/test unison? In my case, unison, of course, is used in the cpio image, where there are no cache files, because unison has not been run yet in the initcpio image, before it had a chance to be used during boot time, to generate them; and during start up is when it is used; when it creates the archives. ...a circular dependency. Yet, files changed by the user would still need to be traversed to detect changes. So, I think that even providing pre-made cache files would not guarantee that they would be valid at start up, for all configurations of installation. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM), after they have been created, and used next time by copying them in the initcpio image during each start up. I think $HOME would need to be set.
    Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
    Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed that it was faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U at 1.9 GHz (2 cores/4 threads) and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 at 2 GHz (2 cores/2 threads) and 4 GB of RAM.
    I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
    Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
    I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
    Last edited by AGT (2014-05-20 16:49:35)

  • Solaris 10 - File System Size / Layout

    Hello, I'm very new to the Solaris OS and UNIX world and have a question about the file system.
    I have a Sun system with a single 36 GB hard disk, and eventually I want to install Oracle 10g on it as a development server for myself.
    My question is about the file system size and layout.
    I have installed Solaris 10 and accepted the default file system, which is as below:
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c1t1d0s0 5.0G 3.4G 1.6G 69% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.5G 1.1M 1.5G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 5.0G 3.4G 1.6G 69% /platform/sun4u-us3/lib/libc_psr.so.1
    /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 5.0G 3.4G 1.6G 69% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 1.5G 136K 1.5G 1% /tmp
    swap 1.5G 88K 1.5G 1% /var/run
    /dev/dsk/c1t1d0s7 28G 28M 28G 1% /export/home
    What I don't understand is: when I eventually install Oracle, where will its directory go?
    The /export/home file system is approximately 28 GB, which is now taking up most of the space, and I understand that this is for users; therefore my questions are:
    1. Should I have allocated more space to the root file system for Oracle during the OS install?
    2. If so, what should I do? (It's not a production system and I know it will have to go somewhere on the single drive; I'm happy to rebuild as I'm still learning the OS.)
    Any suggestions?
    Rgds
    D

    Thanks, I do have this, but I needed info on the file system layout... and I have it now.
    If you select the auto file system layout on Solaris 10 it gives the remaining space to /export/home. I have since re-installed, customised the file system layout, and now have space for the database.
    D
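    For anyone in the same spot, a hypothetical custom layout for a single 36 GB disk used as an Oracle development box (sizes are purely illustrative, not a Sun or Oracle recommendation):

    #   s0  /              10 GB
    #   s1  swap            4 GB
    #   s3  /u01           18 GB   <- Oracle software and datafiles
    #   s7  /export/home   remainder
    # after the OS install, Oracle would then live under /u01 rather than under /, e.g.:
    mkdir -p /u01/app/oracle
    chown oracle:oinstall /u01/app/oracle    # assumes the usual oracle user and oinstall group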

  • Setting the file system size for the emulator

    Hi,
    I am trying to set the file system size of the emulator so I can test what happens when the system is full and I try to write to the file system. I am using WTK 2.5.1 and I went into preferences -> storage and set Storage size to 10KB. But when I run the emulator and check the available size of the root directories using:
    try {
        Enumeration roots = FileSystemRegistry.listRoots();
        while (roots.hasMoreElements()) {
            String dir = (String) roots.nextElement();
            FileConnection fc = (FileConnection) Connector.open("file:///" + dir);
            System.out.println("Available Size on " + dir + " : " + fc.availableSize());
            fc.close();
        }
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    I get the remaining size of my PC's hard drive, not what is remaining from the 10 KB I set.
    Can you tell me the correct way to do this? I don't want to fill up my pc's hard drive in order to test this!!

    Has anyone got an answer to this?

  • /dev/root file system full

    Hello.
    We can't log in to the system by telnet, ftp, rlogin, or console, because we receive:
    messages msgcnt 142 vxfs: mesg 001: vx_nospace - /dev/root file system full (1 block extent)
    Instances of Oracle and SAP are running and we are afraid to reboot the server.
    We are working on HP-UX.
    Is there any solution for this problem?
    regards
    Denis

    Hey Denis,
    why don't you try to extend your /dev/root file system?
    If your file system is already 100% full with no space left, then try to move some files to another location where space is available and extend your file system; that will resolve your space issue.
    But one thing I can tell you is that there is no harm in deleting the core file from /usr/sap/<SID>/<DEVMBG00>/work.
    -- Murali.
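    A small sketch of the cleanup suggested above (the placeholder path is the reply's; confirm each file is a stale core dump before deleting it):

    find /usr/sap -name core -type f -exec ls -l {} \;    # locate core dumps under the SAP work directories
    # then remove only the ones confirmed stale, e.g.:
    # rm /usr/sap/<SID>/<DEVMBG00>/work/core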

  • Root file system 100%

    Hi,
    We are facing a problem in which our root file system is 100% full because of a lot of directories being created in the /proc directory. These directories are created with numbers as their names.
    Please give some advice.

    1. It should have nothing to do with the proc file system.
    2. Sometimes a backup command has an 'o' instead of a 0 (zero) in /dev/rmt/"0", and a file "o" is created in /dev/rmt that fills the entire space.
    3. If your slice is really large enough, use the du -sh /* command on Solaris 9 (du -sk on 8) to see which directories are taking space.
    4. If you still find nothing and directory sizes don't seem to pose any problem, umount all filesystems and then do a du -sh /*.
    Remember, if you have a directory (say /export/home) filled with stuff and it is currently not mounted, it consumes space from the / filesystem. When it is mounted later, the space shown by df -k would show the consumed space for the entire inode table, but du -sk won't show those files. Indeed, those files are hidden until you unmount the physical slice from /export/home.

    I had the root file system 100% used.
    I deleted all syslog and messages files; the best I could reclaim was 1%.
    I did "du -d / | more" to discover the sizes of files that might be filling up the root file system.
    I discovered that in /dev/rmt the size was unusually large, about 2.5 GB.
    There was a file named "1" in /dev/rmt which was taking all the space.
    I did "rm -r 1" to remove the unwanted file and reclaimed 53% free space.

  • Root ( / ) file system increasing

    root@sfms2 # df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 30257446 28379345 1575527 95% /
    /dev/dsk/c2t0d0s3 8072333 1259615 6731995 16% /usr
    /proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    fd 0 0 0 0% /dev/fd
    /dev/dsk/c2t0d0s5 8072333 1108327 6883283 14% /var
    swap 10070400 104 10070296 1% /var/run
    swap 10076632 6336 10070296 1% /tmp
    /dev/dsk/c2t0d0s4 8072333 1300420 6691190 17% /opt
    /dev/did/dsk/d9s6 482775 4815 429683 2% /global/.devices/node@2
    /dev/md/sfms-dg/dsk/d102
    74340345 1284885 72312057 2% /oracle
    /dev/md/sfms-dg/dsk/d101
    132184872 44097490 86765534 34% /sfms_data1
    In my root file system, /proc keeps increasing at intervals and my root directory is about to fill up. Please tell me a solution to resolve this problem.

    Uh, no. /proc can't increase in your root filesystem because /proc is not part of your root filesystem. 'du' descends and crosses filesystem boundaries by default.
    Run this:
    du -dk / | sort -n > /tmp/root_du.sort
    The bottom few lines of that file will show the largest directories in the filesystem. You may find some sort of log file or some hidden directory you were unaware of. What are they?
    Darren

  • SOLVED: kernel loads, but doesn't have a root file system

    Hi,
    The system is an Asus X202E. It does UEFI and has a GPT partition system. I've gotten through that part. And it is clear to me that the kernel loads.
    It's the next step that's giving me grief. I've tried this with two bootloaders: gummiboot and rEFInd.
    With gummiboot, the kernel panics because it can't mount the root file system. With rEFInd, it gets to the initial ramdisk and then drops me to a shell, apparently because the root file system is set to null, and it obviously can't mount that as "real root".
    Here is what I posted on the Arch mailing list, documenting that I have indeed specified the correct root (I'm copying this from the email, eliding the unfortunate line wraps):
    bridge-live# cat /boot/loader/entries/arch.conf
    Title Arch Linux
    linux /vmlinuz-linux
    initrc /initramfs-linux.img
    options root=PARTUUID=d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    bridge-live# ls -l /dev/disk/by-partuuid
    total 0
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 0ab4d458-cd09-4bfb-a447-5f5fa66332e2 -> ../../sda6
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 3e12caeb-1424-451c-898e-a4ff05eab48d -> ../../sda7
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 432a977b-f26d-4e75-b9ee-bf610ee6f4a4 -> ../../sda3
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 95a1d2c2-393a-4150-bbd2-d8e7179e7f8a -> ../../sda2
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 a4b797d9-0868-4bd1-a92d-f244639039f5 -> ../../sda4
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 ed04135b-bd79-4c7c-b3b5-b0f9c2fe6826 -> ../../sda1
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 f64f82a7-8f2b-4748-88b1-7b0c61e71c70 -> ../../sda5
    The root partition is supposed to be /dev/sda8, that is:
    lrwxrwxrwx 1 root root 10 Apr 15 19:26 d5bb2ad1-9e7d-4c75-b9b6-04865dd77782 -> ../../sda8
    So the correct PARTUUID followed by the one I have specified in
    arch.conf is:
    d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    d5bb2ad1-9e7d-4c75-b9b6-04865dd77782
    I'm guessing that this is really the same problem with both gummiboot and with rEFInd, but don't really know. It's clear to me that the initrd is not being correctly constructed. So I removed /etc/mkinitcpio.conf and did, as per the Arch wiki,
    pacman -Syyu mkinitcpio linux udev
    No joy.
    I don't even know which way to go at this point. If I even knew how to tell it where the real disk is in the initial ram disk shell, that would help. Better of course, would be actually solving the problem.
    Thanks!
    Last edited by n4rky (2013-04-17 21:41:36)

    I have made extremely limited progress on this issue.
    My previous attempt to specify the root partition in mkinitcpio.conf was insufficient. Furthermore, this is no place--despite the documentation--for the orthodoxy about using UUIDs rather than the straight /dev/sdx. In my case:
    root=/dev/sda8
    and run
    mkinitcpio -p linux
    It still drops me into the shell at boot. I can do
    mount /dev/sda8 /new_root/
    and exit the shell. It still won't believe it has the root device and drops me back in. I just exit.
    At this point, for a very brief moment, things look promising. It appears to be starting normally. Then, gdm.service, NetworkManager.service, and dbus.service all fail to start. There may be others but the screen goes by too quickly. At this point, it hangs trying to initialize the pacman keyring and all I can do is CTRL-ALT-DEL.
    It occurred to me that this might extend to the rEFInd configuration and so I modified it to also use /dev/sda8 rather than the UUID, but this made no difference. Trying to boot via gummiboot still yields the previously specified kernel panic.
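    For reference, a gummiboot/systemd-boot loader entry normally uses the keys title, linux, initrd and options; a minimal entry built from the poster's own kernel, image and PARTUUID would look like this:

    title   Arch Linux
    linux   /vmlinuz-linux
    initrd  /initramfs-linux.img
    options root=PARTUUID=d5bb2ad1-9e7d-4c75-b9b6-04865dd77782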
