Increasing file system space and /

Running Solaris 10 SPARC
V445R
I have two 73 GB disks (mirrored).
I would like to add two additional 73 GB disks,
mirroring disk 0 against disk 1 and disk 2 against disk 3.
I would like to increase /usr (c0t0d0s4), /var (c0t0d0s3), and /generic (c0t0d0s6).
I believe the disk is limited to 7 partitions (slices s0-s6).
Slice s6 would be entirely on the second set of disks.
With Solaris 10, is there an easier way to add space to these file systems than backing everything up, splitting the mirror, reformatting, repartitioning, re-mirroring, and restoring from backup?
Thanks

Assuming you're using SVM, see whether the following links help. I don't have a system I can alter just now, but I have these bookmarked. Good luck.
http://docs.sun.com/app/docs/doc/816-4520/6manpiek9?a=view
http://docs.sun.com/app/docs/doc/816-5166/6mbb1kq27?a=view
Just in case, I'd make sure to have a backup. ;)
-Marc
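
As a very rough sketch only (the metadevice and slice names below are assumptions, not taken from your layout), growing a mirrored UFS file system under SVM usually means attaching a slice from each new disk to the corresponding submirror and then running growfs on the mounted file system:

# hypothetical names: d4 is the /usr mirror, d14/d24 are its submirrors,
# c1t2d0s4 and c1t3d0s4 are matching slices on the two new disks
metattach d14 c1t2d0s4
metattach d24 c1t3d0s4
growfs -M /usr /dev/md/rdsk/d4    # grow the mounted UFS file system in place

The same pattern would repeat for /var and /generic with their own metadevices; the usual caveat about having a current backup before growing anything still applies.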

Similar Messages

  • Export a 500 GB database to a 100 GB file system in Oracle 10g

    Hi All,
    Please let me know the procedure to export a 500 GB database to a 100 GB file system.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the filesize parameter for the export? Please advise.
    FILESIZE will limit the size of each file when you specify multiple dump files. You can also specify multiple dump directories (on different file systems) when giving multiple dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
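
    For illustration, a hedged sketch of splitting an export across several dump directories so that no single file system has to hold the whole dump (the directory objects, credentials, and sizes are placeholders, not values from this thread):

    # dump_dir1..3 are assumed directory objects created on different file systems
    expdp system/manager FULL=Y \
      DUMPFILE=dump_dir1:expfull1_%U.dmp,dump_dir2:expfull2_%U.dmp,dump_dir3:expfull3_%U.dmp \
      FILESIZE=20G LOGFILE=dump_dir1:expfull.log PARALLEL=3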

  • ZFS file system space issue

    Hi All,
    Kindly help me resolve a ZFS file system space issue. I have deleted nearly 20 GB, but the file system usage remains the same. Kindly advise.
    Thanks,
    Kumar

    The three reasons that I'm aware of that cause deleting files not to return space are: 1) the file is linked to multiple names, 2) the file is still open by a process, and 3) the file is captured in a snapshot. While it is possible you deleted 20 GB of multiply linked and/or open files, I'd guess snapshots are the most likely case.
    For multiple "hard" links, you can see the link count (before you delete the file) in the second field of "ls -l" output. If it is greater than one, deleting the file won't free up space; you have to delete all names linked to the file to free the space.
    For open files, you can use the pfiles command to see which files a process has open. The space won't be recovered until every process holding the file open closes it. Killing the process will do the job; if you like a big hammer, a reboot kills everything.
    For snapshots: use the "zfs list -t snapshot" command and look at the snapshots. The space used by a snapshot indicates how much space is held by it. Deleting the snapshot will free space unless that space is still held by another snapshot. To free the space held by a file, you have to delete all snapshots that contain that file.
    Hopefully, I got all of this right.
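
    For reference, a minimal sketch of the checks described above (pool, dataset, snapshot names, and the PID are placeholders):

    zfs list -t snapshot -o name,used,referenced   # space pinned by each snapshot
    zfs destroy tank/data@2012-05-01               # free the space held by one snapshot
    fuser -c /tank/data                            # PIDs with files open on the file system
    pfiles 1234                                    # list the files a given process holds open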

  • Can you upload to a file system directly and not to wwv_flow_file_objects$

    The following link shows how to upload a file. The uploaded files are stored in a table called wwv_flow_file_objects$.
    However, what if I want to upload a file but save it to the file system (a share) instead of putting it into wwv_flow_file_objects$? Is that possible?
    http://download.oracle.com/docs/cd/B31036_01/doc/appdev.22/b28839/up_dn_files.htm#CJAHDJDA
    When you use the file upload item type, the files you upload are stored in a table called wwv_flow_file_objects$

    It has to be loaded into wwv_flow_file_objects$, but you can use the
    UTL_FILE package to write it out to a directory the database has access to. Use FOPEN with open_mode 'wb' and the PUT_RAW procedure to write the file. Use the READ procedure of the DBMS_LOB package to read your BLOB in chunks.
    UTL_FILE: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#i1003526
    DBMS_LOB: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i999170
    I think that should work.
    Patrick
    My APEX Blog: http://inside-apex.blogspot.com
    The ApexLib Framework: http://apexlib.sourceforge.net
    The APEX Builder Plugin: http://sourceforge.net/projects/apexplugin/
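
    As a rough, hedged sketch of what Patrick describes (the directory object EXPORT_DIR, the file name, and direct query access to wwv_flow_file_objects$ are all assumptions; depending on the APEX version the uploads may instead be reachable through a view such as APEX_APPLICATION_FILES):

    DECLARE
      l_blob   BLOB;
      l_file   UTL_FILE.FILE_TYPE;
      l_buffer RAW(32767);
      l_amount INTEGER;
      l_pos    INTEGER := 1;
      l_len    INTEGER;
    BEGIN
      -- hypothetical predicate; pick the uploaded row you want to export
      SELECT blob_content INTO l_blob
        FROM wwv_flow_file_objects$
       WHERE name = 'F123/report.pdf';
      l_len  := DBMS_LOB.GETLENGTH(l_blob);
      -- EXPORT_DIR is an assumed directory object pointing at the target share
      l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'report.pdf', 'wb', 32767);
      WHILE l_pos <= l_len LOOP
        l_amount := LEAST(32767, l_len - l_pos + 1);
        DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);  -- read the next chunk of the BLOB
        UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);          -- append it to the OS file
        l_pos := l_pos + l_amount;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /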

  • File System backup and restore command

    Hi All,
    I require the file system backup and restore commands on AIX with Tivoli Storage Manager. Please send me the commands.
    Thanks in Advance
    Regards,
    Soumya

    Please check this out
    Restore
    http://www.ahinc.com/aix/backup.htm
    restore -xvf /dev/rmt0 restores all the files located on the tape device that were backed up using the backup command.
    Backup
    http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds1/backup.htm
    Hope this will help you.
    Regards
    dEE
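
    For reference, a minimal sketch of the native AIX commands plus the usual TSM client equivalents (the mount point and tape device are placeholders):

    backup -0 -uf /dev/rmt0 /data        # level-0 backup of the /data file system to tape
    restore -xvf /dev/rmt0               # restore the files from the same tape
    dsmc incremental /data               # back up the file system to Tivoli Storage Manager
    dsmc restore "/data/*" -subdir=yes   # restore it from TSM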

  • Dfc: display file system space usage using a graph and colors

    Hi all,
    I wrote a little tool, somewhat similar to df(1) which I named dfc.
    To present it, nothing better than a screenshot (because of colors):
    And there are a few options available (as of version 3.0.0):
    Usage: dfc [OPTIONS(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE] [-u UNIT]
    Available options:
    -a print all mounted filesystem
    -b do not show the graph bar
    -c choose color mode. Read the manpage for details
    -d show used size
    -e export to specified format. Read the manpage for details
    -f disable auto-adjust mode (force display)
    -h print this message
    -i info about inodes
    -l only show information about locally mounted file systems
    -m use metric (SI unit)
    -n do not print header
    -o show mount flags
    -p filter by file system name. Read the manpage for details
    -q sort the output. Read the manpage for details
    -s sum the total usage
    -t filter by file system type. Read the manpage for details
    -T show filesystem type
    -u choose the unit in which to show the values. Read the manpage for details
    -v print program version
    -w use a wider bar
    -W wide filename (un truncate)
    If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
    (it is also available on the archlinuxfr repository for those who have it enabled).
    For further explanation, there is a manpage as well as a wiki on the official website.
    Here is the official website: http://projects.gw-computing.net/projects/dfc
    If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
    Cheers,
    Rolinh
    Last edited by Rolinh (2012-05-31 00:36:48)
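
    For example, a few invocations consistent with the option list above (the file system type is just an example):

    dfc              # default colored overview with usage bars
    dfc -t ext4      # only show ext4 file systems
    dfc -d -s        # also show used size and a summary line
    dfc -n -b        # plain output: no header, no graph bar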

    bencahill wrote:There were the decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
    I know I changed the options a lot with 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type so someone using the original df would not be surprised.
    I'm sorry for the inconvenience. There should not be any more changes like this in the future, though; I thought it was needed (especially because of the unit options).
    bencahill wrote:
    Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
    Two suggestions:
    1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
    Ok, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (default) because I do not use long options, but I think this would be OK, right?
    bencahill wrote:2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
    This was already planned for the 2.2.0 release.
    bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
    This is what I would have suggested.
    bencahill wrote:By the way, awesome software.
    Thanks, I'm glad you like it!
    bencahill wrote:P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
    This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... so it's actually better that you posted your requests here!

  • Increase file system size

    Hello,
    I have a Solaris 9 system where disk c0t0d0 has 7 partitions. The first partition shows a mounted file system with 10 GB of space. This file system is nearly full (95% used). I need to increase its size. How can I do that?
    I have another partition that shows 30 GB with no file system mounted. Can I create a file system to use this 30 GB? Will it disturb the existing file systems?
    Appreciate your help.


  • Solaris file system space

    Hi All,
    When I run the df -k command on my Solaris box, I get the output shown below.
    Filesystem 1024-blocks Used Available Capacity Mounted on
    rpool/ROOT/solaris-161 191987712 6004395 140577816 5% /
    /devices 0 0 0 0% /devices
    /dev 0 0 0 0% /dev
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 4184236 496 4183740 1% /system/volatile
    objfs 0 0 0 0% /system/object
    sharefs 0 0 0 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 146582211 6004395 140577816 5% /lib/libc.so.1
    fd 0 0 0 0% /dev/fd
    swap 4183784 60 4183724 1% /tmp
    rpool/export 191987712 35 140577816 1% /export
    rpool/export/home 191987712 32 140577816 1% /export/home
    rpool/export/home/123 191987712 13108813 140577816 9% /export/home/123
    rpool/export/repo 191987712 11187204 140577816 8% /export/repo
    rpool/export/repo2010_11 191987712 31 140577816 1% /export/repo2010_11
    rpool 191987712 5238974 140577816 4% /rpool
    /export/home/123 153686630 13108813 140577816 9% /home/12
    My question here is: why does the /usr/lib/libc/libc_hwcap1.so.1 file system show the same size as the / (root) file system? And what is the significance of the /usr/lib/libc/libc_hwcap1.so.1 file system?
    Thanks in Advance for your help..

    You must have a lot of small files on the file system.
    There are a couple of ways; the simplest is to increase the size of the file system.
    Or you can create a new file system with a higher inode count, so you can utilize the space and still have enough inodes. Check out the mkfs_ufs man page and the nbpi=n option.
    my 2 bits
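
    A hedged sketch of those checks (the device name is a placeholder; nbpi=2048 allocates roughly one inode per 2 KB of data space, denser than the default):

    df -o i /export                      # current inode usage (iused/ifree) on a UFS file system
    newfs -i 2048 /dev/rdsk/c0t1d0s6     # build the new file system with a denser inode ratio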

  • Problem in Reducing the root file system space

    Hi All ,
    The root file system has reached 86%. We have cleared 1 GB of data under /var, but the root file system still shows 86%. Please note that /var is not a separate file system.
    I have furnished the df -h output for your reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know if the root partition filling up was sudden, and thus due to the deletion of an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Sol 10 11/06, my 12 GB root partition was 48% full. Now, about 2 months later, after applying available patches, it is 53% full. That is about 600 MB taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option (which would skip backing up old patch versions), so the superseded patches are building up in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time-consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
    Thank you.
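
    If it helps, a hedged sketch of seeing where that space went and of installing future patches without keeping backout data (the patch ID is a placeholder; note a patch applied with -d cannot be backed out later):

    du -sk /var/sadm/pkg/*/save 2>/dev/null | sort -n | tail   # largest patch backout directories
    patchadd -d 123456-78                                      # apply a patch without saving backout files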

  • QFS doesn't update file system space after a failover

    When I do a failover (bringing down the master node of the QFS device) while deleting a file, the space on the file system becomes inconsistent (the df command reports more used space than du does), and I have to run a file system check on it to get the free space back.

    Thanks,
    The version of QFS is VERSION.4.6
    SUN Cluster version 3.2
    Solaris 10 8/07
    The file "mcf" is
    # Equipment Eq Eq Family Device Additional
    # Identifier Ord Type Set State Parameters
    kml 1 ms kml on shared
    /dev/did/dsk/d7s0 10 md kml on
    /dev/did/dsk/d8s0 11 md kml on
    /dev/did/dsk/d9s0 12 md kml on
    /dev/did/dsk/d10s0 13 md kml on
    /dev/did/dsk/d11s0 14 md kml on
    /dev/did/dsk/d12s0 15 md kml on
    /dev/did/dsk/d13s0 16 md kml on
    /dev/did/dsk/d14s0 17 md kml on
    /dev/did/dsk/d15s0 18 md kml on
    /dev/did/dsk/d16s0 19 md kml on
    /dev/did/dsk/d21s0 20 md kml on
    /dev/did/dsk/d22s0 21 md kml on
    /dev/did/dsk/d23s0 22 md kml on
    /dev/did/dsk/d24s0 23 md kml on
    /dev/did/dsk/d25s0 24 md kml on
    /dev/did/dsk/d26s0 25 md kml on
    /dev/did/dsk/d27s0 26 md kml on
    /dev/did/dsk/d28s0 27 md kml on
    /dev/did/dsk/d29s0 28 md kml on
    /dev/did/dsk/d30s0 29 md kml on
    # samfsinfo kml
    samfsinfo: filesystem kml is mounted.
    name: kml version: 2 shared
    time: Thursday, April 10, 2008 4:48:05 PM PYT
    count: 20
    capacity: 000000003d064400 DAU: 64
    space: 000000003d04e480
    ord eq capacity space device
    0 10 00000000030d1d00 00000000030d0580 /dev/did/dsk/d7s0
    1 11 00000000030d1d00 00000000030d1c00 /dev/did/dsk/d8s0
    2 12 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d9s0
    3 13 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d10s0
    4 14 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d11s0
    5 15 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d12s0
    6 16 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d13s0
    7 17 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d14s0
    8 18 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d15s0
    9 19 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d16s0
    10 20 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d21s0
    11 21 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d22s0
    12 22 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d23s0
    13 23 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d24s0
    14 24 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d25s0
    15 25 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d26s0
    16 26 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d27s0
    17 27 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d28s0
    18 28 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d29s0
    19 29 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d30s0
    #df -k |grep kml
    kml 1023820800 5384 1023815416 1% /oradata_kml1
    #clrs status -g qfs-kml-rg +
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    qfs-kml-rs m9ka Online Online - Service is online.
    m9kb Offline Offline
    Best regards,
    Ivan

  • dNFS with ASM versus dNFS with a file system: advantages and disadvantages

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be across these nodes.
    For storage we have 2 options - dNFS with ASM and dNFS without ASM.
    The advantages of ASM are well known --
    1. Easier administration for the DBA, as through this 'layer' we know the storage very well.
    2. Automatic rebalancing and dynamic reconfiguration.
    3. Striping and mirroring (though we are not using this option in our environment; external redundancy is provided at the storage level).
    4. Less (or no) dependency on the storage admin for DB file related tasks.
    5. Oracle also recommends using ASM rather than file system storage.
    Advantages of dNFS (Direct NFS) ---
    1. Oracle bypasses the OS NFS layer and connects directly to the storage.
    2. Better performance, as user data does not need to be buffered in the OS kernel.
    3. It load-balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now, if we combine these two options, how will that configuration behave in terms of administration, manageability, performance, and downtime in case of a future migration?
    I have collected some points.
    In favor of NOT having ASM --
    1. ASM is an extra layer on top of the storage, so when using dNFS this layer should be removed, as there are no performance benefits.
    2. Store the data in the file system rather than in ASM.
    3. Striping will be provided at the storage level (not entirely sure about this).
    4. External redundancy is being used at the storage level, so it's better to remove ASM.
    Points in favor of HAVING ASM with dNFS --
    1. If we remove ASM, the DBA has little or no control over the storage; he can't even see how much free space is left at the physical level.
    2. The striping option is there to gain performance benefits.
    3. Multiplexing has benefits over mirroring when it comes to recovery.
    (e.g., suppose one database is created with only 1 control file because external mirroring is in place at the storage level, and another database is created with 2 copies (multiplexed at the Oracle level); if an rm command is issued to remove that file, there will definitely be a difference in how long it takes to restore the file.)
    4. We are now familiar and comfortable with ASM.
    I have also checked MOS but could not come to any conclusion. Oracle says --
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment. " ------How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM, but if this turns out to be the wrong design, I want to make sure it is corrected in the first place.
    Regards,
    Hemant

    I agree, having ASM on NFS is going to give little benefit whilst adding complexity. The NAS will carry out mirroring and striping in hardware, whereas ASM does it in software.
    I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this MetaLink note:
    Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
    Most of the fixes have been rolled into 11gR2 and I'm not sure what the state of play is on 12c.
    Hope this helps
    ZedDBA
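
    If it is useful, whether Direct NFS is actually in use can be verified from the database once it is open; a sketch, run from SQL*Plus as SYSDBA (these views exist in 11g and later):

    SELECT svrname, dirname FROM v$dnfs_servers;   -- NFS servers the dNFS client has mounted
    SELECT filename FROM v$dnfs_files;             -- data files currently served through dNFS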

  • Monitor free file system space in GB

    Hi All,
    I need to check all the file systems used by our database and get an alert when any of them drops below some number of GB free.
    For example, my file system has 100 GB in total, and because I put some non-Oracle stuff there (or Oracle stuff, it doesn't matter), the free space is now lower than 10 GB.
    I want to get an alert when the free space is lower than 10 GB.
    I didn't find any metric that can help me with this.
    Can somebody help?
    10x
    zvika

    Hi,
    I tried the Directory Size metric, but it just gives me the size of the directory.
    I need to check whether the file system has less free space than a threshold value in GB.
    I also had some errors trying to get the metric to work on directories:
    File or Directory Attribute Not Found: File or Directory Name XXX
    Error in computing size for: XXX
    Angrydot
    You are right, but 10% free of 100 GB is not OK, while 10% free of 500 GB is OK.
    I want to set one absolute threshold for all file systems.
    10x
    Zvika
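
    Failing a suitable built-in metric, a hedged sketch of an OS-level check that alerts on an absolute free-space threshold (mount points, the 10 GB limit, and the mail address are placeholders; it could be wrapped as a user-defined metric or a cron job):

    MSG=`df -P /u01 /u02 | awk 'NR > 1 { free = $4 / 1048576; if (free < 10) printf "%s: only %.1f GB free\n", $6, free }'`
    [ -n "$MSG" ] && echo "$MSG" | mailx -s "file system space alert" dba@example.com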

  • File system used space monitoring rule

    All, I'm trying to change the way OC monitors disk space usage. I don't want it to report on nfs filesystems, except from the server from which they are shared.
    The monitored attribute is FileSystemUsages.name=*.usedSpacePercentage.
    I'd like it to only report on ufs and zfs filesystems. I've tried to create new rules using the attributes:
    FileSystemUsages.type=ufs.usedSpacePercentage
    FileSystemUsages.type=UFS.usedSpacePercentage
    FileSystemUsages.type=zfs.usedSpacePercentage
    FileSystemUsages.type=ZFS.usedSpacePercentage
    But I don't get any alerts generated on systems which I know violate the thresholds I've specified.
    Has anybody successfully set up rules like these? Am I on the right track? Do ufs/UFS/zfs/ZFS need to be single or double quoted? Documentation with various examples is non-existent as far as I can tell.
    Any help is greatly appreciated
    Tim

    Did you get any answers to this question? It seems like OEM 12c has file system space usage monitoring set up for NFS-mounted file systems; however, I could not find a place to specify the threshold for those NFS-mounted file systems, except for the root file system. Does anybody know how to set up thresholds for NFS-mounted file systems (NetApp storage)? Thank you very much.

  • File system getting full and server node going down

    Hi Team,
    We are currently running AIX on IBM Power 6.
    In our environment, the development system's file system is getting full, and the development system has become slow to access.
    Can you please let me know what exactly the problem is, which command is used to see the file system size, and how to resolve the issue, for example by deleting core files? Please help me.
    Thanks
    Manoj K

    Hi Orkun Gedik,
    When I executed the commands df -lg and find . -name core, nothing was displayed, but when I execute df it shows me the information below. This is the original output, in which I have masked the SID.
    Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
    /dev/fslv10     52428800  16279744   69%   389631    15% /usr/sap/SID
    The server0 node is giving the problem; it goes down all the time.
    If I check the std_server0.out file in /usr/sap/SID/<Instance>/work for that server node, the information below is written in the file.
    framework started for 73278 ms.
    SAP J2EE Engine Version 7.00   PatchLevel 81863.450 is running! PatchLevel 81863.450 March 10, 2010 11:48 GMT
    94.539: [GC 94.539: [ParNew: 239760K->74856K(261888K), 0.2705150 secs] 239760K->74856K(2009856K), 0.2708720 secs] [Times: user=0.00 sys=0.36, real=0.27 secs]
    105.163: [GC 105.164: [ParNew: 249448K->80797K(261888K), 0.2317650 secs] 249448K->80797K(2009856K), 0.2320960 secs] [Times: user=0.00 sys=0.44, real=0.23 secs]
    113.248: [GC 113.248: [ParNew: 255389K->87296K(261888K), 0.3284190 secs] 255389K->91531K(2009856K), 0.3287400 secs] [Times: user=0.00 sys=0.58, real=0.33 secs]
    Please advise.
    thanks in advance
    Manoj K
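
    For what it's worth, a hedged sketch of the usual space checks on AIX when an SAP file system fills up (paths and thresholds are examples only; confirm core files are not needed before removing them):

    df -g /usr/sap/SID                                       # file system usage in GB
    du -sm /usr/sap/SID/* | sort -rn | head                  # largest directories, in MB
    find /usr/sap/SID -name core -size +10000 -ls            # core files larger than ~5 MB
    find /usr/sap/SID -name core -mtime +7 -exec rm {} \;    # remove old core files once confirmed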

  • CC configuration causes freezing and file system corruption

    I have received no response for:
    http://forums.adobe.com/message/6074118
    It is a major issue; can I file an issue/fix request somehow?
    Basically it appears that the default configuration when installing the complete product suite causes the OS (OSX Mavericks) to freeze unpredictably and render the system unusable unless shut down and rebooted. This may result in file system corruption and data loss requiring complete reinstall of OS and all products.
    Some have mentioned 'drive' is causing this, but I am not familiar with that component and do not see any options to remove or disable such a product in CC.
    I do not want any non-essential add-ons (like Bridge?) - just a reliable production configuration so I can get work done.
    The risk of having the system 'locked up' or file system corrupted is unacceptable.
    Thanks,
    Mike

    Hi mkrjf,
    Please enable the root account, try to use Photoshop, and let me know if the behavior is still the same.
    Root: http://support.apple.com/kb/PH14281
    Regards,
    Romit Sinha
