NFS consumes a lot of disk space on server

I'm running Solaris 10 on SPARC and have hit a disk-space problem with NFS. While an NFS client has the NFS resources mounted, I can see on the server that utilisation of the shared filesystem grows every day. When I unshare the NFS resources, filesystem usage is much smaller. This is strange, because the size of the files located on the shared filesystem doesn't grow that much.
Does anybody know how to deal with this? Is there some NFS cache that reserves disk space?

I had the same problem: while the resource was mounted, used space grew, but it returned to its initial state when the filesystem was unmounted. The cause is that the NFS client creates temporary .nfsXXXX files when open files are deleted (the so-called silly rename), and those files are only removed when the filesystem is unmounted.
Search for files with an .nfs prefix in the shared resource:
EXAMPLE:
# pwd
/data
# find . |grep -i nfs
Good luck
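A leftover .nfsXXXX file keeps its disk blocks allocated until the client closes it or the filesystem is unmounted. A minimal sketch for locating these files and totalling their size (assuming the share is mounted at /data, as in the example above; substitute your own path):

```shell
# List silly-renamed NFS leftovers (.nfsXXXX) under the share:
find /data -name '.nfs*' -type f -exec ls -l {} \;

# Total their disk usage in kilobytes (du -k is portable across Solaris and Linux):
find /data -name '.nfs*' -type f -exec du -k {} \; | awk '{s += $1} END {print s " KB"}'
```

Note that deleting a .nfs file while a client process still holds it open just makes the client recreate it; find the process holding the file (e.g. with fuser) and stop it instead.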

Similar Messages

  • Will Lion Install Consume a lot of Disk Space in my Time Capsule?

    I have a 500 GB iMac which is about 2/3 full and a 1 TB Time Capsule which is a little over half full. I'm getting ready to install Lion. As far as my backups go, is this just another backup, or will it consume a much bigger part of my Time Capsule since it is a major upgrade?
    Should I just upgrade and not worry about disk space?

    After installing Lion, Time Machine will back up all the new system files, which can amount to about 6-8 GB.

  • In Shared services, Log Files taking lot of Disk space

    Hi Techies,
    I have a question: the logs for BI+ on the Shared Services server are taking a lot of disk space, about 12 GB a day.
    The following files are taking the most space:
    Shared Service-Security-Client log (50 MB)
    Server-message-usage Service.log (about 7.5 GB)
    Why is this happening? Any suggestions to avoid it?
    Thanks in Advance,
    Sonu


  • Does Spotlight use a lot of disk space?

    I'm new to Macs — in fact, my first one hasn't arrived yet — and I'm wondering if Spotlight will tie up a lot of disk space? I'm getting a MacBook Pro with a 100GB drive (modest by desktop standards), and I've noticed that Windows search products (e.g., Google Desktop, MSN Desktop) can create 3-5 GB of data after indexing even a modest 30GB hard drive. If Spotlight is going to "waste" gigs of space on my limited laptop drive, I might look into disabling it. Thanks.

    Do not worry about it. On my boot disk (100 GB with 60 GB occupied) Spotlight uses the following space:
    big:~ mtsouk$ ls -l /.Spotlight-V100/
    total 278972
    -rw------- 1 root admin 219443200 Apr 15 07:23 ContentIndex.db
    -rw------- 1 root admin 238 Feb 28 18:56 _IndexPolicy.plist
    -rw------- 1 root admin 304 Apr 3 23:42 _exclusions.plist
    -rw------- 1 root admin 378 May 28 2005 _rules.plist
    -rw------- 1 root admin 66211840 Apr 15 07:23 store.db
    big:~ mtsouk$
    Mihalis.
    Dual G5 @ 2GHz   Mac OS X (10.4.6)  
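    If you just want the total rather than a file listing, a one-liner sketch (/.Spotlight-V100 is the per-volume index location on Mac OS X 10.4; it is root-owned, hence sudo):

    ```shell
    # Summarise the size of the Spotlight index directory:
    sudo du -sh /.Spotlight-V100
    ```

    In the listing above the index comes to under 300 MB for 60 GB of indexed data, i.e. well under one percent of the content.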

  • Report consuming a lot of temp Space (BIP)

    Hi Experts,
    I am facing an issue.
    Some BIP reports are consuming a lot of temp space (a 5-30 GB temp directory), which is causing service outages (BIP, RMS, ReIM and RPM). BIP, RMS, ReIM and RPM are all installed on the same server.
    Please help to troubleshoot this issue.
    Thanks in Advance

    Please see:
    Troubleshooting Oracle BI Publisher Enterprise 11g [ID 1387643.1]
    Troubleshooting Oracle BI Publisher Enterprise 10g [ID 412232.1]

  • ORA-00257  - lots of disk space available

    I am trying to load about 14 GBs of data, totaling ~13.5 million records.
    The raw data is about 14GBs, and the table space is set to 30GBs, its all text data.
    The data is from about 200 different site files ranging in size from 10-180 MBs.
    This is (real) test data I'm using to try to optimize my indexes and such before I start using the table for actual reports.
    A DBA I am not; proficient at databases and programming I am, so please bear with me :)
    My process is to use sqlldr to on an Oracle 10 DB to load the data into a records_load table, then use a procedure to basically do an insert into records (...) select (...) from records_load, delete from records_load.
    I did a couple sites one at a time, and they work fine, so I have 500,000 records in my records table.
    But when I try to transfer the entire 13,000,000 records from the load table using the SP, it eventually hangs.
    If I log into sqlplus from another window, I get
    ERROR:
    ORA-00257: archiver error. Connect internal only, until freed.
    HOWEVER, everything I find online is telling me that a disk is full.
    Oracle is setup on a Windows Server, as follows:
    Drive Total Free
    oracle-c (c:) 50.0 GB 39.3 GB
    oracle-apps (e:) 99.9 GB 91.2 GB
    oracle-logsA (f:) 195 GB 140 GB
    oracle-logsB (g:) 83.5 GB 83.3 GB
    oracle-data (h:) 278 GB 150 GB
    I ran into this problem on as well on Friday, and ran:
    RMAN> delete expired backup;
    and this "seemed" to fix the issue, which doesn't make much sense to me.
    So, I modified my SP somewhat today, to use a cursor, and loop through each site, and do the
    "*loop* insert into records (...) select (...) from records_load *where site=cur_row.site*, delete from records_load *where site=cur_row.site*"
    Now my db is hung again, with PLENTY of disk space.
    I'm really not sure what to do and would really appreciate any suggestions?

    As you can see, I didn't get any results from the "show parameter archive_log_dest" command.
    I'm also showing the output of "show parameter log_archive" and "archive log list" since these are other commands I've stumbled upon to try to find a solution, in hopes that it helps you.
    Just let me know if there's a typo in that command, and I'll re-run it.
    Thanks
    $ sqlplus user/pass@oracle as sysdba
    SQL*Plus: Release 11.1.0.6.0 - Production on Mon Nov 2 21:37:48 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    Connected to:
    Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
    SQL> SHOW PARAMETER ARCHIVE_LOG_DEST
    SQL>
    SQL> show parameter log_archive
    NAME TYPE VALUE
    log_archive_config string
    log_archive_dest string
    log_archive_dest_1 string
    log_archive_dest_10 string
    log_archive_dest_2 string
    log_archive_dest_3 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    NAME TYPE VALUE
    log_archive_dest_9 string
    log_archive_dest_state_1 string enable
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string enable
    log_archive_dest_state_3 string enable
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    NAME TYPE VALUE
    log_archive_duplex_dest string
    log_archive_format string ARC%S_%R.%T
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 2
    log_archive_min_succeed_dest integer 1
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 6730
    Next log sequence to archive 6730
    Current log sequence 6732
    SQL>
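    Given that the archive destination is USE_DB_RECOVERY_FILE_DEST, ORA-00257 usually means the flash recovery area has hit its logical quota (db_recovery_file_dest_size), not that a physical disk is full; that would also explain why "delete expired backup" seemed to help. A hedged sketch of how to check and relieve the quota (run as SYSDBA; the 20G figure is an example, not a recommendation):

    ```sql
    -- How full is the flash recovery area quota? (values in bytes)
    SELECT name, space_limit, space_used, space_reclaimable
    FROM   v$recovery_file_dest;

    -- Option 1: raise the quota (example size only)
    ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;

    -- Option 2: free space by deleting archived logs that are already backed up
    -- (run inside RMAN, not SQL*Plus):
    --   RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;
    ```

    Bulk loads generate a lot of redo, so during the sqlldr/insert-select cycle the archiver can exhaust the quota long before the underlying drives fill.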

  • Importing from iphoto requires a lot of disk space

    Hi,
    I just purchased Aperture and am in the process of importing my iPhoto library. I followed Apple's video suggestion but was concerned as I started doing it on my own computer.
    I was going to import from iPhoto and leave the files in the same location, but I immediately got a warning that this would take up 9 GB of space.
    1. Is that permanent, or is the 9 GB only required while the import takes place?
    2. If I'm using the iPhoto library and leaving the files there, why is it going to cost me so much disk space? (My laptop is limited on space, hence my concern.)
    3. Am I better off doing a one-time import to a new location and deleting the iPhoto library? Yes, I already backed up (I'll be using Aperture from here on out).
    thanks!

    TD,
    Thanks for that. I was also asking whether it is common and correct that importing and leaving the library where it is would cost 9 GB.

  • Does a gmail account use a lot of disk space?

    I have recently moved my domain from a pop account to a gmail hosted domain, and my employer has us using gmail. I have made IMAP accounts to get my gmail content in Apple Mail. After a couple of months, my available disk space has gone from 30G to 5, without noteworthy additions of applications or files. It's all mail. Is a gmail account a disk hog? If so, are there ways of containing it?

    I think I've got my arms around this problem. I've actually recovered about 65G of disk space in the process.
    Google's version of IMAP combined with Apple Mail is a chatty, noisy, wasteful system. Google creates many folders that replicate a message (or are pointers to the same message). When Apple Mail syncs with Gmail, those folders are created on your hard drive and hold actual copies (not pointers) of the messages. Caches are also spawned, and the whole bloated mess is constantly syncing and spawning more temp, cache, and envelope files.
    A solution is to go into Gmail (on the web), open Settings, and under the Labels tab turn off IMAP syncing for all those extra folders (like All Mail). Just turn them off.
    There are two benefits:
    It won't expand out of control on your hard drive (I have 0.5 GB on Gmail, which had bloated to 65 GB on my hard drive).
    The constant syncing and passing back and forth of files for these various folders will be choked off, and you won't risk getting cut off for excessive bandwidth usage.

  • XI takes a lot of disk space in the non sap partition

    Hello
    We use XI to transfer data from the non-SAP partition to the production partition.
    In the non-SAP partition we now have more than 30 jobs named QZDASOINIT running under the XI user.
    In the job log of these jobs there are many messages saying:
    "Additional 16 megabytes of storage allocated for this job."
    Message ID: PWS0083
    Right now XI takes about 8 percent of our non-SAP partition's disk space (not CPU), and it is growing all the time.
    From time to time we end these jobs manually.
    We use ECC 6.0 on V5R4M0.
    Any idea how to solve this problem?
    Thanks
    benny

    Hi Benny,
    For the growing XI you might want to consider archiving as described in http://help.sap.com/saphelp_nw70/helpdata/EN/3b/505a4232616255e10000000a155106/frameset.htm
    Check if it is one of the described tables, e.g. XI_AF_MSG, that causes most of the growth. If not you should in any case find out/ give information what objects are growing so fast.
    The QZDASOINIT jobs cover the database access of your application.
    HTH,
    Thomas

  • /Volumes using lots of disk space

    Doing a get info on my server boot drive shows its using 53Gb of space which seemed very high to me considering what is on there
    So I ran the command
    sudo du -chxd 1 /
    it reported that /Volumes was using 39Gb
    Obviously I don't just want to remove what is in there as it is all the mounted volumes and I don't want to risk deleting data on them
    how can I clear out this phantom data?

    /Volumes contains entries for every mounted volume on your computer, so if you have any external hard disks, CDs, DVDs, or additional partitions, they show up in /Volumes and will therefore be included in that figure. What's more, the boot drive itself also shows up in /Volumes. Normally this is not an issue, because the du (disk usage) command is clever enough to know the difference between files on one drive/volume/partition and another.
    What I have seen happen on a server providing network home directories, however, is a crash upsetting the entries in /Volumes. Normally you have an additional volume mounted on your server; let's say it is called Users, so it shows up as /Volumes/Users. This is shared by the server, and network login accounts store their home directories in it. Now suppose the server crashes. Because the normal shutdown process did not occur, the steps that unmount this volume did not occur either, so when the server is rebooted the folder /Volumes/Users still exists. The auto-mount process will then mount the volume against a new entry called /Volumes/Users-1, even though it still shows in the Finder as just Users. (The same thing happens if you connect two volumes with the same name at the same time.)
    At this point there are two entries in /Volumes: /Volumes/Users, a leftover from before the crash, and the new live entry /Volumes/Users-1. The problem is that Open Directory is still telling users to access and store their network home directories in /Volumes/Users, and that leftover folder is being automatically shared by the server. So when a user logs in, they start putting files in the leftover folder, which is actually on the boot drive and no longer corresponds to the second volume.
    Potentially several gigabytes of data could end up in the incorrect entry.
    You need to see what in /Volumes is actually making up your disk space use. You can do this with the following command
    sudo du -achx /Volumes
    This should list all files/folders on the boot drive that are in /Volumes but exclude files stored on mounted drives.
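    One way to spot a stale entry is to compare what the system says is mounted with what actually sits in the /Volumes directory; a sketch (macOS/BSD userland assumed):

    ```shell
    # Entries physically present in /Volumes:
    ls -la /Volumes

    # Entries that are genuine mount points right now:
    mount | grep '/Volumes'

    # Anything in the first list but not the second (e.g. Users vs Users-1)
    # is a plain directory on the boot drive and a candidate for the leftover data.
    ```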

  • Disappearing disk space Windows Server 2012 R2 with SharePoint Server 2013 Enterprise

    I've got an interesting problem with a virtual machine in our VMWare environment.  It is Windows Server 2012 R2 with SharePoint Server 2013 Enterprise installed.  I started out with a 60GB disk and it started running out of space, so I increased
    it in VMWare and extended the partition to 100GB.  Well, that lasted for a bit and so I extended it again.  I've done this 3 or 4 times and now I've got a 160GB disk with about 2-3GB of space remaining (and it started with 10GB remaining). 
    WinDirStat shows 105GB of <Unknown> space being used, which is probably my issue.  However, I can't determine what this is and it keeps growing like a tapeworm.  The context menu on the <Unknown> files has all the options disabled,
    so WinDirStat doesn't appear to have access to whatever the file(s) is/are.  I've performed several chkdsk /f on the C: drive and nothing bad is reported.  I don't have any restore points and am not running VSS (that I'm aware of).  The pagefile
    reports as being about 4.9GB, so that's not the issue.  No large files are shown anywhere and my explorer settings are set to show me all files, including system files.
    When I try to run WinDirStat with elevated permissions, it hangs and becomes unresponsive. 
    I've even resorted to running CCleaner to see if it found anything, but it simply found the standard temp files and such...about 1GB. 
    I'm pulling my hair out...and I don't have much to start with.  Anyone have any ideas?
    Thanks
    Russ

    It appears that somehow, Microsoft Fusion Assembly binding logging was turned on and many of the temp folders located at c:\users\username\AppData\Local\Microsoft\Windows\InetCache\IE were filling up with hundreds of thousands of Fusion HTM log files. 
    This is controlled by an entry in the registry HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\ForceLog which was set to 1.  Hopefully, setting it back to zero will fix the issue.  As a result of figuring this out, I have recovered almost 80GB of disk
    space occupied by the log files.
    I thought WinDirStat would show me what I needed to know, but it turns out TreeSize (which I've used in the past) works much better.
    Russ
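    For reference, a sketch of checking and clearing that flag from an elevated command prompt (same hive and value name as described above; verify the current value on your own system before changing it):

    ```
    reg query "HKLM\SOFTWARE\Microsoft\Fusion" /v ForceLog
    reg add "HKLM\SOFTWARE\Microsoft\Fusion" /v ForceLog /t REG_DWORD /d 0 /f
    ```

    After clearing the flag, the accumulated HTM files under InetCache can be deleted normally.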

  • Date and Name Edits Consume Lots of Disk Space

    I am adjusting clip dates on my videos. Also changing the name of the clip in the Events Library. This seems to consume a huge amount of additional storage. I assume iMovie is duplicating the clip? Is there a way to either stop this from occurring or to delete duplicated files? Thanks.


  • System Update Repository - Using a lot of disk space

    Hi
    Looking at the repository, it seems to be using a lot of space, with multiple iterations of similar files.
    It also appears to be keeping large setup files as well as the applications.
    Are all of these necessary?
    Is there any way to remove them? Can I delete old files/setup files? (Do you need the old version if a new application is present and functioning?)
    Many thanks

    What version of System Update are you running? The bloating of the repository was corrected with SU v5, which also redesigned the internals of SU. A side effect is that when v5 is installed, all installation history (from the old SU v4) is lost. Uninstall your current SU, replying YES when you see the prompt during uninstall, then install the latest SU 5.006.16.
    Replying YES deletes:
    C:\ProgramData\Lenovo\SystemUpdate

  • T-SQL for finding the unused tables which are consuming maximum disk space.

    Hi,
    Need help writing a T-SQL query that returns the unused or least-used tables in a database that are consuming the most disk space.
    Thanks  

    Refer
    http://gallery.technet.microsoft.com/SQL-List-All-Tables-Space-baf0bbf9
    create table #TableSize (
        Name varchar(255),
        [rows] int,
        reserved varchar(255),
        data varchar(255),
        index_size varchar(255),
        unused varchar(255))

    create table #ConvertedSizes (
        Name varchar(255),
        [rows] int,
        reservedKb int,
        dataKb int,
        reservedIndexSize int,
        reservedUnused int)

    EXEC sp_MSforeachtable @command1="insert into #TableSize
    EXEC sp_spaceused '?'"

    insert into #ConvertedSizes (Name, [rows], reservedKb, dataKb, reservedIndexSize, reservedUnused)
    select Name, [rows],
        SUBSTRING(reserved, 0, LEN(reserved)-2),
        SUBSTRING(data, 0, LEN(data)-2),
        SUBSTRING(index_size, 0, LEN(index_size)-2),
        SUBSTRING(unused, 0, LEN(unused)-2)
    from #TableSize

    select * from #ConvertedSizes
    order by reservedKb desc

    drop table #TableSize
    drop table #ConvertedSizes
    --Prashanth

  • Datapump import consuming disk space

    Hi all,
    Windows server 2003
    Oracle 10g
    I'm trying to import a schema using datapump:
    impdp username/password schemas=siebel directory=dpdir dumpfile=schema_exp.dmp status=60
    It's importing, but it's consuming massive amounts of disk space, over 40 GB now. Why is it consuming so much space just to do an import? I'm doing a reorg, so I dropped the biggest production schema and did the import. I didn't resize the datafiles, so where is all this extra disk space going?
    Please help; this is a production site and I'm running out of time (and space).

    Tablespace   SEGS  FRAGS   SIZE   FREE  USED
    PERFSTAT        0      1   1000   1000    0%
    SBL_DATA     3448      4  30400  12052   60%
    SBL_INDEX    7680      3   8192   4986   39%
    SYSAUX       3137     39   2560    490   81%
    SYSTEM       1256      1   2048   1388   32%
    TEMP            0                          0%
    TOOLS           2      1    256    256    0%
    UNDOTBS1       11     13   5120   4754    7%
    USERS         169     14   1024    424   59%
    Great, thanks. The tablespaces are not the problem; it is the archive logs. About 100 MB a minute!
    My issue started with the first import: not all the data was imported, meaning there were rows missing (I saw this when I did a row count). So I decided to do the import again, following the same steps as mentioned earlier. If rows are still missing after this import, I want to do a point-in-time recovery with RMAN, because then my export dump file must be corrupt. Am I correct in saying this?
