Disk Size Increasing Very Fast

I am facing a very critical issue: the disk where Exchange 2013 (E2k13) is installed is losing nearly 1 GB of free space every day, yet the database file itself is not growing much on the disk. Please suggest a good option to sort this out.
BRAT

Hi,
Based on my knowledge, circular logging is not recommended in a normal Exchange production environment, and enabling it is not a long-term option.
I recommend you disable it and run a full backup to solve your issue; a successful Exchange-aware full backup truncates the committed transaction logs and frees the space they occupy.
For more information, here is a thread for your reference:
Enable circular logging (note: although the thread is about Exchange 2010, I think it also applies to Exchange 2013 for this issue)
http://social.technet.microsoft.com/Forums/en-US/a01579af-8cdc-40d3-aef4-b5f569833553/enable-circular-logging?forum=exchange2010
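To confirm that transaction logs really are what is consuming the space before the backup runs, a quick tally helps (a minimal Python sketch; the log folder path is an assumption -- substitute your mailbox database's log path):

    import os

    # Assumption: substitute the log folder (LogFolderPath) of your mailbox database.
    LOG_DIR = r"C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\DB01"

    total_bytes = 0
    log_count = 0
    for root, _dirs, files in os.walk(LOG_DIR):
        for name in files:
            if name.lower().endswith(".log"):
                total_bytes += os.path.getsize(os.path.join(root, name))
                log_count += 1

    print(f"{log_count} transaction log files, {total_bytes / 1024**3:.2f} GiB total")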
Hope it helps.
Best regards,
Amy
Amy Wang
TechNet Community Support

Similar Messages

  • BPM data increases very fast; want to get suggestions about BPM capacity

    Dear BPM Experts:
    I have a problem with BPM capacity. My customer is using BPM 11g, and every day they
    have 1,000 new processes; every process has 20-30 tasks, and they find the data increases very fast, about 1 GB/day.
    We have done a test of BPM capacity: I created a new simple process named simpleProcess,
    which has only three input fields, and I used the API to initiate the task and submit it to the next
    person.
    We are using the dev_soainfra tablespace with the default audit level. After inserting 5,000 tasks, we found dev_soainfra had reached 362.375 MB,
    so on that basis 30,000 tasks would use about 362 MB x 6 ≈ 2 GB of database space. In the next phase my customer wants
    to push the BPM platform to more customers, which means more and more customers will be using it, so
    I want to ask: is this rate of data growth reasonable? Do you have a capacity planning guide for BPM 11g? And if I want to
    lower the data growth, what can we do?
    We have tried turning the audit log off, but it seems of little use; it only saved 8% of the space.
    Thanks for your help!
    Eric
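    A quick sanity check of the arithmetic above (a small Python sketch; all figures come from the post):

        # Figures from the post above.
        mb_per_5000_tasks = 362.375        # dev_soainfra growth after 5,000 tasks
        tasks_per_day = 1000 * 25          # 1,000 processes/day x ~25 tasks each

        kb_per_task = mb_per_5000_tasks * 1024 / 5000
        gb_per_day = tasks_per_day * kb_per_task / 1024 / 1024

        print(f"~{kb_per_task:.0f} KB per task, ~{gb_per_day:.1f} GB/day expected")
        # ~74 KB per task and ~1.8 GB/day -- so the observed ~1 GB/day is plausible.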

    It looks like you are writing your data to disk every so often.  For that reason, I recommend making it based on the number of samples you have instead of the time.  With that you can preallocate your arrays with constants going into the shift registers.  You then use Replace Array Subset to update your arrays.  When you write to the file, make sure you go back to overwriting the beginning of your array.  This will greatly reduce the amount of time you spend reallocating memory and will reduce your memory usage.
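    The same preallocate-and-overwrite idea outside LabVIEW, as a minimal Python sketch (the buffer size and file name are arbitrary assumptions):

        import array

        BUF_SIZE = 4096                            # assumed chunk size
        buf = array.array("d", [0.0] * BUF_SIZE)   # allocated once, like a shift register
        idx = 0

        def add_sample(value, out):
            """Store a sample in place; flush one full chunk when the buffer fills."""
            global idx
            buf[idx] = value                       # overwrite in place -- no reallocation
            idx += 1
            if idx == BUF_SIZE:
                out.write(buf.tobytes())
                idx = 0

        with open("samples.bin", "wb") as out:
            for i in range(100_000):
                add_sample(i * 0.5, out)
            if idx:                                # flush the partially filled tail
                out.write(buf[:idx].tobytes())

    The buffer is allocated once and reused, so the steady-state cost per sample is one in-place store plus an occasional sequential write.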
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • RSBATCHDATA table in BI increasing very fast

    Hi All,
    Our BI Production server is installed on Windows 2003 with MaxDB as the database, at SP level 15.
    In our DB, the RSBATCHDATA table is growing very fast.
    Is there any way to reduce the size of this RSBATCHDATA table in the DB?
    SAP Note checked : 1292051
    Any Suggestion is welcome.
    Regards,
    Sharib Tasneem

    Hi Naveed/Jaun,
    I have used transaction RSBATCH and selected deletion of messages older than 7 days, but it only executed for 80 seconds.
    Under the settings for "parallel processing" I left "Select Process" empty.
    Before running this, the size of RSBATCHDATA was 11 GB, and the size is still the same after executing the job.
    Is there anything else that needs to be specified?
    Any suggestion is welcome.
    Regards,
    Sharib Tasneem

  • Mac Mini 10.6 Server startup disk size increase problem.

    Mac Mini 10.6 Server shows a system startup disk sized 50 GB with 20 GB free, while Disk Utility shows the partition sized 500 GB with 470 GB free. How can I increase the size of the startup volume without a complete reinstall?

    Is the disk partitioned?
    Boot the operating system DVD, use Disk Utility from the Utilities menu (second screen of the installation process, just before you start the actual install) to make a complete backup to an external disk, make a second backup (in case there's a problem with the first), wipe and repartition the internal disk (using Disk Utility), and reload.
    You'll want to consider whether you want to mirror the two disks within the box, too, i.e. set up RAID-1 across both of the 500 GB drives. That mirroring can be established while you're wiping and repartitioning the disks.

  • Disk size increasing

    Hi all,
    The disk where the application is installed fills by about 1% every 10 days.
    Is this normal? If not, please tell me how to troubleshoot and resolve it.
    We are not applying patches regularly.
    Thanks
    Yusuf

    Nothing unusual; Oracle is allowed to write a few things of its own. :-)
    It will just be log file growth on the apps node and admin node; check the $APPLCSF folder.
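    To see which files under $APPLCSF are actually growing, a quick scan like the following helps (a Python sketch; the fallback path and the size threshold are arbitrary assumptions):

        import os

        # Assumption: $APPLCSF is set in the environment of the apps tier.
        base = os.environ.get("APPLCSF", "/u01/app/applcsf")
        MIN_MB = 100                     # report files above this size

        big = []
        for root, _dirs, files in os.walk(base):
            for name in files:
                path = os.path.join(root, name)
                try:
                    mb = os.path.getsize(path) / 1024**2
                except OSError:
                    continue
                if mb >= MIN_MB:
                    big.append((mb, path))

        for mb, path in sorted(big, reverse=True)[:20]:
            print(f"{mb:8.1f} MB  {path}")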
    Thanks
    Sundeep
    http://troubleshootingappsdba.blogspot.com

  • GVD_SEGSTAT table increasing very fast

    Hi Gurus,
    Our GVD_SEGSTAT table contains more than 900 million records and is now growing by 30-50 million records per day. But why? I found two notes (867162 and 1080813), but I think they do not contain my answer, just a workaround for the symptom.
    Can you help me understand the reason for the accelerating growth of the table?
    Thanks for your help!

    It looks like there is some tuning to be done in your Oracle instance with regard to statistics collection and online backup, which could be why snapshots of statistics are getting collected in your tables...
    Not sure if tuning alone is the reason; maybe the statistics for backups are not being overwritten with the latest version but are being stored as new records...?
    Guessing... not sure if I am anywhere near the answer...
    The best person to answer this would be a DBA...
    Edited by: Arun Varadarajan on May 7, 2009 8:50 AM

  • Hyper-V checkpoint disk size growth out of control

    Hi,
    I have Hyper-V with an Exchange server installed in a VM, and I used checkpoints on this production VM. Now I am in trouble with my disk space: the VM suddenly went paused-critical because the space used by the checkpoints ran out.
    Based on Microsoft's recommendation it is not advisable to use checkpoints in a production environment; if you do, you should keep the VHDs and the checkpoint path on separate disks.
    Now I am planning to delete all the checkpoints, but my local disk's free space is currently very low.
    My VM is now 300 GB and my free space is only 50 GB. Please tell me how much free space is needed for the merge that runs while deleting the checkpoints. Is it possible to delete the checkpoints in my current condition?
    Please give me advice; I am in a very horrible situation right now.
    Thanks.

    Hi,
    Using checkpoints can lead to unwanted behaviours like the one you are encountering.
    AFAIK, the merge process will need a variable amount of free space depending on the AVHD sizes and content.
    But in your case the VM is a production VM, and the process may misbehave.
    So I highly recommend you export your VM to another location before deleting the checkpoints:
    1. Connect a disk to your Hyper-V server.
    2. Shut down your VM.
    3. Right-click the VM, choose Export, and browse to the place where you want to export. The exported VM will contain your VHD together with the AVHDs (the checkpoints).
    4. Now delete the checkpoints one by one: delete the first one, wait for the merge, delete the second, wait for the merge, and so on. Do not forget that you are using Windows Server 2012, so keep the VM stopped for the merge process to run.
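    To get a rough upper bound on the space the merge needs, you can total the checkpoint (AVHD) files; at worst, that much data has to be folded back into the parent VHD. A small Python sketch (the folder path is an assumption -- point it at wherever the VM's disks live):

        import glob
        import os

        # Assumption: the checkpoint files sit alongside the VM's disks.
        CHECKPOINT_DIR = r"D:\Hyper-V\Virtual Hard Disks"

        avhds = glob.glob(os.path.join(CHECKPOINT_DIR, "*.avhd*"))
        total_gb = sum(os.path.getsize(p) for p in avhds) / 1024**3

        print(f"{len(avhds)} checkpoint files, {total_gb:.1f} GiB to merge (worst case)")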
    Regards, Samir Farhat, Infrastructure and Virtualization Consultant || Virtualization, Cloud, Azure. Follow and ask here: https://buildwindows.wordpress.com

  • Share fails when I try to share a large movie file (6.3 GB) to a very fast SD card. Works fine sharing to a hard disk. Any ideas?

    Share fails when I try to share a very large project file (6.3 GB) to a very fast SD card; it works fine sharing to a hard disk. I am using iMovie 10.0.5. Any ideas?

    If the volume is formatted as FAT32, it cannot hold files of 4 GB or larger, regardless of how much free space it has.
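    For reference, the FAT32 per-file limit works out like this (a trivial Python check; the file name is a placeholder):

        import os

        FAT32_MAX = 2**32 - 1              # just under 4 GiB, per file

        path = "movie.mp4"                 # placeholder name
        if os.path.getsize(path) > FAT32_MAX:
            print(f"{path} is too large for a FAT32 volume;")
            print("reformat the card as exFAT, or share to another disk.")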

  • Increase disk size for virtual server

    Hi,
    A virtual server's system drive is currently 90% full. How can I safely allocate more disk space to the system drive of that virtual server without causing issues for the operating system installed on that drive?
    I read through the Virtual Iron documentation and nothing is mentioned about increasing disk space for a virtual server.
    Thanks in advance for response.

    Hi,
    I was hoping to increase the virtual storage of the drive attached to the virtual servers, not the actual physical storage.
    I looked around and found I was able to increase the disk space by editing the virtual storage disk.
    I tested it by creating a new disk (5 GB), increasing its disk space (by 10 GB), logging in to the virtual server, and extending the disk with the unallocated space.
    It does seem to work, and the disk appears as 15 GB.
    I am just concerned: if the Windows operating system is installed on that disk, will extending it with the unallocated space cause any issue?
    Or is it safe to do so?
    Many thanks.

  • Time Machine backup disk size - total capacity of disk or just files used?

    Hi folks,
    After upgrading to Leopard, I'm trying to set up my Time Machine. My main HD is 175 Gig and all the OS and other files take up 37 Gig of that. The drive I want to use for Time Machine (a spare internal hard drive) is a 75 Gig drive with 74 Gig of space available. Time machine says this drive is too small to use.
    According to the Time Machine documentation, Time Machine takes the _total size of the files_ to be backed up and multiplies that by 1.2. So in my case, since the files on my 175 GB drive take up 37 GB, I would need only about 44 GB for my Time Machine backup, and in theory my 75 GB spare drive should work just fine.
    The problem is that Time Machine appears to be taking the total size of the entire HD and using that to calculate the size of the backup drive, which would be 210 GB. Does anyone know why this is happening? It seems like Time Machine is not calculating the needed backup disk size properly and is incorrectly including the unused disk space on my main HD.
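    The two readings of the 1.2 rule, worked out (a trivial Python check of the numbers above):

        FACTOR = 1.2
        used_gb, disk_gb, spare_gb = 37, 175, 75

        print(f"files x 1.2 = {used_gb * FACTOR:.0f} GB  -> fits the {spare_gb} GB spare")
        print(f"disk  x 1.2 = {disk_gb * FACTOR:.0f} GB  -> triggers the 'too small' message")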

    Not sure exactly, but your drive really is too small. Yes, 37 GB plus workspace would do for your initial full backup, but subsequent incrementals could fill it up pretty fast. That depends, of course, on how you use your Mac: how often you add or update files, and of what sizes.
    If you change your habits and, say, download a multi-gb video, then work on editing it for a few hours, you could eat up the remaining space very, very quickly.
    Just to be sure, how are you determining space used? Via right-click (or control-click) and Get Info on your HD icon?
    Also, do you have any other HDs connected? If so, exclude it/them, as TM will include them by default.
    Three possible workarounds:
    First, get a bigger drive. HDs have gotten ridiculously cheap -- 500 gb (or even some 1 tb) for not much over $100.
    Second, use CarbonCopyCloner, SuperDuper, or a similar product instead of TM. CCC is donationware; SuperDuper is about $30, I think. Either can make a full bootable "clone", and CCC has an option to either archive previous versions of changed files or delete them. CCC can be set to run automatically hourly, daily, etc. (I suspect SD can, too, but I don't know its details). An advantage is, of course, that if your HD fails you can boot and run from the "clone" until you get it replaced, then reverse the process and clone the external back to the internal.
    Note that these will take considerably longer, as unlike TM, they don't use the OSX internals to figure out what's been added or changed, but must look at every file and folder. In my case, even smaller than yours, TM's hourly backup rarely runs over 30 seconds; CCC's at least 15 minutes (so I have it run automatically at 3 am). And, if you don't keep previous versions, of course, you lose the ability to recover something that you deleted or changed in error, or got corrupted before the last backup.
    Third (and NOT recommended), continue with TM but limit it to your home folder. This means if you lose your HD, you can't restore your whole system from the last TM backup: you'd have to reload from your Leopard disk, then apply all OS updates and reload any 3rd-party settings, then restore from TM. As a friend of mine used to say, "un-good"!

  • New HDD Load/Unload Cycle Count increasing extremely fast!

    Hi all
    I just upgraded my Pavilion dv5 HDD. The new model is a Hitachi Travelstar 7K500 (HTS725050A9A364). However, I found my new HDD's Load/Unload Cycle Count increasing extremely fast!
    So far the number of load/unload cycles is 12,127, yet the new HDD has only been powered on for 137 hours (about one week of my use). Below is my new HDD's SMART data (from EVEREST 5.50):
    ID   Attribute                            Threshold  Value  Worst  Data
    01   Raw Read Error Rate                  62         100    100    0
    02   Throughput Performance               40         100    100    0
    03   Spinup Time                          33         159    159    2
    04   Start/Stop Count                     0          100    100    14
    05   Reallocated Sector Count             5          100    100    0
    07   Seek Error Rate                      67         100    100    0
    08   Seek Time Performance                40         100    100    0
    09   Power-On Time Count                  0          100    100    137
    0A   Spinup Retry Count                   60         100    100    0
    0C   Power Cycle Count                    0          100    100    14
    BF   Mechanical Shock                     0          100    100    0
    C0   Power-Off Retract Count              0          100    100    1
    C1   Load/Unload Cycle Count              0          99     99     12127
    C2   Temperature                          0          152    152    19, 36
    C4   Reallocation Event Count             0          100    100    0
    C5   Current Pending Sector Count         0          100    100    0
    C6   Offline Uncorrectable Sector Count   0          100    100    0
    C7   Ultra ATA CRC Error Rate             0          200    200    0
    DF   Load/Unload Retry Count              0          100    100    0
    I'm pretty sure the data is correct, because I can hear the HDD's load/unload sound very frequently. The C1 raw value increases by about 2,000 per day. My previous Hitachi 5K320 had no such problem; the operating system is Windows 7 in both cases.
    I'm very worried about this. As you know, a laptop HDD's load/unload cycle rating is typically 600,000. If my HDD keeps increasing like this, it will reach that number in a very short time. Can anyone help me?
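    For scale, the numbers above work out like this (a trivial Python check):

        RATED_CYCLES = 600_000        # typical laptop-drive load/unload rating
        cycles, hours = 12_127, 137   # observed so far

        per_day = cycles / (hours / 24)
        days_left = (RATED_CYCLES - cycles) / per_day

        print(f"~{per_day:.0f} cycles/day; the rating would be reached in ~{days_left:.0f} days")
        # ~2,125 cycles/day -> the 600,000 rating is hit in roughly nine months.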

    It's a feature of modern 2.5" hard disks: the disk parks the head when it has been inactive for a while, to avoid damage, use less power, and reduce heat; it also gives the disk a better-looking spec sheet. With proper management of the head-parking feature by the BIOS/OS, the head is meant to park only after a long period of inactivity, around an hour. Because there is nothing in HP's BIOS or the OS to manage this feature, the disk parks the head about every minute for no reason, and after a second the head goes back onto the platter: that's one load/unload cycle.
    It also affects performance, because the head has to find its place on the disk again: load times get much worse, and videos stutter for the first few seconds. I have the same problem, though not as bad; after 130 hours I had a count of around 1,250 load/unloads. Different brands of disk have different idle timers, and I use uTorrent, which stops the hard disk from idling so much.
    There are a couple of solutions. You can find a tool to update or modify the hard disk firmware to increase the idle timer (I decided against this, as you can permanently break the drive and void the warranty), or contact HP and request a BIOS update with proper management of the head-parking feature, i.e. one that only tells the disk it is idle after an hour.
    The temporary workarounds: download HD Tune and run it all the time, which stops the disk from parking its head because it never lets it go idle; or the solution I am using at the moment, a program called HDDScan. Every time you power on, run the program, go to Tasks > Features > IDE Features, set Advanced Power Management from its default value to 254, and press Set: no more unnecessary load/unloads. The downside is that the disk runs a couple of degrees hotter (mine still never goes above 40 °C even under full load, thanks to an active cooling pad) and is less well protected against shock.

  • Problem When Taking a Dump - Image Size Increase

    Dear developers
    I am developing an e-archive program using Oracle Developer 6i on Oracle 9i (interMedia, with an ORDImage column to save the document image).
    I have a problem with the size of the dump file: whenever I try to take a backup of the database, the file is very large.
    The origin of this is the increase in image size after the images are saved to the table: if an image is about 12 KB on the hard disk, it occupies about 100 KB in the database (I used Database Studio to measure the increase in the tablespace; the table holding the images has its own tablespace).
    Does anyone know why this happens? How can I manage that increase in size, and how can I make the dump file as small as possible?
    Thanks
    Bassem Halawa

    Have you followed the performance suggestions in the interMedia documentation? I don't see how you could have obtained the increase you claim, but perhaps the database is very poorly tuned for images? Or, more likely, the file size does not indicate actual disk space used, and instead reflects extent allocation. I will ask the performance folks if this increase is even possible.

  • Very fast growing STDERR# File

    Hi experts,
    I have stderr# files on two app servers which are growing very fast.
    The problem is that I can't open the files via ST11, as they are too big.
    Is there a guide that explains what these files are about and how I can manage them (reset, ...)?
    Might it be a locking log?
    I have a few entries in SM21 about failed locks.
    I can also find entries about "call recv failed" and "comm error, cpic return code 020".
    Thanks in advance

    Dear Christian,
    The stderr* files are used to record syslog and logon checks. When the system is up, only one of them should be in use; you can delete the others. For example, if stderr1 is being used, you can delete stderr0, stderr2, stderr3, and so on. Otherwise, only shutting down the application server will allow deletion. Once deleted, the files will be created again, and they only grow if the original issue causing the growth still exists; switching between them is internal and not controlled by size.
    Some causes of 'stderr4' growth:
    In the case of repeated input/output errors of a TemSe object (in particular in the background), large portions of trace information are written to stderr. This information is not necessary and not useful in this quantity.
    Please review the following Notes carefully:
       48400: Reorganization of TemSe and Spool
              (here, delete old TemSe objects)
       RSPO0041 (or RSPO1041), RSBTCDEL: to delete old TemSe objects
       RSPO1043 and RSTS0020: for the consistency check
       1140307: STDERR1 or STDERR3 becomes unusually large
    Please also run a Consistency Check of DB Tables as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. Select "Consistency check DB Tables" and click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
    5. If any inconsistencies are reported, run the "Background
       Processing Analysis" (SM65 > Goto > Additional tests) again.
       This time check both the "Consistency check DB Tables" and the
       "Remove Inconsistencies" options.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
    Make sure you run this SM65 check when the system is quiet and no other batch jobs are running as this would put a lock on the TBTCO table till it finishes.  This table may be needed by any other batch job that is running or scheduled to run at the time SM65 checks are running.
    Running these jobs daily should ensure that the stderr files do not increase at this rate in the future.
    If the system is running smoothly, these files should not grow very fast, because they mostly just record error information as it happens.
    For more information about stderr please refer to the following note:
       12715: Collective note: problems with SCSA
              (this Note contains information about what is in stderr and how it is created).
    Regards,
    Abhishek

  • File size is very large in 7.0 compared to 3.X

    Hi,
    When saving a query in 7.0, the file size becomes very large compared to saving the same query with the same amount of data in 3.X. Is there any solution to take care of this problem and reduce the file size in 7.0? Please advise.
    Thanks
    Isac

    Hi,
    The file size increases due to the rich formatting options in BI 7.0.
    Remove the formatting options from the workbook settings in the BEx Analyzer in design mode.
    REDDY

  • Maximum disk size for Z61m?

    I want to replace the original 80 GB HDD of my Z61m with a faster and larger one. What's the maximum disk size the controller/BIOS can cope with?
    Gurk 
    Thinkpad Tablet
    Thinkpad T431s
    ThinkPad Yoga S240 with OneLink Dock

    Any SATA laptop drive will do just fine, be it 160 GB or 320 GB. You know your needs, as well as your budget, best...
    Hope this helps.
    Cheers,
    George
    In daily use: R60F, R500F, T61, T410
    Collecting dust: T60
    Enjoying retirement: A31p, T42p,
    Non-ThinkPads: Panasonic CF-31 & CF-52, HP 8760W

    I've been doing limited keyword searches of selected fields in both MS Access and SQL Server for several years now. My searches have always been in this format: "WHERE Title LIKE '%#sKeyword#%'." They seem to always work. The only downside to this me