[SOLVED] Log files getting LARGE

I ran pacman -Syu for the first time in several months yesterday, and my computer has become almost useless due to the fact that everything.log, kernel.log and messages.log get extremely large (3.8 GB) after a while, causing / to become 100% full.
I've located the following in kernel.log:
Mar 14 15:06:44 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
Mar 14 15:06:45 elvix attempt to access beyond end of device
Mar 14 15:06:45 elvix sda5: rw=0, want=1812442544, limit=412115382
Mar 14 15:06:45 elvix attempt to access beyond end of device
Not sure what it means, but the last two lines are repeated XX times and are the reason why the log files grow beyond all limits. Anyone got ideas as to what can be done to fix this?
Last edited by bistrototal (2008-03-14 16:27:15)

logrotate works really well:
http://www.archlinux.org/packages/14754/
There are quite a few threads about configuration floating around.
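For what it's worth, a minimal size-based logrotate entry dropped into /etc/logrotate.d/ would look something like this (the three paths are the logs from the first post; the size cap and rotate count are just examples):

/var/log/everything.log /var/log/kernel.log /var/log/messages.log {
    size 100M
    rotate 4
    compress
    missingok
    notifempty
}

logrotate normally runs from the daily cron job, so a size directive only caps the files the next time it runs.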

Similar Messages

  • LMS 4.0.1 ani log file too large

    My LMS 4.0.1 platform has been installed for several days and I have a very large ani.log file (> 300 MB after 4 days of the daemons running).
    In this file, I see many ANI Discovery errors and warnings, for example:
    2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.11.101.241 hostname: 10.11.101.241)
    2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.74.101.245 hostname: 10.74.101.245)
    2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.14.101.5 hostname: 10.14.101.5)
    2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.3.101.12 hostname: 10.3.101.12)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.17 hostname: 10.1.101.17)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.9 hostname: 10.1.101.9)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,10.1.101.85 hostname: 10.1.101.85)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,192.168.12.51 hostname: 192.168.12.51)
    2011/11/28 11:25:11 EvalTask-background-41 ani ERROR StpSMFGetStpInstance: unable to get stp device information
    These errors are not tied to specific devices (many different devices are affected).
    However, everything seems to be working fine on the platform (layer 2 maps, data collection, inventory, config backup, UT, DFM, ...).
    For information: I was recently in contact with TAC because Data Collection was always stuck in the running state.
    They provided a new PortDetailsXml.class file to replace the original one.
    It has fixed the problem.
    I now suspect that the ani database could be corrupted and needs to be reinitialized.
    I would like to be sure of that and, if possible, to avoid that solution.
    Thanks for your help.

    Hi,
    Found some errors and exceptions in the log.
    We need to follow the steps below to fix the issue and re-initialize the ANI database:
    1. Stop the daemon manager:
    /etc/init.d/dmgtd stop
    (on Windows: net stop crmdmgtd)
    2. Go to /opt/CSCOpx/bin/ and run the command:
    /opt/CSCOpx/bin/perl dbRestoreOrig.pl dsn=ani dmprefix=ANI
    On Windows:
    NMSROOT\bin\perl.exe NMSROOT\bin\dbRestoreOrig.pl dsn=ani dmprefix=ANI
    3. Start the daemon manager:
    /etc/init.d/dmgtd start
    (on Windows: net start crmdmgtd)
    ***IMP*** Re-initializing the ANI database will not lose any device history, because the ANI database does not contain any historical information. As soon as the steps above are complete, run a new Data Collection followed by a User Tracking acquisition and then check the issue again.
    Data Collection: go to Admin > Collection Settings > Data Collection > Data Collection Schedule, and under "Start Data Collection" for "All Devices" click "Start".
    User Tracking: Inventory > User Tracking Settings > Acquisition Actions
    Hope it will help.
    Thanks-
    Afroz
    ***Ratings Encourages Contributors ****

  • Database Log File getting full by Reindex Job

    Hey guys
    I have an issue with one of my databases during the reindex job. Most of the time the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the reindex job fails and I also get errors from the DB due to lack of log file space. Any suggestions?

    Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: ALTER INDEX ... REBUILD is minimally logged in that model, so for the period this job is running you lose point-in-time recovery; plan accordingly. You also need to take a log backup after changing back to FULL recovery.
    I guess Ola's script would suffice; if not, you would have to increase space on the drive where the log file resides. An index rebuild is fully logged under FULL recovery.
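    For reference, the "log backup after changing back to FULL" step might look roughly like this from the command line (the instance, database name and backup path are placeholders, not from this thread):
    sqlcmd -S . -Q "ALTER DATABASE [MyDb] SET RECOVERY FULL"                    # back to FULL after the rebuild window
    sqlcmd -S . -Q "BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log.trn'"     # log backup re-establishes the point-in-time chain going forward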
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • System.log files are large, and cause a kernel panic when deleted

    Hello everyone,
    I am trying to solve a dilemma.  I discovered that over 60GB of my 120GB SSD drive is taken up by system.log files.  All 3 are around 20GB.  I left my laptop running so none of them are the current one.  They are system.log.0, system.log.3 and system.log.4.  I have tried several ways to delete them:
    Using Disksweeper
    At a terminal prompt using sudo rm
    At a terminal prompt using sudo su - (to get root access) and then rm -f ....
    Rebooting the system in single user mode, mounting the drive and then attempting to delete them
    Each time I do, the system reboots with a kernel panic.  I have not looked at the panic log yet, but can post that if you need to.
    I did look at the end of the current log file and discovered there were some issues with programs, and I have fixed those so that log files of this size "shouldn't" be generated anymore.
    Looking for advice as to how to delete these files!  This is driving me nuts.  It shouldn't be this hard.
    Thanks for any help or insight you can give.

    OK, I actually was able to solve this myself.  My problem was that the kernel panic was caused by journaling.  So here is what I had to do:
    I temporarily disabled journaling (see here: http://forums.macrumors.com/showthread.php?t=373067).
    Then I was able to delete the files by starting a terminal session and using sudo rm to delete the files.
    Re-enabled journaling (I don't know if it is needed, but it was on before).
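    Roughly, in command form (the exact invocations aren't in the post; the volume name and log paths are assumptions, and disableJournal/enableJournal were the diskutil verbs of that era):
    sudo diskutil disableJournal /Volumes/Macintosh\ HD     # temporarily turn journaling off
    sudo rm /var/log/system.log.0 /var/log/system.log.3 /var/log/system.log.4
    sudo diskutil enableJournal /Volumes/Macintosh\ HD      # turn journaling back on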
    Hope this can help someone else in the future.

  • BPC log files very large

    Hi All, 
    Our BPC log files are very large compared to the data files.
    We have separate disks for data and log files:
    DATA D:\ = 550GB total
    LOG E:\ = 278GB total
    D:\ICTSI_BPC.mdf = 5.6GB
    E:\ICTSI_BPC_log.ldf = 185GB
    Is this correct?  I have read that the log file should be roughly 25% of the size of the total amount of data. Example: a 4GB database should have a 1GB log file.  How can we adjust the size of the SQL log file?
    we are using a multiserver set up, SAP BPC 7 SP04 (32bit)
                                       MS SQL 2008 Server (64 bit)
    thanks in advance!

    Hi Jeb,
    Do you already back up the database from SQL Server? On my server there was once a case where the log file was bigger than my database, and that was because the backup schedule was not working.
    So what I did was run the backup manually from SQL Server, and the log file shrank automatically.
    After that, I created the automatic backup schedule again.
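    If the .ldf file itself still needs to come down after a log backup, a rough sketch would be something like this (the logical log file name is a guess based on the physical file name in the post, and 10240 MB is an arbitrary target size):
    sqlcmd -S . -Q "BACKUP LOG [ICTSI_BPC] TO DISK = N'E:\Backup\ICTSI_BPC_log.trn'"    # back up the log so it can be truncated
    sqlcmd -S . -Q "USE [ICTSI_BPC]; DBCC SHRINKFILE (N'ICTSI_BPC_log', 10240)"         # then shrink the log file to about 10 GB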
    Hope this information helps.
    Suprapto

  • Do the windowserver log files get rotated?

    Hi,
    I have been looking at my /var/log directory and found the two windowserver related log files: windowserver.log and windowserver_last.log
    I am wondering if these get rotated on a regular basis. If so, how often? (Seeing that there is a windowserver_last.log gives me the impression that these files are rotated.)
    Does anyone know?
    Thanks,
    Steve
    Power Mac G5/2Ghz Dual   Mac OS X (10.4.2)  

    Hi Stephen,
       I agree with Michael. I could find no reference to "windowserver" in /etc or in the StartupItems directory that would explain its rotation. I have a few machines at work that I can check, and my guess is that the WindowServer simply creates a new log when it starts.
       Thus, if you shut down regularly, the log file shouldn't get very big. I don't shut down unless I have to, so I added windowserver.log to my own log rotation script. All you really have to do is duplicate Apple's /etc/periodic/weekly/500.weekly file: rename it with a new name and number, cut out everything but the log rotation code and the initial environment setup, and then put your own filenames into the rotation code. You can also move it to a different directory if you don't want it to run weekly.
       I went ahead and altered their code so that it uses a loop instead of seven lines that differ only by a number. However, I did that mostly because I enjoy scripting; Apple's code is simpler and a little faster.
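       Something along these lines would do it (a rough sketch of the idea, not the exact script; adjust the log path and the number of archives kept):
       #!/bin/sh
       log=/var/log/windowserver.log
       if [ -f "$log" ]; then
           for i in 3 2 1 0; do                                   # shift the older archives up by one
               [ -f "$log.$i.gz" ] && mv -f "$log.$i.gz" "$log.$((i+1)).gz"
           done
           gzip -c "$log" > "$log.0.gz" && : > "$log"              # compress the current log and empty it
       fi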
    Gary
    ~~~~
       The alarm clock that is louder than God's own belongs to
       the roommate with the earliest class.

  • How to recover the database when some of the archive log files got deleted

    I am facing a problem with an Oracle database, related to archive logs.
    Our development database is running in archivelog mode, but we don't have backups scheduled and have no recovery catalog.
    While the database was running, the disk got full, so some archive logs were deleted manually.
    After this they restarted the DB, and now the DB is not coming up. The errors are as follows:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1444383504 bytes
    Fixed Size 731920 bytes
    Variable Size 486539264 bytes
    Database Buffers 956301312 bytes
    Redo Buffers 811008 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    SQL> recover datafile '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    ORA-00283: recovery session canceled due to errors
    ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
    SQL> recover database using backup controlfile;
    ORA-00279: change 215548705 generated at 09/02/2008 17:06:10 needed for thread 1
    ORA-00289: suggestion :
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00280: change 215548705 for thread 1 is in sequence #1107
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00308: cannot open archived log
    '/export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    Media recovery cancelled.
    SQL>
    1. How can I recover the database and bring it online?
    Any help will be highly appreciated.
    With Regards
    Hemant Joshi
    Edited by: hem_Kec on Sep 7, 2008 9:07 AM

    Hi,
    Archive log files are copies of redo log files. As redo log files are circularly overwritten, Oracle generates an archive log file for each redo log file that is about to be overwritten. So if you have a backup that dates back to 10 am and your database crashed at 3 pm, you cannot use the redo log files alone, as they hold incomplete information; to completely recover the database up to 3 pm, you need the archive log files generated between 10 am and 3 pm. In your case, since you are missing one archive log file, you cannot perform complete recovery and hence would suffer data loss.

  • Sync Services Log file very large

    user>library>application support>sync services>local
    My Sync Services log file was over 111 GB. It appears to grow in size every day. I have reset Sync Services per Apple: http://support.apple.com/kb/TS1627
    I even deleted the contents of the local folder (not recommended per Apple, but it seems fine) and still my syncservices.log grows and is taking over my hard drive.
    I use Entourage, and Sync Services is used to sync iCal/Address Book (and then for my iPhone sync).
    Any suggestions to stop this out-of-control log?!?

    Hi,
    Increasing the redo logfile size may reduce the number of log switches, as it takes longer to fill each file up, but your system's redo generation will still be the same. Reduce frequent commits.
    Use the following notes to further narrow down the possible root cause.
    WAITEVENT: "log file sync" Reference Note [ID 34592.1]
    WAITEVENT: "log file parallel write" Reference Note [ID 34583.1]

  • [Solved] log files

    Hi everyone,
    I would like to know which log file I must check to find out why my laptop is shutting down unexpectedly.  It is an 8-year-old laptop, so it may have a hardware problem. If it is a hardware problem (like a bad power supply or overheating), I would like to know whether these kinds of things are logged.
    Thanks in advance!
    Last edited by fwin (2011-09-15 04:30:08)

    fwin wrote:
    Thank you karol and lifeafter2am,
    I checked dmesg and kernel.log, and I could only understand about 10 percent of the output. I will spend a few days trying to learn this (googling) and then I will report back if I still have any questions.
    I don't think it is overheating, it just feels warm (normal). This laptop is my home server, so it is usually not running many applications; it does not even have a GUI. Anyway, I will try the lm_sensors thing just to learn about it, it sounds like fun!
    Thanks again!
    Feel free to post some of it either here or on pastebin if you're too lost. 
    Last edited by lifeafter2am (2011-09-13 04:18:20)
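    For reference, the usual lm_sensors workflow on Arch looks roughly like this (run as root; sensors-detect asks a series of yes/no questions that are omitted here):
    pacman -S lm_sensors      # install the package
    sensors-detect            # probe for supported sensor chips and answer the prompts
    sensors                   # print current temperatures, fan speeds and voltages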

  • Mail file getting large. How to back up and remove?

    My Mail file is 500 MB and I want to move, say, anything older than two years to a different location so I can search it and find attachments. The archive function seems to just create a copy of the mailbox. Is there some way to move messages like this?
    I have deleted the trash, but the size of my mail file and attachments seems to be bogging Mail down and maybe the system a bit.
    Benjamin Barrett
    Seattle, WA

    When I archive my e-mail, it's about 350 MB, and it seems like it's a bit of a burden on Mail.
    It shouldn't be.  Like I said, I'm almost at 350 MB and it's not a problem.  I was able to make a smart mailbox showing everything more than 1 year old, and within seconds had a snappily-scrolling list of nearly 9,000 messages.  Are all these e-mails still in your Inbox, or some other mailbox you use every day?  Are they still on the server?
    It's possible that I just need a new Mac--this one is about three years old
    You're more accustomed to Windows, I bet.  Three years isn't much in the Mac world.  I've got a nearly 8-year-old Mac that was still in frequent use - and far more functional than any of the 3-year-old Windows machines at my wife's small business - until the hard drive failed a couple days ago.  I'll probably replace the drive and let it chug right along for another few years.  The only Mac I've ever owned, and used, for less than 7 years was the one I dropped on a concrete floor!
    It's clear that Mail won't work for this purpose
    Not at all...  I'm sure there's something you can do to improve performance.  Perhaps there's something wrong with your system or with your Mail data.  It's hard to give more specific advice at this point, but the fact is, Mail does work for this purpose for me.

  • AVI File gets larger

    I created a presentation in Captivate. Since Captivate doesn't seem to have an option to reduce hissing noise, I imported the Captivate AVI file into Soundbooth and applied audio edits. But the resulting AVI file was very huge.
    What is wrong?
    Thank you,
    Roy

    My guess would be the audio codec and a mismatched sample rate (12 bit when 16 is expected).
    http://en.wikipedia.org/wiki/Resource_Interchange_File_Format

  • Internal delivery channel, file doesn't get picked up, no trace log file generated

    I have defined an internal File delivery channel as part of a trading agreement between a host and a remote partner (Custom doc over Internet - AS2). The configuration, including the agreement, has been deployed, but the file doesn't get picked up. I have made sure that the directory specified in the delivery channel exists and all permissions are set up (Windows 2003 server).
    I have specified a directory for the oracle.tip.adapter.b2b.transportTrace property in ip.properties; however, no log file gets generated (I restarted B2B).
    I am sure it must be something basic I have overlooked - any ideas, folks?

    Thanks again,
    There is an exception in the log file; I guess that explains why the file doesn't get picked up. Not very informative, though:
    2007.11.05 at 12:40:54:856: B2BStarter thread: B2B - (DEBUG) B2BStarter - configuration obtained
    2007.11.05 at 12:40:54:856: B2BStarter thread: B2B - (DEBUG) B2BStarter - clear global cache
    2007.11.05 at 12:40:54:856: B2BStarter thread: Repository - (DEBUG) CacheServiceManager.clearGlobalCache()
    2007.11.05 at 12:40:54:856: B2BStarter thread: B2B - (ERROR) Error -: AIP-50055: Error in configuration file
         at oracle.tip.adapter.b2b.init.Repository.b2bEngineConfiguration(Repository.java:611)
         at oracle.tip.adapter.b2b.init.Repository.initialize(Repository.java:552)
         at oracle.tip.adapter.b2b.init.B2BServer.readRepository(B2BServer.java:432)
         at oracle.tip.adapter.b2b.init.B2BServer.initialize(B2BServer.java:164)
         at oracle.tip.adapter.b2b.init.B2BStarter.startB2B(B2BStarter.java:217)
         at oracle.tip.adapter.b2b.init.B2BStarter.run(B2BStarter.java:104)
         at java.lang.Thread.run(Thread.java:534)

  • Reader 10.1 update fails, creates huge log files

    Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
    I clicked it to allow the install.
    Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
    It seemed to still be doing something and was showing that it was copying file icudt40.dll.  It still displayed the same thing ten minutes later.
    I went to bed, and this morning it still showed that it was copying icudt40.dll.
    There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
    Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
    They are so big I can't even look at them to see what's in them.  (Too big for Notepad.  When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
    You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
    There doesn't seem to be much point to creating log files that are too big to be read.
    The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
    Maybe I should go back to Adobe Reader 9.
    Reader X never worked very well.
    Sometimes the menu bar showed up, sometimes it didn't.
    PDF files at the physics e-print archive always loaded with page 2 displayed first.  And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
    And I liked the user interface for the search function a lot better in version 9 anyway.  Who wants to have to pop up a little box for your search phrase when you want to search?  Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

    Hi Ankit,
    Thank you for your e-mail.
    Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
    MSI38934.LOG file.
    So I can't upload them.  I expect I would have had a hard time doing so
    anyway.
    It would be nice if the install program checked the size of the log files
    before writing to them and gave up if the size was, say, three times larger
    than some maximum expected size.
    The install program must have some section that permits infinite retries or
    some other way of getting into an endless loop.  So another solution would be
    to count the number of retries and terminate after some reasonable number of
    attempts.
    Something had clearly gone wrong and there was no way to stop it, except by
    going into the Task Manager and terminating the process.
    If the install program can't terminate when the log files get too big, or if
    it can't get out of a loop some other way, there might at least be a "Cancel"
    button so the poor user has an obvious way of stopping the process.
    As it was, the install program kept on writing to the log files all night
    long.
    Immediately after deleting the two huge log files, I downloaded and installed
    Adobe Reader 10.1 manually.
    I was going to turn off Norton 360 during the install and expected there
    would be some user input requested between the download and the install, but
    there wasn't.
    The window showed that the process was going automatically from download to
    install. 
    When I noticed that it was installing, I did temporarily disable Norton 360
    while the install continued.
    The manual install went OK.
    I don't know if temporarily disabling Norton 360 was what made the difference
    or not.
    I was happy to see that Reader 10.1 had kept my previous preference settings.
    By the way, one of the default settings in "Web Browser Options" can be a
    problem.
    I think it is the "Allow speculative downloading in the background" setting.
    When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
    problem. 
    I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
    University Library) and I got banned from the site because "speculative
    downloading in the background" was on.
    [One gets an "Access denied" HTTP response after being banned.]
    I think the default value for "speculative downloading" should be unchecked
    and users should be warned that one can lose the ability to access some sites
    by turning it on.
    I had to figure out why I was automatically banned from arXiv.org, change my
    preference setting in Adobe Reader X, go to another machine and find out who
    to contact at arXiv.org [I couldn't find out from my machine, since I was
    banned], and then exchange e-mails with the site administrator to regain
    access to the physics e-print archive.
    The arXiv.org site has followed the standard for robot exclusion since 1994
    (http://arxiv.org/help/robots), and I certainly didn't intend to violate the
    rule against "rapid-fire requests," so it would be nice if the default
    settings for Adobe Reader didn't result in an unintentional violation.
    Richard Thomas

  • Log file shrinking

    Hi everyone,
    I am very happy to be getting replies from you all.
    Here is my doubt regarding log file shrinking: in our environment all the high-availability features are in use (log shipping, mirroring, replication). Now my doubt is, if I shrink the log file of a database that is part of these HA setups, what will be the reaction? Do I have to reconfigure or not?
    Also, can I make a database snapshot for restoring the database?
    Waiting for replies with anxiety.
    Thanks & regards,
    chetan.tk

    Hi chetan.tk,
    What is the purpose of shrinking the log file? It is recommended to back up the log file frequently, as often as every few minutes, if the log file is large. If more space is required for operations in SQL Server, you may consider increasing the disk space.
    Before shrinking, you should look into the reason the log file grows unexpectedly: A transaction log grows unexpectedly or becomes full on a computer that is running SQL Server.
    For log shipping and database mirroring, you can shrink the log file on the primary server with the non-truncate option, and the shrink operation will be log shipped to the secondary servers.
    As for a replicated database, you will not be able to shrink the log file if replication is not complete. You may try marking all replicated transactions as completed by stopping the Log Reader Agent and restarting it after shrinking. For more information:
    Unable to shrink transaction log on replicated database - SQL 2008
    It is not a good idea to shrink the log file. You may have a look at Tibor’s blog about the problems of shrinking log files:
    Why you want to be restrictive with shrink of database files
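    A quick way to see what is actually keeping the log from being reused, before deciding to shrink (the database name here is a placeholder):
    sqlcmd -S . -Q "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDb'"
    If it reports LOG_BACKUP, a log backup is what is needed rather than a shrink; REPLICATION points to the Log Reader Agent situation described above.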
    Best Regards,
    Stephanie Lv
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Can anyone make sense of these log files?

    Hey Guys,
    I'm getting ~100MB log files every day in private/var/log/DiagnosticMessages. Mostly it seems to be something resembling the following over and over and over again:
    {And here is where the log file gets eaten every time I try to post it - mostly messages from com.apple.message.signature, com.apple.message.domain0, my-Computer-4mDNSResponder, com.apple.message.uuid, and com.apple.mDNSResponder.autotunnel.domainstatus}
    Can anyone help based on the above? Is there any way to post log code with lots of weird binary characters without it getting eaten? Or maybe a screenshot of the log code?
    These logs do not appear when I am out of town, which makes me think that it's one of my Airport Extremes, but I've done factory restores on both of them and the logs are still being generated. I've also had to reinstall my OS due to an unrelated issue and the logs are still being generated.
    Any ideas of what might be generating these giant logs would be really helpful. Thanks!
    Message was edited by: loudguitars81

    I see seven logs in private/var/log, which IS normal. There isn't anything repeating in those beyond the aforementioned Epson thing, which was not what was causing the logs to be generated in private/var/log/DiagnosticMessages. The Epson errors in system.log and its daily predecessors stopped when I got rid of a bunch of Epson cruft, but the giant log files in the DiagnosticMessages logs continue.
    The private/var/log/DiagnosticMessages logs don't seem to be clearing out - I had them going all the way back to when I first installed Snow Leopard before I reinstalled the OS recently (the reinstall wiped out the old logs, obviously). This was actually how I discovered this weird logging problem in the first place - I couldn't figure out what was taking up so much space on my drive and I ran DiskUtilityX to find I had 10 gigs of log files in the DiagnosticMessages folder dating back from my move on 7/3.
    Everything before my move was well under 1MB daily. Everything after my move (and when my computer wasn't staying somewhere besides my own house) was between 75-100MB daily.
    Even after I reinstalled my OS 2/16, the private/var/log/DiagnosticMessages logs are not clearing out - until I ran @LincDavis's terminal commands, I had logs going back to that date, which was obviously more than 7 days ago.
    Does any of that detail help you guys in trying to pinpoint what's happening here?
