Large log files

I had a bit of a panic a few days ago when my available hard drive space went below 700MB. Last I had looked it was around 10GB. So I spent the last few days clearing out old applications and files that I don't need and have made some progress, but not enough.
I downloaded OmniDiskSweeper to help and found that my /private/var/log space is pretty bloated: /private is at 31.5GB. Within /private/var/log the two biggest items are the 'asl' folder and 'system.log.1'.
I went into Console to look up system.log.1 and found 7.6 gigs of the following lines repeating:
Feb 28 09:55:33 Dwend-MacBookPro [0x0-0x3af3af].com.apple.Safari[3499]: *** set a breakpoint in malloc_error_break to debug
Feb 28 09:55:33 Dwend-MacBookPro [0x0-0x3af3af].com.apple.Safari[3499]: Safari(3499,0xb0baa000) malloc: *** error for object 0x2238c4f4: Non-aligned pointer being freed
Can I just delete this log? I thought about reinstalling Safari just to make sure this won't happen again, but it looks like a one-time thing on Feb 28, so maybe that's not necessary? Help!??

Any idea what causes this and how to prevent it from recurring? I deleted over 200 GB of logs yesterday after running completely out of space.
These boards were very helpful in identifying the problem, so thanks!

Similar Messages

  • Large log file to add to JList

    Hello,
    I have a rather large log file (> 250 000 lines) which I would like to add to a JList -- I can read the file okay but, obviously, can't find a container which will hold that many items. Can someone suggest how I might do this?
    Thanks.

    NegativeSpace13579 wrote:
    I can't add that many items to a Vector or Array to pass to the JList.
    You fail to describe the problem you run into. Why can't you add that many items? Do you get any error message?
    Please read http://catb.org/~esr/faqs/smart-questions.html to learn how to ask questions the smart way.
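    For what it's worth, a JList does not actually need a Vector or array; it only needs a ListModel, and the component renders only the rows that are currently visible. Below is a minimal Java sketch of a read-only model backed by the lines of a log file. The class names and the file path are made up for illustration, and it assumes the whole file fits in the heap, which 250,000 short lines normally does:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import javax.swing.AbstractListModel;
    import javax.swing.JFrame;
    import javax.swing.JList;
    import javax.swing.JScrollPane;
    import javax.swing.SwingUtilities;

    // Read-only ListModel backed by the lines of a log file.
    class LogLinesModel extends AbstractListModel<String> {
        private final List<String> lines;

        LogLinesModel(String path) throws IOException {
            // Keeps every line in memory; fine for a few hundred thousand short lines.
            this.lines = Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
        }

        @Override public int getSize() { return lines.size(); }
        @Override public String getElementAt(int index) { return lines.get(index); }
    }

    public class LogListDemo {
        public static void main(String[] args) throws IOException {
            LogLinesModel model = new LogLinesModel("application.log");  // hypothetical path
            SwingUtilities.invokeLater(() -> {
                JList<String> list = new JList<>(model);
                JFrame frame = new JFrame("Log viewer");
                frame.add(new JScrollPane(list));  // only the visible rows are rendered
                frame.setSize(800, 600);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }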

  • Data Services Designer 14 - Large Log files

    Hello,
    we're running several jobs with the Data Services Designer 14, all works fine.
    But today a problem occurred:
    After finishing a big job, the Data Services Designer on a client produced a very large log file (8 GB) in the Data Services Designer folder.
    Is it possible to delete these log files automatically or restrict the maximum size of the created log files in the designer?
    What's the best way?
    Thanks!

    You can set the log files to be deleted automatically based on the number of days.
    I have done this in XI 3.2, but as per the document, this is how it can be done in DS 14.0.
    In DS 14.0, this is handled in CMC.
    1. Log into the Central Management Console (CMC) as a user with administrative rights to the Data Services application.
    2. Go to the "Applications" management area of the CMC. The "Applications" dialog box appears.
    3. Right-click the Data Services application and select Settings. The u201CSettingsu201D dialog box appears.
    4. In the Job Server Log Retention Period box, enter the number of days that you want to retain the following:
    • Historical batch job error, trace, and monitor logs
    • Current service provider trace and error logs
    • Current and historical Access Server logs
    The software deletes all log files beyond this period. For example:
    • If you enter 1, then the software displays the logs for today only. After 12:00 AM, these logs clear and the software begins saving logs for the next day.
    • If you enter 0, then no logs are maintained.
    • If you enter -1, then no logs are deleted.
    Regards,
    Suneer.

  • Ridiculously large log file

    I was wondering why /library was using 3GB, and I found the culprit. /Library/Logs/Console/501/console.log.6 is an 848.6 MB file! I have all good intentions of deleting it, but I am wondering how a log file got to be that big. I am also wondering what is in it, but I cannot find a program that will not crash on opening such a big file.

    maciscool
    This terminal command will reset that log to zero length, which will not upset any log rotation. Copy and paste the following into the Terminal window, followed by a return:
    >| /Library/Logs/Console/501/console.log.6
    I don't think running MacJanitor is going to help, since the console logs are not rotated by the normal periodic tasks. Rather they appear to be rotated on a new login.
    Edit: I can confirm this. I just logged out (not Fast User Switching, but properly) and logged back in. My console logs are rotated.
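    If you want to see what has been filling a file that size without opening the whole thing in an editor, one option is a small program that reads only the end of it. A rough Java sketch (the 64 KB window is an arbitrary example value; the path is the one from this thread):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;

    // Print roughly the last 64 KB of a very large file without loading
    // all of it into memory -- enough to see what has been filling it up.
    public class TailChunk {
        public static void main(String[] args) throws IOException {
            long window = 64 * 1024;
            try (RandomAccessFile raf =
                    new RandomAccessFile("/Library/Logs/Console/501/console.log.6", "r")) {
                long start = Math.max(0, raf.length() - window);
                raf.seek(start);
                byte[] buf = new byte[(int) (raf.length() - start)];
                raf.readFully(buf);
                System.out.print(new String(buf, StandardCharsets.UTF_8));
            }
        }
    }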

  • Resources For Large Log File

    I wrote a LogFile class that opens a BufferedWriter on the specified file and writes to it when called. The constructor is as follows:
    m_bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(m_strLogFile, true)));
    m_logFile = new File(m_strLogFile);
    I have a max log file size set to 2 megs (at which point the file is archived and reset). This limit has worked great. However, now that my system is seeing more usage, I'd like to raise the max limit to 5 megs. I've done that and all appears ok. My question is whether the JVM uses more resources for the 5 meg file than the 2 meg, or does it only keep track of a pointer to the end of the file and append to that. I'm hoping the latter is the case, so that the file size is irrelevant. However, if there might be performance difficulties writing to a file that size, I'll keep the limit at 2 megs.
    Michael

    I have a max log file size set to 2 megs (at which point the file is archived and reset).
    I suppose it really depends on how you are doing that.
    However, unless you explicitly set a buffer size somewhere that is 2 megs, normal file I/O is going to use about 512 bytes or 1 KB, regardless of the file size. If you use a buffered writer in Java it will use 512 bytes (I believe) and the low-level OS will use about 512. The actual number of bytes might be 256 up to about 4 KB on the OS side, but I think most buffers are around 512 these days.
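    For what it's worth, here is a sketch of one way the archive-and-reset scheme described above could look. It is not the poster's actual class, just an illustration of the point that the writer's buffer (8 KB of characters by default for BufferedWriter in the JDK) stays the same size no matter how large the cap is, so raising the limit from 2 MB to 5 MB costs disk space, not memory. The 5 MB constant is an example value:

    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    // Size-capped log writer: when the file exceeds MAX_BYTES it is renamed
    // away and a fresh file is started. The writer's buffer size is independent
    // of the cap, so the cap only affects how much disk the current file uses.
    class RotatingLogFile {
        private static final long MAX_BYTES = 5L * 1024 * 1024;  // 5 MB cap (example value)
        private final File file;
        private BufferedWriter writer;

        RotatingLogFile(String path) throws IOException {
            this.file = new File(path);
            this.writer = new BufferedWriter(new FileWriter(file, true));  // append mode
        }

        synchronized void log(String line) throws IOException {
            writer.write(line);
            writer.newLine();
            writer.flush();                      // flush so file.length() is accurate
            if (file.length() >= MAX_BYTES) {
                rotate();
            }
        }

        private void rotate() throws IOException {
            writer.close();
            // Archive the full file under a timestamped name and start a fresh one.
            File archive = new File(file.getPath() + "." + System.currentTimeMillis());
            if (!file.renameTo(archive)) {
                throw new IOException("Could not archive " + file);
            }
            writer = new BufferedWriter(new FileWriter(file, true));
        }
    }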

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the Application Logs and this log file size is auto-increasing about 3-times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL server, that's crazy big; my logs are in the KBs, like 4-8.
    I think someone enabled some type of debugging on your SQL server. It's more of a Microsoft issue, as our product doesn't require it, judging from my SQL DBs.
    Regards,
    Tim

  • MS_SQL Shrinking Log Files.

    Hi Experts,
    We have checked the documentation we received from SAP (SBO Customer portal), based on the EarlyWatch Alert.
    As per the SAP requisition we have minimized the size of the 'Test Database Log' file through MS SQL Management Studio (restricted file size 10 percent, size 10 MB).
    Initially it was 50 percent, 1000 MB.
    My doubt is:
    Will any problem occur in the future because of these changes to the log files?
    Kindly help me.
    Based on your reply...
    I will update the live production database...
    By
    kart

    The risk of shrinking the log file is fairly small. Current hardware and software have much better reliability than before. When you shrink your log file, you just lose some history that nobody even knows has any value.
    On the contrary, if you keep a very large log file, it may cause more trouble than it does good.
    Thanks,
    Gordon

  • "Share over Wan" - passworded but log files say differently?

    In a desperate attempt to get backup features to work on my TC, I enabled "Share over Wan". Thinking that I've got more than enough security with disk passwords, I didn't automatically think there'd be a problem.
    I then looked at my log files on my TC a day later and saw successful logins from IPs other than mine - but all within the same subdomain.
    Does "Share over Wan" supersede the disk passwords? I've tried accessing from other subdomains (my work) and always get prompted for passwords. Should I be worried about these successful logins, or ignore them as successful pings (or the like)?
    I've, of course, now turned off "Share over Wan".

    awkwood wrote:
    Cheers omp!
    I have one suggestion: your count_lines method will be quite slow on large log files.
    Rather than use readlines you can optimize the read operations like so:
    line_count = 0
    File.open(identifier.to_cache_path) do |f|
      while block = f.read(1024)
        line_count += block.count("\n")
      end
    end
    The speed boost makes it comparable to shelling out to wc.
    Thanks for the suggestion; I just committed it.

  • Tracking down .log file creators?

    I have had disk space getting eaten up and, using "WhatSize", discovered a number of files in the /private/var folder set (see the discussion of something similar in Nov 2008). The response to the query then was to track down what was writing the files in the first place.
    I can locate the files - a series called swapfile which grow exponentially to a size of 1 GB, but there is no indication of what app is creating them.
    I have found a similar set with suffix .asl which ostensibly are created by Adobe Photoshop CS3 - great, except I don't have that installed on my iBook. I do have Photoshop CS, but the recent creation dates (last week) of the files don't match up with my last use of Photoshop (two months ago).
    Has anyone any suggestions?

    Hmm, Niel may have some ideas, but several things strike me as being odd with what is happening on your machine. First, those swap files should have been cleared out with a simple restart, you should not have had to remove them by hand. In fact, from what I've read you really aren't supposed to.
    Second, if you are running 10.5.7 and have Activity Monitor set to show all processes, you really ought to see aslmanager at some point--it is supposed to run "as needed" when called by syslogd. For more information about it, see this note at MacFixIt:
    http://www.macfixit.com/article.php?story=20090122213555897
    And if everything is running along normally, I don't think kernel_task should be using too much of your system resources. At the moment, with only two programs of my own running, it is using 1.2% of the CPUs and 71MB of RAM. If kernel_task is hogging your resources I would think something is wrong, perhaps a bad driver or other system-level extension or something.
    If you haven't done so, you might try launching Console from your Utilities folder, make sure the Log List is showing (there's an icon in the top left corner to hide and show it), expand the Log Database Queries at the top, and select All Messages. You'll see a lot of messages about what your computer did to get started, then other messages will start to appear, all time-stamped. See if you start to get a lot of messages about something that isn't working (which would account for the overly large log files), or else you might see that the kernel is working very hard at something or other, and what it might be (which would tell you why kernel_task is using a lot of resources).
    Francine Schwieder

  • Log File Subscriptions

    Does anyone have any experiences to share regarding the duplication (or more accurately… the over-duplication) of log subscriptions? I currently have in place two Access Log subscriptions headed to two different log servers, but would like to create a third Access Log subscription in order to allow me to utilize the on-box grep option for live support. Anyone have any pros/cons to share?

    Per documentation
    "More detailed settings create larger log files and have a  greater impact on system performance. More detailed settings include all the  messages contained in less detailed settings, plus additional messages. As the  level of detail increases, system performance decreases."
    This will be the same for multiple log files, the more there is, the more effect it has on system performance.
    I hope this helps.
    Regards.

  • Raw log files reachable only through CF Administrator?

    I can't reach some large log files through WS_FTP. Perhaps I could get to them through CF Administrator?
    Do I need to go through CF Administrator if I want to make those files smaller?


  • Log File dev_server0 excessively growing over time

    Hello everybody,
    I was facing an interesting crash of the XI system recently. The server's hard disk filled up and nothing worked anymore. After some investigation I found out that the file dev_server0 was growing excessively, up to several GB in size.
    After stopping the engine, deleting the file and starting again, everything worked fine, but the file continues to grow.
    Can anybody tell me what could be wrong with this file? Is there some kind of logging I accidentally enabled that fills up this file - or does this look more like a defect?
    Your help is appreciated!
    regards,
    Peter

    Hi Peter,
    Below is part of the mail I got from our Basis person regarding the log file growing (I then had to switch the trace off)... it is in the same directory as the file you are having...
    "Disk space utilization for /usr/sap/XID on sapxid grew to 97% on Sunday and I found a very large log file in the /usr/sap/XID/DVEBMGS61/work/ directory. It was still open so I had to stop XID before I could move it out of the file system. I put the file in /tmp/dev_server0.old."
    Thanks,
    Renjith

  • Problem sending result from log file, the log file is too large

    Hi SCOM people!
    I have a problem when monitoring a log file on a Red Hat system: I get an alert telling me that the log file is too large to send (see the alert context below). I guess the problem is that the server logs too much in the 5 minutes between SCOM checks.
    Any ideas how to solve this?
    Date and Time: 2014-07-24 19:50:24
    Log Name: Operations Manager
    Source: Cross Platform Modules
    Event Number: 262
    Level: 1
    Logging Computer: XXXXX.samba.net
    User: N/A
     Description:
    Error scanning logfile /xxxxxxxx/server.log on values xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
    Event Data:
    <DataItem type="System.XmlData" time="2014-07-24T19:50:24.5250335+02:00" sourceHealthServiceId="2D4C7DFF-BA83-10D5-9849-0CE701139B5B">
      <EventData>
        <Data>/xxxxxxxx/server.log</Data>
        <Data>xxxxx.xxxxx.se</Data>
        <Data><SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser></Data>
        <Data>The operation succeeded and cannot be reversed but the result is too large to send.</Data>
      </EventData>
    </DataItem>

    Hi Fredrik,
    At any one time, SCX can return 500 matching lines. If you're trying to return > 500 matching lines, then SCX will throttle your limit to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up where it left off next time log files are scanned).
    Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency purposes. What this means: If you have 10 different, unrelated regular expressions against a single log file, all of these will be "cooked down" and presented to the agent as one single request. However, each of these separate regular expressions, collectively, are limited to 500 matching lines. Hope this makes sense.
    This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server itself. That is, it's not an agent issue as such, it's a management server issue.
    So, with that in mind, you have several options:
    If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to log your larger log messages to a separate log file. This will help "cook down", but ultimately, the limit of 500 RegEx results is still there; you're just mitigating cook down.
    If a single RegEx expression is matching > 500 lines, there is no workaround to this today. This is a hardcoded limit in the agent, and can't be overridden.
    Now, if you're certain that your regular expression is matching < 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have this issue escalated to the product team. Due to a logging issue within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command line queries to see what's happening internally). This is involved enough where it's best to get Microsoft Support involved.
    But as I said, this is only useful if you're certain that your regular expression is matching < 500 lines. If you are matching more than this, this is a known restriction today. But with an RFC, even that could at least be evaluated to see exactly the load > 500 matches will have on the management server.
    /Jeff
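    Not SCOM's actual implementation, but a rough Java sketch of the "cook down" idea described above: several unrelated expressions are evaluated in a single pass over the log file, and the combined result set is capped at 500 lines. The patterns and the log path are hypothetical:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Pattern;

    // Several expressions, one scan, one combined result set capped at 500 lines.
    public class CookedDownScan {
        private static final int MAX_MATCHES = 500;  // the cap described in the answer

        public static void main(String[] args) throws IOException {
            List<Pattern> patterns = List.of(
                    Pattern.compile("ERROR"),
                    Pattern.compile("OutOfMemory"),
                    Pattern.compile("timed out"));   // hypothetical expressions

            List<String> matches = new ArrayList<>();
            try (BufferedReader reader = Files.newBufferedReader(
                    Paths.get("server.log"), StandardCharsets.UTF_8)) {
                String line;
                while ((line = reader.readLine()) != null && matches.size() < MAX_MATCHES) {
                    for (Pattern p : patterns) {
                        if (p.matcher(line).find()) {
                            matches.add(line);
                            break;  // count the line once even if several patterns match
                        }
                    }
                }
            }
            System.out.println(matches.size() + " matching lines (capped at " + MAX_MATCHES + ")");
        }
    }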

  • LMS 4.0.1 ani log file too large

    My LMS 4.0.1 platform has been installed for several days and I have a very large ani.log file (> 300 MB after 4 days of the daemons running).
    In this file, I saw many Ani Discovery errors or warnings like for example:
    2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.11.101.241 hostname: 10.11.101.241)
    2011/11/28 11:08:46 Discovery ani WARNING DcrpSMFUDLDDisabledOnPorts: Unable to fetch device details for container(Device,10.74.101.245 hostname: 10.74.101.245)
    2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.14.101.5 hostname: 10.14.101.5)
    2011/11/28 11:08:46 Discovery ani ERROR DcrpSMFPortBPDUFilterDisabled: Unable to get span tree device info for the devicecontainer(Device,10.3.101.12 hostname: 10.3.101.12)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.17 hostname: 10.1.101.17)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFCDPAccessPort: Unable to get CDP information for the devicecontainer(Device,10.1.101.9 hostname: 10.1.101.9)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,10.1.101.85 hostname: 10.1.101.85)
    2011/11/28 11:18:45 Discovery ani WARNING DcrpSMFSTPBackboneFast: Unable to get span tree device info for the devicecontainer(Device,192.168.12.51 hostname: 192.168.12.51)
    2011/11/28 11:25:11 EvalTask-background-41 ani ERROR StpSMFGetStpInstance: unable to get stp device information
    These errors are not focused on specific devices (many devices are concerned).
    However, all seems to be working fine on the platform (layer2 maps, data collection, inventory, config backup, UT, DFM, ...).
    For information, I was recently in contact with TAC because Data Collection was always in the running state.
    They provided a new PortDetailsXml.class file to replace the original one.
    It has fixed the problem.
    I now suspect that the ani database could be corrupted and needs to be reinitialized.
    I would like to be sure of that and, if possible, avoid this solution.
    Thanks for your help.

    Hi,
    Found some errors and exceptions in the log.
    We need to follow the steps below to fix the issue: re-initialize the ANI database.
    1. Stop the daemon manager:
    /etc/init.d/dmgtd stop   (Linux/Solaris)
    net stop crmdmgtd   (Windows)
    2. Go to /opt/CSCOpx/bin/ and run the command:
    /opt/CSCOpx/bin/perl dbRestoreOrig.pl dsn=ani dmprefix=ANI
    On Windows:
    NMSROOT\bin\perl.exe NMSROOT\bin\dbRestoreOrig.pl dsn=ani dmprefix=ANI
    3. Start the daemon manager:
    /etc/init.d/dmgtd start   (Linux/Solaris)
    net start crmdmgtd   (Windows)
    ***IMP*** Re-initializing the ANI database will not lose any device history, as the ANI database does not contain any historical information. As soon as the above steps are complete, you need to run a new Data Collection followed by a User Tracking acquisition, and then check the issue.
    Data collection: go to Admin > Collection Settings > Data Collection > Data Collection Schedule, under "Start Data Collection" > for "All Devices" >> click "START".
    User tracking: Inventory > User Tracking Settings > Acquisition Actions.
    Hope it will help.
    hope it will help
    Thanks-
    Afroz
    ***Ratings Encourages Contributors ****

  • How to clean large system log files?

    I believe that OS X saves a lot of system data in log files that become very large.
    I would like to clear old history logs.
    How may I view and clean system log files?

    Thank you Niel.
    I have obtained the list at /private/var/log.
    There are a lot of files in there.
    Since I am not familiar with the functions of these files, can I be sure that simply deleting all of the files in /private/var/log will not cause any problems? Would this action have some unintended consequences?
