[solved] files in /var/log and their ownership

Is there a reason why all the system log files are mode 640?
So far I'm in love with Linux (Arch especially) and really enjoy using terminal windows and logging in as root to do productive things (I really do love command-line-wrangling -- my first computer was a DOS machine), but it really seems like an unnecessary hoop to jump through to have to log in as root just to view a log file.
Is there any compelling reason to not set these log files mode 644? If there's no specific reason, could changes be made to have this be default behavior?
Last edited by gilmoreja (2012-08-04 02:06:12)

gilmoreja wrote:Is there any compelling reason to not set these log files mode 644? If there's no specific reason, could changes be made to have this be default behavior?
Logs can of course contain some sensitive info and I would say it's a good thing that the defaults provide a little security.
WorMzy wrote:I can't see any problem with you manually modifying the permissions on your log files. They're there for your benefit after all.
Adding yourself to the log group as Radioactiveman says is easier (and pretty much the same question was answered here maybe a week ago).
[edit:typo]
Last edited by Raynman (2012-08-03 19:10:23)
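The two suggestions boil down to either joining the log group (on Arch's syslog-ng setup the group is typically called "log", so roughly `usermod -aG log youruser` as root, with "youruser" a placeholder; it takes effect at your next login) or loosening the mode yourself. What the mode change actually grants is easy to see on a throwaway file rather than a real log:

```shell
# Sketch: what 640 vs 644 means, demonstrated on a scratch file.
f=$(mktemp)
chmod 640 "$f"
stat -c '%a' "$f"   # 640: owner read/write, group read, others nothing
chmod 644 "$f"
stat -c '%a' "$f"   # 644: everyone can read it
rm -f "$f"
```

Note that a manual chmod on a real log may be undone when the file is rotated, which is another point in favour of the group-membership route.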

Similar Messages

  • WARNING: The File Systen "/var/log" is reaching out of Space

    I have a DMM version 5.2.
    When I enter the DMM via SSH or console, it shows me the following message:
    WARNING: The File Systen "/var/log" is reaching out of Space
    How can I correct this error?
    I welcome your comments.
    Regards

    Hi Aurelio,
    If cleaning the logs does not help (which I anticipate), you may be running into a known defect. The issue occurs because the access_log does not rotate properly after it hits the size limit.
    The Cisco defect # is
    CSCti86040 - Access_log does not rollover or delete when clearing logs on DMM
    This defect is fixed in 5.2.2 code release but you can open a TAC case and we can delete the access_log log and apply the workaround for the same.
    Thanks,
    Sagar Dhanrale

  • I cannot import my AVCHD files by using log and transfer. I can see all the files, but when I add the file to the import queue it doesn't work, the status is a red (!) and says that there's no data. I have tried with different cards and different cardreaders, but it

    I cannot import my AVCHD files by using log and transfer. I can see all the files, but when I add the file to the import queue it doesn't work, the status is a red (!) and says that there's no data. I have tried with different cards and different cardreaders, but it is still the same. Why?!

    Please give us more specific information -
    What make/model camcorder are you using?
    Are you absolutely certain this is AVCHD video?  Tell us more about the video settings you used in the camcorder (eg, frame size, frame rate, quality level)
    Why are you attempting Log & Transfer from a card reader instead of from your camcorder?  Have you tried doing Log & Transfer from your camcorder?
    Are the SD cards straight from the camcorder, unaltered, or are you just using the cards as storage?
    Are you using FCE 4.0 or FCE 4.0.1?

  • Files in /var/log

    I was looking through my /var/log directory on my PB G4 running OS X 10.4.4 recently and noticed several compressed files. I have a few questions:
    1. How come there are so many compressed files?
    2. Is it safe to delete them, and how can I delete them?
    3. Is there a way to prevent so many from building up in this directory?
    daily.out lpr.log.0.gz secure.log.0.gz
    fax lpr.log.1.gz secure.log.1.gz
    ftp.log lpr.log.2.gz secure.log.2.gz
    ftp.log.0.gz lpr.log.3.gz secure.log.3.gz
    ftp.log.1.gz lpr.log.4.gz secure.log.4.gz
    ftp.log.2.gz mail.log system.log
    ftp.log.3.gz mail.log.0.gz system.log.0.gz
    ftp.log.4.gz mail.log.1.gz system.log.1.gz
    httpd mail.log.2.gz system.log.2.gz
    install.log mail.log.3.gz system.log.3.gz
    install.log.0.gz mail.log.4.gz system.log.4.gz
    install.log.1.gz monthly.out system.log.5.gz
    install.log.2.gz netinfo.log system.log.6.gz
    install.log.3.gz netinfo.log.0.gz system.log.7.gz
    install.log.4.gz netinfo.log.1.gz weekly.out
    ipfw.log netinfo.log.2.gz windowserver.log
    ipfw.log.0.gz netinfo.log.3.gz windowserver_last.log
    ipfw.log.1.gz netinfo.log.4.gz wtmp
    ipfw.log.2.gz ppp wtmp.0.gz
    ipfw.log.3.gz ppp.log wtmp.1.gz
    ipfw.log.4.gz ppp.log.0.gz wtmp.2.gz
    lastlog ppp.log.1.gz wtmp.3.gz
    lookupd.log.0.gz ppp.log.2.gz wtmp.4.gz
    Thanks,
    Mike

    Hi Mike,
       There is no need to worry about their growing. In fact, that's why they are there. They are the results of log rotation by the periodic scripts. Notice that many logs share the same base name, such as ftp.log, ftp.log.1.gz, ftp.log.2.gz, ftp.log.3.gz, and ftp.log.4.gz. When you first installed the OS, there was only one, ftp.log. The first time the 500.weekly script runs, it appends a one, '1', to the filename, gzips the file, and creates a new ftp.log. A week later, when the 500.weekly script runs again, it moves ftp.log.1.gz to ftp.log.2.gz, appends a 1 to the current ftp.log, gzips it, and creates a new ftp.log. This process repeats until the ftp.log.4.gz file is created. A week after that, ftp.log.4.gz is deleted and the former ftp.log.3.gz takes its place.
       Thus, every week each file's number increases until it's 4 weeks old and then it's deleted. The total number of files doesn't increase and, modulo variations in activity, the total size of the collection remains roughly constant. Older records are kept in as compact a form as possible and really old records are removed to make way for new ones.
       Therefore, your goal of limiting the size of the log files is implemented for you in a more efficient manner than you might have imagined. However, since you don't use the old ones, you could delete them if you're really hard up for space. Oh, one more thing -- more active log files, like system.log, are rotated daily and a week's worth of them are maintained.
    Gary
    ~~~~
       A successful [software] tool is one that was used to
       do something undreamed of by its author.
             -- S. C. Johnson
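    Gary's weekly numbering scheme can be mimicked with a few lines of shell, run against a scratch directory rather than the real /var/log (the real work is done by the periodic scripts such as 500.weekly; this is only a toy illustration of the renaming pattern):

```shell
# Toy re-implementation of the weekly rotation, in a scratch directory.
dir=$(mktemp -d)
touch "$dir/ftp.log"
rotate() {
  rm -f "$dir/ftp.log.4.gz"                      # the oldest copy falls off
  for i in 3 2 1; do                             # shift each survivor up one slot
    [ -f "$dir/ftp.log.$i.gz" ] && mv "$dir/ftp.log.$i.gz" "$dir/ftp.log.$((i+1)).gz"
  done
  gzip -c "$dir/ftp.log" > "$dir/ftp.log.1.gz"   # current log becomes .1.gz
  : > "$dir/ftp.log"                             # start a fresh log
}
rotate; rotate; rotate
ls "$dir"   # after three rotations: ftp.log plus .1.gz, .2.gz, .3.gz
```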

  • Audio Drops on QT Files After P2 Log and Transfer

    I'm using Log and Transfer to convert my P2 data into editable QuickTimes; however, for some reason, my rendered QuickTime files loose their audio at some point. I say, "Some point," because it's not at a specific timecode. It's different for each clip. Some maintain the audio for 80% of the clip while others might be 30%. I know the audio is there because I can hear throughout the entire length of each clip in the Log and Transfer window. Any idea what's happening during the render to QuickTime process?

    Possible quick fix -won't hurt to try it.
    Deleting FCP Preferences:
    Open a Finder window. Set the view mode to columns:
    Click the house icon that has your name next to it.
    Click the folder "Library" in the next column.
    Find and click the folder "Preferences" in the next column.
    Find and click the folder "Final Cut Pro User Data" in the next column.
    Move the files "Final Cut Pro 6.0 Prefs", "Final Cut Pro Obj Cache" and "Final Cut Pro Prof Cache" to the Trash and empty it.
    When you have a stable set of preference files, download [Preference Manager|http://www.digitalrebellion.com/pref_man.htm] to back them up. If you have further problems, the backup can be recalled and you can continue working.
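    The click-path above can also be done in Terminal. A sketch, with the paths taken from the steps and a ".bak" rename standing in for the Trash (gentler, since it can be undone if deleting preferences doesn't help):

```shell
# Rename the FCP preference files listed in the steps above, if present.
prefs="$HOME/Library/Preferences/Final Cut Pro User Data"
for f in "Final Cut Pro 6.0 Prefs" "Final Cut Pro Obj Cache" "Final Cut Pro Prof Cache"; do
  if [ -e "$prefs/$f" ]; then mv "$prefs/$f" "$prefs/$f.bak"; fi
done
```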

  • Maximal log file exceeded (freshclam.log and clamav.log)

    Hello,
    Some of my log files (freshclam.log and clamav.log) are no longer logging, they display this error message:
    Log size = 7782920, max = 1048576
    LOGGING DISABLED (Maximal log file size exceeded).
    and
    Log size = 1049052, max = 1048576
    LOGGING DISABLED (Maximal log file size exceeded).
    I have tried editing /etc/clamd.conf and changing the log size to 0, but that has not helped.
    Thank you for your help in advance.

    Thank you.
    I think somewhere along the way, the checkbox to archive logs was unchecked. I'm not sure why.
    Anyway, I checked that box (and set it to rotate every 7 days), then I backed up the current log files and touched new log files. That's working for now, and hopefully they'll archive on their own now.
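    For anyone hitting the same wall: the "back up the log and touch a new one" step looks like the sketch below, shown on a scratch file since the real paths (e.g. /var/log/clamav/freshclam.log) and the clamav service account vary by install. Also worth checking: freshclam reads its own freshclam.conf, not clamd.conf, so a LogFileMaxSize change in clamd.conf alone would not affect freshclam.log (an assumption based on stock ClamAV; your packaging may differ).

```shell
# Scratch stand-in for an oversized log file.
log=$(mktemp)
echo "old entries" > "$log"
mv "$log" "$log.bak"   # keep the oversized log as a backup
touch "$log"           # the daemon can append to the fresh, empty file again
wc -c < "$log"         # 0 bytes now
```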

  • Not all files work in log and transfer

    Hoping maybe someone has found a fix to this issue...
    When editing an HDV project not all my .m2t files (all in the same folder) want to transfer using Log and Transfer.  Most of the files will transfer correctly but sometimes it skips a file and puts a red exclamation mark in the status.  Not sure why most files work and a few don't when they are all from the same folder and from the same shoot.  The files will open fine in other 3rd party apps so the files are ok.  
    Thanks in advance for any help!
    Mike

    Mike Cole1 wrote:
    Not sure why most files work and a few don't when they are all from the same folder and from the same shoot.  The files will open fine in other 3rd party apps so the files are ok.  
    FCP may be a bit more discriminating on the file structure or header integrity, dunno, never shot HDV myself but we've seen similar posts for years. Try copying the files to your drive first and then ingesting.
    Side note: just because other apps can open the files doesn't mean they are intact or usable. You don't mention what these apps are, but if you can export from them, do that and then ingest the new file, which might have all of its glitches repaired to suit FCP.
    bogiesan

  • Restore Backup log file back SID .log and be*****.ant

    Dear all,
    I have to restore a SAP Oracle online backup from a source server to a target server. We took the backup of the source server on tape and now have to restore it on the target server. We don't have back<SID>.log and be*****.ant with us.
    How can we restore back<SID>.log and be****.ant from the tape first, so that we can start restoring the online backup on the target server? Or can we restore the backup to another location without back<SID>.log and be****.ant? (I tried that, but it is not working; it keeps asking for the back<SID>.log file.)
    Thanks
    Ward

    Hi Ward,
    As long as you have the complete Oracle datafiles, a restore without the logfiles can be done.
    1. As a precaution, copy all of /oracle/<SID> to a safe place while offline (SAP down, Oracle down)
    2. Replace each datafile from tape to the current datafile location, e.g. /tape/sr3.data1 -> /oracle/<SID>/sapdata1/sr3_1/sr3.data1
    3. Make sure the archive logs are complete from the point where you started the online backup.
    4. sqlplus /nolog
    5. startup mount
    6. You will likely find an error with the controlfile. Replace all Oracle controlfiles from the tape backup, then repeat the startup
    7. recover database until cancel using backup controlfile
    8. choose the archive where you want the point of restore
    Good luck

  • Can't read log files from /var/log with Geany

    With the few exeptions like hibernate.log, I can't read most of those files, including:
    everything.log
    errors.log
    kernel.log
    messages.log
    syslog.log
    When I try to open them (as root), all I get is
    The file xxx.log does not look like a text file or the file encoding is not supported.
    However, they do open with nano
    Last edited by Lockheed (2013-06-20 14:20:11)

    If you want to only view the files (I assume this since those are logs), maybe try
    $ less -M <file>
    You can navigate pretty well with it (e.g. go to a specific line).
    Edit:
    You could also try to find if there is an option in Geany that would allow you to see "non-text" files.
    Last edited by msthev (2013-06-20 18:41:27)
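    If Geany keeps refusing them, a likely culprit is stray binary bytes in the logs (kernel and console logs sometimes contain them), which can trip an editor's text detection while nano shrugs them off. A quick way to check is file(1), demonstrated here on two scratch files where a NUL byte stands in for whatever binary junk the log might contain:

```shell
# Compare how file(1) classifies a clean log vs one with a binary byte.
t=$(mktemp -d)
printf 'an ordinary line\n' > "$t/clean.log"
printf 'a line with a \0 NUL byte\n' > "$t/dirty.log"
file -b "$t/clean.log"   # reported as text
file -b "$t/dirty.log"   # reported as data, not text
```

If file reports one of your logs as "data" rather than text, that would explain Geany's complaint.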

  • 2 TB MyCloud filesystems "/tmp" and "/var/log" both at 100%

    Now, this is just plain weird...  here's the output from "df -k":
    Filesystem 1K-blocks Used Available Use% Mounted on
    rootfs 1968336 685956 1182392 37% /
    /dev/root 1968336 685956 1182392 37% /
    tmpfs 40960 20992 19968 52% /run
    tmpfs 40960 64 40896 1% /run/lock
    tmpfs 10240 0 10240 0% /dev
    tmpfs 5120 0 5120 0% /run/shm
    tmpfs 102400 102400 0 100% /tmp                                           <<<<<<<<<<<<<---------------
    /dev/root 1968336 685956 1182392 37% /var/log.hdd
    ramlog-tmpfs 20480 20480 0 100% /var/log                             <<<<<<<<<<<<<---------------
    /dev/sda4 1918220368 26235484 1853008884 2% /DataVolume
    /dev/sda4 1918220368 26235484 1853008884 2% /CacheVolume
    /dev/sda4 1918220368 26235484 1853008884 2% /nfs/TimeMachineBackup
    /dev/sda4 1918220368 26235484 1853008884 2% /nfs/Public
    /dev/sda4 1918220368 26235484 1853008884 2% /nfs/SmartWare
    (pls. excuse the formatting but you can see at the arrows that /var/log and /tmp are at 100%)
    "/tmp" is filling up with *hundreds* of files of the form
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_pdh5c9g907vqvusb3mdsvtlum3
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_2a7v2di677ra43sh76lonm3de1
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_o6kh3i4iggg78evs53kp6enpf6
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_o0ahso52sef3h0if3ifpo4dno3
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_bvn1o9v4b4ldgoq9uvtn2n24i0
    -rw------- 1 www-data www-data 0 Jul 14 00:43 sess_h01fbr9o1pte3ud2s9ainth7b6
    all similarly named "sess_[somethingorother]".
    And /var/log is filling up due to the file /var/log/user.log, with gazillions of error messages of the form
    Jul 14 00:02:06 WDMyCloud REST_API[6751]: 192.168.1.101 ORION_LOG /var/www/rest-api/api/Auth/src/Auth/User/UserSecurity.php ISAUTHENTICATED [ERROR] dbgvar0: Array\n(\n [_] => 1436857244438\n [RequestScope] => RequestScope Object\n (\n )\n\n)\n
    and the file /var/log/apache2/error.log, which has more gazillions of error messages of the form
    [Tue Jul 14 00:07:25.715561 2015] [:error] [pid 7107] [client 192.168.1.101:3844] PHP Fatal error: Uncaught exception 'Zend\\Log\\Exception\\RuntimeException' with message 'No log writer specified' in /var/www/rest-api/lib/Zend/Log/Logger.php:245\nStack trace:\n#0 /var/www/rest-api/lib/Zend/Log/Logger.php(396): Zend\\Log\\Logger->log(4, 'Unknown: open(/...', Array)\n#1 [internal function]: Zend\\Log\\Logger::Zend\\Log\\{closure}(2, 'Unknown: open(/...', 'Unknown', 0, Array)\n#2 {main}\n thrown in /var/www/rest-api/lib/Zend/Log/Logger.php on line 245
    ooookay... something has clearly gone bezoomny...  Anybody seen this before I go off on Yet Another Mad Debian Bug Hunt?

    Hey WD...  Y'all's got a BUG... When I access the MyClod from my laptop running XP with FireFox, I get the thousands of "sess_*" files written to /tmp, and I get groups of messages of the form
    Jul 14 22:36:26 WDMyCloud REST_API[23951]: 192.168.1.101 ORION_LOG /var/www/rest-api/api/Auth/src/Auth/User/UserSecurity.php ISAUTHENTICATED [ERROR] Authentication failure for /api/2.1/rest/mediacrawler_status?_=1436938574613
    Jul 14 22:36:26 WDMyCloud REST_API[23951]: 192.168.1.101 ORION_LOG /var/www/rest-api/api/Auth/src/Auth/User/UserSecurity.php ISAUTHENTICATED [ERROR] dbgvar0: Array\n(\n [_] => 1436938574613\n [RequestScope] => RequestScope Object\n (\n )\n\n)\n
    Jul 14 22:36:26 WDMyCloud REST_API[23951]: 192.168.1.101 ORION_LOG /var/www/rest-api/api/Auth/src/Auth/User/UserSecurity.php ISAUTHENTICATED [ERROR] dbgvar0: Array\n(\n [_] => 1436938574613\n [RequestScope] => RequestScope Object\n (\n )\n\n)\n
    written to /var/log/user.log. But when I access it similarly from the desktop machine, also running XP with FireFox, I just get *one* of the "sess_*" whatever files written to /tmp, and just one set of messages of the form
    Jul 14 22:40:26 WDMyCloud REST_API[24325]: 192.168.1.100 OUTPUT DlnaServer\Controller\Database GET SUCCESS
    Jul 14 22:40:29 WDMyCloud REST_API[23952]: 192.168.1.100 OUTPUT System\Configuration\Controller\FactoryRestore GET SUCCESS
    Jul 14 22:40:29 WDMyCloud Zend\Log[23877]: 8192
    Jul 14 22:40:51 WDMyCloud REST_API[23952]: 192.168.1.100 OUTPUT Alerts\Controller\Alerts GET SUCCESS
    written to /var/log/user.log. So the MyClod is playing nice with some computers and not others... My guess is this could be happening more than WD knows about and could be producing all manner of mysterious behavior, since not only does it only happen on some machines but it does *not* crash the MyClod - at least not right away.  The main effect is to fill /tmp and /var/log with garbage so nothing can write to them, which will probably affect some things and not others... https://www.youtube.com/embed/2Gwnmb6P-3k
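    Until WD fixes it, the zero-byte session droppings can at least be cleared by hand. A sketch of the idea, demonstrated on a scratch directory standing in for /tmp (the sess_* name pattern comes from the listing above; deleting only empty files avoids touching anything live):

```shell
# Stand-in for /tmp with two empty PHP session files and one real file.
tmp=$(mktemp -d)
touch "$tmp/sess_abc" "$tmp/sess_def"
echo data > "$tmp/keepme"
# Delete only zero-byte sess_* files at the top level.
find "$tmp" -maxdepth 1 -type f -name 'sess_*' -empty -delete
ls "$tmp"   # only "keepme" is left
```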

  • [SOLVED] Help locating various log files (boot process)

    Hi,
    I recently reinstalled with encryption and have a few messages that fly past the screen too fast to parse but do not seem to be in any of the files in /var/log.
    1) there's something complaining about a file related to the cpu ondemand governor but I'm not seeing it well enough to know what it's saying. 'grep -i demand /var/log/*' doesn't give me anything.
    2) When I start X (startx), I'm getting something like 'Xauth unrecognized command 1poiu1qekjh098q70987iohjqper987q34879' but, again, I have no idea where it's coming from or why it's complaining. Xorg.log doesn't have it. 'grep -i auth /var/log/*' doesn't have it.
    Are some things not recorded in logs anywhere or do I just not know where to look?
    Thanks.
    Last edited by jwhendy (2011-03-08 02:47:26)

    @pyther: fantastic. I was actually able to see both messages. One was complaining that /sys/devices/system/cpu/cpufreq/ondemand/up_threshold doesn't exist. I added "sleep 2 &&" before the line that sets it in rc.local, and so far the message hasn't come back.
    The message at startx is:
    xauth: (stdin):2: unknown command "8e40369215c09339d784bb2c96c9c777"
    In searching, there were some references to a problem with ~/.Xauthority, and one user said he nuked it and when it regenerated, everything was fine. I logged out, logged in as root, moved the file to /home/user/.Xauthority.bak, then logged back in as myself and it was all good.
    Thanks!

  • Multiplexing Online redo logs, archive logs, and control files.

    Currently I am only multiplexing my control files and online redo logs, My archive logs are only going to the FRA and then being backed up to tape.
    We have to replace disks that hold the FRA data. HP says there is a chance we will have to rebuild the FRA.
    As my archive logs are going to the FRA now, can I multiplex them to another disk group? And if all of the control files, online redo logs and archive logs are multiplexed to another disk group, when ASM dismounts the FRA disk group due to insufficient number of disks, will the database remain open and on line.
    If so then I will just need to rebuild the ASM volumes, and the FRA disk group and bring it to the mount state, correct?
    Thanks!

    You can put your online redo logs and archive logs anywhere you want by making use of the init params create_online_log_dest and log_archive_dest_n. You will have to create new redo log groups in the new location and drop the ones in the FRA. The archive logs will simply land wherever you designate with the log_archive_dest_n parameters. Moving the control files off the FRA is a little trickier because you will need to restore your controlfile to a non-FRA destination, then shut down your instance, edit the control_files parameter to reflect the change, and restart.
    I think you will be happier if you move everything off the FRA diskgroup before dismounting it, and not expecting the db to automagically recover from the loss of files on the FRA.

  • A conclusion to ALL the AVCHD/.mts Log and Transfer Final Cut Pro 7 issues

    Hey all! Happy 2011!
    So this is my first post here and hopefully when this thread concludes with all your help we should be able to help a lot of people and save them the heartache that I have recently had to endure by stating these things in plain English. I have some questions at the end which I think could be answered very easily! Please, no gobbledygook!
    It would be great if people in the know can confirm/deny or comment on my points so those that are in a similar position to me can save their time and just get on with it.
    1) I have a Panasonic HDC-TM300 and it records in AVCHD/.mts files
    2) FCP 7 can deal with AVCHD files via the Log and Transfer feature if you do it straight from the camera or memory card and can convert to ProRes 422 (LT)?
    3) Log and Transfer _*DOES NOT WORK*_ if you move the AVCHD/.mts to an external drive without maintaining the folder structure?
    4) What is required from the folder structure that makes FCP act this way?
    5) Basically I have f*ed it up by doing part 3 above, which I think is the most natural thing in the world to do! I'm sure that many people have done the same thing as me; please correct me if I'm wrong.
    After a lot of research on the internet I have come up with the following solutions and in brackets my thoughts on them. If anyone has any experience with them please please let me and other people know so I can get on with my life and editing these files.
    a) Harass the people that make FCP 7.0 and make sure that the next update solves this stupid/idiotic/ridiculous oversight. (This could take a while but i think we should do it for the sake of future generations)
    b) Someone could let us know about a workaround to this that may be the folder structure that FCP needs to be able to do L&T (needs to be simple so that everyone can do it. This would be the best short term solution until part a is completed)
    c) Buy one of the converter programs such as ClipWrap, Toast Titanium or Aunsoft mts converter. If so which is the most/ least recommended and why? (Least desirable option due to requirement for extra program so extra faffing and extra cost).
    Thanks y’all and keep up the awesome work!!
    K*

    2) FCP 7 can deal with AVCHD files via the Log and Transfer feature if you do it straight from the camera or memory card and can convert to ProRes 422 (LT)?
    Yes.
    3) Log and Transfer DOES NOT WORK if you move the AVCHD/.mts to an external drive without maintaining the folder structure?
    This is true of ANY tapeless format.
    4) What is required from the folder structure that makes FCP act this way?
    This is due to the fact that most of them have files in those other folders in that structure that FCP needs to access in order to get all the proper components assembled properly. Sure, there are formats that have folders with nothing in them. Doesn't matter, FCP is designed to look at the full structure and work from that.
    5) Basically I have f*ed it by doing the part 3 above which I think is the most natural thing in the world to do!
    The most natural thing in the world is to NOT backup everything? Well, that might be the case, since this comes up so darn often. But, it is wrong. The full structure must be maintained. This is true of many tapeless formats and more than just FCP. Avid and Premiere need this for certain tapeless formats too.
    a) Harass the people that make FCP 7.0 and make sure that the next update solves this stupid/idiotic/ridiculous oversight.
    It isn't an oversight. It is designed like this on purpose. You need to learn how to do things right. You obviously know what that is now...keep the full card structure. So now it is up to you to continue to do things properly. Instead of insisting that an NLE conform the way it works to suit your needs.
    b) Someone could let us know about a workaround to this that may be the folder structure that FCP needs to be able to do L&T (needs to be simple so that everyone can do it. This would be the best short term solution until part a is completed)
    Easy. Shoot a small bit on another card. Look at the folder structure. Recreate that structure in your backed up footage. Often this will mean making folders with exact names, even if they are empty. But if they have files in them, even small ones, then the first backup you did might not work....because those might be some of the small files FCP needs in order to do the L&T process properly. Always worth trying though.
    c) Buy one of the converter programs such as ClipWrap, Toast Titanium or Aunsoft mts converter. If so which is the most/ least recommended and why?
    I highly recommend Clipwrap. Mainly because I haven't tried the others. But also because the author is always following the latest formats, and works hard to make sure his software works with them. It solved an importing issue I had with the Sony NXCam camera...within a week of me moaning about the issue.
    Want a tutorial to walk you through the tapeless workflow in FCP? I have one. Tapeless Workflow for FCP 7 Tutorial You might know most of this, but watch it for other tips.
    Shane
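    For reference, the card layout being described can be sketched like this. The BDMV subfolder names below follow the AVCHD spec's tree, but as Shane says, copy the exact structure (including any empty folders and small metadata files) from a freshly shot card rather than trusting this list:

```shell
# Scratch stand-in for an AVCHD card; folder names assume the usual
# PRIVATE/AVCHD/BDMV layout -- verify against a real card from your camera.
card=$(mktemp -d)
mkdir -p "$card/PRIVATE/AVCHD/BDMV/STREAM" \
         "$card/PRIVATE/AVCHD/BDMV/CLIPINF" \
         "$card/PRIVATE/AVCHD/BDMV/PLAYLIST"
# put the backed-up .mts files where the camera put them
touch "$card/PRIVATE/AVCHD/BDMV/STREAM/00000.MTS"
find "$card" -mindepth 1 | sort
```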

  • [solved] [xorgserver 1.6.1] and the minimalist xorg.conf

    Hello new xorgserver users
    After the recent xorgserver upgrade to 1.6.1 I get a black screen each time I start X. The only way to solve the problem (I'm driving an nvidia desktop) was to replace the nvidia driver with xf86-video-nv and stop using xorg.conf (otherwise X crashes).
    But I miss ctrl+alt+backspace,
    so I tried a minimal xorg.conf with only these three lines:
    Section "ServerFlags"
    Option "DontZap" "false"
    EndSection
    X crashes on start, my Xorg.0.log :
    X.Org X Server 1.6.1
    Release Date: 2009-4-14
    X Protocol Version 11, Revision 0
    Build Operating System: Linux 2.6.29-ARCH i686
    Current Operating System: Linux archibald 2.6.29-ARCH #1 SMP PREEMPT Wed Apr 8 12:47:56 UTC 2009 i686
    Build Date: 15 April 2009 11:09:10AM
    Before reporting problems, check http://wiki.x.org
    to make sure that you have the latest version.
    Markers: (--) probed, (**) from config file, (==) default setting,
    (++) from command line, (!!) notice, (II) informational,
    (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
    (==) Log file: "/var/log/Xorg.0.log", Time: Sun Apr 19 15:27:19 2009
    (==) Using config file: "/etc/X11/xorg.conf"
    Parse error on line 2 of section ServerFlags in file /etc/X11/xorg.conf
    Unexpected EOF. Missing EndSection keyword?
    (EE) Problem parsing the config file
    (EE) Error parsing the config file
    Fatal server error:
    no screens found
    Please consult the The X.Org Foundation support
    at http://wiki.x.org
    for help.
    Please also check the log file at "/var/log/Xorg.0.log" for additional information.
    (WW) xf86CloseConsole: KDSETMODE failed: Bad file descriptor
    (WW) xf86CloseConsole: VT_GETMODE failed: Bad file descriptor
    Some Archers use this minimal xorg.conf. Why can't I?
    Last edited by frenchy (2009-04-19 15:51:35)

    Hi wonder,
    The wiki says "ServerFlags"...
    Anyway, ServerLayout gives me this in Xorg.0.log:
    Parse error on line 2 of section ServerLayout in file /etc/X11/xorg.conf
    Unexpected EOF. Missing EndSection keyword?
    (EE) Problem parsing the config file
    (EE) Error parsing the config file
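    For what it's worth, a ServerFlags section with that option would normally look exactly like the three lines posted, so "Unexpected EOF ... Missing EndSection keyword?" on line 2 suggests the characters got mangled rather than the syntax being wrong. Curly quotes picked up from a wiki or browser copy-paste are a common cause of exactly this parse error (a guess, but retyping the lines by hand with straight ASCII quotes would rule it out):

```
Section "ServerFlags"
    Option "DontZap" "false"
EndSection
```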

  • DMM: "/var/log" out of space

    Hello,
    the ssh-interface shows "file system /var/log is reaching out of space". I changed the log-level from informational to error. How can I clean this file system?
    DMM-Version 5.02.

    Are the patch .jar and/or .zip files still in /var ? These can be safely removed.
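    A general tip for this class of problem: rank what is actually using the space before deleting anything. The sketch below runs against a scratch directory; on the DMM you would point du at /var/log (or at /var for the patch files):

```shell
# Build a scratch directory with one big file and one tiny file,
# then rank entries by size -- biggest last.
d=$(mktemp -d)
dd if=/dev/zero of="$d/big.log" bs=1024 count=64 2>/dev/null
echo tiny > "$d/small.log"
du -ak "$d" | sort -n | tail -2
```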
