Systemd + fsck -- default timeout is not enough

systemd-fsck tries to check my ~1 TB ext3 partition, but it fails: the machine gets powered off because the check hits a systemd timeout (even though 15 minutes seems reasonably long for it at first glance):
-- Reboot --
Dec 09 19:08:52 hostname systemd-journal[169]: Journal stopped
Dec 09 19:08:52 hostname systemd[1]: Shutting down.
Dec 09 19:08:52 hostname systemd[1]: Forcibly powering off as result of failure.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for Network Manager.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for A lightweight DHCP and caching DNS server.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for Login Service.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for Graphical Interface.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for Multi-User System.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for GNOME Display Manager.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for D-Bus System Message Bus.
Dec 09 19:08:52 hostname systemd[1]: Dependency failed for Permit User Sessions.
Dec 09 19:08:52 hostname systemd[1]: Timed out starting Basic System.
Dec 09 19:08:52 hostname systemd[1]: Job basic.target/start timed out.
Dec 09 18:53:53 hostname systemd-fsck[300]: /dev/sdb5 has been mounted 24 times without being checked, check forced.
I couldn't find, either by searching the web or in the man pages, how to change the timeout for basic.target/start -- increasing it looks like the right fix in this particular case. Does anyone know how to do that, or have another idea for getting the filesystem checked on boot?

Being hit by the same issue -- does this help?
[root@olivia ~]# systemctl show basic.target | grep Path
FragmentPath=/usr/lib/systemd/system/basic.target
[root@olivia ~]# vi /usr/lib/systemd/system/basic.target
# change 15min to 35min, save and exit
[root@olivia ~]# systemctl daemon-reload
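Editing the file under /usr/lib works, but that copy belongs to the systemd package and will be overwritten on updates. A drop-in override keeps the change in /etc; a sketch (the 35min value is just the one from the commands above, and JobTimeoutSec= is the [Unit] setting behind that 15min default):

```ini
# /etc/systemd/system/basic.target.d/timeout.conf
[Unit]
JobTimeoutSec=35min
```

Then run `systemctl daemon-reload` as above.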

Similar Messages

  • [SOLVED] How to cancel systemd-fsck during reboot after power failure.

    Hello
If power fails (or if I force a physical shutdown for any reason), my computer displays during boot:
systemd-fsck[171] : arch_data was not cleanly unmounted, check forced
It then hangs for a long time (about 10 minutes) while checking the arch_home partition, which is a 1 TB ext4 partition, and then finishes the boot.
Sometimes I want this behavior, but I may need my computer up and running as fast as possible, and I don't seem to be able to cancel this fsck.
Ctrl+C and Escape have no effect.
How can I cancel systemd-fsck for this boot, postponing it to the next boot?
My fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # UUID=cfdf8739-d512-4d99-9893-437a6a3c9bf4 LABEL=Arch root
    /dev/sda12 / ext4 rw,relatime,data=ordered 0 2
    # UUID=98b01aa3-0f7f-4777-a941-8e676a68adce LABEL=Arch boot
    /dev/sda11 /boot ext2 rw,relatime 0 2
    # UUID=8909c168-5f1e-4ae7-974c-3c681237af7a LABEL=Arch var
    /dev/sda13 /var ext4 rw,relatime,data=ordered 0 2
    # UUID=a13efc24-cf66-44d0-b26c-5bb5260627a0 LABEL=Arch tmp
    /dev/sda14 /tmp ext4 rw,relatime,data=ordered 0 2
    # UUID=779aeb69-9360-4df0-af84-da385b7117d1 LABEL=Arch home
    /dev/sdb4 /home ext4 rw,relatime,data=ordered 0 2
    /dev/sdb5 /home/glow/data ext4 rw,relatime,data=ordered 0 2
    Last edited by GloW_on_dub (2013-12-07 16:00:06)

Maybe you can add a menu item to the GRUB boot menu so you can pick it from the menu instead of editing the kernel line by hand?
I'm using syslinux, but I have menu items that differ only in, e.g., the 'quiet' parameter:
    APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd quiet
    APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd
    Everything else is the same.
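For skipping the check on a single boot without editing anything at boot time, systemd-fsck also honors a kernel command-line switch, documented in systemd-fsck@.service(8). A sketch of an extra syslinux entry using it, assuming a systemd new enough to support fsck.mode= (the label name is made up; root= is taken from the lines above):

```text
LABEL nofsck
    MENU LABEL Arch Linux (skip fsck this boot)
    APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd fsck.mode=skip
```

Picking that entry should postpone the check, since the "mounted N times" / dirty-unmount state persists until a check actually completes.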

  • Default Timeout on Data Socket

In my application, I am communicating with a PLC through RSLinx using a data socket. I can successfully open the socket, write the value, and update using the following:
DS_Open with the DSConst_Write option
DS_SetDataValue and then DS_Update
For robustness, I want to be able to alert the operator if the Ethernet cable attaching the PC to the PLC is unplugged (most likely) or some other portion of the process has failed (less likely once set up).
I do have a solution to the above, but what I have found is that if I disconnect the cable during execution of my program, DS_Update does give me an error indicating that something is wrong, but my code hangs the thread for an indeterminate amount of time (at least 1 second) in either DS_SetDataValue or DS_Update. Neither command has a timeout associated with it, but is there some default timeout that is used?
As a workaround, my thought is that, if available, I could use feedback from a callback (if fast enough) that detects when the network cable has been disconnected, and avoid any set or update commands while the cable is not connected, thereby avoiding this indeterminate timeout.

    Hi testguy,
Have you tried some of the other data socket functions? DS_IsConnected or DS_GetStatus may be able to tell the status of the connection a little faster; you could run one of those before DS_Update so it doesn't hang. The full list of data socket functions can be seen here.
    There are also some other communication protocols that do allow timeout configuration, like telnet, TCP, and UDP.
    Hope this helps!
    -Alexandra
    National Instruments
    Applications Engineer
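The check-before-update idea can be sketched in C. The names below are hypothetical stand-ins for the NI DataSocket calls mentioned in the thread (DS_GetStatus / DS_Update; the real prototypes live in the CVI headers), so treat this as the shape of the guard, not the actual API:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the NI DataSocket calls; replace these with
 * the real CVI functions (e.g. DS_GetStatus, DS_Update) in an actual build. */
static bool ds_is_connected(void) { return false; /* simulate a pulled cable */ }
static int  ds_update(void)       { return 0;     /* would call DS_Update    */ }

/* Guard pattern from the reply: poll the connection status before the
 * potentially blocking update, and alert the operator instead of hanging.
 * Returns -1 when the link is down, otherwise the update's own result. */
static int safe_update(void)
{
    if (!ds_is_connected()) {
        fprintf(stderr, "PLC link down - skipping update\n");
        return -1;
    }
    return ds_update();
}
```

The design point is simply that the status query is cheap and non-blocking, so the thread only ever enters the slow call when the link looks healthy.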

  • ORA-27301: OS failure message: Not enough storage is available to process

    Hi Team,
I am using Oracle Standard Edition on a Windows 2003 Standard Edition 32-bit server. After a long time, I received the error below.
I had increased the RAM from 4 GB to 8 GB in June; yesterday I received this error.
Oracle 10gR2, version 10.2.0.4:
    ORA-27300: OS system dependent operation:CreateThread failed with status: 8
    ORA-27301: OS failure message: Not enough storage is available to process this command.
    ORA-27302: failure occurred at: ssthrddcr
My boot.ini file is:
    [boot loader]
    timeout=5
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /PAE /fastdetect /NoExecute=OptIn
    C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows Recovery Console" /cmdcons
Kindly advise how to resolve the problem. Please help me.

    You should look at the Windows Event Log for errors / reasons.
    Probably (this is a guess), you ran out of space for the Paging file.

  • Scheduled report failure - Not enough memory for operation

    Crystal Reports Server XI R2 SP4
    SQL Server 2005
    Hi,
I have a report that is scheduled. Most of the time it runs successfully; however, for one parameter setting I continuously get the following error message:
    Status:  Failed
    Printer:  The instance is not printed.
    External Destination:  Copy the instance with default filename to the directory '//10.2.25.24/polling_in'.
    Creation Time:  03/04/2009 05:00
    Start Time:  03/04/2009 05:00
    End Time:  03/04/2009 05:00
    Server Used:  BWAQTSREP-1.reportjobserver
    Error Message:  Not enough memory for operation.
    Why would this happen, and how can I resolve? I can run the report successfully in Infoview.
    Thanks!
    Penny

    Hi Penny,
do you run the report successfully in InfoView using the same parameter setting?
If so, please try to reschedule the report and monitor (using Task Manager) the memory consumption on your Windows server.
Please also make sure that there is enough disk space available on both your server and the destination directory ('//10.2.25.24/polling_in').
    Regards,
    Stratos

  • Getting Error "Not Enough Space" while deploying Java Webservice in AS

    Hello,
I am trying to deploy a Java web service using OAS Enterprise Manager, but I am getting an error saying "Not Enough Space". Below are the logs.
    [Jul 21, 2010 12:06:17 PM] Application Deployer for MathAppl STARTS.
    [Jul 21, 2010 12:06:35 PM] Copy the archive to /u02/oracle/product/install/SOAHome1/j2ee/oc4j_soa/applications/MathAppl.ear
    [Jul 21, 2010 12:06:35 PM] Initialize /u02/oracle/product/install/SOAHome1/j2ee/oc4j_soa/applications/MathAppl.ear begins...
    [Jul 21, 2010 12:06:35 PM] Unpacking MathAppl.ear
    [Jul 21, 2010 12:06:35 PM] Done unpacking MathAppl.ear
    [Jul 21, 2010 12:06:35 PM] Unpacking WebServices.war
    [Jul 21, 2010 12:06:35 PM] Done unpacking WebServices.war
    [Jul 21, 2010 12:06:35 PM] Initialize /u02/oracle/product/install/SOAHome1/j2ee/oc4j_soa/applications/MathAppl.ear ends...
    [Jul 21, 2010 12:06:35 PM] Starting application : MathAppl
    [Jul 21, 2010 12:06:35 PM] Initializing ClassLoader(s)
    [Jul 21, 2010 12:06:35 PM] Initializing EJB container
    [Jul 21, 2010 12:06:35 PM] Loading connector(s)
    [Jul 21, 2010 12:06:35 PM] Starting up resource adapters
    [Jul 21, 2010 12:06:35 PM] Initializing EJB sessions
    [Jul 21, 2010 12:06:35 PM] Committing ClassLoader(s)
    [Jul 21, 2010 12:06:35 PM] Initialize WebServices begins...
    [Jul 21, 2010 12:06:35 PM] Initialize WebServices ends...
    [Jul 21, 2010 12:06:35 PM] Started application : MathAppl
    [Jul 21, 2010 12:06:35 PM] Binding web application(s) to site default-web-site begins...
    [Jul 21, 2010 12:06:35 PM] Binding WebServices web-module for application MathAppl to site default-web-site under context root MathApple
    [Jul 21, 2010 12:06:37 PM] Operation failed with error: Error compiling :/u02/oracle/product/install/SOAHome1/j2ee/oc4j_soa/applications/MathAppl/WebServices: Not enough space
When I checked the application server's disk utilization, there is enough space. Below is the output.
    bash-3.00$ df -h
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c1t0d0s0 15G 3.5G 11G 25% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 76M 960K 75M 2% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1
    15G 3.5G 11G 25% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 286M 212M 75M 74% /tmp
    swap 142M 67M 75M 48% /var/run
    /dev/dsk/c1t0d0s3 48G 17G 30G 36% /u01
    /dev/dsk/c1t1d0s0 67G 6.0G 61G 9% /u02
    Please help.
    -Mj

    Dear,
As suggested, I have checked the JVM configuration for a low heap size, but I found the MaxPermSize value to be 256M. So I guess this is not what is causing the "Not Enough Space" issue.
Please let me know where else I need to check, as I am completely clueless about this problem.
    Thanks for your inputs.
    -Mj

  • [SOLVED] VLC 1.1.9 Crashing "Not enough memory"

    Hi, I'm having some problems with VLC 1.1.9 crashing at seemingly random intervals while watching videos (I can confirm I've had problems with .mkv, .avi, .wmv) usually between 10 and 15 minutes after starting.  When running from the command line I get this output:
    VLC media player 1.1.9 The Luggage (revision exported)
    Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
    Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
    [0x92488fc] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    Blocked: call to setlocale(6, "")
    Blocked: call to setenv("ORBIT_SOCKETDIR", "/tmp/orbit-ryan", 1)
    Warning: call to srand(1303626505)
    Warning: call to rand()
    Blocked: call to setlocale(6, "")
    (process:20992): Gtk-WARNING **: Locale not supported by C library.
        Using the fallback 'C' locale.
    [0x92e2e6c] signals interface error: signal 17 overriden (0xb4cdb150)
    [0x92e2e6c] signals interface error:  /usr/lib/libQtCore.so.4(?)[(nil)]
    Not enough memory
    But if I run free directly after it crashes, it shows I still have almost 3 gigs of ram free:
    total       used       free     shared    buffers     cached
    Mem:       3356340     266248    3090092          0       4128     109896
    -/+ buffers/cache:     152224    3204116
    Swap:      9214972     139628    9075344
If I downgrade to version 1.1.8, it runs just fine, and I get exactly the same output minus the last line (the "Not enough memory").
I have this problem when running GNOME, KDE, or Xfce on two separate machines; I was wondering if anyone has had the same problem, or has any clues as to where the problem could be.
    Last edited by Ozymandias_117 (2011-04-29 12:43:20)

^^ Thanks itman, that worked for me. I changed back to the ALSA plugin and the GStreamer backend for KDE.
They say this bug has been fixed in the VLC 1.2 release:
    http://forum.videolan.org/viewtopic.php?f=13&t=89621
    Last edited by twilightning (2011-04-27 11:32:41)

  • Cannot initialize Photoshop CS2 "not enough memory" error

    I had Photoshop CS2 installed and running fine on a MacBook Pro (10.5.4, 2GB RAM), until it suddenly crashed and now will not open. Gives the error message: "Could not initialize because there's not enough memory." I have 1.5 GB of memory free, and 70 GB of disk space free. I've reinstalled, tried creating a new user to run it, but nothing is working. I've found a couple of mentions of the same problem online, but no solutions. Can anyone help me?

    Hi Jen,
I don't know how a MacBook works, but it sounds like I just had basically the same problem, to which John Joslin had the correct answer.  He said, "Try resetting your preferences as described in the FAQ at:
    http://forums.adobe.com/thread/375776?tstart=0
    You either have to physically delete (or rename) the preference files or, if using the Alt, Ctrl, and Shift method, be sure that you get a confirmation dialog.
    This resets all settings in Photoshop to factory defaults.
    (A complete uninstall/re-install will not affect the preferences and a corrupt file there may be causing the problem.)"
    I have windows vista ultimate so I'm not sure if this will help you but my preferences were at:
    C:\Users\username\AppData\Roaming\Adobe\Photoshop\9.0\Adobe Photoshop CS2 Settings. I just renamed my pref's and photoshop opened right up.
    Good Luck!

  • Not enough RAM message in Photoshop but preferences shows all 8 GB available

    After a fairly smooth year and a half working with Adobe CS5 (Photoshop,  InDesign, Illustrator) on my iMac 27" i7 I  cannot open any file in Photoshop, no matter how small. I get the message "Could not complete the Open command because there is not enough memory (RAM)." even with all other applications closed and all 8 GB RAM available, and after restart. In Photoshop preferences my usual  70% RAM allowance shows as available. I also cannot view any video files in Firefox or Safari, but WMV and MOV videos play fine with QuickTime outside of the browsers. This all started after I completed the latest Apple security update (2012 001 I think it was) and also downloaded and installed Smith Micro's Anime Studio Debut 8.
    I'm running an iMac 27" i7 (2010), 8 GB RAM, Snow Leopard 10.6.8, Photoshop CS5 Extended (InDesign and Bridge Adobe CS5  seem fine). Have done all latest Firefox updates as well. Video plugins: Flip4Mac, QT, DivX (all up-to-date far as I can tell)
    Since this started Friday I've updated  Pshop and Flash and all Adobe plug-ins, restarting afterward. I ran Disk Utility and found some files were out of order (or corrupted?), so ran the repair sequence using fsck single user/command-line utility since my superdrive hasn't been accepting discs for a couple of months. Which is also why I haven't just reinstalled Photoshop. Eventually I'll get the drive fixed and reinstall PS etc but this will not solve the video problem. It seems there must be a codec or plug-in problem (audio/video) I'm missing or which is not in the right place (and don't know how to look for it).
    Been a Mac user for over 20 years and have never NOT been able to figure out my next move. Any help would be appreciated!

    I ran an extensive hardware diagnostic and after the second pass it came up with an error message that pointed to a RAM problem. Sure enough, one of the RAM components is now being replaced (along with the dodgy optical drive). Lucky  me, I invested in Applecare.
    I'm looking forward to getting it back, hopefully working smoothly.

  • When I am importing photos from DVD rom, iPhoto shows " not enough free space on the volume containing your iPhoto library", but i have over 200G free in hard drive? how do i fix it?

    Mac Pro Book
    Mac OS X 10.7.5
    iPhoto 11 9.4.2
    importing photos into iphoto, shows "Photo cannot import your photos because there is not enough free space on the volume containing your iPhoto library."
I tried creating a new library; same thing.
The library currently holds only 2,500 photos.
I am importing about 500 photos but can't because of the message shown above.
Does anyone know how to fix it?

    How much free space do you have on your hard drive?  Try the following:
    1 - delete the iPhoto preference file, com.apple.iPhoto.plist, that resides in your
         User/Home/Library/ Preferences folder.
    2 - delete iPhoto's cache file, Cache.db, that is located in your
    User/Home/Library/Caches/com.apple.iPhoto folder (Snow Leopard and Earlier).
    or with Mt. Lion from the User/Library/Containers/com.apple.iPhoto/
    Data/Library/Caches/com.apple.iPhoto folder
    3 - launch iPhoto and try again.
NOTE: If you've moved your library from its default location in your Home/Pictures folder, you will have to point iPhoto to its new location the next time you open iPhoto by holding down the Option key when launching it.  You'll also have to reset iPhoto's various preferences.
    NOTE 2:  In Lion and Mountain Lion the Library folder is now invisible. To make it permanently visible enter the following in the Terminal application window: chflags nohidden ~/Library and hit the Enter button - 10.7: Un-hide the User Library folder.

  • Message "Could not save because there is not enough memory (RAM)"

Running Mac OS 9.2.2. I suddenly started getting the error message "An error occurred saving the enclosure "file name": Not enough memory" when trying to save an attachment in Microsoft Outlook. Same thing with Adobe Photoshop: when trying to save any new file, the message is "Could not save because there is not enough memory (RAM)".
    Already Tried:
During several freezes, my Mac suddenly stopped recognizing any extensions. When I used Microsoft Outlook and tried to download an attachment, I got a Type 3 error message. So I created a new set in Extensions Manager and got rid of the Type 3 error, but then got a new error message, "Not enough memory", when trying to save an attachment, or "Not enough RAM" when trying to save an EPS file in Adobe Photoshop.
In Outlook ("Get Info" section) I changed the memory size:
    Suggested Size: 7168 K
    Minimum Size: 16000 K
    Preferred Size: 16000 K
In all mailboxes there are not that many messages, and they are not heavy at all; I checked them all before. And I rebuilt/compressed the Outlook database by holding Option while restarting Outlook...
Adobe Photoshop started acting weird at the same time. Even if I try to open a file through the menu (File > Open), it gives me the same message.
Does anyone have a clue? Please help!

    Hi, Mike -
    I have Outlook Express's Preferred memory allocation set to 45000K, and don't have any problems with attachments. Don't be afraid of increasing OE's Preferred memory allocation (nor IE's, either - I have that set to 65000K).
    How large is the Messages file used by the account you're using in Outlook? You can examine that file here - (hard drive) >> Documents >> Microsoft User Data >> Identities >> (account folder, either the default "Main Identity" or one you've created) >> Messages.
    It seems that the Messages file grows with each email received; but does not shrink when an email is deleted. Over time, especially when lots of attachments are involved, that file can get huge.
    One way to help it not get too big is to -
    • delete attachments after copying them to the desktop. In case you're not familiar with the easy way to copy an attachment to the desktop - select the email so it opens; click the small triangle next to the legend "Attachments" between the upper pane and the lower pane in Outlook's main window; drag the attachment from the list out onto the desktop. It will be extracted and copied to that location. Once that has been done you can delete the attachment from the email with which it arrived.
    • after purging your emails of unneeded emails and attachments, compact the file. To do that, quit Outlook if it is running. Then hold down the Option key and start up Outlook. A splash screen will appear asking if you want to compact its database files - click yes. Follow the prompts.
    Compacting OE's database can also fix some instances of a damaged database that can't be used.
If you could answer a few more questions, perhaps we can offer additional suggestions -
    1) How much RAM do you have installed? That means the physical RAM, not counting Virtual Memory in use, if any.
    2) How full is your hard drive - how big is it, how much free space is left unused?
    3) When was the last time you rebuilt the desktop file?

  • [Exception... "Not enough arguments [nsIWebBrowserPersist.saveURI]" nsresult: "0x80570001 (NS_ERROR_XPC_NOT_ENOUGH_ARGS)" location: "JS frame :: chrome://s3fo

    I am trying to download assets from my S3 organizer and I am not able to. I am getting this error: [Exception... "Not enough arguments [nsIWebBrowserPersist.saveURI]" nsresult: "0x80570001 (NS_ERROR_XPC_NOT_ENOUGH_ARGS)" location: "JS frame :: chrome://s3fox/content/js/xmlhttpNew.js :: s3_HttpClient.prototype.downloadFile :: line 729" data: no]
    I only see one forum post with a suggested solution, but I am not able to understand how to apply the solution to my situation. Using version 32.0.1

    This issue can be caused by an extension that isn't working properly.
Start Firefox in Safe Mode to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or hardware acceleration is causing the problem.
    *Switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance
    *Do NOT click the Reset button on the Safe Mode start window
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes

  • "Not enough memory in target location" error in de...

When I try to download and "save to device" any file of any size from any website, I receive the "Not enough memory in target location" error. It's very frustrating. To reproduce, I only need to long-tap the Google image on the default Google page, select "save image as", and pick any location (e.g. documents, root (MyDocs), a newly created folder), and I get the error. Once the error is displayed, most of the time I can't get rid of it and need to do an "End current task" to close the browser.
    I have checked the output of a "df -h" and there is PLENTY of space on all volumes, including rootfs (95.1M Free) and /dev/mmcblk0p1 (25.9G Free!).
    I've tried flushing the '/home/user/.mozilla/microb' directory and deleting the '/home/user/.browser' file also.
    I can transfer files from my PC connected in Mass Storage mode with no problem, I can also create directories and files from X-Term also with no problem.
    The only information I can find on this error is related to rootfs being out of space when trying to install an app or update...but this is not my problem.
    I have a feeling it could be a permissions issue, anyone have any suggestions?

Hi cpitchford. I have a similar problem. I can't save bookmarks in the MicroB browser. The system says "Not enough memory". I read your post and am sending you the screenshots from xterm. Thank you for your time. Let me know if you need more information about the issue.
    Attachments:
    screenshot03.png ‏93 KB
    screenshot04.png ‏98 KB

  • Systemd-fsck complains that my hardware raid is in use and fail init

    Hi all,
I have a hardware RAID of two SSD drives. It seems to be properly recognized everywhere, and I can mount it manually and use it without any problem. The issue is that when I add it to /etc/fstab, my system no longer starts cleanly.
I get the following error (part of the journalctl messages):
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Found device /dev/md126p1.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Starting File System Check on /dev/md126p1...
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: /dev/md126p1 is in use. <--------------------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: e2fsck: Cannot continue, aborting.<----------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: fsck failed with error code 8.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: Ignoring error.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Started File System Check on /dev/md126p1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Mounting /home1...
    Jan 12 17:16:22 biophys02.phys.tut.fi mount[530]: mount: /dev/md126p1 is already mounted or /home1 busy
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: home1.mount mount process exited, code=exited status=32
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Failed to mount /home1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Dependency failed for Local File Systems.
Does anybody understand what is going on? What is mounting /dev/md126p1 before systemd-fsck runs? This is my /etc/fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda1
    UUID=4d9f4374-fe4e-4606-8ee9-53bc410b74b9 / ext4 rw,relatime,data=ordered 0 1
    #home raid 0
    /dev/md126p1 /home1 ext4 rw,relatime,data=ordered 0 1
The issue is that after the error I am dropped to the emergency-mode console, and just pressing Ctrl+D continues the boot, after which the mount point seems okay. This is the output of 'systemctl show home1.mount':
    Id=home1.mount
    Names=home1.mount
    Requires=systemd-journald.socket [email protected] -.mount
    Wants=local-fs-pre.target
    BindsTo=dev-md126p1.device
    RequiredBy=local-fs.target
    WantedBy=dev-md126p1.device
    Conflicts=umount.target
    Before=umount.target local-fs.target
    After=local-fs-pre.target systemd-journald.socket dev-md126p1.device [email protected] -.mount
    Description=/home1
    LoadState=loaded
    ActiveState=active
    SubState=mounted
    FragmentPath=/run/systemd/generator/home1.mount
    SourcePath=/etc/fstab
    InactiveExitTimestamp=Sat, 2013-01-12 17:18:27 EET
    InactiveExitTimestampMonotonic=130570087
    ActiveEnterTimestamp=Sat, 2013-01-12 17:18:27 EET
    ActiveEnterTimestampMonotonic=130631572
    ActiveExitTimestampMonotonic=0
    InactiveEnterTimestamp=Sat, 2013-01-12 17:16:22 EET
    InactiveEnterTimestampMonotonic=4976341
    CanStart=yes
    CanStop=yes
    CanReload=yes
    CanIsolate=no
    StopWhenUnneeded=no
    RefuseManualStart=no
    RefuseManualStop=no
    AllowIsolate=no
    DefaultDependencies=no
    OnFailureIsolate=no
    IgnoreOnIsolate=yes
    IgnoreOnSnapshot=no
    DefaultControlGroup=name=systemd:/system/home1.mount
    ControlGroup=cpu:/system/home1.mount name=systemd:/system/home1.mount
    NeedDaemonReload=no
    JobTimeoutUSec=0
    ConditionTimestamp=Sat, 2013-01-12 17:18:27 EET
    ConditionTimestampMonotonic=130543582
    ConditionResult=yes
    Where=/home1
    What=/dev/md126p1
    Options=rw,relatime,rw,stripe=64,data=ordered
    Type=ext4
    TimeoutUSec=1min 30s
    ExecMount={ path=/bin/mount ; argv[]=/bin/mount /dev/md126p1 /home1 -t ext4 -o rw,relatime,data=ordered ; ignore_errors=no ; start_time=[Sat, 2013-01-12 17:18:27 EET] ; stop_time=[Sat, 2013-
    ControlPID=0
    DirectoryMode=0755
    Result=success
    UMask=0022
    LimitCPU=18446744073709551615
    LimitFSIZE=18446744073709551615
    LimitDATA=18446744073709551615
    LimitSTACK=18446744073709551615
    LimitCORE=18446744073709551615
    LimitRSS=18446744073709551615
    LimitNOFILE=4096
    LimitAS=18446744073709551615
    LimitNPROC=1031306
    LimitMEMLOCK=65536
    LimitLOCKS=18446744073709551615
    LimitSIGPENDING=1031306
    LimitMSGQUEUE=819200
    LimitNICE=0
    LimitRTPRIO=0
    LimitRTTIME=18446744073709551615
    OOMScoreAdjust=0
    Nice=0
    IOScheduling=0
    CPUSchedulingPolicy=0
    CPUSchedulingPriority=0
    TimerSlackNSec=50000
    CPUSchedulingResetOnFork=no
    NonBlocking=no
    StandardInput=null
    StandardOutput=journal
    StandardError=inherit
    TTYReset=no
    TTYVHangup=no
    TTYVTDisallocate=no
    SyslogPriority=30
    SyslogLevelPrefix=yes
    SecureBits=0
    CapabilityBoundingSet=18446744073709551615
    MountFlags=0
    PrivateTmp=no
    PrivateNetwork=no
    SameProcessGroup=yes
    ControlGroupModify=no
    ControlGroupPersistent=no
    IgnoreSIGPIPE=yes
    NoNewPrivileges=no
    KillMode=control-group
    KillSignal=15
    SendSIGKILL=yes
    Last edited by hseara (2013-01-13 19:31:00)

Hi Hatter, I'm a little confused about your statement not to use RAID right now. I'm new to the Mac, awaiting the imminent delivery of my first Mac Pro quad core with a 1 TB RAID10 setup. As far as I know, it's software RAID, not the RAID card (pricey!). My past understanding of RAID10 on any system is that it offers the best combination of speed and safety (backups), since the drives are striped and mirrored: one drive dies, quick replacement, and you're up and running a ton quicker than if you had gone RAID5 (20-minute writes per 5 GB of data?). Or were you suggesting not to do RAID with the RAID card?
I do plan to use an external drive for archival backups of settings, setups, etc., because as we all know, even the best foolproof plans can be kicked in the knees by Murphy.
My rig is destined to be my video editing machine, so the combo of quad core, 4 GB+ memory, and RAID10 should make this quite the machine... but I'm curious why you wouldn't suggest RAID.
And if you could explain this one: I see in the forums a lot of people are running Boot Camp/Parallels, which I assume is what you use to run multiple OSes on your Mac so that you can run Mac OS and Windblows on the same machine. But why is everyone leaning towards Vista when those of us on Windblows are trying to avoid it like the plague? I've already dumped Vista from two PCs and installed XP for a quicker, less bloated PC. Is Vista the only MS OS that will co-exist with Mac systems? Just curious.
    Thanks in advance.. Good Holidays

  • Systemd-fsck output

Hello,
I have some questions about the systemd-fsck output shown on the console during system boot. It looks like this:
    /dev/mapper/vg-root: clean, 82501/1310720 files, 647916/5242880 blocks
    systemd-fsck[234]: /dev/sda2: clean, 337/64000 files, 88881/256000 blocks
    systemd-fsck[233]: /dev/mapper/vg-home: clean, 30217/5283840 files, 961660/21116928 blocks
    systemd-fsck[232]: /dev/mapper/vg-var: clean, 19485/851968 files, 846527/3407872 blocks
    systemd-fsck[238]: fsck.fat 3.0.20 (12 Jun 2013)
    systemd-fsck[238]: /dev/sda1: 251 files, 3820/403266 clusters
Is it actually running any test?
Assuming the output shown above means something like "OK" or "there is no problem", I want to hide it, so I created a file /etc/systemd/system/[email protected]/custom.conf with the content:
# don't show systemd-fsck results on the console if everything is OK
    [Service]
    StandardOutput=journal
    StandardError=journal+console
This seems to work pretty nicely, but one line remains on the console: the very first one, about the root partition. It seems somewhat different from the other lines, in that it is missing the systemd-fsck prefix and can't be found with journalctl.
How could I hide this first line?
Is it right that, if systemd-fsck detects any file system errors, it would still output them on the console because of the setting StandardError=journal+console?

    When you remove the fsck hook from the initramfs, you aren't hiding the output. Rather the test is not being run. Personally, I think that not running an important check on the integrity of your file systems simply to hide a few lines of output on the console reflects rather weird priorities but obviously that's up to you.
    For whatever it is worth, I have had cases in which stuff has gone wrong and fsck has identified the need to look at the file system journal. In every single case, the automated routine has examined, repaired and mounted the file system correctly. It has never prevented booting and I have seen almost no file system corruption with the exception of something on a fat 32 system which is obviously not journalled. Given that my system was for a while routinely shutting off in the dirtiest way possible, I think this reflects the robustness of both the file system and the associated maintenance tools like fsck and I would put up with a great deal more than a few lines of text on the console in exchange for this sort of security. My data is the most important thing on the computer and I will use whatever means are reasonably available to me to protect it.
    Of course, if nothing like this has (yet) happened to you, you may not (yet) appreciate the system's robustness. But sooner or later, you almost certainly will. (Unless your data is unimportant to you, of course. That would be different.)
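One hedged note on the remaining line: if the root filesystem is being checked by systemd itself rather than inside the initramfs, the unit responsible is systemd-fsck-root.service, and the same style of drop-in might cover it. This is a sketch under that assumption; verify first with systemctl status systemd-fsck-root.service:

```ini
# /etc/systemd/system/systemd-fsck-root.service.d/custom.conf
[Service]
StandardOutput=journal
StandardError=journal+console
```

If the root check happens in the initramfs instead (as the reply above suggests), no systemd drop-in will touch that output.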
