sshfs/FUSE

sshfs/FUSE no longer works when you compile it yourself (e.g. from Ports). This is a severe regression in functionality.
The problem appears to be related to kernel module signing:
30/10/2014 14:18:38,979 com.apple.kextd[19]: ERROR: invalid signature for com.github.osxfuse.filesystems.osxfusefs, will not load
Are there any plans from Apple to help this software operate as it did in previous revisions?

Guys... I know that I am no expert but... all I did was install the
sshfs
package on the client and reboot, and right after that I ran the following on the ssh client (where the drive is being mounted using sshfs):
su
mkdir /media/1862_GB_X-Ternal
chown login:login /media/1862_GB_X-Ternal
and then as user I ran:
sshfs -p 22 login@hostname:/mount/point/ /media/1862_GB_X-Ternal/
and since I have SSH authentication set up with keys, the drive was mounted.
Later I created a .sh script that runs the command for me at boot time automagically, and it works... (a minimal sketch follows).
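A minimal sketch of such a script (the script name and the sleep-based wait for the network are assumptions, not the exact script):
#!/bin/sh
# hypothetical boot script, e.g. /usr/local/bin/mount-xternal.sh
# crude pause so the network is up before mounting (assumption)
sleep 10
sshfs -p 22 login@hostname:/mount/point/ /media/1862_GB_X-Ternal/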
I added:
/media/1862_GB_X-Ternal/
to the Dolphin Places menu and it's there... after each boot.
The same command on my LXDE machine - PCManFM sees the drive as a local drive; I never even had to add it to the "Places" pane...
I know I am probably not doing it right but... it works.
I wrote this while still using PCLinuxOS. I know - You will probably laugh and say it's lame but what the heck...
Regards.
Andrzej
Last edited by AndrzejL (2013-02-14 01:04:32)

Similar Messages

  • [Gnome 3.6, Nautilus] Can't open files on remote servers

    Since the upgrade to Gnome 3.6, I can't open most files on SSH and SMB shares mounted through Nautilus using the "Connect to Server" dialog. Opening PDF files in Evince still works, as does opening image files in EOG. But opening movie files in Totem, for instance, fails: Totem closes instantly. Other file types result in more or less cryptic error messages by the associated application. Opening a LyX document, for example, gives me this:
    The directory in the given path
    /run/user/1000/gvfs/sftp:host=192.168.2.102/[...]
    does not exist.
    This looks like gvfs is not working the way it used to. If I use sshfs (FUSE) to mount the share, everything works fine.
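    A hedged sketch of that sshfs fallback (the local mount point and user are illustrative; the host is the one from the path above):
    mkdir -p ~/mnt/server
    sshfs user@192.168.2.102:/ ~/mnt/server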
    [Edit: It's not just SSH servers that are affected as I initially thought. Changed title and body to reflect this.]
    Last edited by ulke (2012-11-05 10:47:07)

    I've known about this problem for a long time. In pre-3.6 times it was enough to kill gvfs-fuse-daemon and start it again. Now there appears to be no gvfs-fuse-daemon anymore.

  • Mounting a Remote Folder on an iMac?

    Hi,
    I am very new to Oracle Linux.
    I just installed OL 6.3 on an HP machine.
    How can I mount a remote folder which is on my iMac?
    thanks in advance
    I tried to install fuse-sshfs, but it failed
    #yum install fuse-sshfs
    http://www.zettachem.com/tmp/e.png
    Edited by: 957069 on Sep 16, 2012 1:40 PM

    The following works in Oracle Linux 6.3, x86_64:
    yum install http://pkgs.repoforge.org/fuse-sshfs/fuse-sshfs-2.2-1.el6.rf.x86_64.rpm
    Yum will automatically install fuse and fuse-libs from the Oracle public yum repository.
    Then on your Linux system, type: sshfs username@servername:directory mountpoint
    where username is the username of your Mac account and servername the IP address or FQDN of your Mac.
    For instance, to mount the Desktop folder from your iMac:
    On your Mac system, open *System Preferences*, *Sharing* and enable *Remote Login*.
    (Firewall will be configured automatically if enabled)
    Then, at the command prompt on your Linux system type:
    mkdir /mnt/mymac
    sshfs [email protected]:Desktop /mnt/mymac
    You can find the files from your Mac Desktop in /mnt/mymac.
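    To unmount later (a hedged addition, not part of the original reply):
    fusermount -u /mnt/mymac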

  • Opening files from SSH in PCManFM

    I'm using Fluxbox, and I'm using PCManFM (launched via "dbus-launch pcmanfm") as my file manager.
    I opened an SSH folder, and when I double-clicked on a file to open it, nothing happened. For example, on a .txt file, gvim opened, but it didn't load the file. Where does PCManFM mount the SSH folder, and how can I have it open files correctly when I double-click on them?

    As far as I know You can only manage files (move, copy, delete etc.) the way You are trying to, as they are not "local". It may not be the greatest advice, but I feel it may sort You out: use SSHFS to mount the ssh share and then just add it to the pane in PCManFM. This way the files and folders will act like local files and folders, which will let You play with them (edit, play etc.) just as if they were on the hdd of the machine You are currently sitting next to.
    Regards.
    Andrzej
    P.S. Instead of the sshfs-fuse package You will need sshfs, as the howto was written when I was using PCLinuxOS. I am using the same method with ArchLinux without any issues on my LXDE and KDE4 machines here.
    Last edited by AndrzejL (2013-02-20 11:20:47)

  • Sshfs / loading fuse module

    I'm trying to use the SSH file system. This requires the fuse module to be loaded. The problem is that I can't manage to load it:
    # modprobe fuse
    FATAL: Module fuse not found.
    I have also tried adding it to rc.conf, with no success. I guess there is something I don't understand here...
    (I have tried to make /dev/fuse manually: mknod -m 666 /dev/fuse c 10 229)
    Any ideas?
    Thanks,
    howie

    You don't have to use testing, as long as you're prepared to build fuse yourself. Take the PKGBUILD from extra, and change --disable-kernel-module to --enable-kernel-module, and then run makepkg. After installing it, you'll need to do
    depmod -a
    or, if you want to be really thorough, write a fuse.install that does this for you.
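    A minimal fuse.install sketch (hedged; these are the standard Arch install-file hooks):
    post_install() {
      # rebuild the module dependency lists so `modprobe fuse` finds the new module
      depmod -a
    }
    post_upgrade() {
      post_install
    }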
    P.S. I'm not from Norway.

  • Random file corruption with SSHFS, while CIFS is fine

    I'm not sure what is happening, but it seems that sometimes files read from an sshfs mount come up corrupted. I have checked my filesystem and my RAM for errors on both ends; everything comes up clean.
    I'm currently mounting my shares as
    fuse.sshfs noauto,x-systemd.automount,idmap=user,_netdev,identityfile=/home/azure/.ssh/id_rsa,allow_other,default_permissions,uid=1000,gid=1000,umask=0,reconnect,cache=yes,kernel_cache,ciphers=arcfour,compression=no 0 0
    I tried copying several 300 MB files, and different files end up partially corrupted. Checking the same file repeatedly does not make the checksum change, as would usually happen with faulty RAM.
    The only hint I have is that it is apparently always a block of exactly 2048 bytes that gets corrupted. The block is filled with some data, so it's not getting "lost". My hunch is that either the encoding or the decoding with arcfour is the culprit. Small files (~40 MB) don't seem to be affected, or else I'd get decoding errors/glitches in my music. Mounting the same share as CIFS makes the problems disappear. Any ideas what might be causing this?
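    A hedged way to narrow this down (paths and host are placeholders): compare the checksum of one file as seen through the sshfs mount, as copied over plain scp, and as computed on the server itself:
    sha256sum /mnt/share/bigfile.bin                  # read through sshfs
    scp user@server:/srv/data/bigfile.bin /tmp/bigfile.bin
    sha256sum /tmp/bigfile.bin                        # copied via scp
    ssh user@server sha256sum /srv/data/bigfile.bin   # computed remotely
    If only the first checksum differs, the corruption is happening in the sshfs/FUSE read path rather than in ssh's transport.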

    It just happened again. The file is fine on the other side, but I only see the "cached" state of it on my side. I will try removing "cache=yes" from the mount parameters, but if that's what is causing it, then it's some shitty cache if it can't detect the file changed on the other side.
    Edit: setting "cache=no" seems to have cleared the cache and is showing the updated file content. The question for me now is: is that a bug in ssh, sshfs or dolphin?
    Last edited by Soukyuu (2015-06-06 19:53:36)

  • Automount SSHFS drive in user-level systemd session

    Hello,
    I'm able to automount a network drive through SSHFS using the following .mount unit in the system-level systemd session:
    [Unit]
    Description=adama shared drive
    [Mount]
    What=[email protected]:/home/shared
    Where=/home/koral/remote/adama
    Type=fuse.sshfs
    Options=_netdev,noauto,users,idmap=user,IdentityFile=/home/koral/.ssh/id_rsa,allow_other
    [Install]
    WantedBy=default.target
    As is, the network drive is mounted at system start-up and it is read/write-able by any user logged into the local system.
    I'd like the drive to be mounted only when my $USER logs in and be read/write-able only by my $USER, so I considered moving the .mount unit to my user-level systemd session, but now the automounting fails with an unhelpful error message:
    systemd[1969]: Mounting adama shared drive...
    systemd[1969]: home-koral-remote-adama.mount mount process exited, code=exited status=1
    systemd[1969]: Failed to mount adama shared drive.
    systemd[1969]: Unit home-koral-remote-adama.mount entered failed state
    I guess there is a permission issue somehow; could you please help me figure it out?
    Note: I'm still using systemd-204 as the user-level session is kind of broken in later versions as described here.
    Kind regards.
    Last edited by koral (2014-03-30 17:54:45)
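    One hedged alternative (not from this thread): run sshfs itself from a systemd user service instead of a user-level .mount unit, so the mount is made with the user's own privileges. The user@host below is an illustrative stand-in for the obfuscated original, and allow_other is dropped because only the owning user needs access (it would also require user_allow_other in /etc/fuse.conf):
    [Unit]
    Description=adama shared drive (sshfs user service)
    [Service]
    # sshfs daemonizes by default, hence Type=forking
    Type=forking
    ExecStart=/usr/bin/sshfs user@adama:/home/shared /home/koral/remote/adama -o IdentityFile=/home/koral/.ssh/id_rsa,idmap=user,reconnect
    ExecStop=/usr/bin/fusermount -u /home/koral/remote/adama
    [Install]
    WantedBy=default.target
    Enabled with something like systemctl --user enable adama-sshfs.service (unit name illustrative).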

    xtian wrote:I can mount using the manual command `$sshfs [email protected]:/ /mnt/mrwizard.local`
    According to the above your username is xtian, which it isn't in your fstab entry:
    xtian wrote:[email protected]:/ /mnt/mrwizard.local ...
    So, without having looked for further errors and not knowing anything about sshfs, I would suggest changing this.
    Sometimes simple spelling errors are actually the hardest to solve. – Like I always try to '#include <some_library.c>'.

  • [Solved] sshfs connection: Nothing happens after password prompt

    Hey,
    I'm trying to mount a remote folder via ssh.
    The user on the server side is different from the local user.
    After entering sshfs -C userserver@server:serverfolder clientfolder I get the password prompt for userserver. Then nothing more happens.
    I've read the troubleshooting section of sshfs and, of course, the ssh connection is working.
    But what is going wrong here?
    Last edited by dbacc (2015-03-08 19:17:40)

    For comparison, here's my output:
    $ sshfs -d -o sshfs_debug -o LOGLEVEL=DEBUG3 service@clusterfrak:/tmp /tmp/test
    SSHFS version 2.5
    FUSE library version: 2.9.3
    nullpath_ok: 0
    nopath: 0
    utime_omit_ok: 0
    executing <ssh> <-x> <-a> <-oClearAllForwardings=yes> <-oLOGLEVEL=DEBUG3> <-2> <service@clusterfrak> <-s> <sftp>
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to clusterfrak [199.200.1.140] port 22.
    debug1: Connection established.
    debug1: identity file /home/testing/.ssh/id_rsa type 1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_rsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_dsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_dsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_ecdsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_ecdsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_ed25519 type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/testing/.ssh/id_ed25519-cert type -1
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.7
    debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7
    debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000
    debug2: fd 3 setting O_NONBLOCK
    debug3: load_hostkeys: loading entries for host "clusterfrak" from file "/home/testing/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/testing/.ssh/known_hosts:3
    debug3: load_hostkeys: loaded 1 keys
    debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ssh-rsa,ssh-dss
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: none,[email protected],zlib
    debug2: kex_parse_kexinit: none,[email protected],zlib
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
    debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected]
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
    debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
    debug2: kex_parse_kexinit: none,[email protected]
    debug2: kex_parse_kexinit: none,[email protected]
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: mac_setup: setup [email protected]
    debug1: kex: server->client aes128-ctr [email protected] none
    debug2: mac_setup: setup [email protected]
    debug1: kex: client->server aes128-ctr [email protected] none
    debug1: sending SSH2_MSG_KEX_ECDH_INIT
    debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
    debug1: Server host key: ECDSA 1b:38:0e:15:39:c6:93:37:12:fb:62:32:c9:ce:cb:b1
    debug3: load_hostkeys: loading entries for host "clusterfrak" from file "/home/testing/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/testing/.ssh/known_hosts:3
    debug3: load_hostkeys: loaded 1 keys
    debug3: load_hostkeys: loading entries for host "199.200.1.140" from file "/home/testing/.ssh/known_hosts"
    debug3: load_hostkeys: found key type ECDSA in file /home/testing/.ssh/known_hosts:2
    debug3: load_hostkeys: loaded 1 keys
    debug1: Host 'clusterfrak' is known and matches the ECDSA host key.
    debug1: Found key in /home/testing/.ssh/known_hosts:3
    debug2: kex_derive_keys
    debug2: set_newkeys: mode 1
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug2: set_newkeys: mode 0
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug2: service_accept: ssh-userauth
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug2: key: /home/testing/.ssh/id_rsa (0x7fbb6e500e10),
    debug2: key: /home/testing/.ssh/id_dsa ((nil)),
    debug2: key: /home/testing/.ssh/id_ecdsa ((nil)),
    debug2: key: /home/testing/.ssh/id_ed25519 ((nil)),
    debug1: Authentications that can continue: publickey,password
    debug3: start over, passed a different list publickey,password
    debug3: preferred publickey,keyboard-interactive,password
    debug3: authmethod_lookup publickey
    debug3: remaining preferred: keyboard-interactive,password
    debug3: authmethod_is_enabled publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/testing/.ssh/id_rsa
    debug3: send_pubkey_test
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey,password
    debug1: Trying private key: /home/testing/.ssh/id_dsa
    debug3: no such identity: /home/testing/.ssh/id_dsa: No such file or directory
    debug1: Trying private key: /home/testing/.ssh/id_ecdsa
    debug3: no such identity: /home/testing/.ssh/id_ecdsa: No such file or directory
    debug1: Trying private key: /home/testing/.ssh/id_ed25519
    debug3: no such identity: /home/testing/.ssh/id_ed25519: No such file or directory
    debug2: we did not send a packet, disable method
    debug3: authmethod_lookup password
    debug3: remaining preferred: ,password
    debug3: authmethod_is_enabled password
    debug1: Next authentication method: password
    debug2: we sent a password packet, wait for reply
    debug1: Authentication succeeded (password).
    Authenticated to clusterfrak ([199.200.1.140]:22).
    debug2: fd 4 setting O_NONBLOCK
    debug3: fd 5 is O_NONBLOCK
    debug2: fd 6 setting O_NONBLOCK
    debug1: channel 0: new [client-session]
    debug3: ssh_session2_open: channel_new: 0
    debug2: channel 0: send open
    debug1: Requesting [email protected]
    debug1: Entering interactive session.
    debug2: callback start
    debug2: fd 3 setting TCP_NODELAY
    debug3: packet_set_tos: set IP_TOS 0x08
    debug2: client_session2_setup: id 0
    debug1: Sending subsystem: sftp
    debug2: channel 0: request subsystem confirm 1
    debug2: callback done
    debug2: channel 0: open confirm rwindow 0 rmax 32768
    debug2: channel 0: rcvd adjust 2097152
    debug2: channel_input_status_confirm: type 99 id 0
    debug2: subsystem request accepted on channel 0
    Server version: 3
    Extension: [email protected] <1>
    Extension: [email protected] <2>
    Extension: [email protected] <2>
    Extension: [email protected] <1>
    Extension: [email protected] <1>
    unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
    INIT: 7.23
    flags=0x0003f7fb
    max_readahead=0x00020000
    INIT: 7.19
    flags=0x00000011
    max_readahead=0x00020000
    max_write=0x00020000
    max_background=0
    congestion_threshold=0
    unique: 1, success, outsize: 40

  • [SOLVED] sshfs - Hangs on different occasions (reboot needed?)

    I'm using Thunar with gvfs and sshfs; both work fine and do the job.
    The only problem is that, for some reason, when going back to an sshfs folder after a while (one that's bookmarked, if that matters), nothing happens, and if I try to "unmount" the remote filesystem it hangs Thunar.
    I'm not quite sure which mount point to unmount, so I usually go full-on noob and reboot.
    But here's my mount:
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    dev on /dev type devtmpfs (rw,nosuid,relatime,size=8194272k,nr_inodes=2048568,mode=755)
    run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
    /dev/sda2 on / type reiserfs (rw,relatime)
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
    pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
    tmpfs on /tmp type tmpfs (rw)
    debugfs on /sys/kernel/debug type debugfs (rw,relatime)
    configfs on /sys/kernel/config type configfs (rw,relatime)
    mqueue on /dev/mqueue type mqueue (rw,relatime)
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
    /dev/sda3 on /home type reiserfs (rw,relatime)
    fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
    gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=0)
    Last edited by Torxed (2013-08-15 18:30:02)

    Bah, solved it (after living with it for weeks)...
    ps aux | grep ssh
    Just looked for anything suspicious and:
    ssh -oForwardX11 no -oForwardAgent no -oClearAllForwardings yes -oProtocol 2 -oNoHostAuthenticationForLocalhost yes -l user -s host sftp
    I just killed the correct PID.
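    For anyone hitting the same hang, a hedged sketch of recovering without a reboot (the mount point path is a placeholder):
    mount -t fuse.sshfs                 # list only the sshfs mount points
    fusermount -u /path/to/mountpoint   # normal unmount
    fusermount -uz /path/to/mountpoint  # lazy unmount if the connection is dead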

  • [Solved] sshfs can't mount via fstab on startup

    I have an HDD attached to my RPi, which I've mounted as a filesystem on my desktop via sshfs (through fstab).
    If I do "sudo mount -a", it seems to mount just fine, so I researched a bit (even though I'm still an newbie in Arch and Linux distros in general) and found out that it's most probably is caused due to the network not being initialized fast enough for the sshfs fstab entry to be discovered (due to having no network and therefore no access to it).
    I'm really not sure how to provide any info which would help you guyz help me so I'm just gonna provide the sshfs entry from my fstab file and the journalctl logs that seem to be relevant.
    Jun 23 22:32:00 home systemd[1]: home-username-media-downloads-mydrive.mount mount process exited, code=exited status=1
    Jun 23 22:32:00 home systemd-udevd[168]: renamed network interface eth0 to enp0s25
    Jun 23 22:32:00 home systemd[1]: Failed to mount /home/username/media/downloads/mydrive.
    Jun 23 22:32:00 home systemd[1]: Dependency failed for Remote File Systems.
    Jun 23 22:32:00 home systemd[1]: Unit home-username-media-downloads-mydrive.mount entered failed state.
    Jun 23 22:32:00 home dhcpcd[236]: eth0: waiting for carrier
    Jun 23 22:32:00 home dhcpcd[236]: eth0: removing interface
    [email protected]:/media/mydrive/ /home/username/media/downloads/mydrive fuse IdentityFile=/home/username/.ssh/mykey,uid=1000,gid=1000,reconnect,_netdev,allow_other,transform_symlinks,user,idmap=user,BatchMode=yes 0 0
    P.S. This issue might be related to a recent attempt of mine to enable the GNOME NetworkManager service. I'm not sure if it's related, but I first noticed the issue the same day I tried to enable that service (which I later disabled because I couldn't get my network connected).
    Last edited by Varemenos (2014-06-27 23:44:00)

    It just doesn't want to work...
    # systemctl status home-username-media-downloads-mydrive.mount
    ==>
    ● home-username-media-downloads-mydrive.mount - mydrive Sshfs mount
    Loaded: loaded (/etc/systemd/system/home-username-media-downloads-mydrive.mount; enabled)
    Active: failed (Result: exit-code) since Thu 2014-06-26 16:45:21 EEST; 31s ago
    Where: /home/username/media/downloads/mydrive
    What: [email protected]:/media/mydrive/
    Process: 281 ExecMount=/bin/mount [email protected]:/media/mydrive/ /home/username/media/downloads/mydrive -t fuse.sshfs -o IdentityFile=/home/username/.ssh/mykey,uid=1000,gid=1000,reconnect,_netdev,allow_other,transform_symlinks,user,idmap=user,BatchMode=yes (code=exited, status=1/FAILURE)
    Jun 26 16:45:21 home mount[281]: read: Connection reset by peer
    Jun 26 16:45:21 home systemd[1]: home-username-media-downloads-mydrive.mount mount process exited, code=exited status=1
    Jun 26 16:45:21 home systemd[1]: Failed to mount mydrive Sshfs mount.
    Jun 26 16:45:21 home systemd[1]: Unit home-username-media-downloads-mydrive.mount entered failed state.
    The file:
    [Unit]
    Description=mydrive Sshfs mount
    Requires=network-online.target
    After=[email protected]
    [Mount]
    What=[email protected]:/media/mydrive/
    Where=/home/username/media/downloads/mydrive
    Type=fuse.sshfs
    Options=IdentityFile=/home/username/.ssh/mykey,uid=1000,gid=1000,reconnect,_netdev,allow_other,transform_symlinks,user,idmap=user,BatchMode=yes
    [Service]
    Restart=on-failure
    RestartSec=10
    [Install]
    WantedBy=multi-user.target
    Note that running
    systemctl start home-username-media-downloads-mydrive.mount
    works just fine...
    Last edited by Varemenos (2014-06-26 14:00:23)
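    A hedged sketch of the usual network-ordering fix (the user@host is a placeholder for the obfuscated original; [Service] is not a valid section in a .mount unit, so it is dropped, and the ordering only helps if a network-online wait service such as NetworkManager-wait-online is enabled):
    [Unit]
    Description=mydrive Sshfs mount
    Wants=network-online.target
    After=network-online.target
    [Mount]
    What=user@rpi:/media/mydrive/
    Where=/home/username/media/downloads/mydrive
    Type=fuse.sshfs
    Options=IdentityFile=/home/username/.ssh/mykey,uid=1000,gid=1000,reconnect,_netdev,allow_other,transform_symlinks,idmap=user,BatchMode=yes
    [Install]
    WantedBy=remote-fs.target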

  • [SOLVED] Systemd and Sshfs Automount

    Hi,
    I'm trying to set up automount on an sshfs drive. From other posts I already put together this line for my /etc/fstab:
    [email protected]:/users/schnitzl /home/ben/data/mnt/uni/ fuse.sshfs noauto,x-systemd.automount,users,idmap=user,IdentityFile=/home/ben/.ssh/id_rsa,allow_other,reconnect 0 0
    But it does not work yet. While logging into the remote host from the command line is no problem:
    ssh [email protected]
    automounting fails:
    -- cd uni/
    bash: cd: uni/: No such device.
    journalctl -b | grep uni tells me:
    Nov 15 16:31:31 mario systemd[1]: Mounting /home/ben/data/mnt/uni...
    Nov 15 16:31:31 mario systemd[1]: Mounted /home/ben/data/mnt/uni.
    Nov 15 16:31:31 mario systemd[1]: home-ben-data-mnt-uni.mount mount process exited, code=exited status=1
    Nov 15 16:31:31 mario systemd[1]: Unit home-ben-data-mnt-uni.mount entered failed state.
    Any ideas?
    Benjamin
    Edit: Which services do I have to restart in order for systemd to reload the /etc/fstab configuration?
    Last edited by Lord Bo (2012-11-15 17:06:55)

    Thank you very much! That was the right option! I already had it there, but as it did not work at first, I altered the line and forgot to re-add this option... However, one last thing: which services do I have to restart after altering /etc/fstab for the changes to take effect? It is a bit annoying always having to restart the whole system.
    Edit:
    teekay wrote:EDIT: sorry, I overlooked the noauto. So you want on-demand, right? https://bbs.archlinux.org/viewtopic.php?id=146674
    Well, it is now mounted on demand (as soon as I try to access the directory).
    And the link you provided: I had already found that during my research, but the crucial point was, as you mentioned, the missing _netdev option. Thanks again.
    Last edited by Lord Bo (2012-11-15 16:54:52)
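    For reference, a hedged reconstruction of the corrected fstab line (the user@host placeholder stands in for the obfuscated original), with the crucial _netdev option added:
    user@remote.host:/users/schnitzl /home/ben/data/mnt/uni/ fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,IdentityFile=/home/ben/.ssh/id_rsa,allow_other,reconnect 0 0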

  • X-systemd.automount and sshfs

    I've been using sshfs to access files on my Pi for several months now. Today I tried to enable automatic mounting with x-systemd.automount, but although a manual mount is still possible, the automatic mount fails constantly with a "no such device" error since I added the additional option in /etc/fstab. journalctl -f on the Pi reveals that the "Connection is closed [preauth]", so I would guess that something is wrong with my client authenticating. But I don't understand how this is possible, because I can in fact mount the partition manually. Even if I use the exact command used by systemd (found via ps -fe) as root, it prompts me for the password of my key and then it mounts correctly. Shouldn't automount then prompt for my password when X starts? Any ideas?
    line in /etc/fstab:
    herrzinter-pi.local:/media/mediacrypt /home/herrzinter/Multimedia fuse.sshfs users,noauto,_netdev,reconnect,x-systemd.automount 0 0
    Last edited by MrTea (2013-07-01 15:34:13)

    WonderWoofy is right; until now I always used the automatically generated Nautilus entry to mount the remote folder. In that case the mount command is executed as <user>, so I don't need to put the additional information in /etc/fstab. By the way, I didn't exactly look this up, but I've used this setup for quite some time now, and I guess I forgot the herrzinter@ part once and noticed that it worked anyway.
    I have now added this, plus the "IdentityFile=/..." option pointing to a dummy ssh key without a passphrase, and it's working. But I am not too happy with this, as a key with an empty passphrase is not really nice, and Nautilus is also showing me duplicated entries... In general I have to look at the setup again; I am not sure anymore if the x-systemd.automount option is really such a good idea in my case, as the network setup of my laptop changes quite often... Perhaps a bash script at login, or a custom systemd unit, is better suited for my purpose.
    Thank you all anyway
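    A sketch of the fstab line that reply describes (the dummy-key filename is illustrative):
    herrzinter@herrzinter-pi.local:/media/mediacrypt /home/herrzinter/Multimedia fuse.sshfs users,noauto,_netdev,reconnect,x-systemd.automount,IdentityFile=/home/herrzinter/.ssh/id_dummy 0 0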

  • Automatic mount via sshfs

    Hello guys,
    I set up sshfs with the /home directory of my server, mounting it on /mnt/server. I had to configure the directory permissions, but now I'm able to mount the directory as a normal user with this command:
    sshfs user@serverip:/home/user /mnt/server
    The next step I tried was to mount my server directory automatically using the fstab method from the wiki. So I added
    sshfs#user@serverip:/home/user /mnt/server fuse defaults 0 0
    to my fstab and rebooted my system, but nothing was mounted. What's wrong? I should add that I have already set up an SSH key.
    Best regards.
    Last edited by orschiro (2009-06-20 00:01:55)

    I have ssh-agent setup to ask for my passphrase on the first login after I bootup the machine. This works quite well, and then I don't have to worry about passphrase entry for automated ssh/rsync backups, or logging in to my server for whatever.
    in ~/.bashrc --
    SSH_ENV="$HOME/.ssh/environment"
    function start_agent {
        echo "Initialising new SSH agent..."
        /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
        echo succeeded
        chmod 600 "${SSH_ENV}"
        . "${SSH_ENV}" > /dev/null
        /usr/bin/ssh-add;
    }
    # Source SSH settings, if applicable
    if [ -f "${SSH_ENV}" ]; then
        . "${SSH_ENV}" > /dev/null
        ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
            start_agent;
        }
    else
        start_agent;
    fi
    The code isn't mine, but pieced together from several tutorials (I don't remember where!). I believe this is basically the same thing that, for example, gnome-keyring does when it asks for your keyring password. Just doing it the lightweight KISS way.
    Good luck!
    Scott
    Last edited by firecat53 (2009-06-20 22:04:35)

  • Autofs / sshfs / osxfuse

    I'm attempting to do sshfs mounts through autofs and have hit a snag.
    my direct map in auto_master:
    /-                              auto_ssh          -nobrowse,nosuid
    my auto_ssh file:
    /mnt/users/ssh/scanner          -fstype=sshfs,sshfs_debug,idmap=user,follow_symlinks,max_read=65536,rw,nodev,cache=no,IdentityFile=/Users/cmp12/.ssh/id_dsa cmp12@dirac:/tmp/that
    I'm pretty sure the problem is that root becomes the owner of my mount point when I add it to autofs:
    tesla:~ cmp12$ ls -ld /mnt/users/ssh/scanner
    dr-xr-xr-x  2 root  wheel  1 Mar 11 11:48 /mnt/users/ssh/scanner
    It's owned by me when not included in autofs:
    tesla:~ cmp12$ ls -ld /mnt/users/ssh/scanner
    dr-xr-xr-x  2 cmp12  DHE\BIAC-Users  68 Mar 10 13:04 /mnt/users/ssh/scanner
    When I try to access the mount location, the sshfs mount completes correctly and shows up in the mount table, but I cannot access the folder because it's now owned by root.
    mount entry:
    cmp12@dirac:/tmp/that on /mnt/users/ssh/scanner (osxfusefs, nodev, nosuid, synchronous, nobrowse, mounted by cmp12)
    logs:
    Mar 11 11:54:20 tesla.dhe.duke.edu automountd[71454]:   fork_exec: /sbin/mount_sshfs -o nobrowse -o nosuid,nodev -o sshfs_debug,idmap=user,follow_symlinks,max_read=65536,rw,nodev,cache=no,IdentityFile=/Users/cmp12/.ssh/id_dsa -o automounted -o nosuid cmp12@dirac:/tmp/that /mnt/users/ssh/scanner
    Mar 11 11:54:20 tesla.dhe.duke.edu automountd[71454]:   fork_exec: returns exit status 0
    Mar 11 11:54:20 tesla.dhe.duke.edu automountd[71454]:   mount of /mnt/users/ssh/scanner dev=2f000073 rdev=0 OK
    Mar 11 11:54:20 tesla.dhe.duke.edu automountd[71454]: MOUNT  REPLY  : status=0, AUTOFS_DONE
    That mount command returns a successful/usable sshfs mount when NOT used in combination with autofs (because my user is the owner of the mount point). With any of the other mounts I have in autofs, the mount point starts off being owned by root, then ownership is changed to the user accessing the mount after the mount happens. (My smbfs mounts are working perfectly.)
    Is there any way around this in autofs?  I would like this to be in autofs so it can handle mounts/umounts/etc automatically.
    Thanks,
    -chris

    Thanks for the suggestion... I finally found a way around this.
    I found that the fuse module doesn't respect the options set in the fuse.conf file (this works on my Linux machines).
    However, if I specifically set the option with sysctl and use allow_other, then things work the way I want, and the way I've been using this on Linux.
    I turn on allow_other:
    sysctl -w osxfuse.tunables.allow_other=1
    But I'm only mounting in a location that my user has access to... and adding allow_other in the auto_sshfs file:
    tesla:local_cvs cmp12$ cat /etc/auto_sshfs
    /mnt/users/ssh/scanner          -fstype=sshfs,sshfs_debug,allow_other,idmap=user,follow_symlinks,max_read=65536,rw,nodev,cache=no,IdentityFile=/Users/cmp12/.ssh/id_dsa          cmp12@dirac:/tmp/that
    It may not be the most secure way, but it works.

  • KDE/dolphin delete files located on sshfs remotely

    Currently, deleting any files mounted via fuse.sshfs makes Dolphin copy the files over to my PC and save them in ~/.local/share/Trash.
    I have found this, and have checked the directories in question
    - .Trash exists with 777 + sticky bit
    - .Trash-1000 (id of both my local AND remote user) exists as well with 700 without the sticky bit
    Both folders are in the $topdir (/data remotely) as described by the specification. Copying them over to /home/myuser/mountpoint (which would be my local $topdir) does not change the behavior
    The specification says that .Trash-uid is to be created automatically in case .Trash with required attributes does not exist.
    Why is Dolphin not deleting the files remotely?
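    For reference, the attributes described above can be recreated on the remote $topdir with something like this (host is a placeholder; mode 1777 is 777 plus the sticky bit):
    ssh user@remotehost 'mkdir -p /data/.Trash && chmod 1777 /data/.Trash'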

    Thanks, but that's a crude workaround at best. It also does not handle trash at all - the only option is to delete the file permanently.
    I've found this 11-year-old unresolved bug here; it seems to be what is causing it. Konqueror behaves the same, so it seems it's a KIO thing once again.
