Mount SFTP in OS X

Hello Everyone,
Is there any possible way to mount SFTP shares in OS X (to show up as a drive on the desktop)? I know there's no support built-in, but I was hoping maybe somebody knows of a third party utility (preferably free) that exists to assist in this. I know it can be done in *NIX with SSHFS, and there's a utility (not free) that allows you to do it in Windows.
Thanks!!

Finder mounts of FTP servers are read-only, so I'd guess that browsing SFTP share points in the Finder, if it were possible, would also be read-only.
However, my FTP client of choice is Transmit, which is a nice GUI app that can log into SFTP with full read/write permission if the server allows it.
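For an actual mountable volume, the usual route is a FUSE implementation plus sshfs. A minimal sketch, assuming a FUSE package (e.g. from osxfuse.github.io) and sshfs are installed; the host, user, and mountpoint below are hypothetical:

```shell
# mount a remote directory over SFTP so it behaves like a local volume
# (alice@server.example.com and ~/mnt/server are placeholder names)
mkdir -p ~/mnt/server
sshfs alice@server.example.com:/home/alice ~/mnt/server -o volname=Server
# ...work on the files under ~/mnt/server as if they were local...
umount ~/mnt/server   # unmount when finished
```

The `-o volname=Server` option is an osxfuse nicety that sets the name shown in the Finder; plain `sshfs host:path mountpoint` is enough elsewhere.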

Similar Messages

  • [Solved] gvfs-mount with sftp:// reports "unable to spawn ssh program"

    I have two arch installs for my laptop and desktop, both of which are running sshd, and both have the same sshd_config and ssh_config (as verified by md5sum). I can use ssh, scp, and sftp in either direction without issue.  Gvfs-mount works fine on the desktop, but fails on the laptop:
    $ gvfs-mount sftp://192.168.1.10
    Error mounting location: Unable to spawn SSH program
    My googling of that error only results in hits for the source code of gvfs, and I get lost in the source trying to hunt down what would cause that error to occur.  Is there a session management thing I have to enable? I run vanilla GNOME with GDM on the desktop, but I use a custom .xinitrc file with startx on the laptop:
    #!/bin/sh
    # ~/.xinitrc
    # Executed by startx (run your window manager from here)
    if [ -d /etc/X11/xinit/xinitrc.d ]; then
      for f in /etc/X11/xinit/xinitrc.d/*; do
        [ -x "$f" ] && . "$f"
      done
      unset f
    fi
    xset r rate 200
    xset -dpms s off
    xbindkeys
    . ~/.fehbg
    conky -q &
    metacity &
    #Required for usb/cdrom/etc mounting in thunar
    /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
    xfce4-panel
    EDIT:
    Executing startxfce4 in .xinitrc doesn't fix the issue, either.
    Last edited by Odysseus (2015-02-18 09:59:59)

    I tried running wayland today with weston, and I couldn't run weston-terminal due to an error about opening a pty. Turns out I still had lines in my fstab about /dev/pts, and most importantly, it was setting the wrong permissions.  I commented all that out, rebooted, and now both weston-terminal and gvfs-mount work properly.
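    For reference, the problem was a stale /dev/pts entry overriding the defaults; modern systems mount devpts automatically. The lines below are illustrative, not the poster's actual fstab:

```
# /etc/fstab fragment (illustrative): a leftover line like this can set
# wrong permissions on /dev/pts and break pty allocation
#devpts  /dev/pts  devpts  mode=644  0 0
# if you list it at all, the conventional options are:
#devpts  /dev/pts  devpts  gid=5,mode=620  0 0
```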

  • Ftp location commands not working on programs

    Hi, I don't know how to explain this, so I'd better post a screenshot. I'm trying to open a remote file with Bluefish, to edit it in place and save changes afterwards.
    I have tried opening it with several programs with no luck, so it's not Bluefish-specific; I have also tried Aptana Studio 3, with the same result. Of course those are not the real parameters for the FTP server (just in case); they are just for illustrative purposes.
    If I do it via Dolphin it works, and Konqueror also works, but from inside any other program it won't let me.
    What can I do to fix this?
    Thanks in advance for the help!

    OK, I am back with the solution.
    For remote files, Bluefish uses GVFS, so the first thing you should check is that you have it installed:
    pacman -Ss gvfs
    That will list the related packages along with which ones are installed. In my case I had all the relevant ones and didn't install the Bluetooth support or anything like that; just read the package descriptions and you should be fine.
    The command that will do the magic for you is gvfs-mount. Use it like so:
    gvfs-mount ftp://yourdomain.com
    It will then ask for a user and password; type them in and you are good to go (that command mounts the FTP share under /home/youruser/.gvfs).
    Then you can go into Bluefish, see it in the file list, and explore it to your heart's content, edit files in place, etc.
    You can do the same with SFTP:
    gvfs-mount sftp://yourdomain.com
    If your hosting provider's port number differs from the default, just add it at the end of that line after a ":":
    gvfs-mount sftp://yourdomain.com:2222
    Hope it helps!

  • [SOLVED] avahi sftp mount problem

    greetings,
    I hope I'm posting this in the right subforum (the other obvious choice would have been Networking), but since this is likely to be a configuration issue, I'll post here. Sorry, I'm kind of a newbie here.
    So... I'm using a ThinkPad T42, I've installed Arch Linux for the second time (please be patient with me, lol), and I've set everything up correctly: I have the latest GNOME (2.30) and kernel. uname -a:
    Linux YURI 2.6.32-ARCH #1 SMP PREEMPT Mon Mar 15 20:08:25 UTC 2010 i686 Intel(R) Pentium(R) M processor 1.60GHz GenuineIntel GNU/Linux
    Yesterday I messed around a bit with my rc.conf, and now every time I try to mount a Mac share via Avahi/SFTP, I only see Nautilus's 'opening *insert-share-name-here*', and a certain process (gvfsd-sftp) eats all my processor resources. The share never actually gets mounted. I'm guessing some kernel module is not loaded, or there's a problem with the order of my daemons.
    I've searched around on the wiki, on the forums and on Google, but to no avail. Everything is supposed to be in order. If someone can help, please do.
    modules array from rc.conf:
    MODULES=(acpi-cpufreq cpufreq_ondemand cpufreq_conservative cpufreq_userspace cpufreq_powersave fuse !slhc !thinkpad-acpi e1000 ipw2100)
    daemons array:
    DAEMONS=(syslog-ng dbus hal crond alsa !network rpcbind nfs-common nfs-server netfs ntpd avahi-daemon networkmanager gdm)
    thank you,
    bamdad
    Last edited by bamdad (2010-04-03 12:01:35)

    update:
    It seems rc.conf has nothing to do with the problem. I'll leave it here for reference, but I tried logging in to GNOME as root and, voilà, I could connect without a problem. So there had to be something messed up with my GNOME settings. I had a crash yesterday, during which I was logged in to the other computer.
    So I popped open Accessories > Passwords and Encryption Keys from the GNOME menu and deleted the [email protected] and [email protected] entries (interestingly, there wasn't an [email protected]), and the problem was resolved.
    My guess is that the crash somehow messed up my stored login information, and thus gvfsd-sftp went crazy trying to authenticate. If you happen to have the same problem, try the solution above.
    Last edited by bamdad (2010-04-03 12:07:18)

  • Mounting samba share starts avahi, ssh and sftp at client

    The problem is at the client: when I mount a Samba share (with # mount), Avahi is started, which starts ssh and sftp. This is wrong on many levels.
    Not sure how long this has been going on; someone else already asked this on Stack Exchange on 11.2.15, but didn't get any answers.
    Journal output immediately after mounting (hostname, IP etc. removed):
    Mär 18 01:35:51 hostname dbus[434]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service'
    Mär 18 01:35:51 hostname systemd[1]: Cannot add dependency job for unit boot.automount, ignoring: Unit boot.automount is masked.
    Mär 18 01:35:51 hostname systemd[1]: Listening on Avahi mDNS/DNS-SD Stack Activation Socket.
    Mär 18 01:35:51 hostname systemd[1]: Starting Avahi mDNS/DNS-SD Stack Activation Socket.
    Mär 18 01:35:51 hostname systemd[1]: Starting Avahi mDNS/DNS-SD Stack...
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Found user 'avahi' (UID 84) and group 'avahi' (GID 84).
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Successfully dropped root privileges.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: avahi-daemon 0.6.31 starting up.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
    Mär 18 01:35:51 hostname dbus[434]: [system] Successfully activated service 'org.freedesktop.Avahi'
    Mär 18 01:35:51 hostname systemd[1]: Started Avahi mDNS/DNS-SD Stack.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Successfully called chroot().
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Successfully dropped remaining capabilities.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Loading service file /services/sftp-ssh.service.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Loading service file /services/ssh.service.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Joining mDNS multicast group on interface enp1234.IPv4 with address myip.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: New relevant interface enp1234.IPv4 for mDNS.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Network interface enumeration completed.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Registering new address record for myip on enp1234.IPv4.
    Mär 18 01:35:51 hostname avahi-daemon[2583]: Registering HINFO record with values 'X86_64'/'LINUX'.
    Mär 18 01:35:52 hostname avahi-daemon[2583]: Server startup complete. Host name is hostname.local. Local service cookie is 123.
    Mär 18 01:35:53 hostname avahi-daemon[2583]: Service "hostname" (/services/ssh.service) successfully established.
    Mär 18 01:35:53 hostname avahi-daemon[2583]: Service "hostname" (/services/sftp-ssh.service) successfully established.

    Thanks for your answer.
    snakeroot wrote: Are you sure it is actually starting ssh and sftp, or is it just having avahi advertise them as existing?
    I'm not sure if anything is started; the phrase "Service ssh successfully established" sounds like the ssh server is started to me, but it might just be strange wording. What does "advertise as existing" mean?
    From the snippet you quoted, it looks like the latter. Unless you have already started socket activation for ssh or sftp, whether via a systemd *.socket or inetd, I'm not sure it would actually be started.
    I didn't enable anything manually.
    I think you can rm/mv the sftp-ssh.service and ssh.service files in /etc/avahi/services/ and prevent those services from being advertised.
    OK, thanks for the hint. Nonetheless I would rather stop avahi from starting than configure it.
    Begin rant...
    I'm a bit annoyed that avahi is starting without my permission. Seems like systemd is getting a bit overzealous with starting services. Interestingly this was one of the big problems with upstart, and was supposed to be solved with systemd. I still like systemd.
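    For anyone following the rm/mv suggestion above, this is a sketch of it (paths as given in the thread; run as root):

```shell
# stop avahi from advertising ssh/sftp by moving its service files aside
mv /etc/avahi/services/ssh.service /etc/avahi/services/ssh.service.disabled
mv /etc/avahi/services/sftp-ssh.service /etc/avahi/services/sftp-ssh.service.disabled
# restart avahi-daemon (or let it reload) to drop the advertisements
```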

  • How to setup real SFTP in Yosemite?

    There's lots and lots of information online about how to set up remote access to my Mac using SSH and SFTP in Yosemite... but apparently none of it is actually useful for genuinely remote access: access from outside my network, access from miles away. Because here is what I have learned:
    The IP address listed in the Sharing pane when you enable Remote Login is internal. It's some version of 192.168.x.x, which is essentially everybody's computer and utterly useless from outside the network.
    So my actual public IP, as assigned by my service provider, is the one that matters if you are trying to get to my computer from outside.
    But that IP doesn't "just work" via SSH or, much more importantly for me, SFTP. Evidently that involves something called "port forwarding", which one used to be able to do via AirPort Utility, but is no longer possible because such options do not exist in the current version of the utility.
    (What I want to achieve is for a friend of mine, a friend on a mac about 4 miles away,  to be able to directly access very large files on my computer. I don't want to use clouds or public sites or anything like that, I just want to be my own private little FTP server just for her. That used to be no big deal.)
    So... is there a solution for this?

    Dropbox may be a better option if you can both have accounts with enough storage, it is a simple setup process that can be forgotten about.
    Bittorrent sync is another option that mirrors data between several endpoints. You should get better performance if the files are not being transferred over the network (Dropbox & Bittorrent sync create copies at each end).
    ssh & port forwarding allows you to use sftp, if you do some more setup you can also use sshfuse to mount the remote content as a network disk, it's a plugin for http://osxfuse.github.io
    Just be aware that many scripts & botnets are constantly attempting to log in on visible ssh ports on routers across the internet. Enabling outside access to ssh means that any user on your Mac is a potential login - have good passwords set on ALL user accounts, and do not enable the root account password. There are also options to disable passwords in ssh (& use ssh keys only) if you want better security.
    ssh is good, but it is not trivial to setup and it won't open ports for you. Apps like bittorrent sync can use UPnP to request port forwarding rules, to make setup easier.
    Back to My Mac is OK, provided you are willing to hand over the keys to your Apple ID.
    P.S. You keep mentioning ftp, please avoid ftp. ftp can use plain text passwords that can be sniffed over the network, avoid opening the network to inbound ftp traffic too.
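    The hardening steps mentioned above translate into a small sshd_config fragment (option names are standard OpenSSH; the account name is a placeholder):

```
# /etc/ssh/sshd_config — lock down an Internet-facing sshd
PermitRootLogin no            # never allow direct root logins
PasswordAuthentication no     # keys only; defeats password guessing
AllowUsers friend             # "friend" is a placeholder account name
```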

  • SFTP chroot from non-global zone to zfs pool

    Hi,
    I am unable to create an SFTP chroot inside a zone to a shared folder on the global zone.
    Inside the global zone:
    I have created a zfs pool (rpool/data) and then mounted it to /data.
    I then created some shared folders: /data/sftp/ipl/import and /data/sftp/ipl/export
    I then created a non-global zone and added a file system that loops back to /data.
    Inside the zone:
    I then did the usual stuff to create a chroot sftp user, similar to: http://nixinfra.blogspot.com.au/2012/12/openssh-chroot-sftp-setup-in-linux.html
    I modified the /etc/ssh/sshd_config file and hard-wired the ChrootDirectory to /data/sftp/ipl.
    When I attempt to sftp into the zone, an error message is displayed in the zone -> fatal: bad ownership or modes for chroot directory /data/
    Multiple web sites warn that folder ownership and access privileges are important. However, issuing chown -R root:iplgroup /data made no difference. Perhaps it is something to do with the fact the folders were created in the global zone?
    If I create a simple shared folder inside the zone it works, e.g. /data3/ftp/ipl......ChrootDirectory => /data3/ftp/ipl
    If I use the users home directory it works. eg /export/home/sftpuser......ChrootDirectory => %h
    FYI. The reason for having a ZFS shared folder is to allow separate SFTP and FTP zones and a common/shared data repository for FTP and SFTP exchanges with remote systems. e.g. One remote client pushes data to the FTP server. A second remote client pulls the data via SFTP. Having separate zones increases security?
    Any help would be appreciated to solve this issue.
    Regards John
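    For context, sshd's check here is strict: the chroot directory and every path component above it must be owned by root and must not be group- or world-writable. The mode half of that rule can be illustrated locally (GNU stat assumed; the temporary directory is just a stand-in for /data):

```shell
# sshd rejects a ChrootDirectory if it, or any parent, is group/world-
# writable or not root-owned; illustrate the mode check on a scratch dir
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%a' "$d"   # 755: acceptable (given root ownership)
chmod 775 "$d"
stat -c '%a' "$d"   # 775: group-writable -> "bad ownership or modes"
rmdir "$d"
```

Note that `chown -R root:iplgroup` satisfies the ownership half but leaves the group set; if any component is also group-writable, sshd still refuses.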

    sanjaykumarfromsymantec wrote:
    Hi,
    I want to do IPC between inter-zones ( commnication between processes running two different zones). So what are the different techniques can be used. I am not interested in TCP/IP ( AF_INET) sockets.Zones are designed to prevent most visibility between non-global zones and other zones. So network communication (like you might use between two physical machines) are the most common method.
    You could mount a global zone filesystem into multiple non-global zones (via lofs) and have your programs push data there. But you'll probably have to poll for updates. I'm not certain that's easier or better than network communication.
    Darren

  • How to use external table - creating NFS mount -the details involved

    Hi,
    We are using Oracle 10.2.0.3 on Solaris 10. I want to use external tables to load huge csv data into the database. This concept was tested and found to be working fine. But my doubt is this: since ours is a J2EE application, the csv files have to come from the front end - from the app server. So in this case, how do we move them to the db server?
    For my testing I just used PuTTY to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, PuTTY cannot be used. In this case how can this be done? Are there any risks or security issues involved in this process?
    Regards

    orausern wrote:
    For my testing I just used PuTTY to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, PuTTY cannot be used. In this case how can this be done? Are there any risks or security issues involved in this process? Not sure why "putty" cannot be used. This s/w uses the standard telnet and ssh protocols. Why would it not work?
    As for getting the files from the app server to the db server. There are a number of options.
    You can look at it from an o/s replication level. The command rdist is common on most (if not all) Unix/Linux flavours and used for remote distribution and sync'ing of files and directories. It also supports scp as the underlying protocol (instead of the older rcp protocol).
    You can use file sharing - the typical Unix approach would be to use NFS. Samba is also an option if NTLM (Windows) is already used in the organisation and you want to hook this into your existing security infrastructure (e.g. using Microsoft's Active Directory).
    You can use a cluster file system - a file system that resides on shared storage and can be used by both app and db servers as a mounted/cooked file system. Cluster file systems like ACFS, OCFS2 and GFS exist for Linux.
    You can go for a pull method - where the db server on client instruction (that provides the file details), connects to the app server (using scp/sftp/ftp), copy that file from the app server, and then proceed to load it. You can even add a compression feature to this - so that the db server copies a zipped file from the app server and then unzip it for loading.
    Security issues. Well, if the internals are not exposed, then security will not be a problem. For example, defining a trusted connection between app server and db server - so the client instruction does not have to contain any authentication data. Letting the client instruction only specify the filename, and having the internal code use a standard and fixed directory structure. That way the client cannot instruct something like /etc/shadow to be copied from the app server and loaded into the db server as a data file. Etc.
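    The pull method above can be sketched in a few commands (hosts, paths, and the external-table directory are hypothetical):

```shell
# db server pulls a csv from the app server, then normalizes line endings
# so the external table can read it (appserver and /u01 paths are examples)
scp appserver:/export/feeds/data.csv /u01/app/ext_data/
dos2unix /u01/app/ext_data/data.csv
# the external table's LOCATION clause points at a file in /u01/app/ext_data
```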

  • NSS326 Problems Using New SFTP Features

    Is anyone else having issues using the new SFTP feature that was released in the 1.2 firmware?  We are running firmware 1.3.  When we try to connect to the NSS326 via SFTP, the only user that can login is "admin" and they get placed directly into the root of the device.  We want to use the SFTP feature to backup Communications Manager (which requires SFTP) but being dumped into the "root" does not appear to give us access to the vast disk space on the unit, so the backup fails.  Is there a way to configure the SFTP options, because the administration GUI only specifies FTPS (Explicit)?  Thanks.

    On my NSS326, I have a single RAID-6 volume and it's mounted under /share/MD0_DATA. Each shared folder is contained in that directory (i.e. Web), but is also symlinked to /share, so my Web content is available under /share/Web. If you create a new "backups" shared folder, for example, it should be available under /share/backups. You could try specifying that destination directory for your SFTP backups.

  • Sftp problems

    I am having problems connecting with sftp, though ssh appears to be fine. I run arch on 2 boxes, and connect to a CentOS 6.X server. Both Arch boxes appear to have the same problem when connecting with the CentOS server and different error/warning when connecting to each other.
    When I connect from my Arch laptop to my Arch desktop with ssh, I have no problems. When I connect with sftp, I get a warning message but it still works:
    sftp lab
    shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
    Connected to lab.
    sftp>
    When connecting from either Arch machine to the CentOS machine, ssh works but sftp gives me an error:
    sftp intuit
    Connected to intuit.
    Couldn't canonicalize: No such file or directory
    Need cwd
    Note that the parent directories have permissions 755 and in the case of the Arch->Arch is owned by my user, in the Arch->CentOS it's owned by root. The CentOS machine is mounted on a network share using autofs and nfs but I suspect that isn't the problem because I was able to sftp from it a few months ago (I don't regularly sftp so I don't know if I have updated to a newer version of openssh since the last time it worked).
    ssh -vvvv intuit
    Connected to intuit.
    debug3: Sent message fd 3 T:16 I:1
    Couldn't canonicalize: No such file or directory
    Need cwd
    debug2: channel 0: read<=0 rfd 4 len 0
    debug2: channel 0: read failed
    debug2: channel 0: close_read
    debug2: channel 0: input open -> drain
    debug2: channel 0: ibuf empty
    debug2: channel 0: send eof
    debug2: channel 0: input drain -> closed
    debug2: channel 0: rcvd eof
    debug2: channel 0: output open -> drain
    debug2: channel 0: obuf empty
    debug2: channel 0: close_write
    debug2: channel 0: output drain -> closed
    debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
    debug2: channel 0: rcvd close
    debug3: channel 0: will not send data after close
    debug2: channel 0: almost dead
    debug2: channel 0: gc: notify user
    debug2: channel 0: gc: user detached
    debug2: channel 0: send close
    debug2: channel 0: is dead
    debug2: channel 0: garbage collecting
    debug1: channel 0: free: client-session, nchannels 1
    debug3: channel 0: status: The following connections are open:
    #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1)
    debug1: fd 0 clearing O_NONBLOCK
    debug3: fd 1 is not O_NONBLOCK
    Transferred: sent 4520, received 3224 bytes, in 0.1 seconds
    Bytes per second: sent 65873.0, received 46985.5
    debug1: Exit status 0
    Current versions of openssh:
    pacman -Q|grep ssh
    libssh2 1.4.3-2
    openssh 6.6p1-2
    I am wondering if I have some sort of configuration problem and what other things I should try to get it to work.
    The openSSH FAQ says I should get no output from the following command (but I do):
    ssh intuit /bin/true
    shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
    Same output for other machine (with /usr/bin/true).
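    That shell-init/getcwd warning appears whenever the shell on the far end starts in a directory that no longer exists or cannot be resolved (an unmounted autofs/NFS home will do it). A minimal local reproduction, assuming bash:

```shell
# start a shell whose working directory has been deleted underneath it;
# bash complains at startup exactly like the sftp warning in the post
d=$(mktemp -d)
cd "$d"
rmdir "$d"            # the current directory is now gone
bash -c 'echo still runs' 2>&1
```

The child still runs; the warning is only bash failing getcwd() during init, which is why ssh "works" while sftp, which needs a canonical cwd, fails.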

    Are your user names the same on all of these systems?
    Are the UIDs the same for your username on all these systems?
    Do all of these systems have the same $HOME for your user name? 
    Are they all mounted?
    When you connect to a host, can you check on that host to find the owner of the sftp server process? Is it your user? Is it something else? Can that something else read the $HOME on that host?

  • Reading an SMB Share Without Mounting

    I'm sure the answer to this must be incredibly simple, but when I Google, almost every link is about mounting SMB shares.
    I want to be able to read an SMB share on another system from the command line WITHOUT mounting the share.
    For example, in Windows, I can type "dir \\servername\directory" and get a listing of the files on an SMB mount without having to mount it.
    Can I do anything like that from the command line in OS X?  (I tried variations, like "ls //servername/directory" and did not get anything that worked.)

    There's no support for the Microsoft \\ syntax within the various tools of the command line environment.
    You're using a CIFS/SMB/Samba remote access path and particularly attempting to use a feature and syntax that is specific to Microsoft Windows (that \\ stuff) and its MS-DOS command mode.
    It's probably easier (and likely best) to use Filezilla or another similar tool to remotely access the share via ftp or sftp transport or another path when the GUI is required, or to use the command-level tools scp or sftp (which can be configured for a no-password login), or to use ftp (wildly insecure, and largely incompatible with firewalls), or to simply mount (or automount) the share. 
    Alternatively and if the other system involved here is remote and you're being blocked by a gateway firewall, then you might look to use a VPN into the remote network gateway or (less desirably) a VPN that's been port-forwarded into the remote server.  That can allow the volume to be mounted, and also avoids exposing CIFS/SMB/Samba to the Internet.
    The most direct command-line analog to what you're trying here is the smbclient command-line tool. For details and features and limitations of that, issue the smbclient --help command, or see the available Samba smbclient documentation. Though it can do what you want, I'd still look to use scp or sftp (directly at the command line, or via a GUI client) or a VPN here.
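    A couple of smbclient invocations of the kind described (server, share, and user names are placeholders):

```shell
# list the shares a server offers, then do a one-shot directory listing
# without mounting anything (servername/directory/username are examples)
smbclient -L //servername -U username
smbclient //servername/directory -U username -c 'ls'
```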

  • [Solved] sftp-like with directories support?

    Often when I need to pull data quickly from one box to another, I use sftp (because I have ssh configured, but no ftp/nfs/...).
    However, sftp does not support directories, so what I usually do (if I don't know the exact path/filenames) is:
    1) ssh to the box
    2) cd to the right path and put it in my selection buffer
    3) log out
    4) scp -rp <box>:<path> .
    Which is, of course, not all that efficient.
    Is there a way to do directory transfers with sftp? Directory tab completion would be useful too.
    Last edited by Dieter@be (2009-06-13 12:16:48)

    Xyne wrote:Take a look at sshfs too. I've found that to be very useful when transferring many files over ssh. It's available in extra.
    Indeed... I don't know how I forgot that. I used NFS before to map my server's data to my laptop, but now it's just sshfs. Quite easy. If you cache your SSH keys you can mount them easily (that is, with fstab fiddling and some pipe menu magic in Openbox <3).
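    One note for later readers: OpenSSH 5.4 and newer add recursive transfers to sftp itself, so the ssh/cd/logout/scp dance isn't needed there (host and path are placeholders):

```shell
# recursive download with a modern OpenSSH sftp (5.4+)
sftp -r user@box:/path/to/dir .
```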

  • Best way to mount one computer's HD on another?

    I've got a mac laptop (Air) and a mac desktop (iMac). I'd like to keep some of the files between the two in sync, and I have a 3rd party software program that will synchronize the contents of two specified folders. The trouble is, I'm not sure how to mount the laptop's HD on the desktop. What is the best way to do this, given that enabling "file sharing" from the system preferences doesn't seem to accomplish the task, and OS X does not seem capable of mounting an SFTP-accessible volume. Any advice?
    Thanks!

    File Sharing is the most obvious way. Please see this page for the method:
    http://homepage.mac.com/rfwilmut/notes/sharing.html
    When you establish the connection you will only have access to the Public Folder, but you can log in to the remote computer using the usual account name and password.
    Another method is to use FireWire Target Mode: connect the two computers via a 6-pin FireWire cable (the 'target' one must be off) and boot the target machine while holding the T key down. Full instructions here:
    http://support.apple.com/kb/HT1661
    In order to do actual syncing you will need a program capable of doing this over a network connection: FoldersSynchronizer can do this, though I don't know whether it can do it over FireWire Target mode - it should do, I would have thought.

  • SFTP again

    I have read the questions and answers about sftp mounts in the Finder, but I think people miss the point. Mounting a remote filesystem and working directly on its files is very different from, and a lot more desirable than, mirroring it on one's own machine. I work on 4 different machines at different sites and would like to work on one copy of some important files, but the only permissible remote access to my server at work is via sftp. I have a great sync program (Yummy FTP) that lets me sync this way, but it's hard to keep all of these machines manually synced, and accidents happen. It would be far better just to mount the remote filesystem like you can do with ftp, smb, nfs, afs, etc.
    So I guess this amounts to a plea to Apple to add one more important protocol to Finder...
    Thanks
    JEH
    iMac 20   Mac OS X (10.4.6)  

    Hello Justin:
    You may wish to post your enhancement suggestion in the OS X feedback area. Although Apple employees read these forums, the feedback route will get your suggestion to the developers.
    Barry

  • OES2sp2 miggui Failed to mount volume

    Trying to migrate NW65SP8 (DS8.8.4) to OES2sp2.
    miggui is running on the OES2sp2 server which is patched up to date using rug.
    Source and target servers are in the same container in the DS.
    I can define the source and target servers and get to the point where I want to configure the migration of volumes, but then I get a failure. The log (minus the timestamps for clarity) reads:
    ERROR - FILESYSTEM:volmount.rb:Command to mount source: ruby /opt/novell/migration/sbin/volmount.rb -s 138.37.100.118 -a "cn=joshua,ou=sys,o=qmw" -c cp437 -f "/var/opt/novell/migration/cc18/fs/mnt/source" -m -t NW65
    INFO - FILESYSTEM:volmount.rb:*****************Command output start**********************************
    INFO - FILESYSTEM:volmount.rb:
    INFO - FILESYSTEM:volmount.rb:Information: ncpmount using code page 437
    INFO - FILESYSTEM:volmount.rb:Information: ncpshell command executed as: LC_ALL=en_US.UTF-8 /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118 --u="joshua.sys.qmw"
    INFO - FILESYSTEM:volmount.rb:Information: Mounting Volume = _ADMIN
    INFO - FILESYSTEM:volmount.rb:Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d /var/opt/novell/migration/cc18/fs/mnt/source/_ADMIN -v _ADMIN
    INFO - FILESYSTEM:volmount.rb:Fatal: Failed to mount volume _ADMIN for server 138.37.100.118
    INFO - FILESYSTEM:volmount.rb:Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume _ADMIN for server 138.37.100.118 .
    I tried executing what appears to be the offending command by hand:
    ruby /opt/novell/migration/sbin/volmount.rb -s 138.37.100.118 -a "cn=joshua,ou=sys,o=qmw" -c cp437 -f "K" -m -t NW65 -p password --debug
    which produces the output:
    Information: ncpmount using code page 437
    Information: ncpshell command executed as: LC_ALL=en_US.UTF-8 /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118 --u="joshua.sys.qmw"
    Information: Mounting Volume = _ADMIN
    Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d K/_ADMIN -v _ADMIN
    Fatal: Failed to mount volume _ADMIN for server 138.37.100.118
    Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume _ADMIN for server 138.37.100.118 .
    Information: File K/_ADMIN/Novell/Cluster/PoolConfig.xml does not exist. No cluster resources attached
    Information: Mounting Volume = SYS
    Information: Executing command: /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d K/SYS -v SYS
    Fatal: Failed to mount volume SYS for server 138.37.100.118
    Information: unmounting all mounted volumes
    Information: Executing command:/opt/novell/ncl/bin/nwlogout -f -s QMWCC18
    Cannot perform logout: Cannot connect to server:[QMWCC18]. Error:NWCCOpenConnByName:
    Information: Command Output:
    Information: Executing command: rm K/*
    rm: cannot remove `K/*': No such file or directory
    Fatal: SystemCallError, Unknown error 1008 - Failed to mount volume SYS for server 138.37.100.118 .
    SLP shows both servers, they are on the same network as each other, and the firewall is turned off.
    Does anyone have any idea what may be causing the mount failure or what error 1008 might be?
    Tim

    cgaa183 wrote:
    > Thankyou all for helpful suggestions, I'll go through them.
    >
    > 1. novfsd
    > /etc/rc.d/novfsd status
    >
    > running
    >
    > /etc/rc.d/novfsd restart
    > Stopping Novell novfs daemon...
    >
    > done
    > Starting Novell novfs daemon...
    >
    > No Config File Found - Using Defaults
    > novfsd: Novell Client for Linux Daemon
    > Copyright 1992-2005, by Novell, Inc. All rights reserved.
    > Version 3.0.1-503
    >
    > done
    >
    > Nothing changes, so I don't think that was the problem in this case.
    >
Did you restart the server after installing OES2 SP2?
    > 2. try using migfiles
    > /opt/novell/migration/sbin/migfiles -s 138.37.100.118 -v APPS -V APPS
    >
    > The result:
    > Error:
    > Error: nbackup: Unable to retrieve the Target Service Name list from
    > 138.37.100.118
    > Error:
    > Fatal: nbackup command failed to execute: nbackup: Connection denied
    >
migfiles is not able to connect to the TSA on the source server. Either
TSAFS is not loaded on the source server, or migfiles is not able to locate
the TSA using SLP.
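If the TSA side is the problem, the backup modules can be loaded at the NetWare server console. This is a sketch using the standard SMS module names (smdr.nlm and tsafs.nlm); check the console output to confirm they load cleanly on your server:

```
load smdr
load tsafs
```

With both loaded, migfiles should be able to retrieve the Target Service Name list from the source server again.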
    > This might be informative to someone who knows a little more than me. I
    > wonder if I can call it with options to get more information. I have
    > just re-checked I can attach and mount volumes with the same username
    > from another system.
    >
    > 3. ruby version
    >
    > rpm -qa |grep ruby
    > ruby-1.8.4-17.20
    > rubygems-0.9.2-4.4
    > rubygem-needle-1.3.0-1.5
    > rubygem-net-sftp-1.1.0-1.5
    > ruby-devel-1.8.4-17.20
    > rubygem-net-ssh-1.0.9-1.5
    >
    > Doesn't appear to be an afflicted version and migfiles --help tells
    > me all about the options available.
    >
    >
    > As #2 looked interesting I thought I'd look at it a bit more. I turned
    > up TID 7001767. The reason migfiles failed for me was that SMDR and
    > TSAFS weren't loaded, at least I don't get the error and file migration
    > appears to start now they are loaded, though it does appear to have
    > ceased copying files rather prematurely....
    >
    > Going back to volmount.rb I now realise it's using ncpfs, and not a
    > lightweight Novell client. So I tried mounting a volume by hand:
    >
    > qmwcc28:~ # ncpmount -S 138.37.100.118 -U joshua.sys.qmw K
    > Logging into 138.37.100.118 as JOSHUA.SYS.QMW
    > Password:
    > ncpmount: Server not found (0x8847) when trying to find 138.37.100.118
    >
    > A bit of a giveaway, but why doesn't it work?
    > It seems I need to use -A DNSname -S servername and then it works.
    > The next important bit seems to be
    > /opt/novell/ncpserv/sbin/ncpshell --volumes --ip=138.37.100.118
    > --u="joshua.sys.qmw"
    > which executed by hand lists volumes correctly with the output:
    > Please enter your password ?
    > [0] SYS
    > [1] _ADMIN
    > [2] APPS
    > 3 NCP Volumes Mounted
    >
    > "ncpshell" appears to be from Novell's client for Linux so I don't
    > understand why we'd be trying to use that if we're using ncpfs, and we
    > already know which volumes are mounted by looking in the folder in which
    > we mounted the server using ncpfs. AFAICS it is used to invoke NLMs on
    > NetWare remotely using OES so its not testing anything we don't already
    > know.
    >
    > This takes me inevitably to "nwmap". "nwmap" is also from Novell's
    > client for Linux so maybe the ncpfs stuff is unnecessary.
    > /opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d sys -v SYS
    > produces:
    > map: server not Found:138.37.100.118 - drive sys not mapped
    >
ncpmount uses UDP by default. Add the option -o tcp to the ncpmount
command and the mount should work.
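Putting the two findings together (resolve by DNS name with -A, force TCP with -o tcp), the corrected invocation would look something like the following sketch. The DNS name and mount point are placeholders; substitute your own:

```shell
# Sketch only: combine -A (DNS name, per the earlier finding) with
# -o tcp (ncpmount defaults to UDP). "qmwcc18.example.net" and /mnt/K
# are placeholders for the real DNS name and mount point.
cmd='ncpmount -o tcp -A qmwcc18.example.net -S QMWCC18 -U joshua.sys.qmw /mnt/K'
echo "$cmd"    # run the printed command as root on the migration host
```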
    > nwmap doesn't ask for a username. Maybe I'm wrong, but as far as the
    > Novell client goes I don't think it can have attached or logged into the
    > source server (ncpfs having a different connection table and ncpshell
    > asking the remote server to return the answer). I can't actually see
    > where volmount.rb is calling nwmap at the moment, but the results I get
    > by calling it at the command prompt with the same options given in the
    > log are the same.
    >
If there is an existing connection to the same tree, nwmap does not ask
for a user name. Use the command "nwconnections" to check the existing
connections, and use nwlogout to log out of a connection. Also check
/var/opt/novell/nclmnt/root/ for any stale entries.
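As a quick cleanup pass before retrying nwmap, the checks above can be run in sequence. The snippet below is guarded with command -v so it is safe to paste on a host without the Novell Client tools installed:

```shell
# Sketch: clear existing Novell Client connections before retrying nwmap.
if command -v nwconnections >/dev/null 2>&1; then
  nwconnections        # list current tree connections
  nwlogout -a          # log out of all of them
  status="connections cleared"
else
  status="novell client tools not on this host"
fi
# Stale entries under this directory can also block a fresh login:
ls /var/opt/novell/nclmnt/root/ 2>/dev/null || true
echo "$status"
```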
    > I've tried logging in using nwlogin, but that fails too saying:
    > Cannot perform login: The system could not log you into the network.
    > Make sure that the name and connection information are correct, then
    > type your password again.
    >
    > ncl_scanner -T does list NDS trees, but I suspect it's only querying an
    > SLP server and nothing more useful. ncl_scanner -S produces:
    > INFORMATION FOR SERVER [QMWCC18] :
    > Server Name : [QMWCC18]
    > Bindery : [FALSE]
    > eDirectory Context : []
    > should it show a context?
    >
    > Looking at the files of the Novell client on the system, it looks a
    > rather cut down set with no config files. Even having introduced
    > protocol.conf the situation is not improved, but I'm now sure the
    > problem lies in this area. Possibly a full client installation is
    > required, or maybe there is something else wrong which is preventing the
    > client from working correctly. namcd is looking suspect.
    >
    >
You do not need all the files for the Novell Client. You can log out of
all connections using the command "nwlogout -a" and try the nwmap command
again:
"/opt/novell/ncl/bin/nwmap -s 138.37.100.118 -d
/var/opt/novell/migration/cc18/fs/mnt/source/SYS -v SYS"
It looks like the Novell Client failed to resolve the IP address. You can
configure additional name-resolution methods as follows:
Create the file /etc/opt/novell/ncl/protocol.conf containing:
Name_Resolution_Providers=NCP,SLP,DNS
then restart the novfsd daemon using the command "rcnovfsd restart".
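As a sketch, the whole step looks like this. CONF_DIR defaults to a scratch path for a dry run; on the real box point it at /etc/opt/novell/ncl (which needs root) and then restart the daemon:

```shell
# Sketch: write the Novell Client name-resolution config.
# On the real system set CONF_DIR=/etc/opt/novell/ncl and afterwards
# run "rcnovfsd restart" so novfsd picks up the setting.
CONF_DIR="${CONF_DIR:-/tmp/ncl-demo}"
mkdir -p "$CONF_DIR"
printf 'Name_Resolution_Providers=NCP,SLP,DNS\n' > "$CONF_DIR/protocol.conf"
cat "$CONF_DIR/protocol.conf"
```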
Do you see any NCP errors in the network packet trace?
    regards
    Praveen
