[Solved] Mounting an NTFS USB drive in KDEmod

I have an external USB drive that automounts in GNOME, with write permissions. In KDEmod it would not mount, and gave the error: "PermissionDeniedByPolicy mount-removable-extra-options no". I fixed this using this workaround... http://wiki.archlinux.org/index.php/HAL … utomounter. The drive now mounts, but I don't have write permission. Is there a way to get it to automount read-write in KDEmod? Would I have to change /etc/fstab for every external USB drive?
Last edited by benerivo (2009-03-22 10:49:32)

I'm not sure what I'm using to mount it. I have ntfs-3g installed and working for an internal drive. I'm hoping for a 'plug and write' solution, as happens for me in GNOME. Currently in KDE, with the fix I mentioned, it mounts but only with read permissions. I'm hoping to avoid an /etc/fstab solution, as I may need to use this computer in an environment where any NTFS USB external drive can be plugged in and written to.
Last edited by benerivo (2009-03-22 02:04:47)
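Until the automount side is sorted out, a one-off manual mount with the ntfs-3g driver gives read-write access without touching /etc/fstab. A minimal sketch, assuming the drive shows up as /dev/sdb1 (check dmesg | tail after plugging it in) and your uid/gid are 1000/100 (check with id):

```shell
# mount the NTFS partition read-write with ntfs-3g; uid/gid/umask make
# the files owned and writable by the regular user (values assumed)
mkdir -p /mnt/usbntfs
ntfs-3g /dev/sdb1 /mnt/usbntfs -o uid=1000,gid=100,umask=022
```

This needs root (or a suitably configured setuid ntfs-3g), so it's a fallback rather than the plug-and-write solution asked for.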

Similar Messages

  • Mounting and Unmounting USB drives [Solved]

    Hi there:
    I want a program, like in KDE, that will automatically mount my USB drives and let me click to unmount them. I am building machines and I need them to be user-friendly and fast. I am running Openbox 3.5 with an lxpanel right now. Also, how do you allow users to access the USB drives? Here is an example of an error. I am logged in as usera. I fire up Shotwell and plug in my camera. It detects it, I click on the camera, and it says:
    "Shotwell
    Unable to fetch previews from the camera:
    I/O problem (-7)"
    I log in as root it works perfectly. I hope this all makes sense.
    Last edited by mich04 (2011-10-27 23:36:30)

    AliWam wrote:
    How do you start openbox?
    I am using startx with this ~/.xinitrc
    exec ck-launch-session dbus-launch openbox-session
    and it works well.
    Have a look here.
    Nice!  I just installed Xfce on my new netbook (I use awesome WM and a different automounting solution for my other Arch box) and adding "dbus-launch" right before "startxfce4" on the line that reads:
    "exec ck-launch-session dbus-launch startxfce4"
    fixed my problem.
    I'm a little confused though, in the wiki it says that, "In case you are wondering, dbus-launch will be launched by the xinitrc.d code at the beginning of the file...."  https://wiki.archlinux.org/index.php/Xfce
    My .xinitrc is functionally the same as the example in the wiki, so I had thought that somehow dbus-launch wouldn't be necessary.  Oh well, it's working now and that's all that matters, thanks for your suggestion AliWam!
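For reference, the pattern that fixed it in both cases is just wrapping the session in ConsoleKit and D-Bus, so that polkit/udisks treat the logged-in user as allowed to mount devices. A sketch of the relevant line (the session command at the end varies by desktop):

```shell
# ~/.xinitrc -- wrap the session so mounting works for the local user;
# swap startxfce4 for openbox-session, startkde, etc. as appropriate
exec ck-launch-session dbus-launch startxfce4
```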

  • [SOLVED]How exactly are USB drives mounted?

    When I plug in a USB drive, Cinnamon brings up the device in Nemo so I can see its files. The location it's mounted at is /run/media/<username>/<long string I assume to be a UUID>/. How is it that I can mount it again at another directory and use it from there? Also, when I unmount it from the second directory, is it already safe to remove it or do I need to unmount it from the first one as well?
    Last edited by Hurricane (2014-01-01 18:52:25)

    ewaller wrote:How, "Exactly"?
    http://www.usb.org/developers/docs/usb2 … 070113.zip
    http://www.usb.org/developers/docs/devc … 9-2010.pdf
    This is actually along the lines of what I was looking for, lmao.
    nd7rmn8 wrote:cant you place the relevant info by uuid in your /etc/fstab, and then when it automounts, it should mount to that location specified.  twas what i did back in my ubuntu days...
    That's the way I had it on my 8.04. I remember having to set that myself though.
    xtraroot wrote:If it's mounted on more than one place, you'd have to unmount it in those places if you've done file transfers and want to make sure it's safe to remove. You only mentioned one place that it's mounted though?
    I had it mounted at that default location, and then I had also mounted it onto ~/mnt (I recreated /mnt in my home directory).
    tomk wrote:You seem to lack some understanding about your own system. If your USB devices are automounted when you plug them in, that's because you configured your system to do that. The fact that you are asking how that works should worry you IMO.
    I never configured my system to do that. When I used Arch and XFCE, I don't remember USB drives being automatically mounted. I had to do it myself. Cinnamon apparently does it automatically. What I wanted to know was why, if it does it automatically, does mount not give me an error about trying to mount the same partition again in a different place.
    Thanks for your answers everybody.
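To the original question: Linux has long allowed one block device to be mounted at several mount points at once; each mount is just another view of the same filesystem, and each is unmounted independently. It's only safe to remove the drive once every mount of it has been unmounted, since writes may still be buffered. A sketch of what was done here (device path assumed):

```shell
# mount an already-mounted filesystem at a second location (needs root)
mkdir -p ~/mnt
mount /dev/sdb1 ~/mnt    # second view; mount raises no error for this
# ... work in either location ...
umount ~/mnt             # the /run/media/<user>/... mount is unaffected
```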

  • [Solved] Delete files from usb drive connected to wireless router

    Hello,
    I have a home network with Samba for file sharing.
    I have a Netgear router with a Western Digital 2TB external drive attached to the router.
    I installed Samba according to the arch wiki and everything works fine but I cannot delete or access some files from the shared USB drive.
    I access the file as follows:
    $ cd /mnt/smbnet/WORKGROUP/READYSHARE/USB_Storage/
    Here are the permissions:
    $ ls -l /mnt/smbnet
    total 0
    drwxrwxrwx 2 root root 0 Dec 31 1969 DEANSNETWORK
    drwxrwxrwx 2 root root 0 Dec 31 1969 MYGROUP
    drwxrwxrwx 2 root root 0 Dec 31 1969 WORKGROUP
    $ ls -l /mnt/smbnet/WORKGROUP/
    total 0
    lrwxrwxrwx 1 root root 13 Dec 31 1969 READYSHARE -> ../READYSHARE
    $ ls -l /mnt/smbnet/WORKGROUP/READYSHARE/
    total 0
    drwxrwxrwx 2 root root 0 Dec 31 1969 USB_Storage
    $ ls -l /mnt/smbnet/WORKGROUP/READYSHARE/USB_Storage/
    total 0
    drwxr-xr-x 2 edgar users 0 Jul 22 21:04 BACKUPS
    drwxr-xr-x 2 edgar users 0 Jul 15 10:43 WD
    $ ls -l /mnt/smbnet/WORKGROUP/READYSHARE/USB_Storage/BACKUPS/
    total 0
    drwxr-xr-x 2 edgar users 0 Jul 11 22:36 Laurie_Laptop
    drwxr-xr-x 2 edgar users 0 Jul 22 21:04 TRASH
    drwxr-xr-x 2 edgar users 0 Jul 12 09:50 Work_Laptop
    If I try to delete the BACKUPS directory, I get errors:
    $ rm -r BACKUPS/
    rm: cannot remove ‘BACKUPS/Laurie_Laptop’: Permission denied
    rm: cannot remove ‘BACKUPS/TRASH/Documents/C++Prog/2541/BibleTriviaTest’: Permission denied
    rm: cannot remove ‘BACKUPS/TRASH/Documents/C++Prog/2541/HouseTest’: Permission denied
    rm: cannot remove ‘BACKUPS/TRASH/Documents/C++Prog/Bitmanip’: Directory not empty
    rm: cannot remove ‘BACKUPS/TRASH/Documents/C++Prog/CurrentMusicPrograms/Musc110’: Directory not empty
    etc...
    Attempting to change the mode doesn't work:
    $ chmod 777 BACKUPS/
    [edgar@arch USB_Storage]$ ls -l
    total 0
    drwxr-xr-x 2 edgar users 0 Jul 22 21:04 BACKUPS
    drwxr-xr-x 2 edgar users 0 Jul 15 10:43 WD
    Thanks for any help you can provide.
    Last edited by mrgar (2014-07-24 02:11:56)

    Here is the output:
    $ mount | grep /mnt
    smbnetfs on /mnt/smbnet type fuse.smbnetfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=100)
    The USB drive is a Western Digital My Book 2 TB drive, attached as NAS storage to the router, which shares it.  I installed Samba on the Arch machine so I can access and back up various home computers onto the NAS.
    Here is some info I just found on the Western Digital Support site:
    The Western Digital units mentioned above use a proprietary file system and cannot be reformatted as FAT32, NTFS, or a Mac File System.
    The file system on the WD My Cloud EX2, WD My Cloud EX4, WD My Cloud, WD My Book Live, WD My Book Live Duo, WD ShareSpace, WD TV Live Hub, My Net N900 Central, WD My Book World Edition, and WD NetCenter hard drives supports access from Windows, Mac and most Linux based computer systems through a SAMBA network sharing connection.
    I thought I would mention that I can access the NAS and write, read and execute from the drive.
    $ cd /mnt/smbnet/WORKGROUP/READYSHARE/USB_Storage/
    [edgar@arch USB_Storage]$ mkdir Test
    [edgar@arch USB_Storage]$ ls
    BACKUPS Test WD
    [edgar@arch USB_Storage]$ cd Test
    [edgar@arch Test]$ vim testfile.txt
    [edgar@arch Test]$ ls -l
    total 1
    -rwxr--r-- 1 edgar users 16 Jul 23 07:47 testfile.txt
    [edgar@arch Test]$ cat testfile.txt
    This is a test.
    [edgar@arch Test]$ rm testfile.txt
    [edgar@arch Test]$ cd ..
    [edgar@arch USB_Storage]$ rm -r Test/
    [edgar@arch USB_Storage]$ ls
    BACKUPS WD
    However I want to copy large files with cp -ruv from my arch machine to the NAS.
    This is where the trouble lies.  Thanks for your help.
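One thing worth knowing is that the permission bits shown through smbnetfs are synthesized by the FUSE layer, so chmod on them often has no effect on the server side. An alternative is mounting the share directly with the kernel CIFS client, where the local ownership and modes are set as mount options. A sketch, assuming guest access and the share path seen above (host/share names taken from the listing, mount point assumed):

```shell
# mount the router's share with cifs instead of smbnetfs; file_mode and
# dir_mode control the permissions seen locally (needs root, cifs-utils)
mkdir -p /mnt/nas
mount -t cifs //READYSHARE/USB_Storage /mnt/nas \
    -o guest,uid=edgar,gid=users,file_mode=0664,dir_mode=0775
```

Whether deletes succeed still depends on what the router's own Samba server permits, but this removes the smbnetfs layer from the equation.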

  • MacBook Pro Retina hangs while opening mounted share or USB drive

    I have a Macbook Pro Retina that I just updated to 10.8.2 over the past weekend.  Now, when I try to open a mounted SMB share or USB drive, the system hangs completely!  I had the system set to auto-reconnect my network shares, so when this started happening the system would hang right after login.  I was able to get in with Safe Boot and disable the auto-connect, so at least I can work now, but not being able to use a flash drive is really getting on my nerves.  Any ideas?

    I just ran a few tests to confirm my suspicion that this is a sleep problem for me and found that my rMBP will restart as soon as it has entered the sleep cycle.  Set it to auto sleep after 10 minutes and it will restart at 10 minutes, set it to sleep at 3 minutes and it will restart at 3 minutes.  Command it to sleep and it will restart.
    So for me at least there is a workaround, and that is to set sleep to "never".  The display still sleeps but my Mac keeps running.  So that's not a solution, but it is a workaround, and if I want it to shut down and not restart I either hold the power button or take off the power cable and choose shut down from the dropdown menu (it won't restart on battery power, it'll just shut down).
    I hope that helps someone else!  And I hope it helps the engineers diagnose and actually solve this long-running problem for us all.

  • Mounting 2 GB usb drive in solaris 10

    Hi Folks,
    I have 2 GB usb device.
    When I format the device in Windows and mount it in Solaris, I get the output below:
    #mount -F pcfs /tmp/SUNWut/units/IEEE802.080020f8112c/dev/dsk/disk3s2:c /home/es161022/Documents/usb
    mount: /tmp/SUNWut/units/IEEE802.080020f8112c/dev/dsk/disk3s2:c is not a DOS filesystem.
    The file system present is DOSBIG (see the fdisk output below). If I format with option C or D (from the fdisk output), I can mount the device and read and write files, but after this I can't access the device in Windows; I have to format it again in Windows.
    # fdisk /tmp/SUNWut/units/IEEE802.080020f8112c/dev/rdsk/disk3s2
    Total disk size is 247 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active DOS-BIG 0 61 62 25
    SELECT ONE OF THE FOLLOWING:
    1. Create a partition
    2. Specify the active partition
    3. Delete a partition
    4. Change between Solaris and Solaris2 Partition IDs
    5. Exit (update disk configuration and exit)
    6. Cancel (exit without updating disk configuration)
    Select the partition type to create:
    1=SOLARIS2 2=UNIX 3=PCIXOS 4=Other
    5=DOS12 6=DOS16 7=DOSEXT 8=DOSBIG
    9=DOS16LBA A=x86 Boot B=Diagnostic C=FAT32
    D=FAT32LBA E=DOSEXTLBA F=EFI 0=Exit?
    It would be great if any of you could provide a solution to access the drive in both Windows and Solaris.
    Is there any other way to mount a DOSBIG filesystem in Solaris?
    Thanks,
    Enz.
    +919880099311

    It should be the same. For instance, with a USB keychain (formatted pcfs):
    mkdir /usbdrive
    mount -F pcfs /dev/dsk/c3t0d0p1 /usbdrive
    On this particular laptop the second USB port was c4t0d0p1, so:
    mount -F pcfs /dev/dsk/c4t0d0p1 /usbdrive
    The numbering might differ for each USB port, and you have to specify the filesystem of the USB drive, but it should work.
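For a partition created by Windows (as in the original post), Solaris pcfs can also address a DOS partition inside the disk's fdisk table via a suffix on the p0 (whole-disk) device. A sketch, with the controller/target numbers assumed; rmformat helps find the right device path:

```shell
# list removable media to find the device, then mount the first DOS
# partition (the ':c' suffix on the p0 device) as pcfs
rmformat
mount -F pcfs /dev/dsk/c3t0d0p0:c /usbdrive
```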

  • [SOLVED] BIOS not detecting USB Drives after i installed Arch

    SOLVED: In BIOS under 'Advanced' disable 'Fast BIOS Mode'.
    Original post:
    Hi there!
    I'm fairly new to the Arch (and Linux) scene and just set up my Samsung 530U3B with a brand-new installation of Arch. Everything went fine and the system is running flawlessly. Almost.
    I just happened to notice that the BIOS is not checking for USB drives anymore; it somehow just skips that part. If I enter the boot selection, the bootable USB drive won't show up. Neither does it show up in the BIOS setup. I tried to boot with the same USB drive that I used to install Arch just yesterday; the live media does not boot, and the drive doesn't even blink. The boot priority is set to USB drive first, and the boot selection screen (hammering F10 during boot) doesn't give me any option other than booting from the normal SSD. Also, prior to the Arch installation (it came with W7 installed) the BIOS screen would stay up for about 2 seconds and then proceed to boot; this doesn't happen anymore. When I press the power button I'm greeted by the GRUB bootloader in about 1 second.
    I tried 2 different USB drives, both with the Arch ISO (the same one I used to install the currently running OS, downloaded yesterday). The drives do show up once my Arch is booted (they start to blink just after I hit enter in GRUB). Also, both drives boot without a problem on my desktop computer.
    Anyone got any ideas?
    I don't really know what kind of information would be helpful, so I'm just dumping stuff I know:
    BIOS: Phoenix SecureCore Tiano Version 04XK
    Machine: Samsung 530U3B
    SSD: Samsung MZ7PC128HAFU-000
    ISO used to install (checksum ok): archlinux-2012.07.15-netinstall-dual.iso
    Last edited by araex (2012-07-26 16:57:47)

    DSpider wrote:
    The boot priority is set to usb drive first
    Welcome to the forum.
    If you just set the USB stick to be first in the boot order, the next option (from the list) takes its place the moment you unplug it. You need to set the BIOS to boot from your equivalent of "Removable Devices" first, "Internal Drives" second and "Optical Drives" third.
    The USB stick doesn't show up at all. I set the priority as follows:
    1. USB HDD
    2. USB FDD
    3. USB CD
    4. SATA CD
    5. SATA HDD
    6. NETWORK
    I used that same configuration to install Arch. USB doesn't seem to initialize at all. Also, if I enter "usb" at the GRUB command line, no devices are listed.
    Thanks!
    UPDATE:
    I just figured it out. In my BIOS under 'Advanced' the 'Fast BIOS Mode' option was enabled; once I disabled it, everything went fine. I don't remember changing that option. Silly me.
    Sorry for taking your time
    Last edited by araex (2012-07-26 16:56:30)

  • [SOLVED] Encrypted root on USB drive problem

    Hi,
    I have encrypted root on external USB harddrive. On one machine it works just fine, LUKS ask for password and system starts.
    On the second it does not work. I tried nearly all possible combinations of modules and hooks. I can also access my USB drive if I use break=y. I am using the current kernel & utils, and I definitely use the right paths...
    The machine is a Dell with an Intel chipset. My USB drive is laid out like this:
      /dev/sdb1 - big fat32
      /dev/sdb2 - ext2 boot with Grub, kernel and initrd image
      /dev/sdb3 - root fs, reiserfs encrypted with LUKS
    I did a little debugging and it seems that the encrypt hook was launched, but it did not do anything. Before I dig deeper, I wonder whether someone has had the same problem or can give me advice.
    Kernel panic screenshot:
    PS: how can I put busybox to initrd image? echo * sucks.
    Thanks
    Last edited by Trained.Monkey (2007-10-10 09:42:13)

    I solved it. The problem is that encrypt runs BEFORE the USB drive is fully initialized, so the encrypted partition is not found and not used.
    Solution:
    Put sleep 5 at the beginning of the encrypt hook. You must also add the sleep binary via the hook's installer.
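The fix boils down to patching the copy of the encrypt hook that gets built into the image. A rough sketch, with the hook path and function layout assumed (both varied between mkinitcpio versions of that era):

```shell
# /lib/initcpio/hooks/encrypt (excerpt, hypothetical layout):
run_hook() {
    sleep 5   # give the USB drive time to finish initializing
    # ... original encrypt logic (cryptsetup luksOpen etc.) follows ...
}
```

The sleep binary also has to be pulled in by the hook's installer so it actually exists inside the image, and the image regenerated with mkinitcpio afterwards.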

  • [SOLVED] Problem with external USB drive started yesterday

    My system freezes for about 65 seconds at the login screen. After 65 seconds, I can login and everything is normal afterwards.
    I discovered that I can eliminate the freeze by disconnecting the external USB drive from the PC. Here are the error messages from kernel.log when the drive is connected:
    Oct 15 23:10:14 localhost kernel: [ 17.035091] usb 1-1.3: device descriptor read/64, error -110
    Oct 15 23:10:29 localhost kernel: [ 32.155955] usb 1-1.3: device descriptor read/64, error -110
    Oct 15 23:10:29 localhost kernel: [ 32.325394] usb 1-1.3: new high speed USB device number 4 using ehci_hcd
    Oct 15 23:10:31 localhost kernel: [ 34.045126] MediaState is connected
    There is supposed to be a kernel boot option (irqpoll) to prevent this from occurring. However, that option seems to no longer work for kernels after 2.6.39. If I try using it with Arch, the system will not boot, saying all the CPUs failed.
    I also don't understand why this just started happening yesterday, as the USB drive has been connected to the PC longer than I've been using Arch. Also, my kernel has not been updated since I installed Arch on Oct 8.
    Does anyone know another fix or workaround for this problem?
    Thanks,
    Tim
    Last edited by ratcheer (2011-10-17 01:30:08)

    Problem solved
    I feel stupid, but I also feel better. I turned off the PC, disconnected the power from the USB drive for about 15 seconds, reconnected the power, rebooted, removed the irqpoll kernel option, and Ubuntu started, perfectly. I need to learn that the USB drive just goes freaky sometimes and remember to do this. But, this is only the second time I've had to do this in months of use.
    Tim

  • [Solved] Mount LUKS encrypted hard drive at boot

    Hi,
    This is driving me nuts. I'm getting angry to be honest.
    I encrypted my brand new WD portable hard drive with LUKS + dm-crypt and I can now normally map and mount it with the following commands:
    sudo cryptsetup luksOpen /dev/sdc1 WesternDigital
    [Enter Passphrase]
    sudo mount /dev/mapper/WesternDigital /media/WesternDigital
    I would like to map and mount it at boot time (where I should be prompted for the passphrase), so I edited:
    /etc/crypttab
    WesternDigital /dev/sdc1 none luks
    and:
    /etc/fstab
    /dev/mapper/WesternDigital /media/WesternDigital ext4 defaults,noauto,noatime 0 0
    During boot I get some errors regarding the decrypting or mapping of WesternDigital that fails but it's too fast to note down something (and, as you probably know, there's no known way to log boot messages on Arch...)
    After boot if I try to manually mount /media/WesternDigital I get a message saying /dev/mapper/WesternDigital does not exist.
    So I guess the problem is in the mapping phase and thus in the /etc/crypttab file.
    I can't find anything in the internet but maybe I'm missing something very basic (a daemon, a module?).
    Any help is indeed very appreciated, thank you.
    Last edited by rent0n (2010-09-24 15:27:16)

    Ok, it's solved. I tried many different configurations of /etc/crypttab, /etc/fstab, /etc/mkinitcpio.conf /etc/rc.conf /boot/grub/menu.lst and I finally found the right setup.
    I'm not sure of what was wrong in the first place so I'll just post my current working configs for future reference.
    /boot/grub/menu.lst
    Doesn't need to be edited at all (ignore the above post).
    /etc/rc.conf
    You don't need to add any module here because the dm-crypt and dm-mod modules are loaded thanks to the encrypt hook.
    /etc/mkinitcpio.conf
    The HOOKS line should include usb, usbinput (probably) and encrypt. usb must precede encrypt that must precede filesystems:
    HOOKS="base udev autodetect pata scsi sata usb usbinput keymap encrypt filesystems resume"
    /etc/crypttab
    WesternDigital /dev/sdX ASK
    Do not insert 'luks', 'retry=X' or other kind of options (you can find this kind of options in many tutorials and howtos). That was one of my problems I guess.
    /etc/fstab
    /dev/mapper/WesternDigital /media/WesternDigital auto defaults,noatime 0 0
    Note
    I'm not sure if this has been helpful or not... however I was able to get it to work after following the advice found here.
    Cheers,

  • [SOLVED]mounting cdrom and usb devices doesn't work

    Hello,
    I have trouble mounting my USB devices. Automount doesn't work and I can't mount them manually, even as superuser. Here's the output of the mount command:
    mount: wrong fs type, bad option, bad superblock on /dev/sdc,
    missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try
    dmesg | tail or so
    I can mount the CD-ROM manually, but I can't do the same for my USB stick and external hard drives; it always complains as above, regardless of the filesystem. My devices work under Windows and other Linuxes, so I know hardware is not the problem.
    I'm using LXDE with PCManFM.
    If you need any other files, just ask.
    Thanks
    Last edited by the gray (2009-03-16 20:15:18)

    The above was the error when I tried to mount any USB device with "mount -t <type> /dev/sdc /media/mountpoint", and I couldn't mount any USB device as root from the console. When I tried to mount devices using PCManFM it just popped up some empty dialogs, and I couldn't find any mention of the "IsCallerPriviliged failed" error, so I (wrongly) presumed it wasn't that. I did try some of the other fixes mentioned in the forum with no success, but adding exec ck-launch-session startkde to my .xinitrc fixed the issue.
    Thanks again
    Last edited by the gray (2009-03-16 20:17:53)

  • I just want to mount my external USB drive (connected to AEBS)!!

    I'm sure there's a post out there, but I have yet to find it after repeated searches for this topic. I've got to be missing something very very simple. I'll try to be complete on the set-up, please let me know if I've missed any configuration...
    MacAlly FW/USB 2.0 enclosure, connected via USB (powered) hub to AEBS. AEBS is running 7.4.1. I can "see" the drive under the "Disks" option in the Airport Utility:
    http://images46.fotki.com/v1506/photos/4/48510/325829/disk1-vi.jpg
    However, I can't seem to mount/access the disk. (Yes I do have Finder->Prefs->Show Items:Servers checked)
    The disk will mount when connected directly to my MBP via FW:
    http://images43.fotki.com/v1505/photos/4/48510/325829/DISK4-vi.jpg
    I believe I have FileSharing correctly configured (I've since changed from AE pass to disk password):
    http://images46.fotki.com/v1506/photos/4/48510/325829/Disk2-vi.jpg
    Lastly, my AEBS (named "office") does appear under the SHARED tab, and allows me to connect using my disk password. However, there's nothing there...?
    http://images50.fotki.com/v1512/photos/4/48510/325829/disk5-vi.jpg
    I must be missing something very easy, feels like I've almost got it! Any help greatly appreciated. Thanks.
    Andy

    Andy, your post did not indicate how your drive was formatted.
    AEBS wants to see a drive format of HFS+. FAT32 should work as well.
    Might be something to check if you have not already done so.

  • Mounting USB drive as regular user (with ntfs-3g)

    Hello. First of all, I'm not asking anyone to do the homework for me; rather, I'm hoping someone can help me understand why I can't get this to work.
    I spent the last night trying to figure out how to mount a USB drive as a regular user, using ntfs-3g. I read the related wiki entries and researched quite a lot in the forums. I came up with this:
    fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    devpts /dev/pts devpts defaults 0 0
    shm /dev/shm tmpfs nodev,nosuid 0 0
    #/dev/cdrom /media/cd auto ro,user,noauto,unhide 0 0
    #/dev/dvd /media/dvd auto ro,user,noauto,unhide 0 0
    #/dev/fd0 /media/fl auto user,noauto 0 0
    /dev/sda1 / ext3 defaults,noatime 0 1
    /dev/sda2 /home ext3 defaults,noatime 0 2
    /dev/sda3 swap swap defaults 0 0
    /dev/sdb1 /mnt/usb ntfs-3g noauto,uid=0,gid=0,noatime,umask=000 0 0
    I created an ntfsuser group, added my user to that group, and trimmed permissions on the ntfs-3g executable (link in this post). That allows me to mount the partition as root and read/write as a regular user. It works, so (I think) no big deal here.
    However if I add user to the mount options the following error shows up:
    Mount is denied because setuid and setgid root ntfs-3g is insecure with the
    external FUSE library. Either remove the setuid/setgid bit from the binary
    or rebuild NTFS-3G with integrated FUSE support and make it setuid root.
    Please see more information at http://ntfs-3g.org/support.html#unprivileged
    What bugs me the most is that I don't understand why I can't mount as a regular user when the user option is set in the fstab. Shouldn't that allow regular users to mount and unmount? It's not like I'm mounting and unmounting USB drives every 5 minutes, but I would like to get this done because I know it can be done.
    Sorry for asking such a trivial question, but I sense that I'm missing something really stupid and I just can't figure out what it is.

    Beware of the double post! (+1)
    Ok, I decided I'd get this to work, although the method and the implications it could have might not seem pretty to some. There are certain conditions for a user to be able to mount any NTFS volume with ntfs-3g; I will list them here:
    1. ntfs-3g with integrated fuse support. You'll get this by:
        1A. Removing ntfs-3g and fuse from your system if you have them installed as separate packages, so do this as root:
    pacman -Rn ntfs-3g
    pacman -Rn fuse
    Now you can install the new package.
        1B. Get a modified version of the PKGBUILD found in the AUR link I mentioned previously; here's mine:
    # Maintainer: Gula <gulanito.archlinux.org>
    # Slightly modified by anderfs
    # Don't forget to setuid-root for the ntfs-3g binary after you install this
    pkgname=ntfs-3g-fuse-internal
    pkgver=2010.5.16
    pkgrel=1
    pkgdesc="Stable read and write NTFS driver (with internal fuse support)"
    url="http://www.tuxera.com"
    arch=('i686' 'x86_64')
    license=('GPL2')
    depends=('glibc')
    conflicts=('ntfs-3g')
    makedepends=('pkgconfig')
    options=('!libtool')
    source=(http://www.tuxera.com/opensource/ntfs-3g-${pkgver}.tgz
    http://aur.archlinux.org/packages/ntfs-3g-fuse-internal/ntfs-3g-fuse-internal/25-ntfs-config-write-policy.fdi)
    sha1sums=('895da556ad974743841f743c49b734132b2a7cbc'
    '200029f2999a2c284fd30ae25734abf6459c3501')
    build() {
      cd "${srcdir}/ntfs-3g-${pkgver}"
      ac_cv_path_LDCONFIG=/bin/true ./configure --prefix=/usr \
        --with-fuse=internal --disable-static || return 1
      make || return 1
    }
    package() {
      cd "${srcdir}/ntfs-3g-${pkgver}"
      make DESTDIR="${pkgdir}" install || return 1
      ln -s /bin/ntfs-3g "${pkgdir}/sbin/mount.ntfs" || return 1
      install -m755 -d "${pkgdir}/usr/share/hal/fdi/policy/10osvendor"
      install -m644 "${srcdir}/25-ntfs-config-write-policy.fdi" "${pkgdir}/usr/share/hal/fdi/policy/10osvendor/" || return 1
    }
    Save this as PKGBUILD, preferably in an empty directory so it doesn't clutter things up when you build it.
        1C. Now go to the directory where you saved it and do this as a regular user:
    makepkg PKGBUILD
    After that's done, you'll get a package called ntfs-3g-fuse-internal-2010.5.16-1-i686.pkg.tar.xz, or something similar.
        1D. Install that package as root:
    pacman -U ntfs-3g-fuse-internal-2010.5.16-1-i686.pkg.tar.xz
    If all went well you now have ntfs-3g compiled with integrated fuse support.
    2. The ntfs-3g version must be higher than 1.2506. This is already covered; the package installed from AUR meets this requirement.
    3. The ntfs-3g binary must be setuid-root. To accomplish this, do the following as root:
    chown root $(which ntfs-3g)
    chmod 4755 $(which ntfs-3g)
    I used 4750 instead of 4755; I guess that last bit can be a matter of personal taste as long as it isn't something obnoxious like "7".
    4. The user must have the right access to the volume. Okay, this is the ugly part: volumes are owned by root and managed by the disk group with permissions brw-rw----, which means you have to add any user you want mounting this volume to the disk group.
        4A. So, do this as root:
    gpasswd -a [user] disk
    Where [user] is the name of whichever user you're adding to the disk group; do this for every user you want mounting this volume.
        Any users currently logged in will have to log out and back in for these changes to take effect; this most likely includes you.
        4B. Now that you logged back in, try this:
    groups
    One of the groups listed should be disk, if it's not there you didn't completely log out of all open sessions.
    5. The user must have the right permissions/access to the mount point. For a user to be able to mount something to a mount point, that user needs to have read permission (pretty self-explanatory), write permission (so the user can make any changes to the sub-structure of the mount point), and execute permission (so the user can change-dir to that mount point) to it. Mount points can be anywhere, so this really depends where you're mounting.
    In my case, I'm mounting these volumes on certain directories under /mnt/, for example /mnt/example. If you're mounting stuff there, you might as well take advantage of the fact your "mounting user" is already in the group disk, and do the following as root:
    chgrp disk /mnt/example
    chmod 774 /mnt/example
    Now users in the disk group will be able to manage these mount points.
    6. Mount it. That's it; you should now be able to mount NTFS volumes as an "unprivileged enough" user. Here's an example of what you'd have to put in /etc/fstab:
    UUID=XXXXYYYYXXXXYYYY /mnt/example ntfs-3g noauto,noatime,user,uid=0,gid=6,fmask=137,dmask=027,rw 0 0
    uid=0 means root will be the owner of this mount-point and anything in it after it's mounted. This is due to the fact that even though users might own their mountpoints and have rwx permissions on them, you might still not want them to write to the mounted ntfs volumes. Remove this if you want them to be able to write to the volume.
    gid=6 means this will be managed by the disk group in my system. Perhaps the disk group has a different id in your system, run "id root" to find out, as root usually is part of this group.
    fmask=137 means the owner (root) can do anything with files in this volume except execute them. Group members (disk) can only read files here, not create or execute them. Other users can't do anything in this volume.
    dmask=027 means the owner can do anything with directories (execute is needed to chdir into them), group members can't write directories but can read or execute in them (once again, needed by 'cd'), and other users still have no access.
    You can use whichever fmask and dmask makes sense to you, or use an umask instead.
    Last edited by anderfs (2010-07-15 11:34:48)
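The masks above subtract permission bits from a base mode (0666 for files, 0777 for directories), so the resulting modes can be checked with a little shell arithmetic. For the fmask=137/dmask=027 example:

```shell
# effective mode = base & ~mask; base is 0666 for files, 0777 for dirs
printf 'files: %o\n' $(( 0666 & ~0137 ))   # 640 -> rw-r-----
printf 'dirs:  %o\n' $(( 0777 & ~0027 ))   # 750 -> rwxr-x---
```

This is a quick way to sanity-check any fmask/dmask (or umask) combination before putting it in fstab.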

  • Cannot mount external USB drive with FDisk_partition_scheme

    Due to some recent Mac issues in my lab, we updated one of the computers from Snow Leopard 10.6.8 to Mavericks 10.9.4 yesterday. Anyhow, this seems to have erased the Mac's ability to read or mount several identical NTFS-formatted USB drives (WD My Passport Ultra 500GB USB3.0). It could do it yesterday under OS X 10.6.8, but not today (OS X 10.9.4). The drives still work fine on other OS X 10.6.8, Windows 8.1 and Ubuntu 14.04 computers, so this appears to be a problem with the Mavericks 10.9.4 operating system itself rather than with the drives, which work flawlessly on every other computer I've tried. I want to move data off the computer to create backups on the USB drives (no, I don't want to create an online backup or buy a new set of USB drives).
    diskutil can see the drives, but cannot verify or repair them as they lack a GUID (GPT) partition scheme. I'm assuming this is the root of the problem. Manually mounting the drives does not work. Installing the ntfs-3g driver also did not work (I was using this to write to the NTFS drives until we upgraded OS's). Installing the WD Passport drivers for OSX also did not work. How do I get the mac to mount these drives?
    I could reformat one USB drive and then copy to the others, but this would be extremely inconvenient and take days of copying files back and forth (and I would prefer to fix the mac rather than go through this with every NTFS drive I own). If I did reformat, FAT32 would not be suitable, as that format is unable to handle the extremely large sequencing datasets I need to transfer. I'm not super familiar with hard drive formats, but in the event we cannot get Maverick to work, could someone suggest a format that can handle large files (the largest I need to move is ~60GB) and is compatible with Windows 8.1 / OSX 10.9.4 / Ubuntu 14.04 as well?
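    FAT32's unsuitability here comes down to arithmetic: it records each file's size in a 32-bit field, so no single file can exceed 4 GiB minus one byte, far short of the ~60GB files mentioned above. A quick sketch:

```shell
# FAT32 stores a file's size in a 32-bit field, capping any single
# file at 2^32 - 1 bytes (4 GiB minus one byte).
fat32_max=$(( (1 << 32) - 1 ))
largest_file=$(( 60 * 1000 * 1000 * 1000 ))   # the ~60GB dataset above
echo "FAT32 limit: $fat32_max bytes"
if [ "$largest_file" -gt "$fat32_max" ]; then
    echo "too big for FAT32"
fi
```

    This is why the cross-platform alternatives that come up in practice are exFAT (native read/write on Windows and OSX, with separate driver packages on Linux) and HFS+ (native on OSX, kernel support on Linux, read-only third-party support on Windows), both of which handle files far beyond 4 GiB.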
    Another workaround I thought of would be just to install an Ubuntu partition on the mac and get the files off through the Linux install. Which is also inconvenient, but would probably be faster than trying to reformat the drives and copy everything over onto them again given the amount of data I want to transfer.
    Unrelated, but is there any way to view files and folders in the OSX root directory besides through Terminal? It seems this functionality was also removed when we "upgraded" from Snow Leopard. (At this point, if it were up to me I would wipe all of the macs we own and replace them with Ubuntu... even the most minor of tasks always requires some sort of workaround with them... at my wit's end here.)

    Yeah this definitely appears to be a problem specifically with this mac. It's been having all sorts of weird problems lately, and we figured that upgrading to Maverick might fix them (but instead we got new problems).
    To give an update on this, I ended up reformatting one of the drives to HFS+ and disabled journaling. It now recognizes the drive again, and I'm pulling the files off that way. I looked at exFAT, but it looks like HFS+ has much better Linux support (Windows is read-only but that's only a minor annoyance, as the algorithms to process the data only run on UNIX machines anyways). It's a shame I couldn't keep using NTFS (had to copy over literally EVERYTHING again...) but whatever. Again, no solution for the issue (where this mac can't read NTFS drives).
    @rkaufmann87 - To give a bit more explanation, we recently had to disable online backups because apparently the sheer amount of data causes Time Machine to freeze the computer and fail every time it has a scheduled backup (original issue we thought that upgrading to Maverick would solve... didn't work). The hard drive has been having a lot of issues when it begins to reach max capacity as well. So I am pulling off all of my files to external hard drives and deleting the local copy before we attempt to back up online again. And the data would probably take me a year and a lot of money to recreate (for the curious, it's high throughput sequencing data). I'm choosing not to take any chances as a result.

  • Single-user mode: How to mount and access an external USB drive?

    My MacBook Pro HD is acting up. Cannot boot normally or into "safe mode". Cannot reinstall the OS without wiping out the HD. Need to recover some critical files, but DiskUtil's First Aid and Restore options cannot successfully complete. Problem traced down to "invalid node structure", which means I either have a hardware problem or my filesystem partition directory structure is corrupted. I need to recover some files that are not backed up (a timing issue with my regular backup process).
    I can boot into single-user mode, mount the root file system (/sbin/mount -uw /) and can see/navigate the root filesystem structure via good old UNIX command line. Here's what I would like to do (in single-user mode):
    1. Mount an external USB drive (250 GB already formatted as Mac OS X Extended)
    2. Copy various files and/or directories from my HD to the external USB drive (UNIX cp command)
    I realize I could go spend $$ for the Disk Warrior or Data Rescue products (or something similar) that SHOULD help me recover my HD or files, but it seems silly to do this when I can see, touch and taste them from within single-user mode....
    Comments? Suggestions?
    TIA --
    Trent
    P.S. Once I've recovered my files, I'll try to reformat the HD and then reinstall the OS. And THEN go have Apple look at my machine (thank goodness for AppleCare coverage)!

    Resolution:
    1) Boot system in single-user mode (SUM) with external HD attached.
    2) Execute the following UNIX CLI commands once SUM boot process is completed:
    # fsck -fy
    # mount -uw /
    # mkdir /Volumes/target_directory
    # mount -t hfs -w /dev/diskXXX /Volumes/target_directory
    # cp -RXv /source_directory /Volumes/target_directory
    Where XXX is the device-level name for your external HD's data partition. In my case this was /dev/disk1s2. It may take some experimentation to identify this device name if your system has multiple HDs.
    3) Verify contents were successfully copied onto the /Volumes/target_directory.
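    Step 3's verification can be done from the same single-user shell with a recursive diff, which exits zero only if the two trees have identical contents. A minimal, self-contained sketch (temporary directories stand in here for the real source and mounted target):

```shell
# diff -r walks both trees and reports any file that differs or is
# missing; silence plus a zero exit status means the copy is complete.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "critical data" > "$src/file.txt"
cp -R "$src/." "$dst/"          # same idea as the cp -RXv step above
if diff -r "$src" "$dst"; then
    echo "copy verified"
fi
```

    Note that diff -r reads every file on the failing disk a second time; on a drive throwing I/O errors you may prefer to spot-check only the most critical files.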
    Comments and observations:
    - Do NOT use "/" as your source directory - cp will make a second (redundant) copy of /Volumes/target_directory
    - I was able to successfully copy ALL files off my HD despite the fsck command's "invalid node structure" error message with this simple procedure. YMMV, depending on the state of your HD.
    - The repeated disk0s2: I/O error warnings displayed during the SUM boot process did not seem to have a negative effect on this procedure. The same error warning also appeared intermittently as I navigated the mounted filesystem, but that did not seem to be a problem, either. Again, YMMV.
    Commercial software:
    I downloaded ProSoft Engineering's Data Rescue 3 product (trial version) before spending $99 to attempt to recover my "bad" HD's data via mounting to a good system with FW target mode. It could not successfully complete its "QuickScan" process and immediately hung on block 0 of 390M during its "Deep Scan" process. The product did seem to function properly on an operational system. ProSoft's technical support was responsive and helpful but had no answer for my "Deep Scan" error.
    I did not attempt to use Alsoft's Disk Warrior 4 product. I could not find any trial software available and was reluctant to spend $100 based upon the mixed reviews and comments on this discussion forum as well as other reviews. Alsoft does claim to address the "invalid node structure" error in their marketing materials. Hindsight being 20/20 - I saved $100 by using this simple procedure.
    Final note:
    Neither Leopard nor Snow Leopard's installation DVD could recognize the bad internal HD when trying to do a reinstall. While DiskUtil was able to "see" the bad internal drive it immediately failed when I tried to do an "erase and format". Took the system to my local Apple store and the Genius ran a tool called "SMART Utility" from Volitans Software (www.volitans-software.com). SMART utility confirmed that my HD was bad so it was replaced. AppleCare pays for itself (once again!).
