Fixed mount points for USB disks via HAL

I've been trying to figure this out on my own, but I haven't been able to achieve my goal.
I want to set up my system so that it auto-mounts a specific hard drive and makes its contents available at a specific file system location.
I'd love to do that by identifying my hard drives by UUID or ID and assigning each one a different mount point.
I've tried the approach described in the ArchWiki:
File: /etc/hal/fdi/policy/20-$device_name.fdi
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="volume.uuid" string="$device_uuid">
      <merge key="volume.label" type="string">$device_name</merge>
    </match>
  </device>
</deviceinfo>
http://wiki.archlinux.org/index.php/HAL#About_volumes_mount_points
and this one:
<device>
  <match key="info.udi" string="/org/freedesktop/Hal/devices/volume_uuid_E265_5E6A">
    <merge key="volume.policy.desired_mount_point" type="string">ipod</merge>
    <merge key="volume.policy.mount_option.iocharset=iso8859-15" type="bool">true</merge>
    <merge key="volume.policy.mount_option.sync" type="bool">true</merge>
  </match>
</device>
http://www.mythic-beasts.com/~mark/random/hal/
I restart HAL each time I change 20-$device_name.fdi or preferences.fdi (the second code example). Nothing at all shows up in /var/log/messages; HAL just silently refuses to mount the devices.
Without these configurations everything works fine and HAL auto-mounts all of these hard drives, but only as long as I don't touch the configs in /etc/hal/fdi/policy.
Can someone please explain what could be wrong here?
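Since nothing is logged to /var/log/messages, one way to see whether HAL even accepts the policy files is to run hald in the foreground. A minimal debugging sketch, assuming an initscript-based setup like the /etc/rc.d/hal one used later in this thread:
# stop the running daemon first
/etc/rc.d/hal stop
# run hald in the foreground with verbose output; complaints about bad .fdi files
# are printed to the terminal before any volumes are probed
hald --daemon=no --verbose=yes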

Dehir wrote: I'm actually having similar difficulties. I have created an /etc/hal/fdi/policy/20-$device_name.fdi file for each device, but when I try to mount them from PCManFM they get mounted in a random order every single time, which is not what I want. I'd prefer HAL mounting over fstab, but I still want them mounted with specific names.
Yeah, that's the whole point - I want to have it done automatically with only one condition - fixed mount point names.
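Combining the two snippets above, a single policy file that pins both the label and the desired mount point for one volume might look like the sketch below. It only merges the ArchWiki and mythic-beasts examples; $device_uuid and the "backup" names are placeholders for your own values, not anything confirmed by the thread:
File: /etc/hal/fdi/policy/20-backup-disk.fdi
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="volume.uuid" string="$device_uuid">
      <merge key="volume.label" type="string">backup</merge>
      <merge key="volume.policy.desired_mount_point" type="string">backup</merge>
    </match>
  </device>
</deviceinfo>
With a volume manager that mounts under /media, this should end up at /media/backup every time, regardless of the order in which the disks are detected.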

Similar Messages

  • Force Mount Point for USB disk drive

    I need to force my 2 TB USB drive to mount at the same mount point every time. After I log on or reboot the machine it seems to change: /Volumes/data, /Volumes/Data 1, ...
    Can someone offer a solution?

    Just to make sure I'm understanding this correctly....the iTunes library files (iTunes Library.itl and iTunes Music Library.xml) are currently located in my "My Documents\My Music\iTunes" folder. So, first I should delete the .xml file, right?
    Then, you say to change my preferences and redirect iTunes to the new location of my songs (on the external USB drive). So, I would change the current setting of "C:\Documents and Settings\Jim\My Documents\My Music\iTunes\iTunes Music" to now point to the location of the new drive "I:\mp3" ?
    What happens to the .itl file left behind in the old location? ...and all the directories with all my album art... I don't want to lose that stuff.
    Sorry, but I'm just a bit confused...well, a lot confused about where these files will be located. Before, my iTunes was configured with the iTunes music folder locations to be: "C:\Documents and Settings\Jim\My Documents\My Music\iTunes\iTunes Music" and all my mp3s were stored on the external USB disk drive....no music was in the iTunes Music folder. This config also put the library files in the \My Music\iTunes folder.
    With making the changes you suggest will the library files and all the music now be on the external drive? The reason I ask is that I read some other posts from people that had everything on the external drive and if they had issues with their system connecting to the external drive, then iTunes would create a new library on the C drive and not let the user know.
    Thanks for your patience in trying to explain this to me.

  • Powershell- Associate Mount Point to Physical Disk

    I need to be able to associate mount points (Get-WmiObject -Class Win32_MountPoint) with the physical drive on which they reside.
    Scenario: I have physical disks (SAN LUNs) mounted as folders on an E: drive (also a SAN LUN) of a server.  I need to be able to, via a PowerShell script, associate the "folder" name to the physical disk (i.e., Harddisk4 or PhysicalDrive4).
    I can get Mount Point associated with the Volume, etc., but can't make the link to the physical disk.
    Any help is appreciated.

    Unfortunately there isn't an association class between mount points and physical disks like there is between logical and physical disks. I wrote a blog post about finding partition alignment, which required using the Win32_LogicalDiskToPartition class. One of the comments suggested using Sysinternals diskext as a workaround. See the comments:
    http://sev17.com/2009/02/disk-alignment-partitioning-the-good-the-bad-the-ok-and-the-not-so-ugly/

  • How to determine the mount point for directory /tmp ?

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using 2 virtual machines (rac1 and rac2, whose OS is Oracle Linux 5.6) in VMPlayer, following the tutorial
    http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    I am installing the Grid infrastructure. I am on step 7 of 10 (verify Grid installation environment) and get this error:
    "Free Space: Rac2: /tmp"
    Cause: Could not determine mount point for location specified.
    Action: Ensure location specified is available.
    Expected value: n/a
    Actual value: n/a
    I have checked the free space using the command:
    [root@Rac2 /]# df -k /tmp
    Output:
    Filesystem   1K-blocks  Used     Available  Use%  Mounted on
    /dev/sda1    30470144   7826952  21070432   28%   /
    As you can see above, there is enough free space, but the installer still could not determine the mount point for /tmp.
    Does anyone know how to determine the mount point for the /tmp directory?
    Thanks.

    I have just checked "/home/oracle/.bash_profile", but on my computer there is no "oracle" user under the /home directory.
    Is this your first Linux and Oracle installation? I had a brief look at the link you referenced. The reason you do not find an "oracle" user is that the instructions use "ora11g" instead, which, by the way, is not standard. The directories of your installation and your installation source can differ somewhat from known standards, and you will have to adjust them to your system.
    My best guess is that you have either missed something in the instructions or you need to ask the author of the blog what is wrong. The chance of finding someone here who has experience with these custom instructions is probably slim.
    I suggest you try to locate the cluster verification tool, which should be in the bin directory of your grid installation. Alternatively you might want to check the RAC, ASM & Clusterware Installation forum.
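    If the installed cluvfy is not available yet, the standalone runcluvfy.sh on the grid installation media re-runs the same checks and is usually more verbose about which mount point it resolved. A hedged sketch; the node names rac1 and rac2 come from the question, while the staging path is only a placeholder:
    # run as the grid software owner, from the directory where the grid media was unpacked
    cd /path/to/grid_media
    ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
    # the /tmp free-space check is part of the pre-crsinst stage; the -verbose output
    # shows which mount point (if any) was determined for each node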

  • HANA Mount Point for MCOS

    Hi All,
    We have implemented BW on HANA and it was all implemented as a single instance on one piece of hardware. Now we have to implement HANA in MCOS for the DEV and QAS systems.
    The current file system is:
    Filesystem        Size  Used  Avail  Use%  Mounted on
    /dev/sda1          63G   11G    50G   18%  /
    devtmpfs          127G  280K   127G    1%  /dev
    tmpfs             213G  100K   213G    1%  /dev/shm
    /dev/sapmntdata   1.1T   16G   1.1T    2%  /sapmnt
    We need to know the new mount points for this MCOS-type implementation, where our DEV and QAS systems will reside on the same HANA DB box.
    Thanks,
    Sharib
    Message was edited by: Sharib Tasneem

    Hi All,
    There is no separate mount point needed for an MCOS HANA system.
    You need to provide the log directory, data directory and shared folder during the MCOS creation.
    The log directory will be /<shared directory>/log/SID2, which in our case was /sapmnt/log/SID2.
    The data directory will be /<shared directory>/data/SID2, which in our case was /sapmnt/data/SID2.
    And the shared location, which is /sapmnt/ in our case.
    Use HLM (the HANA Lifecycle Management tool) to create the new HANA SID. It was like a cakewalk for updating the HANA revision, client, etc.
    Thanks and Regards,
    Sharib
    Note: You need to create the SID2 directories inside these locations and give the <SID1>adm user the appropriate permissions to write there.
    Message was edited by: Sharib Tasneem
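    On the OS side, the directory preparation described in the note is just a couple of commands. A minimal sketch assuming the /sapmnt layout from this thread; the owner, group and mode are assumptions to adapt to your own <SID>adm conventions:
    # create the data and log directories for the second SID under the shared mount
    mkdir -p /sapmnt/data/SID2 /sapmnt/log/SID2
    # the note above says the existing <SID1>adm user must be able to write there
    # (sid1adm stands for that user)
    chown sid1adm:sapsys /sapmnt/data/SID2 /sapmnt/log/SID2
    chmod 750 /sapmnt/data/SID2 /sapmnt/log/SID2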

  • Two mount points for iPod.

    Hi there,
    When I connect the iPod, two mount points appear: one with the name of the device (in this case Mike) and one with the name 'Ipod'.
    The first works fine, so I can sync with Banshee without problems. The second throws the following error:
    Unable to mount the volume 'Ipod'
    Details:
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so
    dmesg output:
    [miguel@miguel ~]$ dmesg | tail -40
    sd 9:0:0:0: [sdb] Write Protect is off
    sd 9:0:0:0: [sdb] Mode Sense: 68 00 00 08
    sd 9:0:0:0: [sdb] Assuming drive cache: write through
    usb-storage: device scan complete
    sd 9:0:0:0: [sdb] 39075372 2048-byte hardware sectors: (80.0 GB/74.5 GiB)
    sd 9:0:0:0: [sdb] Assuming drive cache: write through
    sdb: sdb1 sdb2
    sd 9:0:0:0: [sdb] Attached SCSI removable disk
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    sdb: detected capacity change from 80026361856 to 0
    Clocksource tsc unstable (delta = 4398030963787 ns)
    ath5k phy0: unsupported jumbo
    usb 1-2: USB disconnect, address 6
    usb 1-2: new high speed USB device using ehci_hcd and address 7
    usb 1-2: configuration #1 chosen from 2 choices
    scsi10 : SCSI emulation for USB Mass Storage devices
    usb-storage: device found at 7
    usb-storage: waiting for device to settle before scanning
    scsi 10:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0
    sd 10:0:0:0: Attached scsi generic sg2 type 0
    usb-storage: device scan complete
    sd 10:0:0:0: [sdb] 39075372 2048-byte hardware sectors: (80.0 GB/74.5 GiB)
    sd 10:0:0:0: [sdb] Write Protect is off
    sd 10:0:0:0: [sdb] Mode Sense: 68 00 00 08
    sd 10:0:0:0: [sdb] Assuming drive cache: write through
    sd 10:0:0:0: [sdb] 39075372 2048-byte hardware sectors: (80.0 GB/74.5 GiB)
    sd 10:0:0:0: [sdb] Assuming drive cache: write through
    sdb: sdb1 sdb2
    sd 10:0:0:0: [sdb] Attached SCSI removable disk
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    FAT: invalid media value (0x2f)
    VFS: Can't find a valid FAT filesystem on dev sdb1.
    [miguel@miguel ~]$
    Any help or ideas?
    Thanks

    The post by dlew86 is the right way to do it, but there are some errors in what he wrote:
    First, you need to know what the label of the firmware partition is:
    $ lshal -u `hal-find-by-property --key block.device --string /dev/sdd1` | grep volume.label
    volume.label = 'iPod' (string)
    Replace '/dev/sdd1' with the device your iPod uses. The default label is 'iPod', but if you installed Rockbox, it is probably different.
    Now create an .fdi file in /etc/hal/fdi/policy/, e.g. 99-ipod-ignore-firmware.fdi (.fdi, not .rules like owain wrote):
    <?xml version="1.0" encoding="UTF-8" ?>
    <deviceinfo version="0.2">
      <device>
        <match key="volume.label" string="iPod">
          <merge key="volume.ignore" type="bool">true</merge>
        </match>
      </device>
    </deviceinfo>
    Replace "iPod" by the label you got with the lshal command.
    If volume.label is empty, you can match against the volume.uuid property instead. But in this case, the rule will apply to your ipod only.
    Now, restart hal:
    # /etc/rc.d/hal restart
    If the restart fails, it is probably due to some hald helpers still running. This should solve the problem:
    # pkill hald
    # /etc/rc.d/hal start
    Last edited by vdust (2009-10-05 20:02:13)
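    To check that the merge was picked up after the restart, the same lshal lookup from above can be reused (again assuming /dev/sdd1 is the firmware partition of your iPod); it should print something like:
    $ lshal -u `hal-find-by-property --key block.device --string /dev/sdd1` | grep volume.ignore
    volume.ignore = true (bool)
    If the property shows up as true, the second 'Ipod' mount point should no longer appear.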

  • USB Disk via USB Hub or Ethernet NAS Connection More Reliable?

    This is a rather technical question that calls for an opinion.
    I want to centralize backups for my daughter, wife and myself.  I'm going to add the latest model Airport Extreme Base Station as both an internet access point (from my cable modem) and as a wireless router to hard drive storage.  I know there are cheaper (and probably faster) wireless routers; however, I've used various models of airport base stations over the years with my cable modem and they have all worked flawlessly and never need rebooting.
    My two choices:  1) Connect a 4-port, powered USB hub to the usb connection on the Airport Extreme and use a pair of 3GB hard drives--one for primary backup and the second for the backup of the backup.  2) Use a QNAP TS110 or TS112 NAS device and connect it to the Base Station via gigabit ethernet.  Then, an external USB drive would be connected to the QNAP NAS device.  I've encountered QNAP NAS products in my work life and have been incredibly impressed with them--reliable, very low power requirements, reasonably easy setup via Web browser.
    Choice 1 will be about $125 less expensive than Choice 2.  I'll have to use the Linux Ext3 or 4 partition map for both drives under Choice 2 but the external USB volume can be formatted in Mac HFS+.  Choice 2 will be theoretically faster because of the gigabit ethernet connection (probably irrelevant because of the slow wireless n throughput) but the QNAP device can be set to do automated backups to the USB drive that would be connected to it.  That's nice. 
    My question:  Based on what I've read over the years about connecting USB disks to an Airport Extreme base station, I have the opinion that an ethernet connection to an NAS device may be more reliable (ie, the client computers don't drop the connection to the hard drive as often).  Maybe I'm wrong about that.  I would like an opinion.
    Also, I know how slow the real throughput will be as a result of clients connecting via wireless n. In real life, with decent quality connections, I'll be looking at 10MB to 12MB per second max. The first backups will take days; however, subsequent incremental backups will be much faster. I don't plan to use Time Machine, but rather to use Carbon Copy Cloner to make backups to read/write sparse disk image bundles (CCC won't do regular backups to networked drives). I tried Time Machine a while back but would rather have access to actual files as well as a complete disk image that would be bootable when written back to a drive. Call me old-fashioned, but that's me. I also have two other bootable backups on 2.5" external drives--one that stays with me and another is always at my work place. Also, I'm not interested in a Time Capsule because I don't think their reliability is all that great (bad design with too much heat buildup).
    Thanks for any insights you can offer.

    I meant to say 3TB drives, not 3GB.

  • Messaging Server and Calendar Server Mount points for SAN

    Hi! Jay,
    We are planning to configure "JES 05Q4" Messaging and Calendar Servers on 2 v490 Servers running Solaris 9.0, Sun Cluster, Sun Volume Manager and UFS. The Servers will be connected to the SAN (EMC Symmetrix) for storage.
    I have the following questions:
    1. What are the SAN mount points to be setup for Messaging Server?
    I was planning to have the following on SAN:
    - /opt/SUNWmsgsr
    - /var/opt/SUNWmsgsr
    - Sun Cluster (Global Devices)
    Are there any other mount points that need to be on the SAN for Messaging to be configured on Sun Cluster?
    2. What are the SAN mount points to be setup for Calendar Server?
    I was planning to have the following on SAN:
    - /opt/SUNWics5
    - /var/opt/SUNWics5
    - /etc/opt/SUNWics5
    3. What are the SAN mount points to be setup for Web Server (v 6.0) for Delegated Admin 1.2?
    - /opt/ES60 (Planned location for Web Server)
    Delegated Admin will be installed under /opt/ES60/ida12
    Directory server will be on its own cluster. Are there any other storage needs to be considered?
    Also, is there a good document that walks step by step through how to install Messaging, Calendar and Web Server on a 2-node Sun Cluster?
    The installation document doesn't do a good job, or at least I am seeing a lot of gaps.
    Thanks

    Hi,
    There are basically two choices:
    a) Have local binaries on the cluster nodes (e.g. 2 nodes), which means there will be two sets of binaries, one on each node in your case.
    Then, when you configure the software, you will have to point the data directory to a cluster filesystem, which does not necessarily have to be global, but which must be mountable on both nodes.
    The advantage of this method is that during patching and similar system maintenance activities the downtime is minimal.
    The disadvantage is that you have to maintain two sets of binaries, i.e. patch twice.
    The suggested filesystems could be, for example:
    /opt for local binaries
    /SJE/SUNWmsgr for data (used during the configure option)
    This will mean installing the binaries twice.
    b) Have a single copy of the binaries on a clustered filesystem.
    This was the norm in the iMS 5.2 era, and Sun would recommend it, though I have seen type a) for iMS 5.2 as well.
    This means there should be no configuration files on the local filesystem; everything related to iPlanet lives on the clustered filesystem.
    I have not come across type b) post SUN ONE, i.e. 6.x; it seems 6.x has to keep some files on the local filesystem anyway, so b) is either not possible or needs some special configuration.
    So maybe you should try a).
    The sequence would be:
    After the cluster framework is ready:
    1) Install the binaries on both sides
    2) Install the agent on one side
    3) Switch the filesystem resource to one node
    4) Configure the software with the clustered FS
    5) Switch the filesystem resource to the other node and use the useconfig of the first node.
    Cheers --

  • ZFS 7320c and T4-2 server mount points for NFS

    Hi All,
    We have an Oracle ZFS 7320c and T4-2 servers. Apart from the on-board 1 GbE, we also have 10 GbE connectivity between the servers and the storage,
    configured as a 10.0.0.0/16 network.
    We have created a few NFS shares but are unable to mount them automatically after a reboot inside Oracle VM Server for SPARC guest domains.
    The following document helped us in configuration:
    Configure and Mount NFS shares from SUN ZFS Storage 7320 for SPARC SuperCluster [ID 1503867.1]
    However, we can manually mount the file systems after reaching run level 3.
    The NFS mount points are /orabackup and /stage and the entries in /etc/vfstab are as follows:
    10.0.0.50:/export/orabackup - /orabackup nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    10.0.0.50:/export/stage - /stage nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    On the ZFS storage, the following are the properties for shares:
    zfsctrl1:shares> select nfs_prj1
    zfsctrl1:shares nfs_prj1> show
    Properties:
    aclinherit = restricted
    aclmode = discard
    atime = true
    checksum = fletcher4
    compression = off
    dedup = false
    compressratio = 100
    copies = 1
    creation = Sun Jan 27 2013 11:17:17 GMT+0000 (UTC)
    logbias = latency
    mountpoint = /export
    quota = 0
    readonly = false
    recordsize = 128K
    reservation = 0
    rstchown = true
    secondarycache = all
    nbmand = false
    sharesmb = off
    sharenfs = on
    snapdir = hidden
    vscan = false
    sharedav = off
    shareftp = off
    sharesftp = off
    sharetftp =
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1
    default_group = other
    default_permissions = 700
    default_sparse = false
    default_user = nobody
    default_volblocksize = 8K
    default_volsize = 0
    exported = true
    nodestroy = false
    space_data = 43.2G
    space_unused_res = 0
    space_unused_res_shares = 0
    space_snapshots = 0
    space_available = 3.97T
    space_total = 43.2G
    origin =
    Shares:
    Filesystems:
    NAME        SIZE   MOUNTPOINT
    orabackup   31K    /export/orabackup
    stage       43.2G  /export/stage
    Children:
    groups => View per-group usage and manage group quotas
    replication => Manage remote replication
    snapshots => Manage snapshots
    users => View per-user usage and manage user quotas
    zfsctrl1:shares nfs_prj1> select orabackup
    zfsctrl1:shares nfs_prj1/orabackup> show
    Properties:
    aclinherit = restricted (inherited)
    aclmode = discard (inherited)
    atime = true (inherited)
    casesensitivity = mixed
    checksum = fletcher4 (inherited)
    compression = off (inherited)
    dedup = false (inherited)
    compressratio = 100
    copies = 1 (inherited)
    creation = Sun Jan 27 2013 11:17:46 GMT+0000 (UTC)
    logbias = latency (inherited)
    mountpoint = /export/orabackup (inherited)
    normalization = none
    quota = 200G
    quota_snap = true
    readonly = false (inherited)
    recordsize = 128K (inherited)
    reservation = 0
    reservation_snap = true
    rstchown = true (inherited)
    secondarycache = all (inherited)
    shadow = none
    nbmand = false (inherited)
    sharesmb = off (inherited)
    sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.13/16:@10.0.0.200/16:@10.0.0.203/16
    snapdir = hidden (inherited)
    utf8only = true
    vscan = false (inherited)
    sharedav = off (inherited)
    shareftp = off (inherited)
    sharesftp = off (inherited)
    sharetftp = (inherited)
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1/orabackup
    exported = true (inherited)
    nodestroy = false
    space_data = 31K
    space_unused_res = 0
    space_snapshots = 0
    space_available = 200G
    space_total = 31K
    root_group = other
    root_permissions = 700
    root_user = nobody
    origin =
    zfsctrl1:shares nfs_prj1> select stage
    zfsctrl1:shares nfs_prj1/stage> show
    Properties:
    aclinherit = restricted (inherited)
    aclmode = discard (inherited)
    atime = true (inherited)
    casesensitivity = mixed
    checksum = fletcher4 (inherited)
    compression = off (inherited)
    dedup = false (inherited)
    compressratio = 100
    copies = 1 (inherited)
    creation = Tue Feb 12 2013 11:28:27 GMT+0000 (UTC)
    logbias = latency (inherited)
    mountpoint = /export/stage (inherited)
    normalization = none
    quota = 100G
    quota_snap = true
    readonly = false (inherited)
    recordsize = 128K (inherited)
    reservation = 0
    reservation_snap = true
    rstchown = true (inherited)
    secondarycache = all (inherited)
    shadow = none
    nbmand = false (inherited)
    sharesmb = off (inherited)
    sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.203/16:@10.0.0.200/16
    snapdir = hidden (inherited)
    utf8only = true
    vscan = false (inherited)
    sharedav = off (inherited)
    shareftp = off (inherited)
    sharesftp = off (inherited)
    sharetftp = (inherited)
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1/stage
    exported = true (inherited)
    nodestroy = false
    space_data = 43.2G
    space_unused_res = 0
    space_snapshots = 0
    space_available = 56.8G
    space_total = 43.2G
    root_group = root
    root_permissions = 755
    root_user = root
    origin =
    Can anybody please help?
    Regards.

    try this:
    svcadm enable nfs/client
    cheers
    bjoern
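    For completeness, a hedged sketch of checking the service and then mounting the vfstab entries from the question by hand; the FMRI is the standard Solaris one and the mount points come from the /etc/vfstab lines above:
    # check the current state of the NFS client service
    svcs nfs/client
    # enable it so the NFS entries in /etc/vfstab ("mount at boot" = yes) come up after a reboot
    svcadm enable svc:/network/nfs/client:default
    # mount the two shares right away without waiting for a reboot
    mount /orabackup
    mount /stage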

  • R3load cannot export more than 100 mount points for Oracle?

    We have a DB with more than 390 sapdata###  mount points  (HPUX-PARISC). They are truly mount points, NOT directories under some other mount points.
    After the export using R3load (i.e. NOT DB-specific), the keydb.xml generated for the import only contains sapdata1 through sapdata100.
    Is there any limit here for R3load?
    Thanks!

    R3load doesn't copy the filesystem structure but unloads the content of the database after having checked the size of it and then distributes it across the files.
    Why do you have so many different mount points? Is there a technical reason behind it? Just curious...
    Markus

  • [SOLVED] Mount point for a DB partition

    Hey guys,
    I want to create a separate partition for my PostgreSQL database. What mount point should I give it?
    Also, what's the best file system for it? It's going to be fairly large, about 100 GB and growing.
    Last edited by corsakh (2009-11-15 06:32:38)

    xfs or ext3 or ext4 (xfs is particularly useful for large files)
    As for the mount point - heck - you can call it absolutely anything you want! (e.g. /db)
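    To make the answer concrete, a minimal sketch of creating the filesystem and mounting it at /db; /dev/sdb1 is a hypothetical device and the mount options are just common defaults, not something from the thread:
    # format the partition (xfs, as suggested above) and create the mount point
    mkfs.xfs /dev/sdb1
    mkdir /db
    # add an fstab entry so it comes back after a reboot, then mount it
    echo '/dev/sdb1  /db  xfs  defaults,noatime  0 2' >> /etc/fstab
    mount /db
    The PostgreSQL data directory can then be initialized under /db.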

  • What is the mount point for installing an AS for an AIX machine?

    I have a BW server running on an AIX machine.
    I need to install an Application Server for the existing BW server on a Windows machine.
    During that process it asks me for the profile directory.
    Please let me know how the path should be given.
    On that screen it is shown as /<SAP System mount Directory>/<SAPSID>/profile.
    The BW server is running on the machine with the name abcdev.
    The mount points are /usr/sap.
    Please help.

    I am getting confused.
    Please explain in detail.
    I have a BW server running on an AIX machine.
    Now I am installing an Application Server for that BW server (running on the AIX machine) on a Windows 2003 Server.
    During the installation process it asks me for the SAP system's profile directory. As per my understanding, it is asking me for the BW server system's profile path.
    Please tell me what I should give here.
    I have even given /sapmnt/DBW/profile and tried, but it did not work. It gives me the message "Node \sapmnt\DBW\profile does not exist".

  • Need help connecting to my USB disk via WAN

    Hey
    I've managed to connect to my USB disk on my own Wi-Fi network, but when I try to connect to it through Safari using afp://myip:chosenport it will not work. Why doesn't it? I think I've checked all the boxes in AirPort Utility to make my disk available.
    Could some friendly soul make a guide so I know whether I'm doing anything wrong with my settings?

    I see my USB disk in the sidebar when I open Finder, if that's what you mean?
    I should have the latest update with Mountain Lion, so I suppose it would support it? And yes, I am using an AirPort Extreme router!
    When I try to connect through Finder -> Go to Server, writing my IP and connecting, the connection takes about a minute and then says that the server is not available at the moment, or something like that!

  • External Mount Point for ~/music?

    I would like to mount external drives to mount points such as ~/music or ~/movies in my home directory. From an organization perspective this can make life much easier to manage. Has anyone done this? Is there any problem that anyone knows of with changing one of these pre-designated locations to a mount point?

    I have no experience in mounting the whole external drive inside my home folder, but I have stored my iTunes library on an external and just put an Alias called "iTunes" pointing to the folder on the external drive in the "Music" folder in my Home. I believe the same will work within "Movies".
    BTW: Aliases within the "iTunes Media" folder (like "Music", "Movies" or "TV Shows") do NOT work, though - you have to move either the complete "iTunes Media" folder to the external drive or the whole "iTunes" folder.

  • Mount point for SMB filesystem

    Hi folks
    Using the Finder I've connected to a Windows box's drive "D" and get an icon on the desktop. This contains GIS data. I'm using an open-source application, GRASS, and want to use that data. GRASS opens a dialogue box which wants an input of the form /<filesystempath>/<filename>. It doesn't accept drag and drop. But I can't find where the Windows drive "D" has been mounted!! In Linux it would have been under /mnt, but that doesn't exist here.
    any ideas??
    Hugh
    PS: I've tried using Spotlight to find "D"... no go.
      Mac OS X (10.4.8)  

    Sorted!
    The remote filesystem is mounted under /Volumes.
    The other solution I've found is to create a dir
    mkdir /datapoint
    Then mount the SMB filesystem on that dir
    mount_smbfs -W <WORKGROUPNAME> //<username>@<windoze-computername>/<nameofshare> /datapoint
    you will be asked for <username>'s password
    and then the remote data can be accessed using the dir /datapoint
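    A small usage follow-up (not from the original post): when you are done with the share, it can be detached again with
    umount /datapoint
    and the /datapoint directory can then be removed or reused for the next mount.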
      Mac OS X (10.4.8)
