Which file defines mount points?

In Linux, /etc/fstab tells the OS the details of a volume when mounting it. I wanted to edit this so that certain devices use different mount points than they get automatically, but my edits had no effect - so either I'm not doing it properly or OS X works a little differently in this respect.
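For reference, the kind of entry I mean on the Linux side looks like this (an illustrative sketch - the UUID and mount point are made up):

# <device>                                  <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a3407de-014b-458b-b5c1-848e92a327a3   /data          ext4    defaults   0       2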
Thanks.
p.s. I have Tiger, I just haven't updated my info (below)

Brad,
See if this info helps you.
Beavis2084

Similar Messages

  • ASM vs ext3 file system (mount point)

    Please suggest which one is better for small databases:
    ASM or an ext3 file system (mount point)?
    Any Metalink note?

    ASM is better if you do not want to play with I/O tuning (if you tune the ext3 file system it would be much the same from a performance view),
    but administering database files is more complicated in ASM than in an ordinary file system.
    Oracle recommends using ASM for database storage.
    I would think that if you have a development database and need a lot of cloning and moving of datafiles, it's better to use an ordinary file system,
    so you can use OS copy commands - not so complicated.
    If you need striping, mirroring or snapshots with ext3, you can use LVM on Unix/Linux, as sketched below.
    I am not sure, but I think striping and mirroring are better on ASM than on LVM, because ASM does them with database I/O in mind.
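    If you go the LVM route, a minimal striping sketch looks like this (device names, size and mount point are hypothetical):

    # Build a striped logical volume across two disks and put ext3 on it
    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate oravg /dev/sdb1 /dev/sdc1
    lvcreate -i 2 -I 64 -L 50G -n oradata oravg    # 2 stripes, 64 KB stripe size
    mkfs.ext3 /dev/oravg/oradata
    mount /dev/oravg/oradata /u02/oradata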

  • New zone and inherited file system mount point error

    Hi - would anyone be able to help with the following error please. I've tried to create a new zone that has the following inherited file system:
    inherit-pkg-dir:
    dir: /usr/local/var/lib/sudo
    But when I try to install it fails with:
    root@tdukunxtest03:~ 532$ zoneadm -z tdukwbprepz01 install
    A ZFS file system has been created for this zone.
    Preparing to install zone <tdukwbprepz01>.
    ERROR: cannot create zone <tdukwbprepz01> inherited file system mount point </export/zones/tdukwbprepz01/root/usr/local/var/lib>
    ERROR: cannot setup zone <tdukwbprepz01> inherited and configured file systems
    ERROR: cannot setup zone <tdukwbprepz01> file systems inherited and configured from the global zone
    ERROR: cannot create zone boot environment <tdukwbprepz01>
    I've added this because, unknown to me, when I installed sudo from sunfreeware in the global zone it required access to /usr/local/var/lib/sudo - sudo itself installs in /usr/local. And when I try to run any sudo command in the new zone it gives this:
    sudo ls
    Password:
    sudo: Can't open /usr/local/var/lib/sudo/tdgrunj/8: Read-only file system
    Thanks - Julian.
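    For reference, an inherited directory like the one above would have been added with zonecfg along these lines (a reconstruction of the configuration shown, not taken from the original post):

    # zonecfg -z tdukwbprepz01
    zonecfg:tdukwbprepz01> add inherit-pkg-dir
    zonecfg:tdukwbprepz01:inherit-pkg-dir> set dir=/usr/local/var/lib/sudo
    zonecfg:tdukwbprepz01:inherit-pkg-dir> end
    zonecfg:tdukwbprepz01> commit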

    Think I've just found the answer to my problem: I'd already inherited /usr, and as sudo from sunfreeware installs in /usr/local, I guess this is never going to work. I can only think to try the sudo version from the Solaris Companion DVD, or whatever it's called.

  • Time Machine (OSX) doesn't back up files in Mount Point or Disk Image File

    Hi all,
    I have two disk images which are scripted to be mounted on login. These two disk images are always mounted to the same location. These two disk images are encrypted TrueCrypt volumes.
    Time Machine (TM) will only back up the disk images the first time they are mounted, but not after that. As I modify documents within the volumes throughout the day, the modified timestamps are adjusted properly. However, TM does not back them up. TM never backs up the mount points which are two folders within my home directory.
    Any ideas as to why neither the mount points nor the image files are backed up? Do the image files have to be closed (unmounted) after being modified for TM to back them up?
    Now if TM won't back up the image files because they are locked/open, then why does it back them up the first time, even though they are mounted? Also, from what I've read, TM should back up any HFS+ drive which is mounted. Both these drives are HFS+, so it should back up the mount points? I can verify this by running 'mount' in the terminal; the following is echoed back:
    /dev/disk0s2 on / (hfs, local, journaled) (Main System Drive)
    /dev/disk1 on /Users/username/Folder1 (hfs, local, nodev, nosuid, journaled, noowners, mounted by username) (First TrueCrypt volume)
    /dev/disk2 on /Users/username/Folder2 (hfs, local, nodev, nosuid, journaled, noowners, mounted by username)(Second TrueCrypt volume)
    Any ideas will help!
    Thanks,
    Chris

    Here is the resolution:
    Apparently Time Machine checks the parent folder's timestamp before moving into that folder to look for modified files.
    So as I was modifying files, the timestamps on the volumes were changing, but not on the containing directory. Because of this, Time Machine saw that the parent directory had not changed, so it did not look inside for the changed volumes.
    To resolve this, I wrote a simple touch script which touches the parent directory to match a modified volume's timestamp (if there is a modified volume), as sketched below.
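    A minimal sketch of that kind of script (using the folder names from the mount listing above; scheduling via cron or launchd is left out):

    #!/bin/sh
    # If a mounted volume is newer than the home directory containing it,
    # bump the parent's timestamp so Time Machine will descend into it.
    for vol in "$HOME/Folder1" "$HOME/Folder2"; do
        if [ "$vol" -nt "$HOME" ]; then
            touch "$HOME"
        fi
    done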
    Thanks,
    Chris

  • ZFS mount points and zones

    folks,
    A little history: we've been running cluster 3.2.x with failover zones (using the containers data service) where the zone root is installed on a failover zpool (using HAStoragePlus). It has worked OK, but the real problem is the lack of agents that work in this configuration (we're mostly an Oracle shop). We've been using the Joost manifests inside the zones, which are OK and have worked, but we wouldn't mind giving the Oracle data services a go - and escaping the more than a little painful patching process in the current setup...
    We've started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials, with the apps failing over between them on node failure and switchover. This way we'd actually be able to use the agents for Oracle (DB, AS and EBS).
    With the current cluster we create various ZFS volumes within the pool (such as oradata) and, through the zone boot resource, have them mounted where we want inside the zone (in this case $ORACLE_BASE/oradata), with the global zone having the mount point /export/zfs/<instance>/oradata.
    Is there a way of achieving something like this with failover apps inside static zones? I know we can set the volume mountpoint to be what we want, but we rather like having the various Oracle zones all with a similar install (/app/oracle etc.).
    We haven't looked at zone clusters at this stage, if for no other reason than time...
    Or is there a better way?
    thanks muchly,
    nelson

    I must be missing something... any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob 56.9G 15.5G 21K /export/zfs/bob
    Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
    Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    1 bob running /opt/zones/bob native shared
    devsun013~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    16 bob running /opt/zones/bob native shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
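    One hedged suggestion, assuming the pool can fail over as a unit (this is not from the original thread): FileSystemMountPoints validation expects vfstab entries, whereas ZFS pools are normally handed to HAStoragePlus whole via its Zpools extension property:

    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p Zpools=Zbob bob-has-rs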

  • How to create a mount point with PowerShell

    Hello,
    Can you please point me to a sample which creates a mount point using PowerShell?

    Try this. I cooked it up by consolidating a few different sources. I posted it on the Microsoft Connect suggestion for native support, under workarounds:
    https://connect.microsoft.com/PowerShell/feedback/details/627099/need-powershell-equivalent-to-cmds-mklink
    #symlinks.ps1
    # First add a type so it stays available for this instance
    Add-Type -TypeDefinition @'
    using System;
    using System.Runtime.InteropServices;
    namespace mklink
    {
        public class symlink
        {
            [DllImport("kernel32.dll")]
            public static extern bool CreateSymbolicLink(string lpSymlinkFileName, string lpTargetFileName, int dwFlags);
            [DllImport("kernel32.dll")]
            public static extern bool RemoveDirectory(string lpPathName);
            [DllImport("kernel32.dll")]
            public static extern uint GetLastError();
        }
    }
    '@
    Function New-ReparsePoint {
    <#
    .SYNOPSIS
    Creates a reparse point (symbolic link) to the specified target.
    .DESCRIPTION
    Creates a reparse point (symbolic link) to the specified target.
    .PARAMETER Name
    Path of the reparse point to create.
    .PARAMETER TargetPath
    Target the reparse point should refer to.
    .NOTES
    Author: Jordan Mills
    Version: 1.0
    .EXAMPLE
    New-ReparsePoint -Name "E:\directory\mount" -TargetPath "E:\data" -IsDirectory
    #>
    [cmdletbinding(DefaultParameterSetName="default")]
    Param (
        [parameter(ParameterSetName="default", Position=0, Mandatory=$true,
            ValueFromPipeLine=$True, ValueFromPipelineByPropertyName=$True)]
        [parameter(ParameterSetName="file", Position=0, Mandatory=$true,
            ValueFromPipeLine=$True, ValueFromPipelineByPropertyName=$True)]
        [parameter(ParameterSetName="directory", Position=0, Mandatory=$true,
            ValueFromPipeLine=$True, ValueFromPipelineByPropertyName=$True)]
        [Alias("Path","FileName","Directory")]
        [string[]]$Name,
        [parameter(ParameterSetName="default", Position=1, Mandatory=$true,
            ValueFromPipelineByPropertyName=$True)]
        [parameter(ParameterSetName="file", Position=1, Mandatory=$true,
            ValueFromPipelineByPropertyName=$True)]
        [parameter(ParameterSetName="directory", Position=1, Mandatory=$true,
            ValueFromPipelineByPropertyName=$True)]
        [string]$TargetPath,
        [parameter(ParameterSetName="file", Position=2, Mandatory=$false,
            ValueFromPipelineByPropertyName=$True)]
        [switch]$IsFile,
        [parameter(ParameterSetName="directory", Position=2, Mandatory=$false,
            ValueFromPipelineByPropertyName=$True)]
        [switch]$IsDirectory
    )
    If ($IsFile -or $IsDirectory) {
        If ($IsFile) {
            $result = [mklink.symlink]::CreateSymbolicLink($Name,$TargetPath,0)
        } ElseIf ($IsDirectory) {
            $result = [mklink.symlink]::CreateSymbolicLink($Name,$TargetPath,1)
        } Else {
            Write-Error -Message "Conflicting path type parameters. This should not happen."
            Break
        }
    } Else {
        # No type switch given; infer the link type from the target.
        If (Test-Path -LiteralPath $TargetPath -PathType Leaf -ErrorAction SilentlyContinue) {
            $result = [mklink.symlink]::CreateSymbolicLink($Name,$TargetPath,0)
        } ElseIf (Test-Path -LiteralPath $TargetPath -PathType Container -ErrorAction SilentlyContinue) {
            $result = [mklink.symlink]::CreateSymbolicLink($Name,$TargetPath,1)
        } Else {
            Write-Error -Message "Unable to determine path type of TargetPath. Use -IsFile or -IsDirectory."
            Break
        }
    }
    If ($result) {
        Get-Item $Name
    } Else {
        Write-Error -Message "Error creating symbolic link" -Category WriteError #-ErrorId $([mklink.symlink]::GetLastError())
    }
    }
    Function Remove-ReparsePoint {
    <#
    .SYNOPSIS
    Removes a file or directory that is a reparse point (symlink or hardlink) without removing all child objects.
    .DESCRIPTION
    Removes a file or directory that is a reparse point (symlink or hardlink) without removing all child objects.
    .PARAMETER Path
    Path to the reparse point to remove.
    .NOTES
    Author: Jordan Mills
    Version: 1.0
    .EXAMPLE
    Remove-ReparsePoint -Path "E:\directory\mount"
    #>
    [cmdletbinding()]
    Param (
        [parameter(Position=0, Mandatory=$true,
            ValueFromPipeLine=$True, ValueFromPipelineByPropertyName=$True)]
        [Alias("FullName","Name","FileName","Directory")]
        [ValidateScript({Test-Path $_})]
        [string[]]$Path
    )
    # Note: $whatif is read below but never declared as a parameter in the
    # original; unless it is set in the caller's scope, the deletes run.
    $Path |
    Get-Item |
    ForEach-Object {
        $Item = $_
        Switch ($Item.Attributes -band ([IO.FileAttributes]::ReparsePoint -bor [IO.FileAttributes]::Directory)) {
            ([IO.FileAttributes]::ReparsePoint -bor [IO.FileAttributes]::Directory) {
                # Is reparse directory / symlink
                If ($whatif) {
                    Write-Host "What if: Performing the operation `"Delete Directory`" on target `"$($_.FullName)`""
                } Else {
                    [System.IO.Directory]::Delete($Item.FullName)
                }
                Break
            }
            ([IO.FileAttributes]::ReparsePoint) {
                # Is reparse file / hardlink
                If ($whatif) {
                    Write-Host "What if: Performing the operation `"Delete File`" on target `"$($_.FullName)`""
                } Else {
                    [System.IO.File]::Delete($Item.FullName)
                }
                Break
            }
            default {
                Write-Error "$Item is not a reparse point."
            }
        }
    }
    }
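    Once the script is dot-sourced, usage would look something like this (paths hypothetical; note that CreateSymbolicLink makes symbolic links rather than true volume mount points):

    . .\symlinks.ps1
    New-ReparsePoint -Name "E:\directory\mount" -TargetPath "E:\data" -IsDirectory
    Remove-ReparsePoint -Path "E:\directory\mount"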

  • Logical Disk (Mount Point) Performance Counters

    We're currently running SCOM 2012 SP1, Management Servers and Agents are all at UR8
    The Windows Server Operating system management pack is at 6.0.7292.0
    We're successfully collecting performance metrics for disks that are not mount points, but the performance metrics for mount points are missing. We've enabled the discovery for mount points.
    The error that I'm seeing on the agent machines is:
    In PerfDataSource, could not resolve counter instance LogicalDisk, Avg. Disk Write Queue Length, G:\Archive\. Module will not be unloaded.
    One or more workflows were affected by this. 
    Workflow name: Microsoft.Windows.Server.2008.LogicalDisk.AverageDiskWriteQueueLength.Collection
    Instance name: G:\Archive\
    Instance ID: {EE651C3D-A5B8-E5C2-459E-8B6C27F8B813}
    We used to be able to collect this performance data but now it's failing. Does anyone know how to resolve this?

    Hi!
    There was a new MP version fixing mount point monitoring published earlier this year, but it has been pulled since there were some bugs.
    Expect the new version to be available within the next couple of weeks. I assume your issue will be fixed there.
    Cheers,
    Patrick
    Please remember to click “Mark as Answer” on the post that helped you.
    Patrick Seidl (System Center and Private Cloud)
    Website: http://www.syliance.com
    Blog: http://www.systemcenterrocks.com

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it is fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate I see that the global root is mounted, but the zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported, and I think there are a few people out there with zonepaths on separate disks. Live upgrade probably has no issues with the files moved from the VxFS filesystem - that part has been done - but the new filesystems do not seem to get mounted correctly.
    Has anyone tried something similar, or has any idea how to solve this?
    The system is s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25, 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up

    I updated my manual pages and got a reminder about the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current zone root into the alternate BE:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same, but with the filesystem removed from vfstab, I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mountED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • Mobile accounts WHICH MOUNT POINT?

    How does a machine handling a mobile account
    a. know how to set $HOME?
    b. know where to mount the folder during sync?
    The scenario and problem:
    (Can you help? You'll win a free beer if you come to Switzerland.)
    I have the user's files stored in /Volumes/team1/users/user1
    The folder /Volumes/team1 is mounted on a local drive on server1.
    The server1 share point, called 'users' is /Volumes/team1/users
    (sounds simple, doesn't it?)
    In config-scenario1 I do this:
    The user's share point URL is afp://server1.disneyland.ch/users
    The path to home folder is user1
    The full path is /Network/Servers/server1.disneyland.ch/users/user1
    THIS WORKS ON CLIENT MAC1
    When logged in (via login panel), the $HOME is set to /Users/user1
    and during syncing I see /Volumes/users mounted temporarily (weird... it used to show the temporary mount as /Network/Servers/server1.disneyland.ch/users)
    user1@mac1:~ > pwd
    /Users/user1
    user1@mac1:~ >
    THIS WORKS ON SERVER1
    When I ssh into the user, on server1,
    I see $HOME set to /Network/Servers/server1.disneyland.ch/users/user1
    user1@server1:~ > pwd
    /Network/Servers/server1.disneyland.ch/users/user1
    user1@server1:~ >
    THIS DOES NOT WORK ON SERVER2
    On server2 where the same external drive is mounted (it's an Xsan using fiber-channel)
    I get this:
    user1$ cd ~
    -bash: cd: /Network/Servers/server1.disneyland.ch/users/user1: No such file or directory
    SOLUTION1 (that fails)(aka config-scenario 2)
    The user's share point URL is afp://server1.disneyland.ch/users
    The path to home folder is user1
    Set the full path to /Volumes/team1/users/user1
    THIS WORKS ON MAC1
    THIS WORKS ON SERVER1
    THIS CAUSES PROBLEMS ON SERVER2 : (see this posting: http://forums.macosxhints.com/showthread.php?p=581557 ).
    Questions :
    1. How is the HOME folder determined by the computer? Despite the two config-scenarios, the client mac1 uses HOME as /Users/user1 (as it should), but Server1 and Server2 do not (Server1 always uses the value of the full path specified in Mobility > Home for the user; Server2 sometimes uses that value and sometimes the previous configuration value).
    2. How is the MOUNT on the client determined? It seems that it is always /Volumes/users (as it should be?); perhaps it is identified by the system seeing what is behind the user's share point? What's weird is that I'm certain that at one point, under config-scenario1, the mount point on mac8 was NOT /Volumes/users but rather /Network/Servers/server1.disneyland.ch! Did I dream that?
    3. How can I PREVENT the mount happening on Server2 (it's mounting on top of the existing /Volumes/team1/users!)?
    4. If I DO accept config-scenario1, couldn't I just create a symbolic link on server2, /Network/Servers/server1.disneyland.ch/users --> /Volumes/team1/users? Actually this doesn't work, because even with sudo I can't mkdir /Network/Servers/server1.disneyland.ch, into which I would have to make the users link.
    ARGHHH!
    Thanx for any insight.
    /shawn

    DrKdev wrote:
    I've bumped the topic over there:
    Then please do not do so here. The point of posting to the appropriate forum is to attract users with interest & expertise in that area. Bumping here, in an inappropriate forum, is annoying since it just keeps the topic near the top of this forum's list where users that have no expertise with or interest in the issue will keep seeing it.
    This generally does not increase your chances of a reply; if anything it will prompt some users to ignore both of your topics.

  • SQL 2014 cluster installation on mount point disks error: Updating permission setting for file

    Dear all,
    I am attempting to install a SQL 2014 failover cluster on a physical Windows 2012 R2 server which has mount point disks.
    At the end of setup I get the following error message.
    Could you please help me on this?
    Thanks
    The following error has occurred:
    Updating permission setting for file 'E:\Sysdata!System Volume Information\.....................................' failed. The file permission setting were supposed to be set to 'D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-3880718306-3832830129-1677859214-2598158968-1052248003)'.
    Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
    For help, click: go.microsoft.com/fwlink
    I am using an administrator account which has all the security rights needed for installation (checked in the secpol.msc MMC).

    Hi Marco_Ben_IT,
    Do not install SQL Server to the root directory of a mount point; setup fails when the root of a mounted volume is chosen for the data file directory, so always specify a subdirectory for all files. This has to do with how permissions are granted: if you must put files in the root of the mount point, you must manually manage the ACLs/permissions. So create a subfolder under the root of the mount point, and install there.
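    A minimal sketch of that advice (the subfolder name and service account below are hypothetical; run from an elevated prompt):

    rem Create a subfolder under the mount point root and grant the SQL service account full control
    mkdir E:\Sysdata\MSSQL
    icacls E:\Sysdata\MSSQL /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)F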
    More related information:
    Using Mount Points with SQL Server
    http://blogs.msdn.com/b/cindygross/archive/2011/07/05/using-mount-points-with-sql-server.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • How to remove extra files from the root mount point

    Hi Sun Solaris experts,
    The Solaris version is 10.
    Kindly see that my root mount point is near to 100% full.
    Before my server hangs, kindly assist me as to which Solaris files are not essential, so that by removing them I may create more space. Moreover, there are also some files which are created automatically by the Oracle application user and the oracle user.
    Kindly guide me.
    /dev/md/dsk/d30 24G 24G 100M 100% /
    Thanks

    Check the following thread; your problem may get resolved:
    How to delete unwanted files in filesystem of solaris 10 sparc
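    In the meantime, a read-only way to see what is filling / (a generic sketch; nothing here deletes anything):

    # Summarize directory sizes in KB without crossing filesystem boundaries (-d);
    # the biggest directories sort to the bottom
    du -dk / | sort -n | tail -20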

  • Expdp fails to create .dmp files in an NFS mount point in Solaris 10, Oracle 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me on this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    Read/write grants have been given to public as well as to the specific user.

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the below. I am able to touch files; while exporting, the log files are also created, with the error msg as I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions
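    One more thing worth trying (an assumption based on this symptom, not a confirmed fix): Solaris can force local file locking on NFS with the llock mount option, which is a commonly suggested workaround when touch works but expdp cannot create its dump file:

    # umount /backup_db
    # mount -F nfs -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3,llock \
        172.20.2.204:/exthdd /backup_db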

  • NFS mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an NFS drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem, and they appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mountpoint works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution has been not to create the file via java.io.File.createNewFile() before adding content via an output stream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS (at the operating system level, rather than adding a folder path relationship via the Java API)?
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncomment the line below to reproduce the failure: java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        // Writing straight to the stream (no createNewFile first) avoids the error
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Btrfs with different file systems for different mount points?

    Hey,
    I finally bought an SSD, and I want to format it as f2fs (plus FAT to boot on UEFI) and install Arch on it; on my old HDD I intend to have /home and /var and try btrfs on them. But I saw in the Arch wiki that btrfs "Cannot use different file systems for different mount points." Does that mean I cannot have / on f2fs and /home on btrfs? What can I do? Should I use XFS, ZFS or ext4 instead (I want the fastest one)?
    Thanks in advance, and sorry for my English.

    pedrofleck wrote: Gosh, what was I thinking, thank you! (I still have a doubt: is btrfs the best option?)
    Just a few weeks ago many of us were worrying about massive data loss due to a bug introduced in kernel 3.17 that caused corruption when using snapshots. Because btrfs is under heavy development, this sort of thing can be expected. That said, I have my entire system running on btrfs. I have 4 volumes: two raid1, a raid0 and a JBOD. I also rsync to an ext4 partition and to NTFS. Furthermore, I make offline backups as well.
    If you use btrfs, make sure you have backups and make sure you are ready to use them. Also, make sure you checksum your backups. rsync has an option to use checksums in place of access times to determine what to sync, as sketched below.
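    The option mentioned is -c/--checksum; a sketch (paths hypothetical):

    # Archive mode, deciding what to transfer by checksum rather than mtime/size
    rsync -ac /home/ /mnt/backup/home/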

  • How can we find out the disk which is used for a mount point

    How can we find out which disk is used for a mount point?
    One of our mount points (/u03/oracle/prod) was seeing high I/O, and this was causing slowness on the server.
    I can see a disk operation error in errpt at the same time, as below. I wanted to check whether the mount point /u03/oracle/prod is using the disk hdisk31.
    $errpt|more
    IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
    DXB78877 1125032114 T H hdisk31 DISK OPERATION ERROR
    OS version:AIX 6.1
    DB:11.2.0.2

    This is the output of cat /etc/filesystems:
    /u02:
            dev             = /dev/fslv00
            vfs             = jfs2
            log             = /dev/loglv00
            mount           = true
            options         = rw
            account         = false
    /u01:
            dev             = /dev/fslv01
            vfs             = jfs2
            log             = /dev/loglv00
            mount           = true
            options         = rw
            account         = false
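    The output above only maps mount points to logical volumes. To get from a logical volume down to the physical disks (and so check whether /u03/oracle/prod really sits on hdisk31), something like this works on AIX (the LV name is illustrative; take the real one from lsfs or /etc/filesystems):

    lsfs /u03/oracle/prod     # shows the logical volume behind the mount point
    lslv -l fslv02            # lists the hdisks that the logical volume occupies
    lspv -l hdisk31           # reverse check: LVs and mount points on hdisk31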
