Oracle VM x86_64 causes NFS Mount to go Read Only

I need a bit of help. I have a 64-bit installation of Oracle VM on an Oracle Linux 5.8 server, with a mount point (/u99) that is shared between two servers (the other server is also a 64-bit Oracle VM server). Every time I attempt to write something to this disk, it goes read-only. My network guys unmount it, run fsck on the partition, and then remount it, only to have it go read-only again and again. This initially started happening when I was attempting an RMAN backup, but now it happens whenever I routinely write something to the partition. It is mounted just as the other, non-shared mounts are, except that it is shared. Any suggestions? Thanks in advance.

Are you talking about VirtualBox or Oracle VM 3.1.1?
If it's Oracle VM 3.1.1:
You shouldn't be mounting NFS shares directly from the Oracle VM server. The VM Manager should be used to create NFS repositories.
If you're talking about an Oracle Linux 5.8 guest on an Oracle VM server... then I had a similar problem a while back. The VM guest was doing a setgid on the files it created. I ended up setting "Disable setuid/setgid file creation" on the storage side so the NFS server wouldn't allow it.
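A quick way to check for that symptom on the affected host is to create a file on the shared mount and inspect its mode bits. A minimal sketch, assuming a GNU or BSD `stat` is available; the /tmp path only keeps it runnable anywhere, and on the real host it should point at /u99:

```shell
# Probe: does a freshly created file pick up an unexpected setgid bit?
# Run this against the shared mount (/u99) on the affected host.
probe=$(mktemp /tmp/setgid_probe.XXXXXX)
mode=$(stat -c '%a' "$probe" 2>/dev/null || stat -f '%Lp' "$probe")
echo "mode of new file: $mode"   # a leading 2 (e.g. 2644) would mean setgid
rm -f "$probe"
```

If the mode shows a setgid bit that the local filesystems don't produce, the storage-side "Disable setuid/setgid file creation" setting described above is worth trying.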

Similar Messages

  • 10.5.2 mounts USB drive read-only while booting

    Hi,
    since the update to 10.5.2, my USB drive is always mounted read-only if it is connected while booting.
    If I connect it after booting, or unmount and remount it after booting, everything works fine.
    Any ideas?
    thx
    Bapf
    Message was edited by: Bapf

    strangely enough... the last two boots it worked again.
    I hope it stays that way

  • Auto-mount externals as read-only

    I'm setting up a small network for a friend who has a small business. He wants the workstations to force any and all external drives to be mounted read-only, but I can't think of a way to make this happen automatically.
    All computers are eMacs running 10.4.9.
    Anyone know of a way to do this?

    Essentially, what he is worried about is that at some point he may have an employee who brings in an iPod or flash drive and connects it to their workstation. He's not against employees copying music files, images, etc. off the iPod and onto the computer, but he does not want them to be able to copy possibly sensitive information onto the iPod or flash drive.
    I'm somewhat familiar with mount and its syntax, but have never used the -f, -u, or -r flags. If I'm reading the man pages correctly:
    -f revokes the write status of a drive
    -u allows the status of an already-mounted drive to be changed, so it is needed for -f to do its thing
    -r sets the drive read-only, so not even root can write to it
    I'm familiar enough with fstab to edit it to mount a known disk read-only. Is there a way to do it if I don't know the volume name?
    For example, I notice that on my iBook my own iPods mount as:
    /dev/disk2s2 on /Volumes/POD ALMIGHT
    /dev/disk3s2 on /Volumes/MY IPOD
    So on the eMacs we want this to work on, I assume flash drives and iPods will similarly show as diskXs#, with X being a number 2 or higher, and # varying depending on the partitioning scheme of the device. If that's correct, is there some way to force anything that mounts from /dev/diskXs# to mount read-only?
    If it isn't correct, then where's my logic failing me?
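    One way to act on that device-name pattern is to scan the output of mount and remount anything matching /dev/diskXs# (X >= 2) read-only. A sketch only, using the device names from this post as sample data; the awk field handling assumes no spaces in the device column, and the actual remount command is left commented out so nothing is changed by accident:

```shell
# Extract candidate external devices (/dev/diskXs#, X >= 2) from 'mount'
# output; the sample lines are taken from the post above.
sample='/dev/disk0s2 on / (hfs, local, journaled)
/dev/disk2s2 on /Volumes/POD ALMIGHT (hfs, local, nodev, nosuid)
/dev/disk3s2 on /Volumes/MY IPOD (hfs, local, nodev, nosuid)'
external=$(printf '%s\n' "$sample" | awk '$1 ~ /^\/dev\/disk[2-9]s[0-9]+$/ { print $1 }')
echo "$external"
# For each such device, a real script would then run (as root):
#   mount -u -r <its mount point>
```

    On a live system, replace the sample with the real `mount` output and hook the script to a volume-mount event; note the pattern only covers single-digit disk numbers and would need widening for disk10 and up.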

  • Manually mounting afp volume read-only

    I have an automation server that will be accessing files on my servers. The account this server uses has access to all the files, and this can't be changed (long story). Is there a way to force an afp mount read-only? Is there a command line for making afp connections that I could "man"?

    I wonder if, with SharePoints...
    http://www.hornware.com/sharepoints/
    ...you could limit it to R/O, at least for certain users/groups?
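    On the command-line question: OS X ships a mount_afp utility (see man mount_afp) that accepts standard mount options, including rdonly. A hypothetical invocation follows; the server, volume, and account names are made up, and the command is only printed here since it needs a live AFP server:

```shell
# Hypothetical AFP read-only mount; all names below are placeholders.
cmd='mount_afp -o rdonly "afp://automation@server.example.com/Projects" /Volumes/projects_ro'
echo "$cmd"
```

    The mount point (/Volumes/projects_ro here) must exist before mounting.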

  • Something in my code is causing an Excel file to be READ-ONLY

    Hello:
    I am pretty certain the problem is caused by the bottom-left part of the picture, but I am not sure why. The software writes a header to a .xls file (called XXX-peak.xls) at start; at every cycle I read everything from the .xls, do some array manipulation, modify two columns down to the last row, and rewrite the whole thing back to the same file. I originally thought perhaps I had left the file unclosed after an access, but I think that would cause an error; besides, all the read/write file VIs have file-close behavior built in. If someone could give me a quick diagnostic, it would be great!
    JC  
    Attachments:
    snapshot1.jpg (3779 KB)

    First of all, you're not reading or writing any files in Excel (*.xls) format. You're reading and writing plain ASCII spreadsheet data. (The fact that Excel will try its best to parse the content when you later open it with Excel does not make it an Excel file.)
    It is not obvious from the picture where the problem is, especially since we don't see the code in all the other cases, and you read your filename via a value property. Where else is the file name used? It might be more efficient to open the file once, then use low-level file I/O to read from it and write back to it at the desired offset without ever closing the file during the run of the program.
    Why don't you attach the actual VI so we can better see what you're doing?
    LabVIEW Champion. Do more with less code and in less time.

  • iPod now mounts as a read-only filesystem

    I had to re-install the base system, as I ran amok mixing testing and non-testing repos.
    After the re-install, the only packages from testing I installed were amarok-base and its dependencies, and libgpod.
    It recognizes my iPod when I plug it in, but mounts it read-only, even though my user has write access to the mount point. The iPod is a new Classic, and I have followed the guide on the Amarok wiki; the iPod is formatted HFS+. It was working great until I re-installed the base system. Everything I had put on the iPod under Linux is still there, so the database shouldn't be borked.
    Any help? I modprobed hfs and hfsplus, and it still mounts read-only.

    The "problem" is libgpod. There is a new version in testing; it will be moved to extra this weekend. With the old version of libgpod, it seems that you can write songs to your iPod, but they don't appear after unplugging it. From the libgpod wiki:
    Starting with the 2007 generation of iPods, libgpod needs an additional configuration step to correctly modify the iPod content. libgpod needs to know the so-called iPod "firewire id"; otherwise the iPod won't recognize what libgpod wrote to it and will behave as if it's empty.
    Wait until the new libgpod is moved this weekend, and follow the steps here: http://gtkpod.wikispaces.com/Sysinfo+File#ClassicNano3g
    Then it should work again.
    Daniel
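    The wiki step being referenced boils down to writing that firewire id into the iPod's SysInfo file. A minimal sketch, assuming the documented 'FirewireGuid:' syntax; the mount point and the id below are placeholders (the real id has to be read from the device, e.g. from the iSerial field in lsusb -v output):

```shell
# Write a (placeholder) firewire id where libgpod looks for it.
# On a real iPod this path lives under the iPod's mount point.
mnt=/tmp/ipod_sysinfo_demo      # stand-in for the actual mount point
mkdir -p "$mnt/iPod_Control/Device"
printf 'FirewireGuid: 0x%s\n' '000A270013AB12CD' > "$mnt/iPod_Control/Device/SysInfo"
cat "$mnt/iPod_Control/Device/SysInfo"
```

    The linked gtkpod wiki page covers how to obtain the real id for the Classic/Nano 3G models.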

  • Floppy always mounts read-only.

    I've had this problem for a while with Arch, but it never bothered me. Now I really want to get my hands dirty with OpenBSD and can't bring myself to waste a CD on a 4 MB ISO, so I have to use the boot floppy. However, when I mount any floppy, it mounts read-only. I've checked the write-protect tab. I know the drive and floppy are both good, as I can mount and write to them from a Gentoo live CD. The fstab line is:
    /dev/fd0 /mnt/fl vfat user,noauto 0 0
    any thoughts?

    Grab a live CD; if that works (as user!), then check the modules loaded against Arch...
    If you can mount the drive as root, then it's just a permissions thing... udev, maybe.
    I can't be much help, as my box does not have a floppy drive.
    Oh, the write-protect thing, I missed that, sorry...

  • Root is mounted as read-only

    Hello,
    I've installed Arch 2009.08, but when I boot into my system a message appears that my / is mounted ro (read-only).
    I have to run mount -n -o remount,rw / to make it writable.
    I used ext4 for the / and /home partitions and ext3 for /boot.
    Is there any bug with the latest 2009.08 images?
    Thanks

    x-tra wrote:
    I am getting the following message during boot:
    "The superblock could not be read or does not describe a correct ext2 filesystem (funny, because I prepared ext4 partitions for / and /home). If the device is valid and it really contains an ext2/ext3 filesystem, then the superblock is corrupt. Please run e2fsck -b <block> /dev/sda3"
    Then my root partition has to be mounted using the command
    mount -n -o remount,rw /
    Any idea?
    I did the installation using the netinstall CD, because with the core one I had kernel panics.
    Here is my fstab
    # /etc/fstab: static file system information
    # <file system>        <dir>         <type>    <options>          <dump> <pass>
    none                   /dev/pts      devpts    defaults            0      0
    none                   /dev/shm      tmpfs     defaults            0      0
    /dev/cdrom             /media/cd   auto    ro,user,noauto,unhide   0      0
    /dev/dvd               /media/dvd  auto    ro,user,noauto,unhide   0      0
    #/dev/fd0               /media/fl   auto    user,noauto             0      0
    /dev/sda1 /boot ext3 defaults 0 1
    /dev/sda2 swap swap defaults 0 0
    /dev/sda3 / ext4 defaults 0 1
    /dev/sda4 /home ext4 defaults 0 1
    /dev/sdb1 /mnt/DATA ext4 defaults 0 1
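    The complaint about an "ext2" superblock on what should be an ext4 partition can be narrowed down by checking what the tools actually detect. A self-contained sketch using a loopback image in place of /dev/sda3 (the real device is substituted on the affected box); assumes e2fsprogs and blkid are installed:

```shell
# Build a small ext4 image and verify that blkid and fsck agree on it.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mkfs.ext4 -qF "$img"                     # -F: allow a regular file as target
fstype=$(blkid -o value -s TYPE "$img")
echo "detected: $fstype"                 # on the real box: blkid /dev/sda3
fsck.ext4 -fn "$img" >/dev/null 2>&1 && echo "fsck: clean"
rm -f "$img"
```

    If blkid reports ext4 on /dev/sda3 but the boot scripts still complain about ext2, the fsck being run at boot may be older than the filesystem it is checking.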

  • Kinit mounts root filesystem as read only [HELP][solved]

    Hello,
    I've been messing around with my mkinitcpio trying to optimize my boot speed. I removed some of the hooks; at first I couldn't boot at all, and now I can boot but the root filesystem mounts as read-only. I tried everything: my fstab looks fine, / exists with defaults, and mounting it by its UUID or by its name gives the same result; it mounts the filesystem read-only every time, no matter what I do.
    There are no logs since I started playing with mkinitcpio. I searched everywhere in this forum and around the internet and can't find any solution that works. I restored all the hooks and modules in mkinitcpio and the result is still the same. I also changed menu.lst in GRUB to vga=773, but that's about it.
    Can anyone help with this, please? I can't seem to boot properly.
    Regards
    Last edited by ricardoduarte (2008-09-14 16:16:25)

    Hello
    Basically what happens is that it loads all the udev events, then the loopback; it mounts the root read-only, and then when it checks filesystems it says:
    /dev/sda4: clean, 205184/481440 files, 1139604/1920356 blocks [FAIL]
    ************ FILESYSTEM CHECK FAILED ****************
    * Please repair manually and reboot. Note that the root *
    * file system is currently mounted read-only. To remount *
    * it read-write: mount -n -o remount,rw / *
    * When you exit the maintenance shell the system will *
    * reboot automatically. *
    Now what bugs me is that I can do that mount -n -o remount,rw / with no problems, and when I do
    e2fsck -f /dev/sda4
    it doesn't return any errors; it just says 0.9% non-contiguous.
    None of this makes sense to me!! That's why I thought the problem could be coming from mkinitcpio or something.
    Any ideas?
    Thanks for your help; btw, thanks for the quick reply.
    Regards
    Last edited by ricardoduarte (2008-09-14 15:48:49)

  • [Solved] Read-Only mounting of USB drives

    Hello all,
    This issue has been raised in many threads, but still I am unable to sort it out!
    My USB drives (pen drive / hard disk) are always mounted in read-only mode. They use the VFAT filesystem, so I don't think ntfs-3g is needed, but I installed it anyway.
    I also tried disabling auto-mounting in dconf-editor and keeping a rule file under /etc/udev/rules.d/ as mentioned in the wiki.
    Still no luck.
    Any ideas ?
    Thanks!
    Last edited by gagan_mishra (2012-03-03 19:11:09)

    This is the output :
    /dev/sda1 on /media/2004CEA904CE80F0 type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
    /dev/sdc1 on /media/ARCH_201108 type udf (ro,nosuid,nodev,relatime,uid=1000,gid=100,umask=77,iocharset=utf8,uhelper=udisks)
    Here /dev/sdc1 is the USB drive. As you can see, it's mounted 'ro' (read-only).
    Thanks!

  • Unable to do expdp on NFS mount point in solaris Oracle db 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd

    Hi Peter,
    Thanks for your reply. Please find the output below. I am able to touch files, and while exporting, the log files are also created, but with the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
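    One detail worth noting: touch succeeding doesn't fully exonerate the mount, because Data Pump's failing call is a file create, and the same "Operation not supported on transport endpoint" error shows up elsewhere in this thread list for exclusive creates (O_CREAT|O_EXCL) over NFS. That connection is an inference, not a confirmed diagnosis. The shell's noclobber mode issues an exclusive-style create, so it makes a closer probe than touch; point it at /backup_db/dumps on the real host (the temp dir below only keeps the sketch runnable):

```shell
# Probe an exclusive-style create, which is stricter than plain 'touch'.
d=$(mktemp -d)               # use /backup_db/dumps instead on the real host
if ( set -C; : > "$d/u2dw.dmp.excl" ); then res=ok; else res=fail; fi
echo "exclusive create: $res"
```

    If this fails on the NFS mount while touch succeeds, the problem is in the server's support for exclusive create/locking rather than in plain permissions.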

  • Can you run Oracle 9.2 RMAN backups to an NFS-mounted disk?

    Running into disk space issues.
    We're using Oracle 9.2; can we run RMAN backups to an NFS-mounted disk?

    To RMAN, there's no difference between a locally mounted disk and an NFS-mounted disk.
    It's just a mount point with some directory structure.

  • Expdp fails to create .dmp files in NFS mount point in solaris 10,Oracle10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    given read,write grants to public as well as specific user

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the output below. I am able to touch files, and while exporting, the log files are also created, but with the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie and does not misreport reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions

  • NFS mount point does not allow file creation via java.io.File

    Folks,
    I have mounted an NFS drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem. They appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall Unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, but only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mount point works OK; i.e., a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a Unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was not to create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS? (At the operating-system level, rather than adding a folder-path relationship via the Java API.)
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the createNewFile() call below reproduces the failure:
        //   java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS share from my iMac. I had set this up with Directory Utility, it was instantly functional, and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly. However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs, and I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc.). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
    HTH
    --macko
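    The server-side fix suggested above would look roughly like this; a sketch, assuming the export line from the post with insecure appended (the edit goes in /etc/exports on the Linux server, followed by a re-export):

```shell
# Write the amended export line to a demo file; on the real server this
# line replaces the existing one in /etc/exports, then run: exportfs -ra
demo=$(mktemp)
cat > "$demo" <<'EOF'
/share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
EOF
opt=$(grep -c 'insecure' "$demo")
echo "insecure option present on $opt line(s)"
rm -f "$demo"
# Client-side alternative from the reply, run as root on the Mac:
#   mount -o resvport -t nfs 192.168.1.100:/share /Volumes/share1
```

    Either change alone should be enough: insecure relaxes the server's reserved-port check, while -o resvport makes the client satisfy it.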
