OVM 2.2 and NFS Repository Problems

Hi All.
I have recently started trying to upgrade our installation to 2.2
but have run into a few problems, mostly relating to the different
way that storage repositories are handled in comparison to 2.1.5.
We use NFS here to provide shared storage to the pools.
I wanted to set up a new two-node server pool (with HA), so I upgraded
one of the servers from 2.1.5 to 2.2 to act as pool master. That
worked ok and this server seems to be working fine in isolation:
master# /opt/ovs-agent-2.3/utils/repos.py -l
[ * ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
master# df /OVS
Filesystem 1K-blocks Used Available Use% Mounted on
augustus:/vol/OVS_pv_vpn
               47185920 16083008 31102912 35% /var/ovs/mount/865A2E52DB2948F198A098F985B3065C
(I then successfully launched a VM on it.)
The problem is when I try to add a second server to the pool. I did
a fresh install of 2.2 and configured the storage repository to be the
same as that used on the first node:
vm1# /opt/ovs-agent-2.3/utils/repos.py --new augustus:/vol/OVS_pv_vpn
vm1# /opt/ovs-agent-2.3/utils/repos.py -r 865a2e52-db29-48f1-98a0-98f985b3065c
vm1# /opt/ovs-agent-2.3/utils/repos.py -l
[ R ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
When I try to add this server into the pool using the management GUI, I get
this error:
OVM-1011 Oracle VM Server 172.22.36.24 operation HA Check Prerequisite failed: failed:<Exception: ha_precheck_storage_mount failed:<Exception: /OVS must be mounted.> .
Running "repos.py -i" yields:
Cluster not available.
Seems like a chicken and egg problem: I can't add the server to the pool without a
mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that
wrong?
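Incidentally, the failing prerequisite seems to boil down to whether /OVS resolves to a live mount point. A rough stand-in for that check (the logic below is my assumption, not the agent's actual code; on 2.2, /OVS is a symlink into /var/ovs/mount, so the symlink has to be resolved first):

```shell
# Hypothetical sketch of an "is /OVS mounted" test: succeed only if the
# given path (symlinks resolved) appears as a mount point in /proc/mounts.
is_mounted() {
    target=$(readlink -f "$1")
    awk -v p="$target" '$2 == p { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /OVS; then
    echo "/OVS is mounted"
else
    echo "/OVS is NOT mounted"
fi
```

On a server that has not yet joined the pool, that check fails, which matches the OVM-1011 error above.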
More generally, I'm a bit confused at how the repositories are
supposed to be managed under 2.2.
For example, the /etc/init.d/ovsrepositories script is still present,
but is it still used? When I run it, it prints a couple of errors and
doesn't seem to mount anything:
vm1# service ovsrepositories start
/etc/ovs/repositories does not exist
Starting OVS Storage Repository Mounter...
/etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
/etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
OVS Storage Repository Mounter Startup: [  OK  ]
Should this service be turned off? It seems that ovs-agent now takes
responsibility for mounting the repositories.
As an aside, my Manager is still running 2.1.5 - is that part of the
problem here? Is it safe to upgrade the manager to 2.2 while I still
have a couple of pools running 2.1.5 servers?
Thanks in advance,
Robert.

rns wrote:
Seems like a chicken and egg problem: I can't add the server to the pool without a mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that wrong?
You have that wrong -- the /OVS mount point is created by ovs-agent while the server is added to the pool. You just need access to the shared storage.
For example, the /etc/init.d/ovsrepositories script is still present, but is it still used?
No, it is not. ovs-agent now handles the storage repositories.
As an aside, my Manager is still running 2.1.5 - is that part of the problem here?
Yes. You absolutely need to upgrade your Manager to 2.2 before attempting to create or manage a 2.2-based pool. The 2.1.5 Manager doesn't know how to tell the ovs-agent how to create or join a pool properly. The upgrade process is detailed in [the ULN FAQ|https://linux.oracle.com/uln_faq.html#10].

Similar Messages

  • OVM 3.0.1 local repository problem

    Good morning all, I am really new to OVM and I am facing a big issue that is stopping me from evaluating this product.
    I have a couple of servers connected to a S.A.N. array. It is visible from both of the servers I added to a clustered pool, and I am able to create a shared repository without problems.
    However, I am not able to see local disks in the OVM Manager administration, and therefore I can't create local repositories. I tried everything I found in this forum, but without success.
    Let's focus on server1: it has a couple of 146GB disks. I used one of them for the OVS installation, leaving the second disk alone, without partitioning it.
    I tried to create a local repository in the clustered pool, but no way...
    So I created a single full-disk partition and retried creating the repo: still no way.
    Then I created an ocfs2 filesystem in the new partition but, again, I couldn't see the physical local server1 disk.
    Every time I changed the partition configuration, I obviously rescanned the physical disks.
    In all my tests, the local physical disks selection list in Generic Local Storage Array @ node1 is always empty.
    Any hint about solving this issue? Any good pointer to a hands-on guide (the official docs are not so good)? Any suggestion about what to look at in the log files for debugging?
    Any answer is welcome...
    Thank you all!

    I was able to do this as follows:
    1. Have an untouched, unformatted disk (no partitions, no filesystem).
    2. In Hardware, under the VM server name, scan for the disk and it should show up in the list.
    3. In the Repositories section of Home, add the repository as a physical disk.
    4. "Present" (green up and down arrows) the physical disk to the VM server itself (don't ask me why you have to do this, but if you don't, it won't find its own disk).

  • [OVM 2.2.1], NFS repository, lost DLM?

    Hi,
    I'm a little concerned that DLM may not be functioning properly in one of our OVM pools.
    dlm-dump.py shows (on all servers in the pool):
    [root@sscdevovmsvr01 ~]# /opt/ovs-agent-2.3/db/db_dump.py dlm
    2383_sscsupfmwap01-pv => {'hostname': '10.200.20.3', 'uuid': '99abb7ee-32a1-488e-bf64-037467d99c0a'}
    2369_ssctrnobiap01-pv => {'hostname': '10.200.20.3', 'uuid': '5a2c6d79-f47a-41e7-8ebe-4c58ed6f53d7'}
    1144_ssctestsiebap04-pv => {'hostname': '10.200.20.3', 'uuid': '449a62e4-20c7-41e8-a2ea-2edee66102fc'}
    10_10_SSCDEVDNS01-pv => {'hostname': '10.200.20.2', 'uuid': '3532cef1-397d-4417-9b82-f5e5dd5d5985'}
    13_SSCDEVDNS02-pv => {'hostname': '10.200.20.4', 'uuid': '32b77967-6e52-4562-8771-c97f35870162'}
    2390_sscdbaebizap01-pv => {'hostname': '10.200.20.3', 'uuid': 'f2cf163e-22a1-4d09-bead-065748b65b30'}
    2315_ssctestebizap01-pv => {'hostname': '10.200.20.3', 'uuid': '3c08c422-fb8c-4773-87f1-a3e3ceddc7a2'}
    2312_ssctestextap01-pv => {'hostname': '10.200.20.3', 'uuid': 'b4446a53-6e66-4dfe-aecb-42de47a0fc36'}
    105_sscdevfmwint01 => {'hostname': '10.200.20.3', 'uuid': 'f7bed67b-a5c7-4e38-94fc-9c94c33c7a63'}
    2337_ssctestfmwap01-pv => {'hostname': '10.200.20.3', 'uuid': '8dd59dc2-5c6e-41e7-a9f9-950c5eff7778'}
    480_ssctestfmwap03 => {'hostname': '10.200.20.3', 'uuid': '0857a797-e335-4a3c-95ca-7abeaa75ffdd'}
    2625_sscgstebizap01 => {'hostname': '10.200.20.3', 'uuid': 'd22acefc-5c87-4ee0-a299-8ecee04aa802'}
    35_sscdevadm01 => {'hostname': '10.200.20.2', 'uuid': '3f79665d-eeba-48b3-ace8-e6a3ab76c146'}
    2554_sscsupebizap02 => {'hostname': '10.200.20.3', 'uuid': 'cfa12422-8dc6-4ff7-8b73-0fef5f2d753b'}
    View from the OVM Manager:
    [root@sscdevovmmgr01 ~]# ovm -u <me> -p <password> vm ls -l
    Name Size(MB) Mem VCPUs Status Server Server_Pool
    ssctestldap 27241 4096 2 Powered Off sscdevpool1
    2517_ssctestebizap04 14241 8192 1 Running 10.200.20.1 sscdevpool1
    sscdevobiap01 23241 8192 2 Running 10.200.20.2 sscdevpool1
    sscmiobiap01 23241 12288 2 Running 10.200.20.5 sscdevpool1
    sscmioradb01 27241 16384 2 Running 10.200.20.5 sscdevpool1
    bisdevoradb01 27241 8192 2 Running 10.200.20.6 sscdevpool1
    sscpociip01 27241 12288 6 Running 10.200.20.6 sscdevpool1
    ssciipdevap01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    ssciipdevdb01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    sscdevodiap01 27241 4096 2 Running 10.200.20.4 sscdevpool1
    sscdevoel6u1x64 16001 8192 2 Running 10.200.20.3 sscdevpool1
    bisdevebizap01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    sscdevfmwap01-pv 23241 16384 2 Running 10.200.20.1 sscdevpool1
    35_sscdevadm01 108594 4096 2 Running 10.200.20.4 sscdevpool1
    2676_sscdevw2k8-gplpv 40961 4096 1 Powered Off sscdevpool1
    13_SSCDEVDNS02-pv 20481 2048 1 Running 10.200.20.1 sscdevpool1
    150_ssctestoradb01 33481 16384 2 Running 10.200.20.4 sscdevpool1
    ssctestfmwap04 23241 8192 4 Running 10.200.20.1 sscdevpool1
    sscsupobiap01 23241 4096 2 Running 10.200.20.2 sscdevpool1
    2557_sscsupebizdb02 14241 8192 8 Running 10.200.20.2 sscdevpool1
    sscgstmidap01 23241 16384 2 Running 10.200.20.4 sscdevpool1
    2654_vmsscdtlucm07 24577 4096 1 Running 10.200.20.1 sscdevpool1
    2554_sscsupebizap02 14241 10240 6 Running 10.200.20.1 sscdevpool1
    bisdevoradb02 27241 8192 2 Running 10.200.20.6 sscdevpool1
    bisdevfmwap01 23241 4096 2 Running 10.200.20.4 sscdevpool1
    sscdevodidb01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    sscdevload01 27241 8192 2 Running 10.200.20.3 sscdevpool1
    sscdevload02 27241 8192 2 Running 10.200.20.3 sscdevpool1
    sscdevoradb01 27241 12288 6 Running 10.200.20.2 sscdevpool1
    ssctestucmap03-pv 23241 4096 1 Running 10.200.20.4 sscdevpool1
    ssctestucmap04-pv 23241 4096 1 Running 10.200.20.2 sscdevpool1
    sscdevucm01 23241 4096 2 Running 10.200.20.5 sscdevpool1
    ssctestoradb03 27241 32768 4 Running 10.200.20.1 sscdevpool1
    ssctestoradb04 27241 32768 4 Running 10.200.20.4 sscdevpool1
    10_SSCDEVDNS01-pv 10241 2048 1 Running 10.200.20.1 sscdevpool1
    105_sscdevfmwint01 76801 4096 2 Running 10.200.20.1 sscdevpool1
    ssctestfmwap03 23241 8192 4 Running 10.200.20.4 sscdevpool1
    sscdevebizdb02-pv 27241 8192 2 Running 10.200.20.1 sscdevpool1
    sscdevebizap02-pv 27241 6144 1 Running 10.200.20.2 sscdevpool1
    ssctestucmfs1 71681 4096 1 Running 10.200.20.4 sscdevpool1
    2694_sscdevw2k8-opv 20481 4096 2 Powered Off sscdevpool1
    ssctestlw01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    ssctestlw02 27241 4096 2 Powered Off sscdevpool1
    sscdevebizap04-pv 27241 8192 1 Running 10.200.20.2 sscdevpool1
    ssctestextap01-pv 27241 4096 2 Running 10.200.20.2 sscdevpool1
    ssctestebizap01-pv 27241 16384 1 Running 10.200.20.2 sscdevpool1
    ssctestebizdb01-pv 27241 12288 2 Running 10.200.20.2 sscdevpool1
    bisdevobiap01 23241 4096 2 Running 10.200.20.1 sscdevpool1
    ssctrnsiebap01-pv 23241 4096 2 Running 10.200.20.5 sscdevpool1
    ssctestfmwap01-pv 23241 8192 8 Running 10.200.20.2 sscdevpool1
    ssctrnoradb01-pv 27241 8192 1 Running 10.200.20.5 sscdevpool1
    sscgrantsd-pv 27241 4096 2 Powered Off sscdevpool1
    ssctrnebizdb01-pv 27241 8192 2 Running 10.200.20.1 sscdevpool1
    ssctrnebizap01-pv 27241 8192 1 Running 10.200.20.4 sscdevpool1
    ssctrnobiap01-pv 23241 4096 1 Running 10.200.20.4 sscdevpool1
    sscgstebizap01 27241 4096 2 Running 10.200.20.2 sscdevpool1
    sscsupebizap01-pv 25601 10240 1 Running 10.200.20.2 sscdevpool1
    sscsupebizdb01-pv 27241 16384 5 Running 10.200.20.5 sscdevpool1
    sscsupfmwap01-pv 23241 4096 2 Running 10.200.20.2 sscdevpool1
    sscdbaebizap01-pv 27241 6144 1 Running 10.200.20.6 sscdevpool1
    sscdbaebizdb01-pv 27241 8192 1 Running 10.200.20.5 sscdevpool1
    sscgstoradb01 27241 16384 4 Running 10.200.20.5 sscdevpool1
    sscsupgrid01 27241 4096 2 Running 10.200.20.1 sscdevpool1
    sscdevextap01-pv 27241 4096 2 Running 10.200.20.1 sscdevpool1
    sscdevgrid01 27241 4096 2 Running 10.200.20.1 sscdevpool1
    2514_ssctestebizap03 14241 8192 1 Running 10.200.20.4 sscdevpool1
    sscdevmail01 27241 4096 2 Running 10.200.20.6 sscdevpool1
    2658_vmsscdtlucm08 24577 4096 1 Running 10.200.20.2 sscdevpool1
    sscoelr5u464pv 27241 4096 2 Powered Off sscdevpool1
    sscdevebizdb04-PV 27241 12288 3 Running 10.200.20.2 sscdevpool1
    sscoelr5u432pv 23241 4096 2 Powered Off sscdevpool1
    ssctestobiap01-pv 23241 24576 1 Running 10.200.20.5 sscdevpool1
    ssctestsiebap03-pv 23241 8192 4 Running 10.200.20.5 sscdevpool1
    ssctestsiebap04-pv 23241 8192 4 Running 10.200.20.4 sscdevpool1
    Starting a powered off VM does not add to the dlm list, but the OVM manager sees it started correctly. I am able to start a VM from the command line on two different machines in the cluster concurrently without error O.o
    /dlm/ovm is either empty or does not exist on different servers in the pool (which has currently been up for 170 days).
    Any ideas gratefully received...
    Many thanks :)

    Hmm - very peculiar!
    Wiped cluster, installed OVM 2.2.2 (was previously on 2.2.1) - problem gone!
    Jeff

  • Internet Connection Sharing and NFS route problem?

    My new box is yet to get its wireless card, so following the wiki I have painlessly set up ICS - this works great.  The ICS host connects to my wireless router on wlan0 and to the ICS client on eth0.  The ICS client attaches on eth0.
    From the ICS client I can ping eth0 and wlan0 on the host and also the router but I can't access the NFS shares on the host - should I be able to access them on both wlan0 and eth0 - even if I bring down wlan0?  I'm guessing I need another iptables rule...

    I'm not quite sure I understand what's happening here....  So here is a "diagram":
    ICS cli --- eth0 ---> ICS host --- wlan0 ---> router ---> internet
    Yeah?  From this, there is no reason for wlan0 to affect NFS from cli to host, assuming everything has been set up properly with the routing tables.
    If you disable all iptables stuff (which is how I'm assuming you have the routing working), does NFS work?  Have you set up your /etc/exports properly?
    Can I see your iptables stuff as well?
    And on a side note: I recently chucked NFS in favour of sshfs, it's much better for me personally.  Considered that?

  • I have an iPad mini 1st gen 64 GB (Wi-Fi only) and I have a problem with some of my apps. These apps lag when you play them for a few seconds. The apps that lag are Call of Duty: Strike Team, GTA San Andreas and NFS Most Wanted. Please help me


    I'm going to guess videos buffer for a while also...
    Two possibilities: one is that you need to close apps on the iPad; the other is that your internet speed is not able to handle the demands of a high-speed device.
    To close apps: double-click the home button, then swipe up to close the apps.
    To solve the internet problem, contact your provider.   Basic internet is not enough for video games and movies.   Your router may be old and slow.

  • File sharing via NFS - permissions problem? SOLVED

    I'd like to share files between my two linux boxes, a desktop (DT) and a laptop (LT).  DT runs Xandros 3, LT runs Arch.  They are connected via a router.
    NFS works all right, up to a point.  Using NFS, I can access all filesystems on DT from LT but the reverse is not true.  Arch on LT resides in two partitions, / and /home.  From DT I can access all the directories in the root filesystem / of LT as well as their subdirectories, with two exceptions.  I cannot access any subdirectories in /home, including my home dir /home/robert/ which doesn't even show up, and in /mnt I cannot access the filesystems of other Linux distros that are mounted in Arch at these mountpoints (e.g. WinXP at /mnt/sda2, Xandros 4 at /mnt/sda5, Slackware 11 at /mnt/sda7) even though they can be accessed perfectly well from within Arch on LT.
    I've also exported the LT /home filesystem separately by adding the line '/home  DT_hostname(rw)' in /etc/exports on LT, and running # mount LT_hostname:/home /mnt/LT_hostname_home on DT.  When I do that /home/robert shows up in the file manager on DT but when I want to open this directory I get the error "Access denied".  The permissions for this LT directory, as seen when mounted on DT, are 'drwx--x--x 1000 users'.  When I try to make this directory fully accessible by running 'chmod a+rw /mnt/LT_hostname_home/robert' as root I get the error
    'chmod: changing permissions of `/mnt/LT_hostname_home/robert': Operation not permitted'.
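    Note that the drwx--x--x (711) mode you're seeing is enough to deny the listing all by itself, NFS aside: if robert's uid differs between the two machines, DT's robert counts as "other" on LT's files, and on a directory the read bit is what permits listing its contents, while x alone only allows traversal; only the owner (or root) may chmod it. A quick local demonstration of those mode bits, no NFS needed:

```shell
# Reproduce the mode locally: a 711 directory can be traversed (x) by
# others but not listed (r), and only its owner or root may chmod it.
d=$(mktemp -d)
chmod 711 "$d"
stat -c '%a' "$d"    # -> 711
rmdir "$d"
```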
    In short, while Xandros on DT is quite permissive in allowing me to access all of its filesystems in their entirety from within Arch on LT, Arch on LT is more finicky as it denies access to Xandros on DT to some critical subdirectories.
    I've also tried 'fish' in Konqueror, with similar results.  Running 'fish://DT_hostname' in Arch on LT gives me full access to filesystems on DT but when I'm running 'fish://LT_hostname' on DT, I get the error 'Could not connect to host LT_hostname', i.e. Arch rejects the connection attempt.
    To sum up, when I'm using NFS the permissions don't seem to be fully correct on Arch on LT, and I don't seem to be able to change them, and when I'm using 'fish' something is also fishy on the Arch side.
    On a side note, both systems run firewalls (DT: Firestarter, LT: Arno's FW) which I had to stop - without doing that nothing connects.  Also, both systems obviously run all necessary nfs and ssh daemons.
    How can I fix this problem?  Would shfs work any better?  Also, I'd prefer to keep my firewalls up all the time.
    Thanks for your help.
    Robert

    Thanks, FUBAR and tomk, for your tips.  I eventually managed to get my two boxes (DT with Xandros and LT with Arch) connected in such a way that DT can access all filesystems on LT and vice versa.  I experimented with three different ways of doing this, NFS, FISH and SHFS.
    Using NFS entailed the most involved configuration of the three.  FISH was the simplest to set up but SHFS wasn't that much more complicated.  My preference would be for SHFS.  See:  http://shfs.sourceforge.net/
    NFS
    Using NFS in Arch only requires installing portmap and nfs-utils; most of the NFS functionality has already been compiled into the kernel.  As FUBAR suspected, the uid's for user robert were different on the two machines: uid=1000 in Arch and uid=1001 in Xandros.  In NFS, I got around that by putting 'no_root_squash' in the export directives in /etc/exports, i.e.
    / hostname_DT(rw,no_root_squash,subtree_check)
    /home hostname_DT(rw,no_root_squash,subtree_check)
    /mnt/sda5 hostname_DT(rw,no_root_squash,subtree_check)
    /mnt/sda7 hostname_DT(rw,no_root_squash,subtree_check)
    Using NFS, one also has to add lines in /etc/hosts.allow for each of the daemons and programs used by NFS, specifying which hosts are allowed to use these services, e.g. in my case for portmap
    portmap: 192.168.0.5, 192.168.0.7 # you have to use IP addresses!
    and the same for nfsd, nfslock, lockd, rquotad, mountd, statd, mount, umount.  In Xandros, two of these have different names: rpc.nfsd and rpc.mountd.
    Also, to use NFS in Arch one has to add the services portmap, nfslock, nfsd to the DAEMONS line in /etc/rc.conf, e.g. right after network.  Finally, I have to stop the firewalls on both machines when I want to use NFS.  After doing all of that, I can use Konqueror as user robert to access all filesystems on the respective server (DT or LT) from the other machine as a client except for /home/robert and /mnt/sda7/home/robert (that's a Slackware install) on LT; for these I have to use Konqueror as root on DT.
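    Put concretely, the /etc/hosts.allow on the server ends up looking something like this (using the example client IPs from above; the exact daemon list varies by distro and setup, so treat this as illustrative):

```
# /etc/hosts.allow on the NFS server -- illustrative entries
portmap: 192.168.0.5, 192.168.0.7
nfsd:    192.168.0.5, 192.168.0.7
mountd:  192.168.0.5, 192.168.0.7
statd:   192.168.0.5, 192.168.0.7
lockd:   192.168.0.5, 192.168.0.7
nfslock: 192.168.0.5, 192.168.0.7
rquotad: 192.168.0.5, 192.168.0.7
```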
    FISH
    Using FISH is very simple.  Remote filesystems don't have to be mounted, and the only thing that's required is that the sshd service is running on the file server.  I.e. in Arch one has to install openssh and put the service sshd in the DAEMONS line in /etc/rc.conf.  Firewalls must be stopped to set up the connection but once the connection is established it looks as though one can restart the firewalls.
    One should also add a line in /etc/hosts.allow for the hosts that are allowed to use sshd, i.e.
    sshd: 192.168.0.5, 192.168.0.7 (or sshd: ALL )
    and comment out the line ALL: ALL: DENY in Arch's /etc/hosts.deny.
    Once this is done, all that's needed to access the root filesystem of the server is to enter 'fish://root@hostname/' in the URL field of Konqueror as an ordinary user, followed by the root password.
    The drawback of FISH is that one is frequently asked for the password but I suppose one can avoid that by using SSH keys.
    SHFS
    SHFS needs to be installed and configured on the client side, not on the server side.  The server only needs to have a working sshd running.  If you run Arch as a client, install shfs in it (pacman -S shfs) and make sure sshd is running on the server and firewalls are stopped.
    Next, create a mount point for the remote filesystem, e.g.
    # mkdir -p /mnt/shfs
    Set the suid bit on /usr/bin/shfsmount and /usr/bin/shfsumount if you wish to enable all users to mount (umount) remote dirs using shfs.  You can do this in Konqueror or by running
    # chmod u+s /usr/bin/shfsmount
    # chmod u+s /usr/bin/shfsumount
    so that the permissions are: -rwsr-xr-x root root.
    Then mount the remote shell filesystem:
    # shfsmount root@remote_hostname:/ /mnt/shfs -o uid=robert
    [or you can use # mount -t shfs root@remote_hostname:/ /mnt/shfs -o uid=robert]
    Using the option -o uid=robert got me around the mismatch of uid's for robert on the two systems.
    At the 'root@remote_hostname's password:' prompt enter root's password.  You're ready then to access the remote filesystem as user robert at /mnt/shfs, even after the remote firewall is restarted.
    As with FISH, so with SHFS, it seems to be necessary that a line is added in /etc/hosts.allow for the hosts that are allowed to use sshd, i.e.
    sshd: 192.168.0.5, 192.168.0.7 (or sshd: ALL )
    and that the line ALL: ALL: DENY in Arch's /etc/hosts.deny is commented out or removed.
    I'm still a newbie with file sharing on Arch (and non-Arch Linux).  Forgive me if the above comes across as somewhat amateurish.
    Robert

  • NFS client problem "The document X could not be saved"

    Hi,
    Briefly: Debian Linux server (Lenny), OS X 10.5.7 client. NFS server config is simple enough:
    /global 192.168.72.0/255.255.255.0(rw,root_squash,sync,insecure,no_subtree_check)
    This works well with our Linux clients, and generally it is OK with my OS X iMac. The OS X NFS client is configured through Directory Utility, with no "Advanced" options. The client can authenticate with NIS nicely, and NFS, on the whole, works. I can manipulate files with Finder, and create files on the command line with the usual tools.
    The problem is TextEdit, iWork and other Cocoa apps (not all). They can save a file once, but subsequently saving a file produces a "The document X.txt cannot be saved" error dialog. If I remove the file on the commandline and re-save, then the save succeeds. It is as if re-saving the document with the same name as an existing file causes issues. There seems to be no problem with file permissions. When I save in a non NFS exported directory everything is fine.
    Has anyone spotted this problem before?
    Lawrence

    I doubt that "OS X NFS is fundamentally broken" seeing as how many people use it successfully.
    tcpdump (or more preferably: wireshark) might be useful in tracking down what's happening between the NFS client and NFS server. Sometimes utilities like fs_usage can be useful in tracking down the application/filesystem interaction.
    It's usually a good idea to check the logs (e.g. /var/log/system.log) for possible clues in case an error/warning is getting logged around the same time as the failure. And if you can't reproduce the problem from the command line, then that can be a good indication that the issue is with the higher layers of the system.
    Oh, and if you think there's a bug in the OS, it never hurts to officially tell Apple that via a bug report:
    http://developer.apple.com/bugreporter/
    Even if it isn't a bug, they should still be able to work with you to help you figure out what's going on. They'll likely want to know details about what exactly isn't working and will probably ask for things like a tcpdump capture file and/or an fs_usage trace.
    HTH
    --macko

  • Report node and reports repository

    What is the difference between a report node and the reports repository? I know that the Process Scheduler picks the report from the report repository in order to show it in the Process Monitor. Are the report node and the reports repository the same?

    After a report runs to success in run status
    Yeah, but the files are sent to the report repository even if it did not terminate successfully (run status is not Success). If the files are not sent to the report repository, then the problem doesn't come from the process itself, but from the definition, space availability...
    When a report has been posted successfully (transferred from the process scheduler report repository - in other words, the log_output dir - to the report repository defined in the report node), the distribution status is Posted.
    The run status and distribution status are independent of each other.
    Nicolas.

  • 7110 OMG, CIF and NFS permission woes. I'm tired and I want to go home.

    OK, here's the dealio...
    I have a share exported via CIFS and NFS from our 7110 array running 2010.02.09.2.1,1-1.18
    I have AD configured for CIFS Authentication.
    I have a UNIX desktop, so I am using SMB to authenticate via AD and talk to the CIFS share on the array.
    I have the NFS share mounted using vers 3 on Solaris 10.
    Now, the problem..........
    PERMISSIONS!!!
    Here’s what I want to do,
    Create a file or folder on the CIFS share and preserve the username on NFS.
    Example, I login as myself via AD, bam I’m on the array.
    Create a file.
    Check the ownership of the file on the NFS mount and it has suddenly become a series of numbers, which I assume are taken from my Windows SID. As Solaris can’t relate my SID to a UNIX username, I’m left out in the dark.
    So I then tried to set up some rule-based identity mapping so my Windows login would be converted to my UNIX username; no luck, still a series of numbers listed against my files.
    I could work around this if I could chown, but I can’t even do that, as it says chown: filename: Not owner
    What gives? How do I keep my username from CIFS to NFS? HELP!!!!

    Did you have any joy with this?
    I have never been able to determine a consistent configuration for NFS/CIFS sharing on a 7310. I ended up opening access to all on the NFS side (v4), and the CIFS side just worked out of the box.
    I am using ID Mapping, with IDMU first, then rule-based mapping next. The box picks up the correct UID/GID from AD but doesn't always inherit the user & group on the NFS side.
    Chris

  • Deadlocking issue with sshfs and nfs

    Okay, I've used both sshfs and nfs for remotely accessing the home partition on my fileserver, but I have been having a problem where the networking on the server suddenly cuts out.  Any processes that are accessing the folder I mounted with nfs/sshfs become deadlocked.  Any processes that try to access my home directory, where the remote folder sits, are also deadlocked.  I cannot get into the machine with ssh.  I have to manually reboot it in order to get any networking at all.
    I also have to force-kill any known processes that are accessing the remote folder, and if I don't know what they are, I have to forcibly unmount it.  This issue has been occurring with this specific fileserver since I got it.  It is running Arch Linux i686, but it has had the same problem with the server editions of both Fedora and Ubuntu.
    I don't know where to begin with fixing this problem, nor do I know how to diagnose it.

    Consider the "soft" mount option for NFS.
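    For instance, as an /etc/fstab entry (hostname and paths below are placeholders): with soft, I/O against an unreachable server fails with an error once the retries are exhausted instead of hanging forever. The trade-off is that a write that times out is lost, so soft suits read-mostly mounts.

```
# timeo is in tenths of a second: ~10s per attempt, 3 retries, then EIO
fileserver:/home  /mnt/home  nfs  soft,timeo=100,retrans=3  0  0
```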

  • [SOLVED] Netbooting with PXE, TFTP and NFS / Numerous errors

    Greetings all, hope you can help me out.
    Been given a task by my company of making a network-bootable ICA client (with X and Firefox, with the Citrix ICA client installed), as small as possible to minimize network traffic (as 440 workstations would be downloading the end product simultaneously, so it'd beat ten bells of proverbial out of the core and edge switches for a little while). I discovered two options. One is to integrate everything inside a cloop image directly inside the INITRD. I have stacks of working INITRDs with their matched kernels, yet this being my first dabble into extracting an INITRD, my faffing with CPIO has resulted in me nuking my base layout (thank god for snapshotting in VMware Workstation!) 4 times, and either getting "Premature end of file" or a copious number of lines stating "cpio: Malformed Number: <strange characters>", finally ending with "Premature end of file". As a result I went in search of another option, which would be booting off an NFS share. I followed the guide:
    http://wiki.archlinux.org/index.php/Dis … t_NFS_root
    ...in order to set up a network-booted install of Arch, and hit a few snags along the way, probably as a result of using different operating systems for the TFTP and NFS servers than the guide recommends. I'm not sure, though; these snags seem solvable, I just don't know how right now.
    The set up:
    DHCP is provided by a Microsoft Windows Server 2003 VM (AD Integrated) on 172.16.10.17 on a box called "Rex".
    TFTP is provided by another Windows Server 2003 VM by "TFTPd32" which is a free download. This is located on 172.16.10.158 on a box called "Terra".
    The NFS store is provided by OpenFiler 2.3, which is a specialized version of rPath Linux designed specifically for turning boxes into dedicated NAS stores. This is located on 172.16.10.6, and is called "frcnet-nas-1".
    The problem:
    DHCP is correctly configured with a Boot Host Name (Which is 172.16.10.158) and a boot file name of "pxelinux.0". This is confirmed as working.
    The client gets the kernel and INITRD from TFTP and boots up fine until it hits "Waiting for devices to settle...", at which point it echoes "Root device /dev/nfs doesn't exist, attempting to create it...", which it seems to do fine. It then passes control over to kinit, echoes "INIT: version 2.86 booting" and the archlinux header, and immediately after that it prints:
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    /bin/mknod: '/dev/null': File exists
    /bin/mknod: '/dev/zero': File exists
    /bin/mknod: '/dev/console': File exists
    /bin/mkdir: cannot create directory '/dev/pts': File exists
    /bin/mkdir: cannot create directory '/dev/shm': File exists
    /bin/grep: /proc/cmdline: No such file or directory
    /etc/rc.sysinit: line 72: /proc/sys/kernel/hotplug: No such file or directory
    :: Using static /dev filesystem [DONE]
    :: Mounting Root Read-only [FAIL]
    :: Checking Filesystems [BUSY]
    /bin/grep: /proc/cmdline: No such file or directory
    :: Mounting Local Filesystems
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    [DONE]
    :: Activating Swap [DONE]
    :: Configuring System Clock [DONE]
    :: Removing Leftover Files [DONE]
    :: Setting Hostname: myhost [DONE]
    :: Updating Module Dependencies [DONE]
    :: Setting Locale: en_US.utf8 [DONE]
    :: Setting Consoles to UTF-8 mode[BUSY]
    /etc/rc.sysinit: line 362: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 362: /dev/vc/1: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/1: No such file or directory
    ... all the way down to vc/63 ...
    :: Loading Keyboard Map: us [DONE]
    INIT: Entering runlevel: 3
    :: Starting Syslog-NG [DONE]
    Error opening file for reading; filename='/proc/kmsg', error='No such file or directory (2)'
    Error initializing source driver; source='src'
    :: Starting Network...
    Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    eth0: dhcpcd 4.0.3 starting
    eth0: broadcasting inform for 172.16.10.154
    eth0: received approval for 172.16.10.154
    eth0: write_lease: Permission denied
    :: Mounting Network Filesystems
    mount: only root can do that
    [FAIL]
    :: Starting Cron Daemon [DONE]
    ...and nothing after that; it just stops. The kernel doesn't panic, and hitting Ctrl+Alt+Delete does what you'd expect: a clean shutdown, minus a few errors about filesystems not being mounted. It seems /proc isn't getting mounted because init apparently doesn't have the appropriate permissions, and /proc not being mounted causes a whole string of other issues. Thing is, /proc gets created at boot time since it contains kernel-specific information about the system and the kernel's capabilities, right? Why can't it create it? How come init doesn't have the same privileges as root as it usually would, and how would I go about fixing it?
    I admit, while I'm fairly competent in Linux, this one has me stumped. Anyone have any ideas?
    Last edited by PinkFloydYoshi (2008-11-22 12:29:01)

    The idea behind the Windows DHCP and TFTP is that we'd be using an existing server and a NetApp box with an NFS license to serve everything. I would have loved to build a new, completely Linux server, but neither my boss nor the other technician has ever used Linux, so if I left for any reason they'd be stuck if they ever ran into trouble. That's why I've struggled to get Linux to penetrate our all-Windows infrastructure.
    During my hunting around on Google I found a lot of information on making my own initrd, much of it using all manner of switches. I can make them fine, but I figure I would need to extract the current working one first, add X, Firefox and the ICA client to it, then compress it again. Cloop came up when I was looking at DSL's internals. The smaller the initrd, the better, so utilizing it could be a plus too.
    The reason I'm doing this with Arch Linux is that I know Arch's internals quite well (and pacman is just wondrous, which is more than I can say for yum), so if I run into a small problem I'm more likely to fix it without consulting Google. Fair enough, though: the NFS booting method is giving me issues I never thought possible. Ah, sod's law strikes again.
    Addendum: I've noticed something odd. Files in the NFS share are somehow owned by 96:scanner instead of root:root, and attempting to change them gives "Operation not permitted". Further prodding has led me to believe it's an Openfiler thing: GID/UID 96 on the Openfiler box is "ofgroup"/"ofguest". Chowning / to root:root puts the NFS boot right and gives me a prompt, but I cannot log in as root. I've also discovered that chrooting into the base from my Arch workstation and creating a directory makes the directory owned by ofgroup:ofguest again, so it is an Openfiler thing after all this time. Prodding further.
    Addendum two: For anyone out there using Openfiler: when you allow guest access to the NFS share, be sure to set the Anonymous GID and Anonymous UID to 0. By default they are 96, and as a result you get the boot errors I experienced. This is insecure, so you should add some sort of network/host/IP-range restriction. Because everything in the root filesystem ends up owned by 96:96 after you install the base layout using pacman (and after any changes you make afterward), init and root no longer have the appropriate permissions; user 96:96 (which is "scanner" in Arch Linux) has them instead, so init would effectively need to be "scanner" to complete the boot.
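    For reference, the equivalent settings in a hand-edited Linux /etc/exports look roughly like the line below. The export path and network are placeholders, and Openfiler manages its own exports from the GUI, so treat this only as an illustration of the anonuid/anongid options:

    ```
    # /etc/exports -- map squashed (anonymous) requests to uid/gid 0 instead of 96.
    # Restrict by network, since anonuid=0 effectively grants root access to the share.
    /exports/archroot 172.16.10.0/24(rw,sync,anonuid=0,anongid=0)
    ```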
    The solution is to set the Anonymous GID and UID to 0, chown the entire diskless root filesystem to root, then use a Linux desktop to mount the diskless root filesystem, mount /proc and /sys, bind-mount /dev, and chroot into it. At that point, to clear up any problems with bad passwords, use passwd to change your password. Exit the chroot environment, then unmount the diskless proc, sys and dev. Boot up via the network and use your chosen password to log in as root. Then start clearing up permissions from the en-masse filesystem chown and you should have a usable diskless root.
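    The repair steps above can be sketched as a small shell function. This is a minimal sketch, assuming the diskless root is NFS-mounted on a helper Linux box and you are running as root; the mount point in the example invocation is hypothetical:

    ```shell
    # Repair a diskless root whose files were created under Openfiler's
    # default anonymous uid/gid of 96. Run as root on a helper Linux box.
    repair_diskless_root() {
        root="$1"                             # e.g. /mnt/diskless-root (placeholder)
        chown -R root:root "$root"            # undo the en-masse 96:96 ownership
        mount -t proc  proc "$root/proc"      # virtual filesystems the chroot needs
        mount -t sysfs sys  "$root/sys"
        mount --bind   /dev "$root/dev"
        chroot "$root" passwd root            # reset the root password inside
        umount "$root/dev" "$root/sys" "$root/proc"
    }

    # Example (requires root and the NFS share mounted):
    # repair_diskless_root /mnt/diskless-root
    ```

    After booting over the network you can then tighten permissions selectively (e.g. on /tmp, /var/spool) rather than leaving the blanket root:root ownership everywhere.
    
    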
    I'll experiment further, clear up some of the remaining permission errors that occurred during boot, and report on my progress. I didn't like the idea of chowning the entire share as root. :S
    Last edited by PinkFloydYoshi (2008-11-21 19:28:15)

  • Slow ZFS-share performance (both CIFS and NFS)

    Hello,
    After upgrading my OpenSolaris file server to Solaris 11 Express (the newest version), read and write performance on my CIFS and NFS shares dropped from 40-60 MB/s to a few kB/s. I upgraded the ZFS filesystems to the most recent version as well.
    dmesg and /var/log/syslog don't list anything abnormal as far as I can see. I'm not running any scrubs on the zpools, and they are listed as online. top doesn't reveal any process utilizing the CPU at more than 0.07%.
    The problem is probably not at the client side, as the clients are 100% untouched when it comes to configuration.
    Where should I start looking for errors (logs etc.)? Any recommended diagnostic tools?
    Best regards,
    KL

    Hi!
    Check the link speed:
    dladm show-dev
    Check for collisions and malformed network packets:
    netstat -ia
    netstat -ia 2 10 (while a file is transferring)
    Check for lost packets:
    ping -s <client IP> (wait more than a minute)
    Check for retransmits and response latency:
    snoop -P -td <client IP> (while a file is transferring)
    Also try replacing the network cable.
    Regards.

  • IDM 6.0SP1 & Oracle as a repository problems

    We use IDM 6.0SP1 + WebSphere 6 + Oracle 9 (+ Oracle JDBC driver v10.x) as a repository, with about 1,600,000 IDM accounts.
    Each IDM account has at most 2 resource accounts.
    We're facing two problems:
    1) IDM reads changes (user creations and modifications) from 10 different DB2 tables (via 10 DB2 ActiveSync adapters) and provisions a single LDAP directory based on those changes. We have about 10 changes per second to consume. The Oracle repository is about 32 GB.
    We sometimes restart IDM, but the adapters that are supposed to start automatically don't seem to start, or they start so slowly they look frozen.
    We suspect bad Oracle response times, and also that each adapter triggers a full Oracle database scan when starting, which may take a while, even though we run a statistics-gathering script on Oracle every night.
    We've already applied all the suggested/documented repository optimizations, so we wonder what else we could possibly do to improve IDM's interaction with its repository. For example, is there anything we can tune in the IDM RepositoryConfiguration XML object?
    2) In case of a "long" (> 5 minutes) repository or network outage (IDM and its repository reside on different servers), we noticed the IDM adapters don't restart well automatically even when configured to, or they look frozen. We have to manually "try" different things to get the adapters to start.
    Most commercial software relying on a network or databases deals with such outages automatically so that it recovers on its own, or at least only loses unsaved work but can restart anyway.
    Is there such a feature in IDM (6.0SP1 or later)? If not, what are the recommended actions to take so the adapters start and process the remaining DB2 changes?

    IDM does some cleanup in its database when it starts. Perhaps that is what is slowing you down.
    How many rows do you have in your "TASK" table?
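    If you want to check, a quick way is a one-line query against the repository schema. This is only a sketch: the function name and connect string are placeholders, and it assumes sqlplus is available on the repository host:

    ```shell
    # Hypothetical helper: count rows in the IDM repository's TASK table.
    # Replace the connect string with your repository schema credentials.
    count_task_rows() {
        sqlplus -S "$1" <<'SQL'
    SELECT COUNT(*) FROM task;
    SQL
    }

    # Usage (placeholder credentials):
    # count_task_rows waveset/secret@IDMREPO
    ```

    A TASK table with millions of stale rows can make the startup cleanup pass very slow, which would match the frozen-looking adapters described above.
    
    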
    Edited by: PaulHilchey on Mar 6, 2008 6:15 PM

  • LDAP and NFS mounts/setup OSX Lion iMac with Mac Mini Lion Server

    Hello all,
    I have a local account on my iMac (Lion), and I also have a Mac Mini (Lion Server) and I want to use LDAP and NFS to mount the /Users directory, but am having trouble.
    We have a combination of Linux (Ubuntu), Windows 7 and Mac machines on this network, all using LDAP and NFS except the Windows computers.
    We have created users in Workgroup Manager on the server, and we have it working on a few Macs already, but I wasn't there to see that process.
    Is there a way to keep my local account separate, and still have NFS access to /Users on the server and LDAP for authentication?
    Thanks,
    -Matt

    It would make a great server. Bonus over Apple TV for example is that you have access via both wired ethernet and wireless. Plus if you load tools from XBMC, Firecore and others you have a significant media server. Cost is right too.
    Many people are doing this - google mac mini media server or other for more info.
    The total downside to any Windows-based system: dealing with constant anti-virus, major security hassles, lack of true media integration, and it's a PITA to update, etc.
    You should be aware that Lion Server is not ready for prime time; it still has significant issues if you are migrating from Snow Leopard 10.6.8. If you buy a fresh Lion Server Mac Mini from Apple you should have no problems.
    You'll probably be pleased.

  • How do I Help Apple Care Stop Warring with Each Other and Fix the Problem with My iPhone that They Acknowledge Creating?

    How Do I Help Apple US & Apple Europe Stop Warring With Each Other And Fix The Problem They Created?
    PROBLEM
    Apple will not replace, as promised, the iPhone 5 (A1429 GSM model) that they gave me in London, UK, with an iPhone 5 (A1429 CDMA model).
    BACKGROUND
    My iPhone 5 (A1429 CDMA model) was purchased this year in September on an existing Verizon Wireless (VZW) line using an upgrade. The purchase took place in California and the product was picked up using Apple Personal Pickup through the Cerritos Apple Retail Store. I will refer to this phone at my "original" phone.
    The original phone was taken into the Apple Store Regent Street in London, England, UK on November 15, 2012. The reason for this visit was that my original phone's camera would not focus.
    The Apple Store Regent Street verified there was a hardware problem but was unable to replace the part.
    The Apple Store Regent Street had me call US AppleCare. At first they denied support, but then a supervisor (name can be provided upon request) approved the replacement of my original phone with an iPhone 5 (A1429 GSM model) as a temporary solution until I got back to the US, and approved that the GSM model would be replaced with a CDMA model when I returned. I will refer to the GSM model as the "replacement". They gave me the case number --------.
    The Apple Store Regent Street gave me the replacement and took the original. The first replacement did not work for reasons I do not understand. They switched out the replacement several times until they got one that worked on the T-Mobile nano SIM card that I had purchased in England, UK. Please refer to the repair IDs below to track the progression of phones given to me at the Apple Store Regent Street:
    Repair ID ----------- (Nov 15)
    Repair ID ----------- (Nov 16)
    Repair ID ----------- (Nov 16)
    The following case number was either created in the UK or France between November 15 to November 24. Case number -----------
    On November 19, 2012, I went to France and purchased an Orange nano SIM card. The phone would not activate like the first two repair IDs above.
    On November 24, 2012, I went to the Apple Store Les Quatre Temps. The Genius told me that my CDMA phone should not have been replaced with a GSM model in the UK and that this was clearly Apple's fault. They had me call the AppleCare UK.
    My issue was escalated to a tier 2 UK AppleCare agent. His contact information can be provided upon request. He gave me the case number -----------.
    The UK tier 2 agent became upset when he heard that I was calling from France and that the France Apple Store or France AppleCare were not helping me. He told me that my CDMA phone should not have been replaced with a GSM model in the UK and that this was clearly Apple's fault.
    The UK tier 2 agent said he was working with engineers to resolve my problem and would call me back the next day on November 25, 2012.
    While at the Apple Store Les Quatre Temps, a Genius switched the phone given to me under repair ID ----------- with a new one that worked with the French nano SIM card.
    Also, while at the Apple Store Les Quatre Temps, I initiated a call with AppleCare US to get assistance, because it seemed that AppleCare UK was more upset that France was not addressing the issue than interested in helping me. I have email correspondence with the AppleCare US representative.
    A Genius at the Apple Store Les Quatre Temps switched the replacement with a new GSM model that worked on the French SIM card but would not work if restored, received a software update, or had the SIM card changed. This is the same temporary solution I received from the Apple Store Regent Street in the UK.
    By this point, I had spent between 12-14 hours in Apple Store or on the phone with an AppleCare representative.
    Upon arriving in the US, I went to my local Apple Store Brea Mall to have the replacement switched with a CDMA model. They could not support me. He told me that my CDMA phone should not have been replaced with a GSM model in the UK and that this was clearly Apple's fault. My instructions were to call AppleCare US again.
    My call with AppleCare US was escalated to a Senior Advisor (name can be provided upon request), and they gave me the case number -----------. After being on the phone with him for over an hour, his instructions were to call the Apple Store Regent Street and tell them to review my latest notes. They were to process a refund for a full-retail-priced iPhone 5 64GB black onto my credit card so that I could use that money to buy a new iPhone 5 64GB black at the Apple Store Brea Mall to resolve the problem.
    The Apple Store Regent Street did not process my refund. He (name can be provided upon request) told me that AppleCare US had not done a good job reviewing my case, that they were incapable of getting to the bottom of it like the store was, and instructed me to call AppleCare US and tell them to review this case number and this repair ID. I asked if he had read the notes from the AppleCare US Senior Advisor, and he would neither confirm nor deny. When I offered to give him the case number he accepted it, but it seemed like it would do no good. Our call was disconnected. When I tried calling back, the store's automated system was turned on and I could not get back through.
    Now I have the full retail price of an iPhone 5 64GB black CDMA on my credit card and Apple will not process the refund as they said they would.
    I've, at this point, spent between 14-16 hours at Apple Stores or on the phone with AppleCare representatives, and still do not have the problem resolved.
    SOLUTION
    AppleCare US and AppleCare Europe need to resolve their internal family issues without further impacting their customers.
    Apple is to process a refund to my credit card for the cost of a full retail priced iPhone 5 64GB black.
    DESIRED OUTCOMES
    I have an iPhone 5 (A1429 CDMA model) that works in the US on VZW as it did before I received the replacement phone in the UK.
    Apple covers the cost of the solution because I did not create the problem.
    Apple resolves their internal issue without costing me more time, energy, or money.
    This becomes a case study for AppleCare so that future customers are not impacted like I have been by their support system.
    Does anyone have recommendations for me?
    Thank you!
    <Edited by Host>

    Thanks, but I've been on the phone with AppleCare US (where I am and live) and AppleCare UK. They continue bouncing me back and forth without helping resolve the problem.
    Perhaps someone knows how to further escalate the issue at Apple?
