[OVM 2.2.1], NFS repository, lost DLM?
Hi,
I'm a little concerned that DLM may not be functioning properly in one of our OVM pools.
A dump of the DLM table via db_dump.py shows, on all servers in the pool:
[root@sscdevovmsvr01 ~]# /opt/ovs-agent-2.3/db/db_dump.py dlm
2383_sscsupfmwap01-pv => {'hostname': '10.200.20.3', 'uuid': '99abb7ee-32a1-488e-bf64-037467d99c0a'}
2369_ssctrnobiap01-pv => {'hostname': '10.200.20.3', 'uuid': '5a2c6d79-f47a-41e7-8ebe-4c58ed6f53d7'}
1144_ssctestsiebap04-pv => {'hostname': '10.200.20.3', 'uuid': '449a62e4-20c7-41e8-a2ea-2edee66102fc'}
10_10_SSCDEVDNS01-pv => {'hostname': '10.200.20.2', 'uuid': '3532cef1-397d-4417-9b82-f5e5dd5d5985'}
13_SSCDEVDNS02-pv => {'hostname': '10.200.20.4', 'uuid': '32b77967-6e52-4562-8771-c97f35870162'}
2390_sscdbaebizap01-pv => {'hostname': '10.200.20.3', 'uuid': 'f2cf163e-22a1-4d09-bead-065748b65b30'}
2315_ssctestebizap01-pv => {'hostname': '10.200.20.3', 'uuid': '3c08c422-fb8c-4773-87f1-a3e3ceddc7a2'}
2312_ssctestextap01-pv => {'hostname': '10.200.20.3', 'uuid': 'b4446a53-6e66-4dfe-aecb-42de47a0fc36'}
105_sscdevfmwint01 => {'hostname': '10.200.20.3', 'uuid': 'f7bed67b-a5c7-4e38-94fc-9c94c33c7a63'}
2337_ssctestfmwap01-pv => {'hostname': '10.200.20.3', 'uuid': '8dd59dc2-5c6e-41e7-a9f9-950c5eff7778'}
480_ssctestfmwap03 => {'hostname': '10.200.20.3', 'uuid': '0857a797-e335-4a3c-95ca-7abeaa75ffdd'}
2625_sscgstebizap01 => {'hostname': '10.200.20.3', 'uuid': 'd22acefc-5c87-4ee0-a299-8ecee04aa802'}
35_sscdevadm01 => {'hostname': '10.200.20.2', 'uuid': '3f79665d-eeba-48b3-ace8-e6a3ab76c146'}
2554_sscsupebizap02 => {'hostname': '10.200.20.3', 'uuid': 'cfa12422-8dc6-4ff7-8b73-0fef5f2d753b'}
View from the OVM Manager:
[root@sscdevovmmgr01 ~]# ovm -u <me> -p <password> vm ls -l
Name Size(MB) Mem VCPUs Status Server Server_Pool
ssctestldap 27241 4096 2 Powered Off sscdevpool1
2517_ssctestebizap04 14241 8192 1 Running 10.200.20.1 sscdevpool1
sscdevobiap01 23241 8192 2 Running 10.200.20.2 sscdevpool1
sscmiobiap01 23241 12288 2 Running 10.200.20.5 sscdevpool1
sscmioradb01 27241 16384 2 Running 10.200.20.5 sscdevpool1
bisdevoradb01 27241 8192 2 Running 10.200.20.6 sscdevpool1
sscpociip01 27241 12288 6 Running 10.200.20.6 sscdevpool1
ssciipdevap01 27241 4096 2 Running 10.200.20.6 sscdevpool1
ssciipdevdb01 27241 4096 2 Running 10.200.20.6 sscdevpool1
sscdevodiap01 27241 4096 2 Running 10.200.20.4 sscdevpool1
sscdevoel6u1x64 16001 8192 2 Running 10.200.20.3 sscdevpool1
bisdevebizap01 27241 4096 2 Running 10.200.20.6 sscdevpool1
sscdevfmwap01-pv 23241 16384 2 Running 10.200.20.1 sscdevpool1
35_sscdevadm01 108594 4096 2 Running 10.200.20.4 sscdevpool1
2676_sscdevw2k8-gplpv 40961 4096 1 Powered Off sscdevpool1
13_SSCDEVDNS02-pv 20481 2048 1 Running 10.200.20.1 sscdevpool1
150_ssctestoradb01 33481 16384 2 Running 10.200.20.4 sscdevpool1
ssctestfmwap04 23241 8192 4 Running 10.200.20.1 sscdevpool1
sscsupobiap01 23241 4096 2 Running 10.200.20.2 sscdevpool1
2557_sscsupebizdb02 14241 8192 8 Running 10.200.20.2 sscdevpool1
sscgstmidap01 23241 16384 2 Running 10.200.20.4 sscdevpool1
2654_vmsscdtlucm07 24577 4096 1 Running 10.200.20.1 sscdevpool1
2554_sscsupebizap02 14241 10240 6 Running 10.200.20.1 sscdevpool1
bisdevoradb02 27241 8192 2 Running 10.200.20.6 sscdevpool1
bisdevfmwap01 23241 4096 2 Running 10.200.20.4 sscdevpool1
sscdevodidb01 27241 4096 2 Running 10.200.20.6 sscdevpool1
sscdevload01 27241 8192 2 Running 10.200.20.3 sscdevpool1
sscdevload02 27241 8192 2 Running 10.200.20.3 sscdevpool1
sscdevoradb01 27241 12288 6 Running 10.200.20.2 sscdevpool1
ssctestucmap03-pv 23241 4096 1 Running 10.200.20.4 sscdevpool1
ssctestucmap04-pv 23241 4096 1 Running 10.200.20.2 sscdevpool1
sscdevucm01 23241 4096 2 Running 10.200.20.5 sscdevpool1
ssctestoradb03 27241 32768 4 Running 10.200.20.1 sscdevpool1
ssctestoradb04 27241 32768 4 Running 10.200.20.4 sscdevpool1
10_SSCDEVDNS01-pv 10241 2048 1 Running 10.200.20.1 sscdevpool1
105_sscdevfmwint01 76801 4096 2 Running 10.200.20.1 sscdevpool1
ssctestfmwap03 23241 8192 4 Running 10.200.20.4 sscdevpool1
sscdevebizdb02-pv 27241 8192 2 Running 10.200.20.1 sscdevpool1
sscdevebizap02-pv 27241 6144 1 Running 10.200.20.2 sscdevpool1
ssctestucmfs1 71681 4096 1 Running 10.200.20.4 sscdevpool1
2694_sscdevw2k8-opv 20481 4096 2 Powered Off sscdevpool1
ssctestlw01 27241 4096 2 Running 10.200.20.6 sscdevpool1
ssctestlw02 27241 4096 2 Powered Off sscdevpool1
sscdevebizap04-pv 27241 8192 1 Running 10.200.20.2 sscdevpool1
ssctestextap01-pv 27241 4096 2 Running 10.200.20.2 sscdevpool1
ssctestebizap01-pv 27241 16384 1 Running 10.200.20.2 sscdevpool1
ssctestebizdb01-pv 27241 12288 2 Running 10.200.20.2 sscdevpool1
bisdevobiap01 23241 4096 2 Running 10.200.20.1 sscdevpool1
ssctrnsiebap01-pv 23241 4096 2 Running 10.200.20.5 sscdevpool1
ssctestfmwap01-pv 23241 8192 8 Running 10.200.20.2 sscdevpool1
ssctrnoradb01-pv 27241 8192 1 Running 10.200.20.5 sscdevpool1
sscgrantsd-pv 27241 4096 2 Powered Off sscdevpool1
ssctrnebizdb01-pv 27241 8192 2 Running 10.200.20.1 sscdevpool1
ssctrnebizap01-pv 27241 8192 1 Running 10.200.20.4 sscdevpool1
ssctrnobiap01-pv 23241 4096 1 Running 10.200.20.4 sscdevpool1
sscgstebizap01 27241 4096 2 Running 10.200.20.2 sscdevpool1
sscsupebizap01-pv 25601 10240 1 Running 10.200.20.2 sscdevpool1
sscsupebizdb01-pv 27241 16384 5 Running 10.200.20.5 sscdevpool1
sscsupfmwap01-pv 23241 4096 2 Running 10.200.20.2 sscdevpool1
sscdbaebizap01-pv 27241 6144 1 Running 10.200.20.6 sscdevpool1
sscdbaebizdb01-pv 27241 8192 1 Running 10.200.20.5 sscdevpool1
sscgstoradb01 27241 16384 4 Running 10.200.20.5 sscdevpool1
sscsupgrid01 27241 4096 2 Running 10.200.20.1 sscdevpool1
sscdevextap01-pv 27241 4096 2 Running 10.200.20.1 sscdevpool1
sscdevgrid01 27241 4096 2 Running 10.200.20.1 sscdevpool1
2514_ssctestebizap03 14241 8192 1 Running 10.200.20.4 sscdevpool1
sscdevmail01 27241 4096 2 Running 10.200.20.6 sscdevpool1
2658_vmsscdtlucm08 24577 4096 1 Running 10.200.20.2 sscdevpool1
sscoelr5u464pv 27241 4096 2 Powered Off sscdevpool1
sscdevebizdb04-PV 27241 12288 3 Running 10.200.20.2 sscdevpool1
sscoelr5u432pv 23241 4096 2 Powered Off sscdevpool1
ssctestobiap01-pv 23241 24576 1 Running 10.200.20.5 sscdevpool1
ssctestsiebap03-pv 23241 8192 4 Running 10.200.20.5 sscdevpool1
ssctestsiebap04-pv 23241 8192 4 Running 10.200.20.4 sscdevpool1
Starting a powered-off VM does not add an entry to the DLM list, but OVM Manager sees it as started correctly. I am even able to start the same VM from the command line on two different machines in the cluster concurrently, without any error O.o
/dlm/ovm is either empty or missing on various servers in the pool (which has currently been up for 170 days).
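For reference, the per-node check being described can be sketched as a small script (the /dlm/ovm path is the default mentioned above; the helper itself is illustrative, not an OVM tool):

```shell
#!/bin/sh
# Sketch only: report the state of a DLM lockspace directory.
# /dlm/ovm is the default path discussed above; adjust if yours differs.
check_dlm_dir() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "$dir: missing"
    elif [ -z "$(ls -A "$dir")" ]; then
        echo "$dir: present but empty"
    else
        echo "$dir: $(ls "$dir" | wc -l | tr -d ' ') entries"
    fi
}

check_dlm_dir "${DLM_DIR:-/dlm/ovm}"
```

Running it on every node and comparing the counts against `db_dump.py dlm` makes a mismatch like the one above easy to spot.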
Any ideas gratefully received...
Many thanks :)
Hmm - very peculiar!
Wiped cluster, installed OVM 2.2.2 (was previously on 2.2.1) - problem gone!
Jeff
Similar Messages
-
OVM 2.2 and NFS Repository Problems
Hi All.
I have recently started trying to upgrade our installation to 2.2
but have run into a few problems, mostly relating to the different
way that storage repositories are handled in comparison to 2.1.5.
We use NFS here to provide shared storage to the pools.
I wanted to set up a new two-node server pool (with HA), so I upgraded
one of the servers from 2.1.5 to 2.2 to act as pool master. That
worked ok and this server seems to be working fine in isolation:
master# /opt/ovs-agent-2.3/utils/repos.py -l
[ * ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
master# df /OVS
Filesystem 1K-blocks Used Available Use% Mounted on
augustus:/vol/OVS_pv_vpn
47185920 16083008 31102912 35% /var/ovs/mount/865A2E52DB2948F198A098F985B3065C
(I then successfully launched a VM on it.)
The problem is when I try to add a second server to the pool. I did
a fresh install of 2.2 and configured the storage repository to be the
same as that used on the first node:
vm1# /opt/ovs-agent-2.3/utils/repos.py --new augustus:/vol/OVS_pv_vpn
vm1# /opt/ovs-agent-2.3/utils/repos.py -r 865a2e52-db29-48f1-98a0-98f985b3065c
vm1# /opt/ovs-agent-2.3/utils/repos.py -l
[ R ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
When I try to add this server into the pool using the management GUI, I get
this error:
OVM-1011 Oracle VM Server 172.22.36.24 operation HA Check Prerequisite failed: failed:<Exception: ha_precheck_storage_mount failed:<Exception: /OVS must be mounted.> .
Running "repos.py -i" yields:
Cluster not available.
Seems like a chicken and egg problem: I can't add the server to the pool without a
mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that
wrong?
More generally, I'm a bit confused at how the repositories are
supposed to be managed under 2.2.
For example, the /etc/init.d/ovsrepositories script is still present,
but is it still used? When I run it, it prints a couple of errors and
doesn't seem to mount anything:
vm1# service ovsrepositories start
/etc/ovs/repositories does not exist
Starting OVS Storage Repository Mounter...
/etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
/etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
OVS Storage Repository Mounter Startup: [ OK ]
Should this service be turned off? It seems that ovs-agent now takes
responsibility for mounting the repositories.
As an aside, my Manager is still running 2.1.5 - is that part of the
problem here? Is it safe to upgrade the manager to 2.2 while I still
have a couple of pools running 2.1.5 servers?
Thanks in advance,
Robert.
rns wrote:
Seems like a chicken and egg problem: I can't add the server to the pool without a
mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that
wrong?
You have that wrong -- the /OVS mount point is created by ovs-agent while the server is added to the pool. You just need access to the shared storage.
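For instance, that access can be pre-checked from the new server before joining the pool; a sketch, with the filer name and export path taken from the example above (exports_contain is a hypothetical helper, not part of ovs-agent):

```shell
#!/bin/sh
# Sketch: verify the server can see the shared NFS export before it is
# added to the pool.
exports_contain() {
    # stdin: output of `showmount -e <filer>`; $1: export path to look for
    awk 'NR > 1 { print $1 }' | grep -qx "$1"
}

# Filer name and export path are the ones from the example above.
if timeout 5 showmount -e augustus 2>/dev/null | exports_contain /vol/OVS_pv_vpn; then
    echo "export visible"
else
    echo "export not visible - check /etc/exports and the network/firewall"
fi
```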
For exaple, the /etc/init.d/ovsrepositories script is still present,
but is it still used?
No, it is not. ovs-agent now handles the storage repositories.
As an aside, my Manager is still running 2.1.5 - is that part of the
problem here?
Yes. You absolutely need to upgrade your Manager first to 2.2 before attempting to create/manage a 2.2-based pool. The 2.1.5 Manager doesn't know how to tell the ovs-agent how to create/join a pool properly. The upgrade process is detailed in [the ULN FAQ|https://linux.oracle.com/uln_faq.html#10]. -
Connection to system REPOSITORY using application REPOSITORY lost.
Connection to system REPOSITORY using application REPOSITORY lost. Detailed information: Error accessing "http://<host>:<port>/rep/query/int?container=any" with user "USER01". Response code is 401, response message is "Unauthorized".
USER01 is locked, but I want to change this connection user to PIDIRUSER.
Do you know where this connection user can be changed?
Hello there.
Please check the note below according to your system:
#999962 - PI 7.10: Change passwords of PI service users
#936093 - XI 7.0: Changing the passwords of XI service users
#721548 - XI 3.0: Changing the passwords of the XI service users
Regards,
Caio Cagnani -
Connection to system REPOSITORY using application REPOSITORY lost. Detailed
Connection to system REPOSITORY using application REPOSITORY lost. Detailed information: Error accessing "http://ECC:50000/rep/query/int?container=any" with user "PIDIRUSER". Response code is 401, response message is "Unauthorized"
This problem occurs when the user is locked. Unlock the user in SU01.
unable to access repository / SLD
See this guide may help you.
https://www.dw.dhhs.state.nc.us/wi/OnlineGuides/EN/ErrorsEN.pdf
Rewards if helpful.
BR,
Alok -
OVM 3.3.1: NFS storage is not available during repository creation
Hi, I have OVM Manager running on a separate machine managing 3 servers running OVM Server in a server pool. One of the servers also exports an NFS share that all the other machines are able to mount and read/write to. I want to use this NFS share to create an OVM repository, but so far I have been unable to get it to work.
From this first screen shot we can see that the NFS file system was successfully added under storage tab and refreshed.
https://www.dropbox.com/s/fyscj2oynud542k/Screenshot%202014-10-11%2013.40.00.png?dl=0
But it is not available when adding a repository, as shown below. What can I do to make it show up here?
https://www.dropbox.com/s/id1eey08cdbajsg/Screenshot%202014-10-11%2013.40.19.png?dl=0
No luck with CLI either. Any thoughts?
OVM> create repository name=myrepo fileSystem="share:/" sharepath=myrepo - Configurable attribute by this name can't be found.
== NFS file system refreshed via CLI ===
OVM> refresh fileServer name=share
Command: refresh fileServer name=share
Status: Success
Time: 2014-10-11 13:28:14,811 PDT
JobId: 1413059293069
== file system info
OVM> show fileServer name=share
Command: show fileServer name=share
Status: Success
Time: 2014-10-11 13:28:28,770 PDT
Data:
FileSystem 1 = ff5d21be-906d-4388-98a2-08cb9ac59b43 [share]
FileServer Type = Network
Storage Plug-in = oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0) [Oracle Generic Network File System]
Access Host = 1.2.3.4
Admin Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31 [dev1]
Refresh Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31 [dev1]
Refresh Server 2 = 44:45:4c:4c:47:00:10:31:80:51:b8:c0:4f:35:48:31 [dev2]
Refresh Server 3 = 44:45:4c:4c:33:00:10:34:80:38:c4:c0:4f:53:4b:31 [dev3]
UniformExports = Yes
Id = 0004fb0000090000fb2cf8ac1968505e [share]
Name = share
Description = NFS exported /dev/sda1 (427GB) on dev1
Locked = false
== version details ==
OVM server: 3.3.1-1065
Agent Version: 3.3.1-276.el6.7
Kernel Release: 3.8.13-26.4.2.el6uek.x86_64
Oracle VM Manager
Version: 3.3.1.1065
Build: 20140619_1065
Actually, OVM, as with all virtualization servers, is usually only the head of a comprehensive infrastructure. OVM seems quite easy at first, but I'd suggest that you at least skim through the admin manual to get some understanding of the concepts behind it. An OVS usually only provides the CPU horsepower, not the storage, unless you only want a single-server setup. If you plan on having a real multi-server setup, then you will need shared storage.
The shared storage for the server pool, as well as the storage repository can be served from the same NFS server without issues. If you want to have a little testbed, then NFS is for you. It lacks some features that OCFS2 benefits from, like thin provisioning, reflinks and sparse files.
If you want to remove the NFS storage, then you'll need to remove any remainders of any OVM object, like storage repositories or server pool filesystems. Unpresent the storage repo and delete it afterwards… Also, I hope that you didn't create the NFS export directly on the root of the drive, since OVM wants to remove every file on the NFS export, and on the root of any volume there's the lost+found folder, which OVM, naturally, can't remove. Getting rid of such a storage repo can be a bit daunting…
Cheers,
budy -
Microsoft NFS repository for ISOs?
Does anyone know of a way to add an existing NFS mount to OVM3 that contains ISO images?
We have a large Microsoft NFS (Windows 2003 server) export that contains all of our ISO images, I would like to have that inside OVM so I can present those ISOs to servers.
When I added it as a file server, the share did not show up in OVM3. I tested on a node using showmount -e <ip> and the share is in fact exported and shows up there, but I think that because it's just an NFS share with nothing more than ISO images, it's being ignored. I also know that to create a repository, OVM wants to lay down its typical directory structure, etc. Is there any way to simply mount an export for ISO storage, versus prepping an entire SR for it?
Dave wrote:
Does anyone know of a way to add an existing NFS mount to OVM3 that contains ISO images?
This is currently not possible. Great idea, though. If you have Oracle Support, you should log an SR for this as an enhancement request. -
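As an unsupported stop-gap, the share can of course be mounted by hand on dom0, outside OVM's storage management, e.g. with an /etc/fstab entry like the following (host, export path and mount point are placeholders):

```
# /etc/fstab - read-only mount of the Windows NFS ISO share (illustrative)
winfiler:/isos  /mnt/iso-share  nfs  ro,vers=3,tcp  0 0
```

ISOs copied from such a mount into an existing repository should then show up after a repository refresh, but OVM itself won't manage the share.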
OVM 3.0.1 local repository problem
Good morning all, I am really new to OVM and I am facing a big issue that is stopping me from evaluating this product.
I have a couple of servers, connected to a SAN array. I can see the array from both of the servers I added to a clustered pool, and I am able to create a shared repository without problems.
I am not able to see local disks in the OVM Manager administration and therefore I can't create local repositories. I tried everything I found in this forum, but without success.
Let's focus on server1: it has a couple of 146GB disks. I used one of them for the OVS installation, leaving the second disk alone, without partitioning it.
I tried to create a local repository in the clustered pool, but no way...
So I created a single full-disk partition and retried to create the repo: still no way.
Then I created an OCFS2 filesystem in the new partition but, again, I couldn't see the physical local server1 disk.
Every time I changed the partition configuration, I obviously rescanned the physical disks.
In all my tests, the local physical disks selection list in Generic Local Storage Array @ node1 was always empty.
Any hint about solving this issue? Any good pointer to a hands-on guide (the official docs are not so good)? Any suggestion about what to look at in the log files for debugging?
Any answer is welcome...
Thank you all!I was able to do this as follows
1. Have an untouched, unformatted disk (no partitions, no file system).
2. In Hardware, under the VM server name, scan for the disk and it should show up in the list.
3. In the Repositories section of Home, add the repository as a physical disk.
4. "Present" (green up and down arrows) the physical disk on the VM server itself (don't ask me why you have to do this, but if you don't, it won't find its own disk). -
Paravirtualized machine hanging (VM Server 2.1, NFS based repository)
Hi,
I have a problem with a VM server.
I have local disks that are kind of slow (initially my images were on an OCFS2-based /OVS; after some problems with it we migrated /OVS to ext3), but because of insufficient space we want to use NFS.
I created an NFS repository with:
/usr/lib/ovs/ovs-makerepo 172.16.32.51:/lv_raid5_fs1/OVS 1 raid5_nfs
then I created an HVM virtual machine with OEL4U5 (installing from ISO images) - it works relatively fine (it hung just once).
I tried creating a PVM from the template OVM_EL4U5_X86_PVM_10GB.
I did that using Oracle VM Manager. The template was created, and after the Power On command the VM started.
I then wanted to test disk operations performance with simple
dd if=/dev/zero of=/root/test_prs.dat bs=1048576 count=3000
but it actually hung (the domU hung; the iostat command that I had started continued to work though, showing no I/O operations were going on, but iowait at 100%).
Also, xentop in dom0 hung - it didn't refresh for 12 hours.
The whole dom0 doesn't respond to new ssh requests (the existing session with xentop is not closed).
The domU with the PVM allowed me to run some commands in another shell opened via ssh, but then it hung as well.
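As an aside, a bounded variant of that dd test keeps a wedged NFS mount from freezing the shell; a sketch using coreutils timeout (bounded_write is an illustrative helper, and the target path is the placeholder from the post):

```shell
#!/bin/sh
# Bounded write test: abort after 60 s if the storage stops responding,
# instead of hanging the shell indefinitely.
bounded_write() {
    target="$1"
    size_mb="${2:-64}"
    if timeout 60 dd if=/dev/zero of="$target" bs=1048576 count="$size_mb" conv=fsync 2>/dev/null; then
        echo "wrote $(stat -c %s "$target") bytes"
    else
        echo "write timed out or failed"
    fi
}

# Example run against the path used above (needs write access there):
bounded_write /root/test_prs.dat 64
```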
The NFS server I am using is a SLES9 SP3 + online updates (2.6.5-7.276-bigsmp). It is attached to SCSI Storage array.
The exportfs options are "(rw,wdelay,no_root_squash)". The exported filesystem is reiserfs.
The mount options in dom0 are "vers=3,tcp", I cannot find them all right now, because the dom0 is hanged.
the connection between NFS client and server is 1Gbit.
NFS Server hasn't shown any errors.
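For reference, the NFS setup described above amounts to roughly the following; the client subnet and the hard/timeo/retrans options are illustrative additions, not values confirmed in this thread:

```
# /etc/exports on the SLES9 filer (subnet shown for illustration only):
/lv_raid5_fs1/OVS  172.16.32.0/24(rw,wdelay,no_root_squash)

# dom0 mount (vers=3 over TCP, as described); a hard mount retries
# indefinitely instead of returning I/O errors, paced by timeo/retrans:
172.16.32.51:/lv_raid5_fs1/OVS  /OVS  nfs  vers=3,tcp,hard,timeo=600,retrans=5  0 0
```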
The last screen from xentop is below:
xentop - 19:31:19 Xen 3.1.1
3 domains: 1 running, 2 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 8387988k total, 3825360k used, 4562628k free CPUs: 8 @ 1995MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR
90_soa b- 362 0.1 2097152 25.0 2097152 25.0 2 1 34083 3236 1 0 20981 45823
VBD BlkBack 768 [ 3: 0] OO: 0 RD: 20981 WR: 45823
Domain-0 -----r 286 1.7 524288 6.3 no limit n/a 8 8 2461308 2509950 0 0 0 0
linux_hv_1 b- 268 2.3 1056644 12.6 1064960 12.7 1 1 0 0 1 0 0 0
VBD BlkBack 768 [ 3: 0] OO: 0 RD: 0 WR: 0
Now my time is 11:26:00 (so more than 15 hours with no refresh of the screen). We've seen such behaviour previously, but after 20-30 minutes it all started working again.
What can I do to improve the situation? What could be the problem?
Please help...
Regards,
Mihail Daskalov
Hi,
1) What does "supported" mean? There are no specific requirements published. As I said, my "filer" is another Linux machine which is exporting a file system via NFS.
2) I already tested the NFS using another real machine (not a virtualized one) and it works perfectly. I also tested the NFS mount point from dom0 on the same VM server and it worked...
3) I have a problem with the paravirtualized machine (from the template)
any other suggestions? -
Repository after reinstall server and ovm manager
I have a problem.
I have one server connected via FC to SAN storage. I had a repo with VMs on the storage LUNs. After a power crash I had to reinstall OVM on the server and also reinstall OVM Manager.
Now, can I connect the old repository to the new/reinstalled server without losing any data?
The server with the Manager had a hardware failure. I don't have a backup, so I have to install OVM Manager on a new machine.
The VM server had problems with its file system - it didn't see the LUNs and xend didn't start - so I reinstalled this server as well.
I know that the new OVM Manager doesn't recognize the old repository disk, so I created a new LUN on the storage and created a new repository on it. Then, from the CLI, I mapped the old OCFS2 repository and copied the virtual disks and vm.cfg files to the new repository, but OVM Manager doesn't see any of them. -
Can't create repository as iSCSI physical disk
Using OVM 3.1.1
I am using a ZFS Storage Appliance (simulated under virtualbox for testing) as a SAN.
I created two iSCSI LUN devices on the appliance in the same ZFS pool: a LUN for a server pool, and a LUN for a repository.
After creating an Access Group that two of my OVM servers could use to see this pool, I was able to create a pool using the LUN I made for the server pool.
Of course, I have already installed the ZFS-SA plugin on my two OVM nodes, so this all works.
When I try to create the repository using the other iSCSI LUN, I get the message box with a spinning timer icon telling me that it is creating the repository.
However, the process times out and fails. I have cut and pasted the details of the failure here.
The interesting thing is that instead of an iSCSI LUN, I can create an NFS share on the ZFS-SA, mount that, use it as a repository, and have that work.
That's not what I want, however.
What's going on? The detailed log output gives me no clue whatsoever as to what is wrong. Looks like it's clashing with OCFS2 or something.
Job Construction Phase
begin()
Appended operation 'File System Construct' to object '0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)'.
Appended operation 'Cluster File System Present' to object 'ec686c238f27311b'.
Appended operation 'Repository Construct' to object '0004fb000003000027ca3c09e0f30673 (SUN (2))'.
commit()
Completed Step: COMMIT
Objects and Operations
Object (IN_USE): [Cluster] ec686c238f27311b
Operation: Cluster File System Present
Object (CREATED): [LocalFileSystem] 0004fb0000050000e580a3d171ecf6c1 (fs_repo01)
Object (IN_USE): [LocalFileServer] 0004fb00000900008e232246f9e4b224 (Local FS ovm-dev-02)
Object (CREATED): [Repository] 0004fb000003000027ca3c09e0f30673 (repo01)
Operation: Repository Construct
Object (IN_USE): [LocalFileServer] 0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)
Operation: File System Construct
Object (IN_USE): [StorageElement] 0004fb00001800009299c1a46c0e3979 (SUN (2))
Job Running Phase at 00:39 on Wed, Jun 20, 2012
Job Participants: [34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)]
Actioner
Starting operation 'Cluster File System Present' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
Completed operation 'Cluster File System Present' completed with direction ==> DONE
Starting operation 'Repository Construct' on object '0004fb000003000027ca3c09e0f30673 (repo01)'
Completed operation 'Repository Construct' completed with direction ==> LATER
Starting operation 'File System Construct' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
Job: 1340118585250, aborted post-commit by user: admin
Write Methods Invoked
Class=InternalJobDbImpl vessel_id=1450 method=addTransactionIdentifier accessLevel=6
Class=LocalFileServerDbImpl vessel_id=675 method=createFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setFoundryContext accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=onPersistableCreate accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setRollbackLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setRefreshed accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setBackingDevices accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setUuid accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setPath accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setStorageDevice accessLevel=6
Class=StorageElementDbImpl vessel_id=1273 method=addLayeredFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
Class=LocalFileServerDbImpl vessel_id=921 method=addFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=addLocalFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setCluster accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setAsset accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=createRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setFoundryContext accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=onPersistableCreate accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setRollbackLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setRefreshed accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setDom0Uuid accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSharePath accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setManagerUuid accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setVersion accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=addJobOperation accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setDescription accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setCompletedStep accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setAssociatedHandles accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=setCurrentJobOperationComplete accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setCurrentOperationToLater accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:894)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
at com.oracle.ovm.mgr.api.physical.storage.LocalFileServerProxy.executeCurrentJobOperationAction(Unknown Source)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:890)
... 27 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
... 30 more
FailedOperationCleanup
Starting failed operation 'File System Construct' cleanup on object 'fs_repo01'
Complete rollback operation 'File System Construct' completed with direction=fs_repo01
Rollbacker
Executing rollback operation 'Cluster File System Present' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
Complete rollback operation 'Cluster File System Present' completed with direction=DONE
Executing rollback operation 'File System Construct' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
Complete rollback operation 'File System Construct' completed with direction=DONE
Objects To Be Rolled Back
Object (IN_USE): [Cluster] ec686c238f27311b
Object (CREATED): [LocalFileSystem] 0004fb0000050000e580a3d171ecf6c1 (fs_repo01)
Object (IN_USE): [LocalFileServer] 0004fb00000900008e232246f9e4b224 (Local FS ovm-dev-02)
Object (CREATED): [Repository] 0004fb000003000027ca3c09e0f30673 (repo01)
Object (IN_USE): [LocalFileServer] 0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)
Object (IN_USE): [StorageElement] 0004fb00001800009299c1a46c0e3979 (SUN (2))
Write Methods Invoked
Class=InternalJobDbImpl vessel_id=1450 method=addTransactionIdentifier accessLevel=6
Class=LocalFileServerDbImpl vessel_id=675 method=createFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setFoundryContext accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=onPersistableCreate accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setRollbackLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setRefreshed accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setBackingDevices accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setUuid accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setPath accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setStorageDevice accessLevel=6
Class=StorageElementDbImpl vessel_id=1273 method=addLayeredFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
Class=LocalFileServerDbImpl vessel_id=921 method=addFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=addLocalFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setCluster accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=setAsset accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=createRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setFoundryContext accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=onPersistableCreate accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setRollbackLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setRefreshed accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setDom0Uuid accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSharePath accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=addRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setManagerUuid accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setVersion accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=addJobOperation accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setDescription accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setCompletedStep accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setAssociatedHandles accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=setCurrentJobOperationComplete accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=setCurrentOperationToLater accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
Class=InternalJobDbImpl vessel_id=1450 method=setFailedOperation accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=1459 method=nextJobOperation accessLevel=6
Class=LocalFileServerDbImpl vessel_id=921 method=nextJobOperation accessLevel=6
Class=RepositoryDbImpl vessel_id=1464 method=nextJobOperation accessLevel=6
Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
Class=StorageElementDbImpl vessel_id=1273 method=nextJobOperation accessLevel=6
Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
Completed Step: ROLLBACK
Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:894)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
at sun.reflect.GeneratedMethodAccessor728.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
at com.oracle.ovm.mgr.api.physical.storage.LocalFileServerProxy.executeCurrentJobOperationAction(Unknown Source)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
at sun.reflect.GeneratedMethodAccessor1229.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:890)
... 27 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
Wed Jun 20 00:41:47 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
... 30 more
End of Job
To frustrate me further, trying to remove my repository LUN from the access group hasn't worked either.
I start by editing the storage access group and moving the repository LUN I created back to the left-hand panel (where unassigned LUNs live, I assume).
What's ridiculous is that the LUN isn't mounted or in use by anything (the repository creation failed, remember?). If it was so easy to add to the group, why is it so hard to remove?
Perhaps it's because I'm attempting this before something else has been properly cleaned up; it would be nice to be told that, rather than having error messages like the following thrust at me. To my surprise, I get this:
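For what it's worth, the "Device or resource busy" failure in the job log below can usually be investigated on the OVM server itself with standard device-mapper tools. This is only a diagnostic sketch, not an Oracle-documented procedure; the map name is the page-83 id taken from the log, and the commands simply report who is holding the device open:

```shell
# Hypothetical diagnostic sketch: find out why a multipath map refuses teardown.
# The map name below is the page-83 id from the job log on this thread.
DM_NAME=3600144f0c17a765000004fe138810007

# Open count > 0 in 'dmsetup info' means something still holds the map;
# 'dmsetup deps' lists the devices the map sits on top of.
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup info "$DM_NAME" | grep -i 'open count'
    dmsetup deps "$DM_NAME"
fi

# Show any processes with the block device open.
if [ -e "/dev/mapper/$DM_NAME" ] && command -v fuser >/dev/null 2>&1; then
    fuser -v "/dev/mapper/$DM_NAME"
fi

# Once the open count is 0, the map can normally be flushed by hand.
# Do NOT run this blindly while the manager still references the LUN:
# multipath -f "$DM_NAME"
```

If the open count never drops to zero, the usual suspects are an OCFS2 mount left over from the failed repository create, or an o2cb/cluster heartbeat still registered against the device.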
Job Construction Phase
begin()
Appended operation 'Storage Element Teardown' to object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'.
Appended operation 'Storage Element Teardown' to object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'.
Appended operation 'Storage Element UnPresent' to object '0004fb00001800000e2fa7b367ddf334 (repo01)'.
commit()
Completed Step: COMMIT
Objects and Operations
Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)
Operation: Storage Element Teardown
Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:fe7bba90add3 : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)
Operation: Storage Element Teardown
Object (IN_USE): [AccessGroup] group01 @ 0004fb00000900003b330fb87739bbfe (group01)
Object (IN_USE): [IscsiStorageTarget] iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643
Object (IN_USE): [StorageElement] 0004fb00001800000e2fa7b367ddf334 (repo01)
Operation: Storage Element UnPresent
Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:fe7bba90add3
Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:43e520f2e5f : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:43e520f2e5f
Job Running Phase at 02:56 on Wed, Jun 20, 2012
Job Participants: [34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)]
Actioner
Starting operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'
Sending storage element teardown command to server [ovm-dev-02] for element whose page 83 id is [3600144f0c17a765000004fe138810007]
Completed operation 'Storage Element Teardown' completed with direction ==> DONE
Starting operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'
Sending storage element teardown command to server [ovm-dev-01] for element whose page 83 id is [3600144f0c17a765000004fe138810007]
Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012...
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:75)
at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:821)
at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:71)
... 25 more
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:817)
... 26 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
... 29 more
FailedOperationCleanup
Starting failed operation 'Storage Element Teardown' cleanup on object 'ovm-dev-01'
Complete rollback operation 'Storage Element Teardown' completed with direction=ovm-dev-01
Rollbacker
Executing rollback operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'
Complete rollback operation 'Storage Element Teardown' completed with direction=DONE
Executing rollback operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'
Complete rollback operation 'Storage Element Teardown' completed with direction=DONE
Objects To Be Rolled Back
Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)
Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:fe7bba90add3 : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)
Object (IN_USE): [AccessGroup] group01 @ 0004fb00000900003b330fb87739bbfe (group01)
Object (IN_USE): [IscsiStorageTarget] iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643
Object (IN_USE): [StorageElement] 0004fb00001800000e2fa7b367ddf334 (repo01)
Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:fe7bba90add3
Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:43e520f2e5f : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:43e520f2e5f
Write Methods Invoked
Class=InternalJobDbImpl vessel_id=2295 method=addTransactionIdentifier accessLevel=6
Class=StorageElementDbImpl vessel_id=1923 method=unpresent accessLevel=6
Class=ServerDbImpl vessel_id=728 method=teardownStorageElements accessLevel=6
Class=ServerDbImpl vessel_id=1686 method=teardownStorageElements accessLevel=6
Class=AccessGroupDbImpl vessel_id=1941 method=removeStorageElement accessLevel=6
Class=IscsiStorageInitiatorDbImpl vessel_id=845 method=deleteStoragePath accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2070 method=setLifecycleState accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2070 method=setRollbackLifecycleState accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2070 method=onPersistableClean accessLevel=6
Class=StorageElementDbImpl vessel_id=1923 method=removeStoragePath accessLevel=6
Class=IscsiStorageTargetDbImpl vessel_id=580 method=removeStoragePath accessLevel=6
Class=IscsiStorageInitiatorDbImpl vessel_id=1803 method=deleteStoragePath accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2132 method=setLifecycleState accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2132 method=setRollbackLifecycleState accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2132 method=onPersistableClean accessLevel=6
Class=StorageElementDbImpl vessel_id=1923 method=removeStoragePath accessLevel=6
Class=IscsiStorageTargetDbImpl vessel_id=580 method=removeStoragePath accessLevel=6
Class=InternalJobDbImpl vessel_id=2295 method=setCompletedStep accessLevel=6
Class=InternalJobDbImpl vessel_id=2295 method=setAssociatedHandles accessLevel=6
Class=ServerDbImpl vessel_id=728 method=setCurrentJobOperationComplete accessLevel=6
Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
Class=InternalJobDbImpl vessel_id=2295 method=setFailedOperation accessLevel=6
Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2132 method=nextJobOperation accessLevel=6
Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
Class=AccessGroupDbImpl vessel_id=1941 method=nextJobOperation accessLevel=6
Class=IscsiStorageTargetDbImpl vessel_id=580 method=nextJobOperation accessLevel=6
Class=StorageElementDbImpl vessel_id=1923 method=nextJobOperation accessLevel=6
Class=IscsiStorageInitiatorDbImpl vessel_id=1803 method=nextJobOperation accessLevel=6
Class=IscsiStoragePathDbImpl vessel_id=2070 method=nextJobOperation accessLevel=6
Class=IscsiStorageInitiatorDbImpl vessel_id=845 method=nextJobOperation accessLevel=6
Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
Completed Step: ROLLBACK
Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:75)
at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
at sun.reflect.GeneratedMethodAccessor835.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
at sun.reflect.GeneratedMethodAccessor1118.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:821)
at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:71)
... 25 more
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:817)
... 26 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
Wed Jun 20 02:56:08 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
... 29 more
End of Job
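For anyone hitting the same teardown failure: the root complaint is device-mapper refusing to remove a busy multipath map. A few diagnostics that usually reveal what still holds the device (run as root on the affected VM server; the map name below is taken from the log above, and this is a general device-mapper checklist, not an Oracle-documented procedure):

```
# Open count > 0 means something still holds the map
dmsetup info 3600144f0c17a765000004fe138810007
# Any partition maps or LVM volumes stacked on top of it?
dmsetup ls --tree
# Which processes have the device node open?
fuser -v /dev/mapper/3600144f0c17a765000004fe138810007
# Once the open count drops to 0, flush the map manually
multipath -f 3600144f0c17a765000004fe138810007
```

If the open count never drops, a guest or cluster filesystem is typically still using the LUN and the teardown will keep failing until it is released.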
-
OVM 3.0.1 is slow
Hi Guys,
Here is my setup,
1x OVM Server, hostname ovmsvr01 with 16 cores and 16GB RAM
1x OVM Manager , ovmm as guest of OVM server, with 4 cores and 4GB RAM
2x Oracle Linux 5 , poc1 and poc2 with 4 cores and 4GB RAM
1x NFS repository from Netapp filer , keep all the virtual disk
test: make a file in poc1 (which sits on the NFS repository)
[root@poc1 u01]# time dd if=/dev/zero of=test bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 107.39 seconds, 9.5 MB/s
real 1m47.464s
user 0m0.686s
sys 0m20.715s
[root@pe3poc1 u01]# time dd if=/dev/zero of=test bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 1.51417 seconds, 67.6 MB/s
real 0m1.532s
user 0m0.048s
sys 0m1.475s
[root@pe3poc1 u01]# time dd if=/dev/zero of=test bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.152474 seconds, 67.2 MB/s
real 0m0.158s
user 0m0.007s
sys 0m0.152s
OVM Server test make file on NFS mountpoint
[root@ovmserver1 0004fb00000300001ceea46a5faadf33]# time dd if=/dev/zero of=test bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 1.51741 seconds, 67.5 MB/s
real 0m1.553s
user 0m0.028s
sys 0m0.356s
[root@ovmserver1 0004fb00000300001ceea46a5faadf33]# time dd if=/dev/zero of=test bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 16.7652 seconds, 61.1 MB/s
real 0m16.876s
user 0m0.672s
sys 0m4.728s
Question:
Assuming max throughput is about 66 MB/s: why, inside the guest (poc1), does creating a small file reach comparable speed, while a large file is about six times slower?
Any idea??
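One way to check whether the gap is just the guest's page cache absorbing the small writes (my guess, not something confirmed from these numbers) is to make dd flush to stable storage before it reports a rate:

```shell
# Write through the page cache (what the original test measured) ...
dd if=/dev/zero of=test bs=1M count=256 2>&1 | tail -n 1

# ... then again with conv=fsync, which makes dd call fsync() before
# exiting, so the reported rate includes write-back to the NFS server
# instead of just the speed of dirtying memory
dd if=/dev/zero of=test bs=1M count=256 conv=fsync 2>&1 | tail -n 1

rm -f test
```

If the small-file numbers collapse to the large-file numbers once conv=fsync is added, the 66 MB/s figures were cache speed, not disk/NFS speed.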
Thanks/Dylan
Hi all,
I also did the dd test, on the local disk repository instead of the NFS share. Guess what? Inside the OVM guest it is still slower, but close.
I tried repeatedly running dd on the same file with the same size, and yes, the first dd is slower; subsequent runs are close.
ovm server:
[root@ovmsvr302b /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 13.3552 seconds, 123 MB/s
real *1m16.096s*
user 0m0.000s
sys 0m2.632s
[root@ovmsvr302b /]#
[root@ovmsvr302b /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 13.3933 seconds, 122 MB/s
real 0m13.520s
user 0m0.000s
sys 0m2.656s
[root@ovmsvr302b /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 13.7771 seconds, *119 MB/s*
real 0m13.954s
user 0m0.000s
sys 0m2.776s
ovm guest:
[root@OL01 /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 42.0582 seconds, 39.0 MB/s
real 0m45.591s
user 0m0.001s
sys 0m3.241s
[root@OL01 /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 17.7642 seconds, 92.2 MB/s
real 0m18.087s
user 0m0.000s
sys 0m3.375s
[root@OL01 /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 15.1268 seconds, 108 MB/s
real 0m15.446s
user 0m0.001s
sys 0m3.354s
[root@OL01 /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 16.1879 seconds, 101 MB/s
real 0m23.364s
user 0m0.000s
sys 0m3.603s
[root@OL01 /]# time dd if=/dev/zero of=test bs=204800000 count=8
8+0 records in
8+0 records out
1638400000 bytes (1.6 GB) copied, 15.2975 seconds, *107 MB/s*
real 0m16.438s
user 0m0.000s
sys 0m3.387s
I will do the NFS test and report back later. -
NFS disk performance after upgrade to 3.1.1
Hello,
after the OVS upgrade from 3.0.3 to 3.1.1 I noticed performance problems on virtual disks placed on an NFS repository. Before the upgrade, on 3.0.3, I could read from the xvda disk at around 60 MB/s; after the upgrade to 3.1.1 it fell to around 1.5 MB/s with a 1 MB block size:
dd if=/dev/xvda of=/dev/null bs=1024k count=1000
^C106+0 records in
105+0 records out
110100480 bytes (110 MB) copied, 79.0509 seconds, 1.4 MB/s
The repository is on an NFS share attached through a dedicated 1 Gbit/s ethernet network with MTU=8900. The same configuration was used before the upgrade; the only change was the upgrade from 3.0.3 to 3.1.1.
Test machines are OEL5 with latest UEK kernels, running PVM mode.
Repository on 3.1.1 is mounted without additional NFS options like rsize,wsize,tcp or proto=3:
192.168.100.10:/mnt/nfs_storage_pool on /OVS/Repositories/0004fb0000030000a75ccd9ef5a238c3 type nfs (rw,addr=192.168.100.10)
I can't find a way to change this, and don't know whether it might be causing the performance issues.
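For what it's worth, on a scratch mount point (outside OVM's control) the same export can be mounted with explicit options to see whether they matter. The values below are common NFS starting points, not OVM-validated settings, and the mount point is hypothetical:

```
# /etc/fstab-style line for a test mount of the same export
192.168.100.10:/mnt/nfs_storage_pool  /mnt/nfstest  nfs  rw,tcp,vers=3,rsize=32768,wsize=32768,hard  0 0
```

Comparing dd against /mnt/nfstest versus the OVM-managed repository mount would show whether the missing rsize/wsize/proto options are the problem or something else changed in 3.1.1.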
Any idea why there is such a big performance drop with 3.1.1, from 60 down to 1.5 MB/s?
Thanks.
Oracle Support didn't respond to an SR1? Phew… I have noticed very slow turnaround from Oracle's OVM support regarding SR feedback, but I'd imagine they respond to an SR1 in time.
But to post something useful: I have just checked the NFS speed from one of my VM servers to an NFS share hosted on one of my Solaris boxes, and the throughput seems pretty reasonable:
[root@oraclevms01 OrcleVM]# dd if=OVM_EL5U5_X86_PVM_10GB.tar of=/dev/null
3715540+0 records in
3715540+0 records out
1902356480 bytes (1.9 GB) copied, 22.7916 seconds, 83.5 MB/s
I haven't done anything special to either the NFS mount or the NFS export on my Solaris box. All I can think of is maybe some driver issue? -
Issue with backup NCS via NFS (Cisco Prime NCS 1.2.0)
Hello,
Does someone have issue with backup NCS via externally mounted location (NFS)?
I have Cisco Prime NCS 1.2.0 and tried to back it up to an external resource, but I ran into a free-space issue:
NCS/admin# backup ncs repository backup_nfs
% Creating backup with timestamped filename: ncs-130131-0534.tar.gpg
INFO : Cannot configure the backup directory size settings as the free space available is less than the current database size.
You do not have enough disk space available in your repository to complete this backup.
DB size is 25 GB
Available size is 12 GB
Please refer to the command reference guide for NCS and look at the /backup-staging-url/ command reference to setup the backup repository on an externally mounted location
Stage 5 of 7: Building backup file ...
-- complete.
Stage 6 of 7: Encrypting backup file ...
-- complete.
Stage 7 of 7: Transferring backup file ...
-- complete.
I have tried to add additional space and to use the backup-staging-url command (my configuration: backup-staging-url nfs://server2008:/nfs), but it didn't help.
The NFS share itself works fine. I have checked it via the NFS repository:
repository backup_nfs
url nfs://server2008:/nfs
+++++++++++++++++++++++++++++++++++++++
NCS/admin# show repository backup_nfs
NCS-130130-1135.tar.gpg
NCS-130130-1137.tar.gpg
NCS-130130-1157.tar.gpg
NCS-130130-1158.tar.gpg
test-130130-1210.tar.gz
Every time I try to create a backup I receive the error message "You do not have enough disk space available in your repository to complete this backup".
Does someone know how can I backup NCS system?
Thank you
How much space is available on that NFS mount point? It looks to me from the error message that there is only 12 GB...
The backup-staging-url is just space used to stage the backup before it is written to the repository. -
Hi,
I'm having an issue creating a storage repository in VM manager 3.0.1. I have 2 blades connected via fiber channel to SAN storage. When discovering the server, the storage gets detected automatically as an "unmanaged fiber channel array". I am however able to see the LUNs presented. In this case its only one 300GB LUN. I followed the documentation and was able to
1. Discover the servers (2 blades)
2. Register the storage (1x300GB LUN)
3. Create the VM network (2 VLAN networks)
4. Create the VNICs (4)
5. Create the server pool
However, when I try to create the repository and select a physical disk, it does not list any physical disks under the Fibre Channel storage. It should display the 300GB disk, but that's missing. Has anyone seen the same issue before?
Thanks
I was able to resolve the issue. It looks like when there is an existing file system on a LUN, OVM won't create the repository on it. Deleting and recreating the LUN resolved the issue.
Thanks -
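For reference, a lighter-weight alternative to recreating the LUN is usually to zero out the start of the disk so no stale filesystem signature is left behind. The sketch below uses a scratch file in place of the real multipath device (the /dev/mapper path is hypothetical), since zeroing the wrong device is destructive:

```shell
# Stand-in for the LUN; in real life this would be the multipath
# device, e.g. /dev/mapper/<wwid>
printf 'stale-filesystem-superblock' > fake_lun.img
truncate -s 4M fake_lun.img

# Zero the first 1 MB in place (conv=notrunc keeps the size intact),
# removing any leftover filesystem signature at the start of the disk
dd if=/dev/zero of=fake_lun.img bs=1M count=1 conv=notrunc 2>/dev/null

# First MB should now compare equal to zeros
cmp -n 1048576 fake_lun.img /dev/zero && echo "signature wiped"
rm -f fake_lun.img
```

After wiping the real device, rescanning the physical disks in OVM Manager should let the repository creation see the LUN as blank.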
ACS 5.1 backup via nfs issues
Hi all,
we ran into problems when launching an immediate backup via nfs:
server is windows (nfs share), ACS is 5.1.0.44
The Windows log files show a successful NFS mount/unmount when we try to save the ACS backup to an NFS repository, but ACS reports that the backup could not be written after successfully generating the .gpg archive.
show repository for an NFS repository always returns errors on the ACS console, either "could not mount" or "invalid directory" - in both cases the Windows log files report a successful NFS mount (we turned on NFS logging on the Windows server).
The same operations, backing up and show repository, are successful with a test FTP server (we just changed the protocol and adjusted the credentials in the repository config).
I have carefully read through the available documentation, and the successful NFS mount/unmount in the Windows logs shows that basically everything is configured correctly and no restrictions on the server block access to the NFS share.
any idea on that?
rgds,
MiKa
PS the url used for nfs follows the syntax:
nfs://{server}:/{nfs-share-name}/
the colon has to be there, according to the documentation
Message was edited by: m.kafka