OVM 3.1.1: OVMM won't rediscover iSCSI LUNs

Hi,
I have set up a new OVM server with multipathed iSCSI. When I first discovered that server, I was able to see 6 IETD volumes, with 4 of them being the 4 paths to the actual two volumes. Since I wanted more readable names, I did the following:
- removed all IETD volumes under Storage -> SAN Servers -> Unmanaged iSCSI Storage Array -> iSCSI Volume Group
- then I edited the multipath.conf on the VM server
- re-discovered the server
However, the iSCSI volumes won't show up again, and I wonder how I can get them back.
Any ideas, anyone?

So, it turned out that the ovs-agent didn't like the changes I had made to the multipath.conf file. I swapped multipath.conf back for the original one, and now my IET devices show up again in OVMM.
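
For reference, if the goal is just a more readable device name, the kind of multipath.conf change that seems least likely to upset the ovs-agent is a single multipaths/alias stanza, leaving the rest of the stock OVM file untouched. A minimal sketch (the WWID is a placeholder; take the real one from multipath -ll):

    multipaths {
        multipath {
            wwid   36001405aaaabbbbcccc0001    # placeholder WWID, from multipath -ll
            alias  iet_vol01                   # readable name, appears as /dev/mapper/iet_vol01
        }
    }

After saving, multipath -r reloads the maps - though, as above, whether the ovs-agent tolerates a given edit is another matter.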

Similar Messages

  • Install OVM server 3.2.1 on iSCSI LUN

    Hello friends,
    I have a Cisco blade and would like to boot OVM Server from an iSCSI LUN. I created a LUN on the SAN and presented it to the blade. When the server boots up, it sees the iSCSI LUN just fine, but when I tried to install OVM Server 3.2.1, the installer did not detect that LUN. I tried installing Oracle Linux 6.3 and it sees the LUN OK. Is there a way to make the OVM Server installer see that LUN?
    thanks,
    TD

    I have done too many OEL and OVS installs lately, such that they are blurring together...
    On the OVM 3.2.1 install, watch closely for a place to add support for non-local storage. I remember seeing a small prompt in some of the installs, but it may have been in some of the OEL installs (I have also been doing OEL 4, 5, & 6) and not OVM.
    Sorry I can't remember right now. If I get a moment to run through the install process, I will, and I'll report back if no one else does.

  • OVM 3.1.1 iSCSI multipathing

    I am setting up an OVM 3.1.1 environment at a customer site where iSCSI LUNs are presented from an EMC filer. I have a few questions:
    * What is the proper way to discover a set of iSCSI LUNs when the storage unit has 4 unique IP addresses on 2 different VLANs? If I discover all 4 paths, they present in the GUI as 4 separate SAN servers, and the LUNs show up scattered across all 4. By my simple logic, if I were to lose access to one of those SAN servers, the LUNs presented via that SAN server would disappear and not be accessible. I know this isn't actually the case, because multipath -ll on the OVM server shows 4 distinct paths to each LUN I'm expecting to see, and I've verified that multipathing works by downing one of the two NICs allocated to iSCSI: two of the four paths fail, but I can still access the disk just fine. Is this just me not setting things up the right way in the GUI, or is the GUI poorly implemented here and in need of a redesign so that it's clear to both myself AND the customer?
    * Has anyone used the Storage Connect plugins for either iSCSI or Fibre Channel storage with OVM? What do they actually do for you, and are they easy (or easier than unmanaged storage) to implement? Is it worth the hassle?

    Here are the notes I had written down:
    == change iSCSI default timeout in /etc/iscsi/iscsid.conf for any future connections ==
    * change node.session.timeo.replacement_timeout from 120 to 5
    #node.session.timeo.replacement_timeout = 120
    node.session.timeo.replacement_timeout = 5
    == identify iSCSI LUNs ==
    # iscsiadm -m session
    tcp: [1] xx.xx.xx.xx:3260,4 iqn.1992-04.com.emc:cx.apm00115000338.b9
    tcp: [2] xx.xx.xx.xx:3260,3 iqn.1992-04.com.emc:cx.apm00115000338.b8
    tcp: [3] xx.xx.xx.xx:3260,1 iqn.1992-04.com.emc:cx.apm00115000338.a8
    tcp: [4] xx.xx.xx.xx:3260,2 iqn.1992-04.com.emc:cx.apm00115000338.a9
    == confirm current active timeout value before the change ==
    cat /sys/class/iscsi_session/session*/recovery_tmo
    120
    120
    120
    120
    == manually change the timeout on each iSCSI LUN for current active connections ==
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.b8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a8 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00115000338.a9 -p xx.xx.xx.xx:3260 -o update -n node.session.timeo.replacement_timeout -v 5
    == restart iscsi to make changes take effect ==
    service iscsi stop
    service iscsi start
    NOTE: service iscsi restart and /etc/init.d/iscsi restart don't seem to work. Only by stopping and then explicitly starting the iscsi service does it work consistently.
    == restart multipathd ==
    # service multipathd restart
    Stopping multipathd daemon: [  OK  ]
    Starting multipathd daemon: [  OK  ]
    == Verify new timeout value on active sessions ==
    cat /sys/class/iscsi_session/session*/recovery_tmo
    5
    5
    5
    5

  • iSCSI Initiator still connects to NAS after deleting OVM Storage Array

    I've got a 3.0.2 test rig with a couple of VM servers, a VM Manager and an Iomega NAS
    I have a working solution using NFS storage, so the next step was to get a working solution with iSCSI
    I started by enabling iSCSI and creating LUNs on an Iomega NAS
    I then created an iSCSI Storage Array in OVM which pointed at the NAS
    I registered a couple of iSCSI initiators in the OVM configuration.
    I then created a server pool etc and everything looks good
    Time to cleanup
    I deleted the iSCSI-based repository contents and then the server pool.
    I deleted the LUNs from the OVM Storage Array definition.
    I tried to remove the iSCSI initiators, but it said it couldn't do that with a generic iSCSI initiator.
    I deleted the OVM Storage Array.
    I then went to the Iomega NAS to delete the LUNs, and it says that there are still VM Server iSCSI initiators connected.
    I suspect I will be able to get rid of the iSCSI initiators connecting to the NAS by shutting down my two VM servers and then rebooting the NAS. Since the VM servers aren't running, there will be no connections coming from them, so I will be able to delete the LUNs and turn off iSCSI.
    Should the delete of the OVM Storage Array definition have deleted the iSCSI initiators, or did I miss a step somewhere?
    At this point, is there an easier way to clean up without rebooting everything?

    After hooking up a couple of OVM 3.0.2 servers to an HP P4000 VSA, I noticed that when deleting iSCSI LUNs first from VM Manager and then from the VSA, the OVM servers would, after rebooting the hosts, try to connect to the LUNs even though they had been deleted.
    If the LUN was deleted from VM Manager, but not the SAN - the LUN would just re-appear.
    If the LUN was deleted from VM Manager, along with the iSCSI Disk Array registration - the LUN would end up in 'unmanaged iSCSI array'.
    For some reason I can't seem to un-assign a server initiator from the array's 'default access group'. I guess perhaps this is the problem.
    The only way to get around this is to delete the entire array registration!
    I created an SR and the latest news on that is that it's been filed as a bug.....
    Jeff
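
    As for cleaning up without rebooting everything: a sketch of the manual route, run on each VM server (the target IQN and portal below are placeholders for whatever iscsiadm -m session reports, and note the ovs-agent may re-establish sessions it still thinks it owns):
    iscsiadm -m session
    iscsiadm -m node -T iqn.2004-04.com.iomega:nas.target0 -p 192.168.1.50:3260 -u
    iscsiadm -m node -T iqn.2004-04.com.iomega:nas.target0 -p 192.168.1.50:3260 -o delete
    The -u logs the session out and -o delete removes the node record so it isn't re-logged-in on the next iscsi service restart; once both are done on both servers, the NAS should show no connected initiators and the LUNs can be deleted.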

  • OVM Server 3.2.8 - LUN Larger Than Allowed By The Host Adapter

    Heya,
    We have 2 blades that were recently updated to OVM Server 3.2.8. We had 2 x 100 GB LUNs assigned to those servers; however, when trying to discover them we get the following:
    Oct 27 14:34:43 XXXXXXXXXX kernel: scsi: host 0 channel 3 id 0 lun4194304 has a LUN larger than allowed by the host adapter
    Oct 27 14:34:43 XXXXXXXXXX kernel: scsi: host 0 channel 3 id 0 lun4194560 has a LUN larger than allowed by the host adapter
    These blades were previously on 3.1.1 and were updated along with the OVM Manager. Everything is working fine on the blades and the VMs there. We have had storage assigned from the SAN before, but that was on 3.1.1.
    Any idea what needs to be changed ?
    Regards

    What make of HBA is this? These LUN numbers are actually unusually high. And what storage HW provides these LUNs?
    Cheers,
    budy
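
    A hedged note on those numbers: 4194304 is 0x400000, which looks like a flat-space LUN address (the 0x40 prefix from REPORT LUNS) rather than an ordinary small LUN number, so the SAN-side host-LUN mapping is worth checking before anything on the blades. For the kernel side, something like:
    printf '%x\n' 4194304
    cat /sys/module/scsi_mod/parameters/max_luns
    shows the hex form and what the SCSI midlayer is currently willing to scan; if (and only if) that limit turns out to be the problem, it can be raised with the kernel parameter scsi_mod.max_luns=<n>, though re-mapping the LUNs to low host LUN numbers (0-255) on the SAN is usually the cleaner fix.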

  • Ovs database recreation

    Hi forum
    We are having trouble migrating to ver 3.1.1 from 3.0.2
    The upgrade process seems to hang. We tried many things, and now we are thinking of recreating the ovs schema.
    We plan to use the same UUID as the 3.0.2 version.
    Will we be able to discover the repository, the Oracle VM Servers and the Oracle virtual machines again?
    Any ideas?
    Thanks,
    Rudi

    Ok, seems like we have some progress.
    Tell me this: did you install OVM Manager 3.1.1, then try to discover your old OVM 3.0.2 servers, and that was unsuccessful? And after that you upgraded the servers to 3.1.1?
    I'm asking because it might be that the ovs schema has some data from the old version.
    The virtual IP cannot be changed, but if you had a VIP in the previous version there's no reason why it's not showing under the pool properties.
    Try reinstalling OVM Manager 3.1.1 again (complete uninstall, everything) with your UUID.
    After installation you can apply patch if you want:
    https://updates.oracle.com/Orion/PatchDetails/process_form?aru=14972593
    Also, as root in a terminal, enter:
    cd /u01/app/oracle/ovm-manager-3/machine1/base_adf_domain/bin
    ./stopWebLogic.sh
    When the script ends, enter:
    ./startWebLogic.sh (wait until the WebLogic server reaches RUNNING status, then minimize the terminal - don't close it!)
    Don't know why, but after installation of OVM Manager the ovmm service is running by default, and when I logged into the Manager every operation returned an "Aborted" status. So I restarted WebLogic and then everything went fine.
    So, when you log into OVM Manager, do the following:
    - first discover your 2 servers, one by one; when those operations are Completed, you should be able to see the Pool, Repository, Network and Storage that you created earlier
    - next, click on the Repositories tab, select your repository, click on the green arrows (Present/Unpresent...) and check that your repository is presented to the servers you want
    - then click on the blue arrows to Refresh the selected repository; after that is Completed you should see the items in your repository
    - back on the Servers and VMs tab, right-click each of your servers and select Rediscover Server
    - if there are no VMs on your servers, check Unassigned Virtual Machines; they should be there
    I've done this a couple of times, and it should work for you too ;)
    Regards.

  • Unable to install Oracle VM 3.0.2 to an iSCSI disk

    Dear users,
    I am trying to install Oracle VM 3.0.2 Server on an iSCSI disk without luck. Every time I try it, the console (Alt+F4) shows me this error:
    "connection1:0: Could not create connection due to crc32c loading error. Make sure the crc32 module is built as a module or into the kernel"
    "session1: couldn't create a new connection"
    "Connection1:0 to [target iqn........, portal: 1.1.1.1,3260] through [iface: default] is shutdown.
    Seeing console alt+f3 appears:
    "iSCSI initiator name iqn......"
    "iSCSI startup"
    It seems like anaconda installer doesn't sees ip address configuration. I have tried to pass "asknetwork" option when I launch setup, but anaconda never demands network configruation. After that I have configured ethernet device on console alt+f2. Maybe is this the problem or is not possible to install Oracle VM 3.0.2 on an iSCSI disk??
    Thanks.

    Just to let you know that I actually have an SR open with Oracle about this.
    I managed to track down that the problem is due to OVM 3's inability to read iBFT (iSCSI Boot Firmware Table) during install.
    This was also the case with OVM 2.x - but as Oracle Linux 5.x boots fine via iSCSI using iBFT (we have an OVM 3 Manager running on OL6 booting from SAN fine), we stupidly assumed that with OVM 3 being based on a newer kernel, the iBFT stuff would just work.
    It didn't!
    The only iSCSI booting Oracle currently supports is via a full-blown HBA. To clarify, this is not one of those new NICs with an embedded iSCSI initiator in the ROM - we're talking about an uber-expensive HBA that can load the entire iSCSI stack and present your iSCSI LUN as a local disk.
    My SR is now over a month old and unfortunately there has been very little progress made with it.
    But when it comes I will be sure to post it here.....
    I'm really surprised more people haven't asked for this functionality. Seems a bit of a no brainer....
    Cheers,
    Jeff

  • NFS vs iSCSI for Storage Repositories

    Anyone have any good guidance on using NFS vs iSCSI for larger production deployments of OVM 3?
    My testing has been pretty positive with NFS, but other than the documented "it's not as fast as block storage" and the fact that there are no instant clones (no OCFS2), has anyone else weighed the two for OVM? If so, what did you choose and why?
    Currently we are testing NFS that's presented from a Solaris HA cluster servicing a ZFS pool (basically mimicking the ZFS 73xx and 74xx appliances), but I don't know how the same setup would perform if the ZFS pool grew to 10 TB of running virtual disk images.
    Any feedback?
    Thanks
    Dave

    Dave wrote: "Would you personally recommend against using one giant NFS mount to store VM disk images?"
    I don't recommend against it, it's just most often the slowest possible storage solution in comparison to other mechanisms. NFS cannot take advantage of any of the OCFS2 reflinking, so guests must be fully copied from the template, which is time consuming. Loop-mounting a disk image on NFS is less efficient than loop-mounting it via iSCSI or directly in the guest. FC-SAN is usually the most efficient storage, but bonded 10 Gbps interfaces for NFS or iSCSI may now be faster. If you have dual 8 Gbps FC HBAs vs dual 1 Gbps NICs for NFS/iSCSI, the FC SAN will win.
    Essentially, you have to evaluate what your critical success factors are and then make storage decisions based on that. As you have a majority of Windows guests, you need to present the block devices via Oracle VM, so you need to use either virtual disk images (which are the slowest, but easiest to manage) or FC/iSCSI LUNs presented to the guest (which are much faster, but more difficult to manage).
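
    To put rough numbers on that last comparison (raw link rates, before protocol overhead): dual 1 Gbps NICs give about 2 Gbps (~250 MB/s) aggregate for NFS/iSCSI, dual 8 Gbps FC HBAs about 16 Gbps (~1.6 GB/s, since 8GFC carries roughly 800 MB/s of payload per link), and bonded dual 10 GbE about 20 Gbps (~2.5 GB/s) - which is why the Ethernet-based options only overtake FC once you move to 10 GbE.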

  • Ix4-300d iSCSI disconnect Windows 2012R2

    Hello Everyone,
    I just recently purchased (3 days ago) an ix4-300d and updated it to the latest patch. I have 4 x WD 3 TB disks and they are configured in a RAID 0 configuration. I then have 2 NICs (not bonded), using the second port for the iSCSI LAN (192.168.100.100, jumbo frames 9000) and the first port for accessing the device (192.168.1.100). I carve out a 3 TB iSCSI LUN (with CHAP) and give it a name. My Windows Server sees this (under the 192.168.100.x network) and I'm able to bring it online, mount it, format it and use the LUN. I then start copying about 2.5 TB of data off an ix2 back to this ix4, and about 1/4 of the way through the copy, the iSCSI disconnects and the actual unit is frozen. It won't respond to the web interface on the 192.168.1.x LAN either. I have to actually unplug it and then plug it back in for the unit to respond. I know that iSCSI works on the Windows Server, as I replaced a custom NAS server using OpenFiler that worked for over 2 years w/o issues/reboots or anything else. The only thing that has changed is the NAS box (the ix4-300d). Can anyone think of anything I could be doing wrong? What should I check? I am thinking it is a bad unit, but I want to check everything first.
    I will say the only thing that *could* be out of line (but is VERY doubtful to be the problem) is that 3 of the WD drives are 3 TB Reds and 1 is a 3 TB Black. I'm going to remove the Black drive and redo the entire unit to see if that resolves the problem, but I doubt it will. I know the Reds are "certified".
    I have already removed the jumbo frames just in case (though the Windows networking card supports jumbo frames 9000, since the old SAN was working with these w/o issue).
    Thoughts anyone???

    Hello CPXGreg
    Using a mixed disk array can cause instabilities, so I do recommend that you try using the unit without the WD Black disk for at least testing.
    Next, there are various factors that could cause or contribute to a firmware lock like this. Usually this is caused by the media server crashing, which then escalates to a full firmware lock. I recommend checking that the media server has been turned off for now, along with any unused protocols. If the unit still locks up, I recommend that you contact LenovoEMC support to have a dump report from the unit reviewed. This way we can check what may be at the root of the issue.
    LenovoEMC Contact Information is region specific. Please select the correct link then access the Contact Us at the top right:
    US and Canada: https://lenovo-na-en.custhelp.com/
    Latin America and Mexico: https://lenovo-la-es.custhelp.com/
    EU: https://lenovo-eu-en.custhelp.com/
    India/Asia Pacific: https://lenovo-ap-en.custhelp.com/
    http://support.lenovoemc.com/

  • Can't create repository as iSCSI physical disk

    Using OVM 3.1.1
    I am using a ZFS Storage Appliance (simulated under virtualbox for testing) as a SAN.
    I created two iSCSI LUN devices on the appliance in the same ZFS pool: a LUN for a server pool, and a LUN for a repository.
    After creating an Access Group that two of my OVM servers could use to see this pool, I was able to create a pool using the LUN I made for the server pool.
    Of course I have already installed the ZFS-SA plugin on my two OVM nodes, so this all works.
    When I try to create the repository using the other iSCSI LUN, I get the message box with the spinning timer icon telling me that it is creating the repository.
    However, the process times out and fails. I have cut and pasted the details of the failure here.
    The interesting thing is that instead of an iSCSI LUN, I can create an NFS share on the ZFS-SA, mount that, use it as a repository, and have it work.
    That's not what I want, however.
    What's going on? The detailed log output gives me no clue whatsoever as to what is wrong. Looks like it's clashing with OCFS2 or something.
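    One hedged diagnostic before retrying through the Manager: the job below dies inside storage_plugin_createFileSystem, i.e. the OCFS2 plugin making the filesystem on the multipath device, so running the equivalent step by hand on the node shows whether mkfs itself completes or hangs (device path taken from the log below; the -N flag is illustrative, and this destroys anything on that LUN):
    multipath -ll 3600144f0c17a765000004fe11a280004
    mkfs.ocfs2 -N 8 /dev/mapper/3600144f0c17a765000004fe11a280004
    If the mkfs hangs, the problem is on the storage/multipath side rather than in OVM Manager itself.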
    Job Construction Phase
    begin()
    Appended operation 'File System Construct' to object '0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)'.
    Appended operation 'Cluster File System Present' to object 'ec686c238f27311b'.
    Appended operation 'Repository Construct' to object '0004fb000003000027ca3c09e0f30673 (SUN (2))'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [Cluster] ec686c238f27311b
    Operation: Cluster File System Present
    Object (CREATED): [LocalFileSystem] 0004fb0000050000e580a3d171ecf6c1 (fs_repo01)
    Object (IN_USE): [LocalFileServer] 0004fb00000900008e232246f9e4b224 (Local FS ovm-dev-02)
    Object (CREATED): [Repository] 0004fb000003000027ca3c09e0f30673 (repo01)
    Operation: Repository Construct
    Object (IN_USE): [LocalFileServer] 0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)
    Operation: File System Construct
    Object (IN_USE): [StorageElement] 0004fb00001800009299c1a46c0e3979 (SUN (2))
    Job Running Phase at 00:39 on Wed, Jun 20, 2012
    Job Participants: [34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)]
    Actioner
    Starting operation 'Cluster File System Present' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
    Completed operation 'Cluster File System Present' completed with direction ==> DONE
    Starting operation 'Repository Construct' on object '0004fb000003000027ca3c09e0f30673 (repo01)'
    Completed operation 'Repository Construct' completed with direction ==> LATER
    Starting operation 'File System Construct' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
    Job: 1340118585250, aborted post-commit by user: admin
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=1450 method=addTransactionIdentifier accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=675 method=createFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setFoundryContext accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=onPersistableCreate accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setRollbackLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setRefreshed accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setBackingDevices accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setUuid accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setPath accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setStorageDevice accessLevel=6
    Class=StorageElementDbImpl vessel_id=1273 method=addLayeredFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=921 method=addFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=addLocalFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setCluster accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setAsset accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=createRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setFoundryContext accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=onPersistableCreate accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setRollbackLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setRefreshed accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setDom0Uuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSharePath accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setManagerUuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setVersion accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=addJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setDescription accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setAssociatedHandles accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=setCurrentJobOperationComplete accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:894)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.storage.LocalFileServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:890)
    ... 27 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 30 more
    FailedOperationCleanup
    Starting failed operation 'File System Construct' cleanup on object 'fs_repo01'
    Complete rollback operation 'File System Construct' completed with direction=fs_repo01
    Rollbacker
    Executing rollback operation 'Cluster File System Present' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
    Complete rollback operation 'Cluster File System Present' completed with direction=DONE
    Executing rollback operation 'File System Construct' on object '0004fb0000050000e580a3d171ecf6c1 (fs_repo01)'
    Complete rollback operation 'File System Construct' completed with direction=DONE
    Objects To Be Rolled Back
    Object (IN_USE): [Cluster] ec686c238f27311b
    Object (CREATED): [LocalFileSystem] 0004fb0000050000e580a3d171ecf6c1 (fs_repo01)
    Object (IN_USE): [LocalFileServer] 0004fb00000900008e232246f9e4b224 (Local FS ovm-dev-02)
    Object (CREATED): [Repository] 0004fb000003000027ca3c09e0f30673 (repo01)
    Object (IN_USE): [LocalFileServer] 0004fb0000090000de3c84de0325cbb2 (Local FS ovm-dev-01)
    Object (IN_USE): [StorageElement] 0004fb00001800009299c1a46c0e3979 (SUN (2))
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=1450 method=addTransactionIdentifier accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=675 method=createFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setFoundryContext accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=onPersistableCreate accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setRollbackLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setRefreshed accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setBackingDevices accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setUuid accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setPath accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setStorageDevice accessLevel=6
    Class=StorageElementDbImpl vessel_id=1273 method=addLayeredFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setSimpleName accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=921 method=addFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addFileServer accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=addLocalFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setCluster accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=setAsset accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=createRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setFoundryContext accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=onPersistableCreate accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setRollbackLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setRefreshed accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setDom0Uuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSharePath accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=addRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setManagerUuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setVersion accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=addJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setDescription accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setAssociatedHandles accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=setCurrentJobOperationComplete accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setTuringMachineFlag accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=1450 method=setFailedOperation accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=1459 method=nextJobOperation accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=921 method=nextJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=1464 method=nextJobOperation accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
    Class=StorageElementDbImpl vessel_id=1273 method=nextJobOperation accessLevel=6
    Class=ClusterDbImpl vessel_id=1374 method=nextJobOperation accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=675 method=nextJobOperation accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000de3c84de0325cbb2] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:894)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at sun.reflect.GeneratedMethodAccessor728.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.storage.LocalFileServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at sun.reflect.GeneratedMethodAccessor1229.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:890)
    ... 27 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000e580a3d171ecf6c1 /dev/mapper/3600144f0c17a765000004fe11a280004 0, Status: java.lang.InterruptedException
    Wed Jun 20 00:41:47 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 30 more
    End of Job

    To frustrate me further, trying to remove my repository LUN from the access group hasn't worked.
    I began by editing the storage access group and swapping the repository LUN I created back out and onto the left panel (where it is to be unused, I imagine).
    What's ridiculous is that the LUN isn't mounted or used by anything (it didn't work, remember?). If it was easy to add to the group... why so hard to remove?
    ...to my surprise, I get this:
    Perhaps it's because I am attempting to do something before something else has been properly cleaned up.
    It would be nice to be told of that, rather than have error messages like this thrust at you.
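    (For what it's worth, the root cause buried in the log below is the line 'device-mapper: remove ioctl failed: Device or resource busy': something on ovm-dev-01 still holds the multipath map open. A sketch for checking by hand, using the map name from the log:
    dmsetup info 3600144f0c17a765000004fe138810007
    lsof /dev/mapper/3600144f0c17a765000004fe138810007
    multipath -f 3600144f0c17a765000004fe138810007
    dmsetup info shows the open count, lsof shows any process holding the device, and multipath -f flushes the map once nothing does; a leftover mount or OCFS2 heartbeat from the earlier failed repository job would be my first guess at the holder.)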
    Job Construction Phase
    begin()
    Appended operation 'Storage Element Teardown' to object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'.
    Appended operation 'Storage Element Teardown' to object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'.
    Appended operation 'Storage Element UnPresent' to object '0004fb00001800000e2fa7b367ddf334 (repo01)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)
    Operation: Storage Element Teardown
    Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:fe7bba90add3 : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
    Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)
    Operation: Storage Element Teardown
    Object (IN_USE): [AccessGroup] group01 @ 0004fb00000900003b330fb87739bbfe (group01)
    Object (IN_USE): [IscsiStorageTarget] iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643
    Object (IN_USE): [StorageElement] 0004fb00001800000e2fa7b367ddf334 (repo01)
    Operation: Storage Element UnPresent
    Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:fe7bba90add3
    Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:43e520f2e5f : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
    Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:43e520f2e5f
    Job Running Phase at 02:56 on Wed, Jun 20, 2012
    Job Participants: [34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)]
    Actioner
    Starting operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'
    Sending storage element teardown command to server [ovm-dev-02] for element whose page 83 id is [3600144f0c17a765000004fe138810007]
    Completed operation 'Storage Element Teardown' completed with direction ==> DONE
    Starting operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'
    Sending storage element teardown command to server [ovm-dev-01] for element whose page 83 id is [3600144f0c17a765000004fe138810007]
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012...
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:75)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:821)
    at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:71)
    ... 25 more
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:817)
    ... 26 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 29 more
    FailedOperationCleanup
    Starting failed operation 'Storage Element Teardown' cleanup on object 'ovm-dev-01'
    Complete rollback operation 'Storage Element Teardown' completed with direction=ovm-dev-01
    Rollbacker
    Executing rollback operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)'
    Complete rollback operation 'Storage Element Teardown' completed with direction=DONE
    Executing rollback operation 'Storage Element Teardown' on object '34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)'
    Complete rollback operation 'Storage Element Teardown' completed with direction=DONE
    Objects To Be Rolled Back
    Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:33:42:42 (ovm-dev-02)
    Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:fe7bba90add3 : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
    Object (IN_USE): [Server] 34:35:33:33:33:30:43:4e:37:37:34:37:30:32:53:35 (ovm-dev-01)
    Object (IN_USE): [AccessGroup] group01 @ 0004fb00000900003b330fb87739bbfe (group01)
    Object (IN_USE): [IscsiStorageTarget] iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643
    Object (IN_USE): [StorageElement] 0004fb00001800000e2fa7b367ddf334 (repo01)
    Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:fe7bba90add3
    Object (DELETING): [IscsiStoragePath] iqn.1988-12.com.oracle:43e520f2e5f : iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643 (3600144f0c17a765000004fe138810007)
    Object (IN_USE): [IscsiStorageInitiator] iqn.1988-12.com.oracle:43e520f2e5f
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=2295 method=addTransactionIdentifier accessLevel=6
    Class=StorageElementDbImpl vessel_id=1923 method=unpresent accessLevel=6
    Class=ServerDbImpl vessel_id=728 method=teardownStorageElements accessLevel=6
    Class=ServerDbImpl vessel_id=1686 method=teardownStorageElements accessLevel=6
    Class=AccessGroupDbImpl vessel_id=1941 method=removeStorageElement accessLevel=6
    Class=IscsiStorageInitiatorDbImpl vessel_id=845 method=deleteStoragePath accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2070 method=setLifecycleState accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2070 method=setRollbackLifecycleState accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2070 method=onPersistableClean accessLevel=6
    Class=StorageElementDbImpl vessel_id=1923 method=removeStoragePath accessLevel=6
    Class=IscsiStorageTargetDbImpl vessel_id=580 method=removeStoragePath accessLevel=6
    Class=IscsiStorageInitiatorDbImpl vessel_id=1803 method=deleteStoragePath accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2132 method=setLifecycleState accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2132 method=setRollbackLifecycleState accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2132 method=onPersistableClean accessLevel=6
    Class=StorageElementDbImpl vessel_id=1923 method=removeStoragePath accessLevel=6
    Class=IscsiStorageTargetDbImpl vessel_id=580 method=removeStoragePath accessLevel=6
    Class=InternalJobDbImpl vessel_id=2295 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=2295 method=setAssociatedHandles accessLevel=6
    Class=ServerDbImpl vessel_id=728 method=setCurrentJobOperationComplete accessLevel=6
    Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=2295 method=setFailedOperation accessLevel=6
    Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2132 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
    Class=AccessGroupDbImpl vessel_id=1941 method=nextJobOperation accessLevel=6
    Class=IscsiStorageTargetDbImpl vessel_id=580 method=nextJobOperation accessLevel=6
    Class=StorageElementDbImpl vessel_id=1923 method=nextJobOperation accessLevel=6
    Class=IscsiStorageInitiatorDbImpl vessel_id=1803 method=nextJobOperation accessLevel=6
    Class=IscsiStoragePathDbImpl vessel_id=2070 method=nextJobOperation accessLevel=6
    Class=IscsiStorageInitiatorDbImpl vessel_id=845 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=728 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=1686 method=nextJobOperation accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012...
    Wed Jun 20 02:56:08 CST 2012
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [teardown] failed for storage server [{access_grps=[{grp_name=default, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f], grp_modes=[]}, {grp_name=group01, grp_entries=[iqn.1988-12.com.oracle:43e520f2e5f, iqn.1988-12.com.oracle:fe7bba90add3], grp_modes=[]}], passwd=null, admin_passwd=W1a,1bT7, storage_id=[iqn.1986-03.com.sun:02:a0386184-4a49-eaeb-f2f0-b3b9070f3643], chap=false, access_host=zfs-app.icesa.catholic.edu.au, storage_server_id=2b34e1ce-5465-ecd1-9a10-c17a7650ba08, vol_groups=[{vol_alloc_sz=0, vol_free_sz=0, vol_used_sz=0, vol_name=ovmpool/local/default, vol_total_sz=0, vol_desc=}], username=null, name=0004fb00000900003b330fb87739bbfe, admin_user=cesa_ovm, uuid=0004fb00000900003b330fb87739bbfe, extra_info=OVM-iSCSI,OVM-iSCSI-Target, access_port=3260, storage_type=iSCSI, admin_host=zfs-app.icesa.catholic.edu.au}] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012...
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:75)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at sun.reflect.GeneratedMethodAccessor835.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at sun.reflect.GeneratedMethodAccessor1118.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_deviceTeardown] failed for storage server [0004fb00000900003b330fb87739bbfe] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:821)
    at com.oracle.ovm.mgr.op.physical.storage.StorageElementTeardown.action(StorageElementTeardown.java:71)
    ... 25 more
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ovm-dev-01 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.StoragePluginAction.teardownStorageElement(StoragePluginAction.java:817)
    ... 26 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_deviceTeardown oracle.s7k.SCSIPlugin.SCSIPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to tear down multipath device /dev/mapper/3600144f0c17a765000004fe138810007: device-mapper: remove ioctl failed: Device or resource busy\nCommand failed\n'
    Wed Jun 20 02:56:08 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 29 more
    End of Job
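    For anyone hitting the same teardown failure: 'device-mapper: remove ioctl failed: Device or resource busy' means something on the server still holds the multipath map open. A quick way to see what that is (a sketch only - the WWID below is taken from the log above, substitute your own):
    # dmsetup info /dev/mapper/3600144f0c17a765000004fe138810007
    * check the "Open count" field - anything above 0 means the map is still in use
    # fuser -vm /dev/mapper/3600144f0c17a765000004fe138810007
    * lists processes holding the device or a filesystem mounted on it
    # multipath -f 3600144f0c17a765000004fe138810007
    * once nothing holds it, this flushes the map manually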

  • Repository Creation fails with Time Out Error

    Hi
    I am trying a test setup of OVM 3.1.1-305. I am using OpenFiler for iSCSI disks and have created a 20GB iSCSI disk for the pool filesystem and another 260GB iSCSI disk for the data repository. The server pool gets created fine with the pool FS and the HA and Cluster options enabled, but when presenting the iSCSI disk for the data repository the job times out. Details of the error message are below. Please help!
    The Job has timed out while executing, and will be aborted.
    Job Construction Phase
    begin()
    Appended operation 'File System Construct' to object '0004fb000009000001b30fa7bc24bef1 (Local FS ibcovs1)'.
    Appended operation 'Cluster File System Present' to object 'fb5840a9a3a78f51'.
    Appended operation 'Repository Construct' to object '0004fb00000300009bfb51738df3f282 (OPNFILER (2))'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [LocalFileServer] 0004fb000009000001b30fa7bc24bef1 (Local FS ibcovs1)
    Operation: File System Construct
    Object (CREATED): [Repository] 0004fb00000300009bfb51738df3f282 (OVSRepos)
    Operation: Repository Construct
    Object (IN_USE): [LocalFileServer] 0004fb0000090000a8d525219658c28a (Local FS ibcovs2)
    Object (IN_USE): [Cluster] fb5840a9a3a78f51
    Operation: Cluster File System Present
    Object (CREATED): [LocalFileSystem] 0004fb00000500000cd1d94d87e829e8 (fs_OVSRepos)
    Object (IN_USE): [StorageElement] 0004fb0000180000bf6e8d3ff957b21b (OPNFILER (2))
    Job Running Phase at 04:01 on Mon, Dec 3, 2012
    Job Participants: [44:45:4c:4c:48:00:10:44:80:52:b4:c0:4f:33:42:53 (ibcovs1), 44:45:4c:4c:48:00:10:46:80:32:c6:c0:4f:34:42:53 (ibcovs2)]
    Actioner
    Starting operation 'File System Construct' on object '0004fb00000500000cd1d94d87e829e8 (fs_OVSRepos)'
    Job: 1354487506506, aborted post-commit by user: admin
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=2477 method=addTransactionIdentifier accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=788 method=createFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setFoundryContext accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=onPersistableCreate accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setRollbackLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setRefreshed accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setBackingDevices accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setUuid accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setPath accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setSimpleName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=addFileServer accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setStorageDevice accessLevel=6
    Class=StorageElementDbImpl vessel_id=1121 method=addLayeredFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setSimpleName accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=922 method=addFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=addFileServer accessLevel=6
    Class=ClusterDbImpl vessel_id=1340 method=addLocalFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setCluster accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=setAsset accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=createRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setName accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setFoundryContext accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=onPersistableCreate accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setRollbackLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setRefreshed accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setDom0Uuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setSharePath accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=2486 method=addRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setManagerUuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setVersion accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=addJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=2492 method=setDescription accessLevel=6
    Class=InternalJobDbImpl vessel_id=2477 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=2477 method=setAssociatedHandles accessLevel=6
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb000009000001b30fa7bc24bef1] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ibcovs1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500000cd1d94d87e829e8 /dev/mapper/14f504e46494c4552637a6f5154372d7643426d2d49333763 0, Status: java.lang.InterruptedException
    Mon Dec 03 04:03:48 IST 2012
    Mon Dec 03 04:03:48 IST 2012] OVMAPI_4010E Attempt to send command: dispatch to server: ibcovs1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500000cd1d94d87e829e8 /dev/mapper/14f504e46494c4552637a6f5154372d7643426d2d49333763 0, Status: java.lang.InterruptedException
    Mon Dec 03 04:03:48 IST 2012
    Mon Dec 03 04:03:48 IST 2012
    Mon Dec 03 04:03:48 IST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:894)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.storage.LocalFileServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: ibcovs1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500000cd1d94d87e829e8 /dev/mapper/14f504e46494c4552637a6f5154372d7643426d2d49333763 0, Status: java.lang.InterruptedException
    Mon Dec 03 04:03:48 IST 2012
    Mon Dec 03 04:03:48 IST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:890)
    ... 27 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500000cd1d94d87e829e8 /dev/mapper/14f504e46494c4552637a6f5154372d7643426d2d49333763 0, Status: java.lang.InterruptedException
    Mon Dec 03 04:03:48 IST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 30 more
    FailedOperationCleanup
    Starting failed operation 'File System Construct' cleanup on object 'fs_OVSRepos'
    Complete rollback operation 'File System Construct' completed with direction=fs_OVSRepos
    Rollbacker
    Executing rollback operation 'File System Construct' on object '0004fb00000500000cd1d94d87e829e8 (fs_OVSRepos)'
    Complete rollback operation 'File System Construct' completed with direction=DONE
    Objects To Be Rolled Back
    Object (IN_USE): [LocalFileServer] 0004fb000009000001b30fa7bc24bef1 (Local FS ibcovs1)
    Object (CREATED): [Repository] 0004fb00000300009bfb51738df3f282 (OVSRepos)
    Object (IN_USE): [LocalFileServer] 0004fb0000090000a8d525219658c28a (Local FS ibcovs2)
    Object (IN_USE): [Cluster] fb5840a9a3a78f51
    Object (CREATED): [LocalFileSystem] 0004fb00000500000cd1d94d87e829e8 (fs_OVSRepos)
    Object (IN_USE): [StorageElement] 0004fb0000180000bf6e8d3ff957b21b (OPNFILER (2))
    Thanks & Regards

    Check your messages log and the other logs on the VM server; the job log doesn't go into enough detail to tell what's causing the issue. It did fail cleanup, meaning that your OCFS2 filesystem is probably orphaned. You may have to clear the iSCSI LUN before trying again.
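    If the filesystem is indeed orphaned, one way to clear the LUN before retrying is to wipe the start of the disk where the OCFS2 superblock lives. This is destructive and the device name below is a placeholder, so triple-check the WWID from 'multipath -ll' first:
    # multipath -ll
    * identify the repository LUN's WWID, then zero its first few MB:
    # dd if=/dev/zero of=/dev/mapper/<lun-wwid> bs=1M count=10
    * after this, refresh the SAN server in OVM Manager and retry the repository creation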

  • Can not attach older iSCSI lun to repository

    I am a current Virtual Iron user and have set up OVM to replace my aging system. I have everything set up on some new hardware and up to a point where I want to set up some VMs.
    I have an older VM under Virtual Iron I am no longer using, so I thought this would be a good test subject.
    I assigned both my OVM servers to the volume in my iSCSI SAN.
    The SAN and volume show correctly within the storage tab.
    I have rescanned for physical disks on each server and found the volume.
    I cannot create a VM just using the physical disk; it wants me to use a repository.
    OK, so I tried to create a repository using the existing volume that contains my server... no go. I get this error:
    (10/16/2012 12:38:54:518 PM)
    OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000df4ea0c58186307e] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: OVMS1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500003beb40ffab937285 /dev/mapper/36000eb346b4e2d99000000000000012e 0, Status: OSCPlugin.InvalidValueEx:'The backing device /dev/mapper/36000eb346b4e2d99000000000000012e is not allowed to contain partitions'
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012] OVMAPI_4010E Attempt to send command: dispatch to server: OVMS1 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb00000500003beb40ffab937285 /dev/mapper/36000eb346b4e2d99000000000000012e 0, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.InvalidValueEx:'The backing device /dev/mapper/36000eb346b4e2d99000000000000012e is not allowed to contain partitions'
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012
    Tue Oct 16 12:38:54 CDT 2012
    Looks like it will not use the volume because something is on it.
    In Virtual Iron, I could use a virtual storage (repository), local disks, or iSCSI luns for my VMs.
    So how am I to use my older server within OVM?

    OK, found that someone else was having similar problems:
    Oracle VM Manager 3.1.1: Discovering SAN Servers
    My solution was to scrap the whole thing and start over with the new 3.2.1 beta (build 258). This has given me the ability to see all my SANs now.
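    If starting over isn't an option, the error itself suggests a workaround: the OCFS2 plugin refuses to format a device that still carries the old Virtual Iron partition table. A possible fix - destructive, so copy the old VM's data off the LUN first - is to verify and zero the partition table, then rescan (the device name is the one from the error above):
    # fdisk -l /dev/mapper/36000eb346b4e2d99000000000000012e
    * if a partition table shows up, zero the MBR (first 512 bytes):
    # dd if=/dev/zero of=/dev/mapper/36000eb346b4e2d99000000000000012e bs=512 count=1
    # kpartx -d /dev/mapper/36000eb346b4e2d99000000000000012e
    * kpartx -d drops any stale partition mappings; then rescan physical disks in OVM Manager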

  • Re-connect to SAN filesystem

    So my OV 3.1 environment had some serious issues. The two VM servers kept rebooting repeatedly - I tried booting into rescue mode but I couldn't find any issues with any config files. It just looked like the OVS agent on each server was losing the heartbeat. If I took the agent down (quickly) the server would stay up, but I couldn't correct the problem.
    At the same time, the VM Manager server was pegged and I wasn't able to acknowledge any events in the GUI. I also couldn't remove the rebooting servers from the server pool.
    So I first decided to rebuild the servers. That was pretty straightforward. But the screwed-up VM Manager wouldn't let me do anything with those servers. Rebooting the VM Manager server again didn't help. So then I decided to remove and reinstall VM Manager. Again, that was pretty straightforward, and I used the uuid when reinstalling.
    So I was able to recreate the Network and Storage settings in VM Manager and it found the server pool filesystem LUN and the storage LUN that has the VMs on it. I created the Server Pool and it used the server pool filesystem LUN OK. But now I can't access the storage LUN. I don't know how to reconnect it to the VM Manager. I tried mounting it manually on a VM server just to see if there was something like a config file with a uuid that I could modify, but when I tried that I got the message "mount.ocfs2: Cluster name is invalid while trying to join the group". If I try to create a new repository, the LUN doesn't appear in the list of physical disks. I've tried refreshing and rebooting but that doesn't work. So I'm not sure what to do next.

    OK, so Oracle Support helped me solve this one. I sent them vmpinfo output and this is what they had me do:
    1. On the master server, find the device for the LUN in question (you can use 'multipath -ll') and run:
    fsck.ocfs2 /dev/mapper/36090a098805639d7c5fef443bc01f049 <----That's the device on my system, yours will be different
    2. Update the cluster id:
    tunefs.ocfs2 --update-cluster-stack /dev/mapper/36090a098805639d7c5fef443bc01f049
    3. Check that the cluster id has been modified:
    mounted.ocfs2 -d
    4. In VM Manager, refresh storage. This didn't solve the problem, so I rebooted my VM Servers and the VM Manager at this point, but they didn't say I had to. The OV servers were logged out of the iSCSI LUNs at this point, so I ran 'iscsiadm -m node -l' on the OV servers.
    5. After I gave Oracle Support a new vmpinfo output, they had me back up my VM Manager database before proceeding further.
    6. Support then provided me with a file, setcluster.zip, that I had to unzip under $ORACLE_BASE/ovm-manager-3 (on the VM Manager server). It created the 'runner' subdirectory and I cd'd to that directory.
    7. I ran './setcluster -u admin -p <password>'. I had to choose the LUN in question, then the Server Pool it should belong to, and then confirm that I wanted to attach the repository to the server pool. This made the Repository show up in VM Manager, but it was empty and the LUN wasn't mounted on the OV Servers.
    8. Finally, I had to Present the OV servers to the Repository and then Refresh the Repository. Everything showed up in the repository after that - all my VMs, Templates, etc.
    9. Oh, one more thing: the Virtual Machine network showed up with a hexadecimal name, so I just renamed it and added the VM Servers to it. This allowed my VMs to successfully boot.
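    Condensed, steps 1-3 come down to three commands on the master server (the device path is the one from this post - substitute your own from 'multipath -ll'):
    # fsck.ocfs2 /dev/mapper/36090a098805639d7c5fef443bc01f049
    # tunefs.ocfs2 --update-cluster-stack /dev/mapper/36090a098805639d7c5fef443bc01f049
    # mounted.ocfs2 -d
    * and if the servers get logged out of their iSCSI LUNs along the way:
    # iscsiadm -m node -l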
    I hope this info helps someone if they find themselves in the same situation.
    -Rich

  • Oracle VM 3.1.1: Using iSCSI LUN's as Device-Backed Virtual Machine Disks?

    Will Oracle VM Manager 3.1.1 support the use of iSCSI LUN's as device-backed virtual machine disks? i.e., Can iSCSI LUN's be treated like directly-attached storage (DAS) and used as virtual machine disks?
    Eric Pretorious
    Truckee, CA

    user12273962 wrote:
    I don't personally use iSCSI. BUT, I don't see why you can't create physical pass-through attachments to virtual guests using iSCSI LUNs. It is no different from FC or direct-attach storage. Worst case scenario... you can create a repo on the iSCSI LUN and then create virtual disks.
    Oracle VM Getting Started Guide for Release 3.1.1, Chapter 7, "Create a Storage Repository":
    "A storage repository is where Oracle VM resources may reside... Resources include virtual machines, templates for virtual machine creation, virtual machine assemblies, ISO files (DVD image files), shared virtual disks, and so on."
    This would have the effect of using file-backed virtual machine disks, but at least they'd be centrally located on shared storage (and, therefore, the virtual machine could run as an HA guest).
    budachst wrote:
    I have kicked this idea around as well, but I decided to do it slightly differently: provide the storage repo via multipathed iSCSI and use iSCSI within my guests if needed for better speed or lower disk latency.
    If you want to use iSCSI as the virtual devices for your guests in a clustered server pool, you'd have to grant access to all of your iSCSI targets to every VM server, and I was not comfortable with that. I'd rather grant only the guest direct access to "its" iSCSI target. So I will keep all of my guests' system images on the clustered server pool - and additionally also the virtual disks that don't need high performance - and have the really heavy-duty vdisks as separate iSCSI targets.
    That seems logical, Budy:
    - Use a multipathed iSCSI repository for hosting the virtual machine's disk image, and then;
    - Utilize iSCSI LUN's for "more performant" parts of the guest's file system (e.g., /var).
    I'm still puzzled about that image, though, i.e., the fourth image from the bottom of Chapter 7.7, "Creating a Virtual Machine" ("To add a physical disk:...") where it's clear that the existing "physical" disks are actually iSCSI LUN's that are attached to the OVM server.
    I suppose that I'll have to configure some iSCSI LUN's and try it out to be certain. Just another tool that I don't have but will need to add to my toolbox.
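    For what it's worth, budachst's scheme - attaching the heavy-duty LUN directly inside the guest - needs nothing beyond the standard initiator tools in the guest OS. A minimal sketch, where the portal address and IQN are made-up placeholders:
    # yum install iscsi-initiator-utils
    # iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260
    # iscsiadm -m node -T iqn.2012-06.com.example:heavy-vdisk -p 10.0.0.50:3260 -l
    * the LUN then appears as a plain /dev/sd* device inside the guest and can be partitioned, formatted and mounted like a local disk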

  • LUN repository without clustering

    Hi,
    I'm trying to create a repository on a single OVM server (3.0.3).
    The repository would be stored on an iSCSI LUN.
    Unfortunately, I get this error:
    OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS vrs1. Its server is not in a cluster
    Wed Apr 18 17:55:44 CEST 2012
    I only have one server, so it makes no sense to use clustering.
    Is there a workaround?
    Thanks,
    TM

    923677 wrote:
    Because with this option, I need to choose a "Storage for server pool", and my manager doesn't have access to my iSCSI server (and I only have one OVS server).
    The Manager doesn't require access to the iSCSI server, just the Server does.
    Concerning the iSCSI configuration, I added my NetApp storage with the manager, but I needed to use discovery with the iscsiadm CLI to see the LUNs appear.
    You don't need to do this: just add your server to the default initiator group within the Manager. All iSCSI components are managed and configured from the Manager.
    You need to make sure iSCSI is properly configured in the Manager first, so that the Manager knows the physical disk is being provided via iSCSI and can format it appropriately. Check the documentation for more information.
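    Once the SAN server and initiator group are configured in the Manager, a quick sanity check from the OVM server shell (just the usual initiator-side commands, not an official procedure) confirms the disk is really visible:
    # iscsiadm -m session -P 1
    * one entry per target/path the server is logged into
    # multipath -ll
    * the LUN should appear once, with its path(s) grouped under a single WWID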
