iSCSI

One alternative our NAS vendor offers to NFS is iSCSI (SCSI encapsulated in TCP/IP). It requires an iSCSI initiator in the client environment (Solaris); apparently several initiators are open source. It has been described as a poor man's Fibre Channel SAN.
Is it possible to use an iSCSI target as the iMS mail store? Would this be better than NFS, which isn't supported on my device?
I ask because our NAS (a Snap 4200) is redundant RAID 5 with lots of space, and I would feel better having the mail store in a failover environment.
Thanks for any insights,
s7

Well, I can tell you that no testing has been done with iSCSI. That doesn't mean it won't work, but . . .
We have had one report in this forum that "it works fine."
You might want to post your query in the ims-ms list, found at arnold.com. That's where our developers hang out.

Similar Messages

  • Unable to install Oracle VM 3.0.2 to an iSCSI disk

    Dear users,
    I am trying to install Oracle VM Server 3.0.2 on an iSCSI disk without luck. Every time I try, the console (alt+f4) shows this error:
    "connection1:0: Could not create connection due to crc32c loading error. Make sure the crc32 module is built as a module or into the kernel"
    "session1: couldn't create a new connection"
    "Connection1:0 to [target iqn........, portal: 1.1.1.1,3260] through [iface: default] is shutdown.
    Switching to console alt+f3 shows:
    "iSCSI initiator name iqn......"
    "iSCSI startup"
    It seems like the anaconda installer doesn't see the IP address configuration. I tried passing the "asknetwork" option when launching setup, but anaconda never asks for network configuration. After that I configured the ethernet device on console alt+f2. Maybe that is the problem, or is it simply not possible to install Oracle VM 3.0.2 on an iSCSI disk?
    Thanks.

    Just to let you know that I actually have an SR open with Oracle about this.
    I managed to track the problem down to OVM 3's inability to read the iBFT (iSCSI Boot Firmware Table) during install.
    This was also the case with OVM 2.x, but since Oracle Linux 5.x boots fine from iSCSI via iBFT (we have OVM 3 Manager running on OL6, booting from SAN fine), we assumed that OVM 3, being based on a newer kernel, would handle iBFT correctly.
    It didn't!
    The only iSCSI booting Oracle currently supports is via a full-blown HBA. To clarify, this is not one of those newer NICs with an embedded iSCSI initiator in the ROM; we're talking about a very expensive HBA that loads the entire iSCSI stack and presents your iSCSI LUN as a local disk.
    My SR is now over a month old and unfortunately there has been very little progress made with it.
    When there is an answer I will be sure to post it here.
    I'm really surprised more people haven't asked for this functionality. It seems a bit of a no-brainer.
    Cheers,
    Jeff
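    As a footnote to the iBFT discussion above: on a Linux host you can check whether the firmware actually exposed an iSCSI Boot Firmware Table to the OS at all. A minimal sketch, assuming a mounted sysfs and kernel iscsi_ibft support (the exact entry names are typical, not guaranteed):

```shell
# Sketch: check whether the kernel exposes an iSCSI Boot Firmware Table.
# On a host booted via iBFT (with iscsi_ibft support loaded), the table
# appears under /sys/firmware/ibft as initiator/target/ethernet entries.
if [ -d /sys/firmware/ibft ]; then
    ibft_status="present"
    ls /sys/firmware/ibft        # e.g. initiator, target0, ethernet0
else
    ibft_status="absent"
fi
echo "iBFT: $ibft_status"
```

    If this prints "absent" on a machine that was supposedly iBFT-booted, the firmware/NIC never handed the table to the kernel, which matches the install failure described above.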

  • Windows 2012 Nodes - Slow CSV Performance - Need help resolving my iSCSI configuration issue

    I have spent weeks going over the forums and the net for publications and advice on how to optimize iSCSI connections, and I'm about to give up. I really need some help determining whether it's something I'm not configuring right or perhaps an equipment issue.
    Hardware:
    2x Windows 2012 Hosts with 10 NICs (same NIC configuration) in a Failover Cluster sharing a CSV LUN.
    3x NICs Teamed for Host/Live Migration (192.168.0.x)
    2x NICs teamed for Hyper-V Switch 1 (192.168.0.x)
    1x NIC teamed for Hyper-V Switch 2 (192.168.10.x)
    4x NICs for iSCSI traffic (192.168.0.x, 192.168.10.x, 192.168.20.x, 192.168.30.x)
    Jumbo frames and flow control are turned on for all NICs on the host. IPv6 is disabled. Client for Microsoft Networks and File/Printer Sharing are disabled on the iSCSI NICs.
    MPIO Least Queue Depth is selected. Round Robin gives me an error message saying "The parameter is incorrect.  The round robin policy attempts to evenly distribute incoming requests to all processing paths."
    Netgear ReadyNas 3200
    4x NICs for iSCSI traffic (192.168.0.x, 192.168.10.x, 192.168.20.x, 192.168.30.x)
    Network Hardware:
    Cisco 2960S managed switch - Flow control on, Spanning Tree on, Jumbo Frames at 9k - this is for the .0 subnet
    Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .10 subnet
    Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .20 subnet
    Netgear unmanaged switch - Flow control on, Jumbo Frames at 9k - this is for .30 subnet
    Host Configuration (things I tried turning on and off):
    Autotuning 
    RSS
    Chimney Offload
    I have 8 VMs stored in the CSV. When I try to start all 8 at the same time, they bog down. Each VM loads very slowly, and when they eventually come up, most of the important services have not started. I have to start them 1 or 2 at a time. Even then the performance is nothing like when they run from the host itself (VHD stored on the host's HDD). This is what prompted me to add more iSCSI connections to see if I could improve VM performance. Even with 4 iSCSI connections, I feel nothing has changed. The VMs still start slowly and services do not load right. If I distribute the load with 4 VMs on Host 1 and 4 VMs on Host 2, the start-up times do not change.
    As a manual test of file-copy speed, I moved the cluster resources to Host 1 and copied a VM from the CSV onto the host. The speed starts out around 250 MB/s and eventually drops to about 50-60 MB/s. If I turn off all iSCSI connections except one, I get the same speed. I can verify from the Performance tab in Task Manager that all the NICs distribute traffic evenly, but something is limiting the flow. As I stated above, I played around with autotuning, RSS and chimney offload, and none of it makes a difference.
    The VMs have been converted to VHDX and to fixed size. That did not help.
    Is there something I'm not doing right? I am working with Netgear support and they are puzzled as well. The ReadyNas device should easily be able to handle this load.
    Please help! I have pulled my hair out over this for the past two months and I'm about to give up, ditch clustering altogether and just run the VMs off the hosts themselves.
    George

    A few things...
    For starters, I recommend opening a case with Microsoft support. They will be able to dig in and help you.
    Turn on the CSV cache; it will boost your performance:
    http://blogs.msdn.com/b/clustering/archive/2012/03/22/10286676.aspx
    A file copy bears no resemblance to the unbuffered I/O a VM does, so don't use it as a comparison; you would be comparing apples to oranges.
    Do you see any I/O performance difference between the coordinator node and the non-coordinator nodes? Basically, see which node owns the cluster Physical Disk resource and measure the performance. Then move the Physical Disk resource for the CSV volume to another node, repeat the same measurement, and compare them.
    Your IP addressing seems odd: you show multiple networks on 192.168.0.x and also on 192.168.10.x. Remember that clustering only recognizes and uses one logical interface per IP subnet. I would triple-check all your IP schemes to ensure they are all different logical networks.
    Check your binding order.
    Make sure your NIC drivers and NIC firmware are updated.
    Make sure you don't have IPsec enabled; it will significantly impact your network performance.
    For the iSCSI Software Initiator, when you made your connections, make sure you didn't use 'Quick Connect'. That does a wildcard and connects over any network; you want to specify your dedicated iSCSI network.
    I have no idea what the performance capabilities of the ReadyNas are; this could all likely be associated with the shared storage.
    What speed NICs are you using? I hope at least 10 Gb...
    Hope that helps...
    Elden
    Hi Elden,
    2. The CSV cache is turned on; I have 4 GB dedicated to it from each host. With IOmeter running within the VMs, I do see the read speed jump 4-5x, but the write speed stays the same (which, according to the doc, it should). Yet even with the read speed that high, the VMs are not starting up quickly.
    4. I do not see any I/O difference between coordinator and non-coordinator nodes.
    5. I'm not 100% sure what you're saying about my IPs. Maybe if I list them out, you can explain further.
    Host 1 - 192.168.0.241 (Host/LM IP), undefined IP on the 192.168.0.x network (Hyper-V Port 1), undefined IP on the 192.168.10.x network (Hyper-V Port 2), 192.168.0.220 (iSCSI 1), 192.168.10.10 (iSCSI 2), 192.168.20.10 (iSCSI 3), 192.168.30.10 (iSCSI 4)
    The Hyper-V ports are undefined because the VMs themselves have static IPs.
    0.220 host NIC connects with the .231 NIC of the NAS
    10.10 host NIC connects with the 10.100 NIC of the NAS
    20.10 host NIC connects with the 20.100 NIC of the NAS
    30.10 host NIC connects with the 30.100 NIC of the NAS
    Host 2 - 192.168.0.245 (Host/LM IP), undefined IP on the 192.168.0.x network (Hyper-V Port 1), undefined IP on the 192.168.10.x network (Hyper-V Port 2), 192.168.0.221 (iSCSI 1), 192.168.10.20 (iSCSI 2), 192.168.20.20 (iSCSI 3), 192.168.30.20 (iSCSI 4)
    The Hyper-V ports are undefined because the VMs themselves have static IPs.
    0.221 host NIC connects with the .231 NIC of the NAS
    10.20 host NIC connects with the 10.100 NIC of the NAS
    20.20 host NIC connects with the 20.100 NIC of the NAS
    30.20 host NIC connects with the 30.100 NIC of the NAS
    6. Binding orders are all correct.
    7. NIC drivers are all updated. I didn't check the firmware.
    8. I do not know about IPsec; let me look into it.
    9. I did not use Quick Connect; each iSCSI connection is defined using a specific source IP and a specific target IP.
    These are all 1-gigabit NICs, which is the reason I have so many; otherwise there would be no reason for me to have 4 iSCSI connections.

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V guests stop responding amid extensive disk reads/writes

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V: a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the primary host, and normally all VMs run on it.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8x Intel SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16. Both the hosts' NICs that are dedicated to storage and the storage itself are connected to this switch, and nothing else is connected to it.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file-copy and LoB application processing speed compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see combined read/write speeds of up to 400 Mbit/s, for instance during a file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly: you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see extensive reads from the VHDX of one of the VMs (40-60 MB/s) and combined writes to many files in \HarddiskVolume5\System Volume Information\{<some GUID, no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during the daytime, when the backup finished hours ago according to the log files). There are, however, no such extensive writes to the backup file created on the external hard drive, and it does not seem to happen during every backup (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually, for instance by running chkdsk or defrag in the VMs and hosts, copying large files to and from VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all the VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could anything in our setup be unable to handle all the read/write requests, for instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; although Windows Server 2012 has a built-in NIC teaming feature, I haven't seen an article declaring that Windows Server 2012 NIC teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen that using MPIO suggests using different subnets; is this a requirement for MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don't think it is necessary.
    > Why should it be better not to have dedicated wiring for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that, modify the cluster configuration accordingly, monitor it, and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • OVM 3.0 iSCSI setup generally, and how do I find the IQN

    Hi all,
    I have just set up an HP P2000 iSCSI SAN and created a few LUNs on it that I want to present to a VM server. How do I do this?
    In the SAN SMU utility, to add a host I seem to need an IQN. Where can I find it, and what docs explain this? (Sorry again for greenness.)
    I looked at Wim's blog here:
    http://blogs.oracle.com/wim/entry/using_linux_iscsi_targets_with
    and a lot of the tools don't seem to be present in my dom0.
    I looked at the official docs: http://download.oracle.com/docs/cd/E20065_01/doc.30/e18549/storage.htm#autoId4
    and they don't really tell me much.
    Where do you suggest I look for guidelines on iSCSI setup with OVM 3.0?
    Thanks

    Martin Brambley wrote:
    I have just set up an HP P2000 iSCSI SAN and created a few LUNs on it that I want to present to a VM server. How do I do this? In the SAN SMU utility, to add a host I seem to need an IQN. Where can I find it, and what docs explain this?
    Create the iSCSI Storage Array in Oracle VM Manager and edit the Default Access Group. It'll show you the IQNs for each of your Oracle VM Servers. Once you've mapped LUNs to those IQNs, you can refresh the iSCSI Storage Array to see the physical disks. I've noticed that some arrays don't refresh and may require a reboot of the Oracle VM Server to see the disks.
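    For reference, on a Linux host (an Oracle VM Server dom0 included) the initiator's IQN can also be read straight from the open-iscsi config. A minimal sketch; the path is the standard open-iscsi layout, and the iqn.1988-12.com.oracle prefix in the comment is just Oracle Linux's typical default:

```shell
# Sketch: read the local iSCSI initiator name (IQN) as stored by open-iscsi.
# The file holds one line of the form: InitiatorName=iqn.1988-12.com.oracle:...
iqn_file=/etc/iscsi/initiatorname.iscsi
if [ -r "$iqn_file" ]; then
    iqn=$(sed -n 's/^InitiatorName=//p' "$iqn_file")
else
    iqn=""    # open-iscsi not installed, or the file is not readable
fi
echo "initiator IQN: ${iqn:-<not found>}"
```

    The value printed here should match what Oracle VM Manager shows in the Default Access Group.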

  • Why does my 10Gb iSCSI setup see such high latency and how can I fix it?

    I have an iSCSI server setup with the following configuration:
    Dell R510
    Perc H700 Raid controller
    Windows Server 2012 R2
    Intel Ethernet X520 10Gb
    12 near line SAS drives
    I have tried both StarWind and the built-in Server 2012 iSCSI target software, but see similar results. I am currently running the latest version of StarWind's free iSCSI server.
    I have connected it to an HP 8212 10Gb port, which is also connected via 10Gb to our VMware servers. I have a dedicated VLAN just for iSCSI and have enabled jumbo frames on that VLAN.
    I frequently see very high latency on my iSCSI storage, so much so that it can time out or hang VMware. I am not sure why; I can run IOmeter and get some pretty decent results.
    I am trying to determine why I see such high latency (100+ ms). It doesn't happen all the time, but several times throughout the day VMware complains about the latency of the datastore. I have a 10Gb iSCSI connection between the servers and wouldn't expect the disks to be able to max that out; the highest I saw when running IOmeter was around 5 Gb/s. I also don't see much load on the iSCSI server when the latency is high. It seems network related, but I am not sure which settings to check. The 10Gb connection should be plenty, as I said, and it is nowhere near maxed out.
    Any thoughts on configuration changes I could make to my VMware environment or network card settings, or ideas on where to troubleshoot this? I am not able to find what is causing it. I referenced this document for changes to my iSCSI settings:
    http://en.community.dell.com/techcenter/extras/m/white_papers/20403565.aspx
    Thank you for your time.

    If both the StarWind and Microsoft targets show the same numbers, my guess is a network configuration issue. Anything higher than 30 ms is a nightmare :( Did you properly tune your network stacks? What numbers (throughput and latency) do you get for raw TCP? (NTttcp and iperf are handy to show this.)
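    To put numbers behind that raw-TCP question, something like the following could be run on the iSCSI VLAN. This is a sketch only: 192.168.50.10 is a placeholder for the iSCSI server's storage-side IP, iperf3 must be installed on both ends, and the ping flags shown are the Linux forms:

```shell
# Sketch: baseline the iSCSI network independently of the storage stack.
target=192.168.50.10   # placeholder: the iSCSI server's storage NIC

# On the iSCSI server:             iperf3 -s
# On the initiator, throughput:    iperf3 -c "$target" -t 30 -P 4
# On the initiator, base latency:  ping -c 100 "$target"
# Jumbo-frame path check (payload 8972 + 28 bytes of headers = 9000 MTU,
# with fragmentation forbidden, so it fails if any hop lacks jumbo frames):
#                                  ping -c 3 -M do -s 8972 "$target"
echo "run the commands above from the initiator side against $target"
```

    If raw TCP already shows latency spikes, the problem is in the network path (switch, NIC offloads, MTU mismatch) rather than in the iSCSI target software.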

  • How to Configure Software iSCSI including Multipath

    Oracle Linux 6.6 3.8.13-44.1.1.el6uek.x86_64
    Installed with a production network interface card and two 10G network connections for iSCSI traffic only. Both iSCSI cards are on the same VLAN, with different IP addresses in a range.
    Connecting to a Dell Compellent SAN with 1 TB of storage presented to the machine.
    Connections can be made successfully via iSCSI, and the target can be logged into; however, in the disk utility window on the server the storage shows as a single 1.1 TB multipath drive, dm-0, along with 4 separate connections between sdc and sdf.
    The output from multipath -ll is as follows.
    I'm guessing that something in the underlying configuration is incorrect and would appreciate any assistance that can be provided.

    Hi
    I've got two 10G SFP+ cables attached to the switch, each with its own IP address.
    eth0      Link encap:Ethernet  HWaddr 2C:44:FD:87:60:AC
              inet addr:10.160.10.26  Bcast:10.160.10.255  Mask:255.255.255.0
              inet6 addr: fe80::2e44:fdff:fe87:60ac/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:35315 errors:0 dropped:0 overruns:0 frame:0
              TX packets:698 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3204261 (3.0 MiB)  TX bytes:102647 (100.2 KiB)
              Interrupt:32
    eth1      Link encap:Ethernet  HWaddr 2C:44:FD:87:60:AD
              inet6 addr: fe80::2e44:fdff:fe87:60ad/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:34534 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:3129510 (2.9 MiB)  TX bytes:270 (270.0 b)
              Interrupt:36
    eth2      Link encap:Ethernet  HWaddr 2C:44:FD:87:60:AE
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
              Interrupt:32
    eth3      Link encap:Ethernet  HWaddr 2C:44:FD:87:60:AF
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
              Interrupt:36
    eth4      Link encap:Ethernet  HWaddr 38:EA:A7:34:09:60
              inet addr:10.160.12.46  Bcast:10.160.13.255  Mask:255.255.254.0
              inet6 addr: fe80::3aea:a7ff:fe34:960/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:30412 errors:0 dropped:0 overruns:0 frame:0
              TX packets:36352 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:8802884 (8.3 MiB)  TX bytes:3280896 (3.1 MiB)
    eth5      Link encap:Ethernet  HWaddr 38:EA:A7:34:09:61
              inet addr:10.160.12.47  Bcast:10.160.13.255  Mask:255.255.254.0
              inet6 addr: fe80::3aea:a7ff:fe34:961/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:8229 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2150770 (2.0 MiB)  TX bytes:704 (704.0 b)
    eth0-3 are the production network; eth4 and eth5 are the iSCSI network.
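    One thing worth checking in a setup like this is that each iSCSI session is pinned to a specific NIC rather than riding open-iscsi's single "default" iface, where the kernel routing table decides the interface. A sketch of binding eth4 and eth5 to their own iface records before discovery; the portal IP 10.160.12.100 is a placeholder for the Compellent's portal, and the commands need root plus open-iscsi installed:

```shell
# Sketch: bind open-iscsi sessions to specific NICs via iface records,
# so each path runs over a known interface, and let dm-multipath
# aggregate the resulting sdX devices into one mapped device.
if command -v iscsiadm >/dev/null 2>&1; then
    for nic in eth4 eth5; do
        iscsiadm -m iface -I "iface-$nic" --op new
        iscsiadm -m iface -I "iface-$nic" --op update \
                 -n iface.net_ifacename -v "$nic"
    done
    # Discover targets through both ifaces (placeholder portal IP):
    iscsiadm -m discovery -t sendtargets -p 10.160.12.100 \
             -I iface-eth4 -I iface-eth5
    iscsiadm -m node --loginall=all      # log in on every discovered path
    multipath -ll                        # expect one map, multiple paths
    status="ran"
else
    status="skipped"                     # open-iscsi not installed here
fi
echo "iscsi iface setup: $status"
```

    Seeing one dm device with several sdX paths under it (as described above) is the intended multipath behaviour; the mapped dm-0 device, not sdc-sdf, is what should be mounted or given to LVM.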

  • Error while registering an iSCSI drive onto a storage array

    Hi,
    I've installed Oracle VM Manager 3.0.2 on Oracle Linux 6.1. When I tried to register an iSCSI drive on the Storage array, I got the following error:
    "java.lang.NullPointerException
    ADF_FACES-60097:For more information, please see the server's error log for an entry beginning with: ADF_FACES-60096:Server Exception during PPR, #4"
    Also there are no items to be selected in the Storage Plug-in.
    I read that the storage plug-in is built in with version 3.0 and above.
    Any help in this regard will be greatly appreciated.
    Thanks,
    Prajeesh

    Prajeesh wrote:
    Also there are no items to be selected in the Storage Plug-in.
    Have you discovered at least one Oracle VM Server yet? You can't add any storage until you do that successfully.

  • OVM 3.0.3 - register iSCSI server

    Probably a basic thing that I'm overlooking, for which I apologize in advance. I'm stumped and turning to the wider community while I recheck the docs. Any ideas appreciated.
    A) Problem: Receiving an OVMAPI_B000E when attempting to refresh the Storage Array. The specific message is:
    OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000d7551a2353d303ce] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: socket.gaierror:(-2, 'Name or service not known')
    B) Environment:
    OVS 3.0.3 fresh install
    - hostname caballo5
    - host is a Dell Precision P490 dual CPU, quad core w 20GB RAM
    - 3 NICs, one on primary subnet (192.168.0), two unassigned
    OVM 3.0.3 fresh install
    - hostname gandalfl
    - host is a Dell Optiplex 755 w 8GB RAM, 500 GB OS disk,
    - Oracle Enterprise Linux 5 update 6
    - also houses Cloud Control 12.1.0
    - Production install
    OpenFiler 2.99 fresh install
    - hostname blackmax
    - host is a custom with 4x 1.5TB SATA for storage + 1x 300GB SATA for OS
    - iSCSI configured with no CHAP for discovery.
    I can discover the OpenFiler iSCSI services from the Windows iSCSI initiator, as well as from the OVM & OVS hosts using iscsiadm. (Added below.)
    I use FQDNs everywhere, but have removed the domain in the info.
    C) Full details:
    Job Construction Phase
    begin()
    Appended operation 'Storage Server Discover Capabilities' to object '0004fb0000090000d7551a2353d303ce (blackmax)'.
    Appended operation 'Storage Server Discover Access Groups' to object '0004fb0000090000d7551a2353d303ce (blackmax)'.
    Appended operation 'Storage Array Discover Info' to object '0004fb0000090000d7551a2353d303ce (blackmax)'.
    Appended operation 'Storage Array Discover Storage Elements' to object '0004fb0000090000d7551a2353d303ce (blackmax)'.
    Appended operation 'ISCSI Storage Array Discover Targets' to object '0004fb0000090000d7551a2353d303ce (blackmax)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [IscsiStorageArray] 0004fb0000090000d7551a2353d303ce (blackmax)
    Operation: Storage Server Discover Capabilities
    Operation: Storage Server Discover Access Groups
    Operation: Storage Array Discover Info
    Operation: Storage Array Discover Storage Elements
    Operation: ISCSI Storage Array Discover Targets
    Job Running Phase at 03:23 on Sat, Dec 31, 2011
    Job Participants: [44:45:4c:4c:50:00:10:52:80:46:b8:c0:4f:36:46:31 (caballo5)]
    Actioner
    Starting operation 'Storage Server Discover Capabilities' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Setting Context to model only in job with id=1325327039664
    Completed operation 'Storage Server Discover Capabilities' completed with direction ==> DONE
    Starting operation 'Storage Server Discover Access Groups' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Completed operation 'Storage Server Discover Access Groups' completed with direction ==> LATER
    Starting operation 'Storage Array Discover Info' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Setting Context to model only in job with id=1325327039664
    Completed operation 'Storage Array Discover Info' completed with direction ==> DONE
    Starting operation 'Storage Array Discover Storage Elements' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Completed operation 'Storage Array Discover Storage Elements' completed with direction ==> LATER
    Starting operation 'ISCSI Storage Array Discover Targets' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Setting Context to model only in job with id=1325327039664
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000d7551a2353d303ce] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:208)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:30)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:19)
    at com.oracle.ovm.mgr.discover.ovm.DiscoverHandler.execute(DiscoverHandler.java:50)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.handleDiscover(StorageServerDiscover.java:72)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.discoverStorageServer(StorageServerDiscover.java:52)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.discoverIscsiStorageArrayTargets(IscsiStorageArrayDiscoverTargets.java:47)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.action(IscsiStorageArrayDiscoverTargets.java:38)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:204)
    ... 23 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
    ... 26 more
    FailedOperationCleanup
    Starting failed operation 'ISCSI Storage Array Discover Targets' cleanup on object 'blackmax'
    Complete rollback operation 'ISCSI Storage Array Discover Targets' completed with direction=blackmax
    Rollbacker
    Executing rollback operation 'Storage Array Discover Info' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Complete rollback operation 'Storage Array Discover Info' completed with direction=DONE
    Executing rollback operation 'Storage Server Discover Capabilities' on object '0004fb0000090000d7551a2353d303ce (blackmax)'
    Complete rollback operation 'Storage Server Discover Capabilities' completed with direction=DONE
    Objects To Be Rolled Back
    Object (IN_USE): [IscsiStorageArray] 0004fb0000090000d7551a2353d303ce (blackmax)
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=2292 method=addTransactionIdentifier accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=refresh accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setAssociatedHandles accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setTuringMachineFlag accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=lock accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setTuringMachineFlag accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setContext accessLevel=6
    Class=InternalJobDbImpl vessel_id=2292 method=setFailedOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=2274 method=nextJobOperation accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000d7551a2353d303ce] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000d7551a2353d303ce] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:208)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:30)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:19)
    at com.oracle.ovm.mgr.discover.ovm.DiscoverHandler.execute(DiscoverHandler.java:50)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.handleDiscover(StorageServerDiscover.java:72)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.discoverStorageServer(StorageServerDiscover.java:52)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.discoverIscsiStorageArrayTargets(IscsiStorageArrayDiscoverTargets.java:47)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.action(IscsiStorageArrayDiscoverTargets.java:38)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
    at sun.reflect.GeneratedMethodAccessor1324.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: caballo5 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:204)
    ... 23 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: socket.gaierror:(-2, 'Name or service not known')
    Sat Dec 31 03:24:00 MST 2011
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
    ... 26 more
    End of Job
    ++++++
    Output of iscsiadm (on both the OVM and OVS hosts). The output is the same whether I use the IP address or the host name (/etc/hosts resolution):
    [root@caballo5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.0.49
    192.168.0.49:3260,1 iqn.2006-01.com.openfiler:tsn.79a153599c9b
    192.168.2.254:3260,1 iqn.2006-01.com.openfiler:tsn.79a153599c9b
    192.168.2.253:3260,1 iqn.2006-01.com.openfiler:tsn.79a153599c9b
    192.168.0.49:3260,1 iqn.2006-01.com.openfiler:rac.asm
    192.168.2.254:3260,1 iqn.2006-01.com.openfiler:rac.asm
    192.168.2.253:3260,1 iqn.2006-01.com.openfiler:rac.asm
    192.168.0.49:3260,1 iqn.2006-01.com.openfiler:rac.vm
    192.168.2.254:3260,1 iqn.2006-01.com.openfiler:rac.vm
    192.168.2.253:3260,1 iqn.2006-01.com.openfiler:rac.vm
    192.168.0.49:3260,1 iqn.2006-01.com.openfiler:tsn.b0d06035dcdb
    192.168.2.254:3260,1 iqn.2006-01.com.openfiler:tsn.b0d06035dcdb
    192.168.2.253:3260,1 iqn.2006-01.com.openfiler:tsn.b0d06035dcdb

    Solved.
    DNS issue. The OpenFiler system was not listed properly in DNS. After adding it to /etc/hosts on the OVS machine, the refresh works OK.
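    For reference, this kind of fix amounts to one line in /etc/hosts on the OVS machine. A minimal sketch: the IP is the OpenFiler address from the iscsiadm output above, while the host names are hypothetical placeholders for whatever name the storage server was registered under:

```
# /etc/hosts on the OVS host -- map the storage server's name to its IP
# so the generic SCSI plugin can resolve it (host names here are examples)
192.168.0.49    openfiler.example.com    openfiler
```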

  • OCFS2 partitions on OEL 6.5 do not auto-mount at boot

    Hello.
    I have a problem with OEL 6.5 and ocfs2.
    When I mount the ocfs2 partitions with the mount -a command, they all mount and work, but after a reboot none of them mounts automatically. There are no error messages in the logs. I use DAS FC and iSCSI FC.
    fstab:
    UUID=32130a0b-2e15-4067-9e65-62b7b3e53c72 /some/4 ocfs2 _netdev,defaults 0 0
    #UUID=af522894-c51e-45d6-bce8-c0206322d7ab /some/9 ocfs2 _netdev,defaults 0 0
    UUID=1126b3d2-09aa-4be0-8826-0b2a590ab995 /some/3 ocfs2 _netdev,defaults 0 0
    #UUID=9ea9113d-edcf-47ca-9c64-c0d4e18149c1 /some/8 ocfs2 _netdev,defaults 0 0
    UUID=a368f830-0808-4832-b294-d2d1bf909813 /some/5 ocfs2 _netdev,defaults 0 0
    UUID=ee816860-5a95-493c-8559-9d528e557a6d /some/6 ocfs2 _netdev,defaults 0 0
    UUID=3f87634f-7dbf-46ba-a84c-e8606b40acfe /some/7 ocfs2 _netdev,defaults 0 0
    UUID=5def16d7-1f58-4691-9d46-f3fa72b74890 /some/1 ocfs2 _netdev,defaults 0 0
    UUID=0e682b5a-8d75-40d1-8983-fa39dd5a0e54 /some/2 ocfs2 _netdev,defaults 0 0

    What is the output of:
    # chkconfig --list o2cb
    # chkconfig --list ocfs2
    # cat /etc/ocfs2/cluster.conf
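    The commands above point at the usual culprits: the o2cb and ocfs2 init scripts not being enabled at boot (chkconfig o2cb on; chkconfig ocfs2 on), or an /etc/ocfs2/cluster.conf that is missing or inconsistent, since the cluster stack must be online before the _netdev mounts in fstab can run. A minimal cluster.conf sketch for comparison; the cluster name, node names, and IPs are hypothetical:

```
# /etc/ocfs2/cluster.conf -- must be identical on every node.
# The file is whitespace-sensitive: attribute lines are indented
# with a single tab, and each node's "cluster" value must match
# the cluster name below.
cluster:
	node_count = 2
	name = ocfs2demo

node:
	ip_port = 7777
	ip_address = 10.0.0.1
	number = 0
	name = node1
	cluster = ocfs2demo

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = node2
	cluster = ocfs2demo
```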

  • Cannot create iSCSI connection to HP4300 SAN [OVM3.1.1]

    I have set up OVM 3.1.1 on Oracle 11g SE. I have one OVS with three bonded networks: one for management, one for storage, and a third for VM VLAN networks. Now I need to set up some storage. My SAN is an HP P4300. I found that the initiator for my OVS is
    OVM1:iqn.1988-12.com.oracle:6d6be5deab82
    Inside the console for my SAN, I create a server object and enter the initiator OVM1:iqn.1988-12.com.oracle:6d6be5deab82 into it.
    I get a message that it is not in the standard format, but I continue.
    Back in OVM, I try to discover my SAN: I enter the name, iSCSI SAN, Generic, and the IP address of the SAN. I then select the OVS. After the job is launched, it fails.
    I tried the initiator without the OVM1: prefix, making it the standard format, but it still failed. I do not have CHAP selected, to eliminate that as a cause.
    It appears to be an authentication issue, so it must be the initiator. (Can it be changed to remove the OVM1: prefix?)
    What am I missing? This should be pretty simple.
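    One thing worth ruling out before digging into the initiator name: in the job log that follows, iscsiadm reports "discovery login to 172.21.2.12 rejected: initiator error (02/01)", which in general means the target rejected the discovery-phase login itself. Some arrays enforce CHAP for SendTargets discovery separately from session CHAP, so "CHAP not selected" on the session side does not always cover it. If the P4300 has discovery authentication enabled, the matching client-side settings live in /etc/iscsi/iscsid.conf on the OVS. A sketch with placeholder credentials:

```
# /etc/iscsi/iscsid.conf -- discovery-phase CHAP settings.
# Only needed if the target enforces CHAP for SendTargets discovery;
# the username/password below are placeholders.
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = chapuser
discovery.sendtargets.auth.password = chapsecret
```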
    Job Construction Phase
    begin()
    begin()
    Appended operation 'Storage Array Validate' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Appended operation 'Storage Array Discover Info' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Appended operation 'Storage Server Discover Capabilities' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Appended operation 'Storage Server Discover Access Groups' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Operation 'Storage Array Discover Info' was not added, it already exists on object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Appended operation 'Storage Array Discover Storage Elements' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Appended operation 'ISCSI Storage Array Discover Targets' to object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (CREATED): [IscsiStorageArray] 0004fb0000090000309e760f71aec824 (HP4300_1)
    Operation: Storage Array Validate
    Operation: Storage Array Discover Info
    Operation: Storage Server Discover Capabilities
    Operation: Storage Server Discover Access Groups
    Operation: Storage Array Discover Storage Elements
    Operation: ISCSI Storage Array Discover Targets
    Object (IN_USE): [StorageArrayPlugin] oracle.generic.SCSIPlugin.GenericPlugin (1.1.0) (Oracle Generic SCSI Plugin)
    Object (CREATED): [AccessGroup] Default access group @ HP4300_1 @ 0004fb0000090000309e760f71aec824 (Default access group @ HP4300_1)
    Object (IN_USE): [Server] 35:37:33:31:32:32:55:53:45:31:35:33:52:30:4c:32 (OVM1)
    Job Running Phase at 22:40 on Thu, Oct 4, 2012
    Job Participants: [35:37:33:31:32:32:55:53:45:31:35:33:52:30:4c:32 (OVM1)]
    Actioner
    Starting operation 'Storage Array Validate' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Operation 'Storage Server Discover Capabilities' was not added, it already exists on object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Operation 'Storage Server Discover Access Groups' was not added, it already exists on object '0004fb0000090000309e760f71aec824 (HP4300_1)'.
    Completed operation 'Storage Array Validate' completed with direction ==> DONE
    Starting operation 'Storage Array Discover Info' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Setting Context to model only in job with id=1349408417858
    Setting Context to default in job with id=1349408417858
    Completed operation 'Storage Array Discover Info' completed with direction ==> DONE
    Starting operation 'Storage Server Discover Capabilities' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Setting Context to model only in job with id=1349408417858
    Setting Context to default in job with id=1349408417858
    Completed operation 'Storage Server Discover Capabilities' completed with direction ==> DONE
    Starting operation 'Storage Server Discover Access Groups' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Completed operation 'Storage Server Discover Access Groups' completed with direction ==> DONE
    Starting operation 'Storage Array Discover Storage Elements' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Completed operation 'Storage Array Discover Storage Elements' completed with direction ==> LATER
    Starting operation 'ISCSI Storage Array Discover Targets' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Setting Context to model only in job with id=1349408417858
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000309e760f71aec824] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:218)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:30)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:19)
    at com.oracle.ovm.mgr.discover.ovm.DiscoverHandler.execute(DiscoverHandler.java:61)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.handleDiscover(StorageServerDiscover.java:77)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.discoverStorageServer(StorageServerDiscover.java:53)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.discoverIscsiStorageArrayTargets(IscsiStorageArrayDiscoverTargets.java:47)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.action(IscsiStorageArrayDiscoverTargets.java:38)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.storage.IscsiStorageArrayProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:459)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:385)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:214)
    ... 32 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 35 more
    FailedOperationCleanup
    Starting failed operation 'ISCSI Storage Array Discover Targets' cleanup on object 'HP4300_1'
    Complete rollback operation 'ISCSI Storage Array Discover Targets' completed with direction=HP4300_1
    Rollbacker
    Executing rollback operation 'ISCSI Storage Array Discover Targets' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Complete rollback operation 'ISCSI Storage Array Discover Targets' completed with direction=DONE
    Executing rollback operation 'Storage Server Discover Access Groups' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Complete rollback operation 'Storage Server Discover Access Groups' completed with direction=DONE
    Executing rollback operation 'Storage Server Discover Capabilities' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Complete rollback operation 'Storage Server Discover Capabilities' completed with direction=DONE
    Executing rollback operation 'Storage Array Discover Info' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Complete rollback operation 'Storage Array Discover Info' completed with direction=DONE
    Executing rollback operation 'Storage Array Validate' on object '0004fb0000090000309e760f71aec824 (HP4300_1)'
    Complete rollback operation 'Storage Array Validate' completed with direction=DONE
    Objects To Be Rolled Back
    Object (CREATED): [IscsiStorageArray] 0004fb0000090000309e760f71aec824 (HP4300_1)
    Object (IN_USE): [StorageArrayPlugin] oracle.generic.SCSIPlugin.GenericPlugin (1.1.0) (Oracle Generic SCSI Plugin)
    Object (CREATED): [AccessGroup] Default access group @ HP4300_1 @ 0004fb0000090000309e760f71aec824 (Default access group @ HP4300_1)
    Object (IN_USE): [Server] 35:37:33:31:32:32:55:53:45:31:35:33:52:30:4c:32 (OVM1)
    Object (CREATED): [VolumeGroup] Generic_iSCSI_Volume_Group @ 0004fb0000090000309e760f71aec824 (Generic_iSCSI_Volume_Group)
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=3071 method=addTransactionIdentifier accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=addAdminServer accessLevel=6
    Class=ServerDbImpl vessel_id=1265 method=addStorageServer accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=validate accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=refresh accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setAssociatedHandles accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setValidated accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=addJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=addJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setCurrentJobOperationComplete accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=lock accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=createVolumeGroup accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setName accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setFoundryContext accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=onPersistableCreate accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setLifecycleState accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setRollbackLifecycleState accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setStorageArray accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=setSimpleName accessLevel=6
    Class=VolumeGroupDbImpl vessel_id=3092 method=lock accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setCurrentJobOperationComplete accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAbility accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setCurrentJobOperationComplete accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setCurrentJobOperationComplete accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setTuringMachineFlag accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=setFailedOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=StorageArrayPluginDbImpl vessel_id=1657 method=nextJobOperation accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=1265 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=3071 method=addTransactionIdentifier accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setName accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setFoundryContext accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=onPersistableCreate accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setLifecycleState accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setRollbackLifecycleState accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setStoragePlugin accessLevel=6
    Class=StorageArrayPluginDbImpl vessel_id=1657 method=addStorageServer accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAdminHostname accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAdminUsername accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAdminPassword accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setSimpleName accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=createAccessGroup accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setName accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setFoundryContext accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=onPersistableCreate accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setLifecycleState accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setRollbackLifecycleState accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setSimpleName accessLevel=6
    Class=AccessGroupDbImpl vessel_id=3086 method=setStorageServer accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setDescription accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setAccessHost accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setUsername accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setPassword accessLevel=6
    Class=IscsiStorageArrayDbImpl vessel_id=3080 method=setUseChapAuth accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000309e760f71aec824] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_discover] failed for storage server [0004fb0000090000309e760f71aec824] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012] OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1371)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:218)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:30)
    at com.oracle.ovm.mgr.discover.ovm.IscsiStorageArrayTargetsDiscoverHandler.query(IscsiStorageArrayTargetsDiscoverHandler.java:19)
    at com.oracle.ovm.mgr.discover.ovm.DiscoverHandler.execute(DiscoverHandler.java:61)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.handleDiscover(StorageServerDiscover.java:77)
    at com.oracle.ovm.mgr.discover.StorageServerDiscover.discoverStorageServer(StorageServerDiscover.java:53)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.discoverIscsiStorageArrayTargets(IscsiStorageArrayDiscoverTargets.java:47)
    at com.oracle.ovm.mgr.op.physical.storage.IscsiStorageArrayDiscoverTargets.action(IscsiStorageArrayDiscoverTargets.java:38)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at sun.reflect.GeneratedMethodAccessor1005.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.storage.IscsiStorageArrayProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at sun.reflect.GeneratedMethodAccessor1340.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_discover to server: OVM1 failed. OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:459)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:385)
    at com.oracle.ovm.mgr.action.StoragePluginAction.discoverISCSIStorageArrayTargets(StoragePluginAction.java:214)
    ... 32 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_discover oracle.generic.SCSIPlugin.GenericPlugin, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.TargetDiscoveryEx:'Failure to get the targets: iscsiadm: Login failed to authenticate with target \niscsiadm: discovery login to 172.21.2.12 rejected: initiator error (02/01), non-retryable, giving up\niscsiadm: Could not perform SendTargets discovery.\n'
    Thu Oct 04 22:40:34 CDT 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 35 more
    End of Job
    ----------

    Great! That makes sense... but it did not solve my problem. The iSCSI initiator name is in the correct format. When it is listed in the web GUI, it adds the server name to the beginning, but I tried it without the server name and it still errored out. I rebooted the node, deleted the server from the manager, and rediscovered the server; still the same problem.
    I used the same initiator address from the VM server and named my workstation's initiator with it. I talked with my SAN tech support, but I was the first caller they have had who uses Oracle VM, and they have no idea what the problem could be. I was able to connect to my SAN using the Oracle initiator name (from my Windows 7 box), so this definitely points to the VM server.
    But what could be the problem? I can ping the SAN from the server.
    Thanks,
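    The "initiator error (02/01)" in the log above is the target rejecting the login for authentication reasons, and it happens on the discovery session itself. A minimal sketch of the open-iscsi settings involved, assuming the SAN also enforces CHAP on discovery (the portal address is taken from the log; the username and secret below are placeholders, not real credentials):

    ```
    # /etc/iscsi/iscsid.conf on the Oracle VM server (open-iscsi):
    # these enable CHAP for the SendTargets discovery session itself
    discovery.sendtargets.auth.authmethod = CHAP
    discovery.sendtargets.auth.username = chap-user      # placeholder
    discovery.sendtargets.auth.password = chap-secret    # placeholder
    ```

    After restarting iscsid, discovery can be retried by hand with `iscsiadm -m discovery -t sendtargets -p 172.21.2.12:3260`; if that still prints the (02/01) rejection, the credentials or the array-side discovery-CHAP setting are the place to look.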

  • Unable to expand/extend partition after growing SAN-based iSCSI target

    Hello, all. I have an odd situation regarding how to expand iSCSI-based partitions.
    Here is my setup:
    I use the GlobalSAN iSCSI initiator on 10.6.x server (Snow Leopard).
    The iSCSI LUN is formatted with the GPT partition table.
    The filesystem is Journaled HFS+
    My iSCSI SAN has the ability to non-destructively grow a LUN (iSCSI target).
    With this in mind, I wanted to experiment with growing a LUN/target on the SAN and then expanding the Apple partition within it using Disk Utility. I have been unable to do so.
    Here is my procedure:
    1) Eject the disk (iSCSI targets show up as external hard drives)
    2) Disconnect the iSCSI target using the control panel applet (provided by GlobalSAN)
    3) Grow the LUN/target on the SAN.
    4) Reconnect the iSCSI initiator
    5) Expand/extend the partition using Disk Utility to consume the (newly created) free space.
    It works until the last step. When I reconnect to the iSCSI target after expanding it on the SAN, it shows up in Disk Utility as being larger than it was (so far, so expected). When I go to expand the partition, however, it errors out, saying that there is not enough space.
    Investigating further, I went to the command line and performed a
    "diskutil resizeVolume <identifier> limits"
    to determine what the limit was to the partition. The limits did NOT reflect the newly-created space.
    My suspicion is that the original partition map, since it was created as 100% of the volume, does not allow room for growth despite the fact that the disk suddenly (and, to the system, unexpectedly) became larger.
    Is this assumption correct? Is there any way around this? I would like to be able to expand my LUNs/targets (since the SAN can grow with the business), but this has no value if I cannot also extend the partition table to use the new space.
    If anyone has any insight, I would greatly appreciate it. Thank you!
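    The suspicion about the partition map is likely correct, and GPT itself adds a wrinkle: the table keeps a backup header in the very last LBA of the disk, and both headers record the last usable LBA of the disk they were written on. When the LUN grows underneath them, those on-disk fields still describe the old size, which is consistent with `diskutil resizeVolume <identifier> limits` not reflecting the new space. A rough back-of-the-envelope sketch of the mismatch, assuming 512-byte sectors and a 100 GiB LUN grown to 150 GiB (the sizes are illustrative):

    ```shell
    SECTOR=512
    OLD_BYTES=$((100 * 1024 * 1024 * 1024))   # LUN size when the GPT was written
    NEW_BYTES=$((150 * 1024 * 1024 * 1024))   # LUN size after growing it on the SAN

    # GPT's backup header lives in the last LBA of the disk it was created on
    OLD_LAST_LBA=$((OLD_BYTES / SECTOR - 1))
    NEW_LAST_LBA=$((NEW_BYTES / SECTOR - 1))

    echo "backup header still recorded at LBA: $OLD_LAST_LBA"
    echo "actual last LBA after growth:        $NEW_LAST_LBA"
    echo "sectors the old table cannot see:    $((NEW_LAST_LBA - OLD_LAST_LBA))"
    ```

    Until something rewrites the GPT so the backup header moves to the new end of the disk, the resize limits will keep reporting the original size.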

    I have exactly the same problem that you describe above. My iSCSI LUN was near capacity, so I extended it from 100 GB to 150 GB. No problem so far.
    Disk Utility shows the iSCSI device as 150 GB, but I cannot extend the volume to the new size. It gives me the same error (in Dutch).
    Please, someone help us out!

  • ISCSI connections for guests: how to set up?

    A couple of questions:
    1. If we wanted to set up iSCSI connections for guests such as SQL servers, what is the best way to handle this? For example, if we had four 10-Gb NICs and wanted to use as few of them as possible, is it common to turn two of the NICs into Virtual Switches
    accessible by the OS, then use these to connect both the host and the SQL guests? Or would the best option be to use two 10-Gb NICs for the Hyper-V Host's iSCSI connections only, and use the other two 10-Gb NICs as virtual switches which are dedicated
    to the SQL server iSCSI connections?
    2. I know MPIO should be used for storage connections instead of teaming; if two NICs are teamed as a virtual switch, however, does this change anything? For example, if a virtual switch is created from a NIC team of two 10-Gb NICs, is it acceptable to create
    an iSCSI connection on a network adapter created on that virtual switch?

    " If we wanted to set up iSCSI connections for guests such as SQL servers, what is the best way to handle this?"
    Don't. Use VHDX files instead. A common reason for using iSCSI for SQL was to allow for shared storage in a SQL cluster; 2012 R2 introduces the capability to use shared VHDX files. It is much easier to set up and will likely give you as good or better performance than iSCSI.
    But if you insist on setting it up, set it up the same as you would on the host: two NICs on different subnets configured with MPIO (unless you are using Dell's software, which forces a non-standard configuration with both NICs on the same subnet). Teamed NICs are not supported. For a purely best-practice configuration, yes, it makes sense to have separate NICs for host and guest, but it is not an absolute requirement.
    .:|:.:|:. tim

  • Is it possible to boot oracle vm server over iSCSI and ipxe?

    Hi,
    Is it possible to boot Oracle VM Server over iSCSI and iPXE?
    I have tried it, but got a kernel panic during the boot process.
    Can anyone tell me what I should do to install and/or boot Oracle VM Server on, and from, an iSCSI LUN instead of a local hard drive?
    Thanks!
    Redwan
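    For context on what "boot over iSCSI and iPXE" involves: iPXE attaches the LUN and then hands off, leaving the target details in the iBFT for the operating system to pick up. A typical SAN-boot script looks like the sketch below (the portal address and IQN are placeholders); a kernel that cannot find or use the iBFT has no way to reattach the LUN, which is one plausible cause of a panic early in boot:

    ```
    #!ipxe
    # sketch: bring up networking, attach the iSCSI LUN, and boot from it
    dhcp
    sanboot iscsi:172.21.2.12::::iqn.2012-01.com.example:ovm-boot
    ```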

    959211 wrote:
    Hi,
    I have Windows 2008 R2 64 Bit Operating Server Installed.
    Other software that are Installed :
    1) Visual Studio 2010 Ultimate 32 Bit.
    2) Microsoft SharePoint server 2010 64 Bit.
    For my development purpose I want to install both oracle 11g client 32 bit and oracle 11g client 64 on the same machine.
    Is it possible to install both 32 and 64 bit instance on the same machine.
    Thanks,
    Sambit

    Yes, possible, but it is a minor challenge to manage dynamically: how do you ensure that you actually use the flavor (32-bit or 64-bit) you desire?

  • Drive does not appear in ISCSi volumes

    Hi,
    I have configured a Windows Server 2012 R2 OS with 2 disks. The server is a VM. Both disks are SCSI. However, the data disk does not show up in the iSCSI setup.
    Screenshot of the part in question is here:
    https://weikingteh.files.wordpress.com/2012/12/image13.png
    Any help appreciated

    Sorry for the delay; however, I still cannot confirm the exact status.
    I assume drive S is not the data drive. Please open Disk Management to see if your data disk is listed with a drive letter, and also test whether you can access it correctly.
    Also, though you mentioned these 2 disks are SCSI disks: are they physical disks on the (physical) host computer, or are they 2 VHD files created on the hard disk and connected in Hyper-V?
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Server 2012 Failover Cluster No Disks available / iSCSI

    Hi All,
    I am testing out the Failover Clustering on Windows Server 2012 with hopes of winding up with a clustered File Server once I am done. 
    I am starting with a single node in the cluster for testing purposes; I have connected to this cluster a single iSCSI LUN that is 100GB in size.
    When I right click on Storage -> Disks and then click 'Add Disk', I get "No disks suitable for cluster disks were found."
    I get this, even if I add a second server to the cluster, and connect it to the iSCSI drive as well.
    Any ideas?

    For testing purposes you'd be better off spawning a set of VMs on a single physical Hyper-V host and using a shared VHDX as the backing clustered storage. That would be both much easier and much faster than what you are doing. Plus, it would then be trivial to move one of the VMs to another physical host, move the shared VHDX to a CSV on shared storage, and go from test and development to production :) See:
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Virtual File Server with Shared VHDX
    http://www.aidanfinn.com/?p=15145
    Guest VM Cluster with Shared VHDX
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    For a pure iSCSI scenario, you may try this step-by-step guide (just skip the StarWind config, as you already have shared storage with your SAN). See:
    Configuring HA File Server on Windows Server 2012 for SMB NAS
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
