iSCSI Target Scheduled Snapshots

I know there are cmdlets to snapshot a target, but is there a built-in way to schedule snapshots?
Alternatively, is there a cmdlet to list the snapshots for a target, so I can write a script to run on a schedule that removes old snapshots and creates new ones?

Yes, you can use the Schedule Snapshot Wizard to take automatic snapshots. See:
Creating and Managing Snapshots and Schedules
http://technet.microsoft.com/en-us/library/gg232620(v=ws.10).aspx
Creating and Scheduling Snapshots of Virtual Disks
http://technet.microsoft.com/en-us/library/gg232610(v=ws.10).aspx
Alternatively, you can use cmdlets for these tasks (PowerShell gives you more control over the creation and deletion of snapshots). See:
iSCSI Target Cmdlets in Windows PowerShell
http://technet.microsoft.com/en-us/library/jj612803.aspx
The cmdlets of interest are Checkpoint-IscsiVirtualDisk and Remove-IscsiVirtualDiskSnapshot; Get-IscsiVirtualDiskSnapshot gives you the list of existing snapshots you asked about.
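For illustration, here is a minimal sketch of a rotation script you could run from Task Scheduler; the virtual disk path and retention count are placeholders, and it assumes the snapshot objects expose the SnapshotId, OriginalPath and CreationTime properties shown in the cmdlet documentation:
# Snapshot rotation: take a new snapshot, keep only the newest $keepCount.
# Run on the iSCSI Target server (Windows Server 2012 or later).
Import-Module IscsiTarget

$vhdPath   = 'E:\iSCSIVirtualDisks\LUN1.vhdx'   # placeholder: your virtual disk
$keepCount = 7                                  # placeholder: snapshots to keep

# Create a new snapshot of the virtual disk.
Checkpoint-IscsiVirtualDisk -Path $vhdPath

# List this disk's snapshots, newest first, and remove anything past $keepCount.
Get-IscsiVirtualDiskSnapshot |
    Where-Object { $_.OriginalPath -eq $vhdPath } |
    Sort-Object CreationTime -Descending |
    Select-Object -Skip $keepCount |
    ForEach-Object { Remove-IscsiVirtualDiskSnapshot -SnapshotId $_.SnapshotId }
Save it as a .ps1 and register it with Task Scheduler (for example, schtasks /create /sc daily /tn "iSCSI snapshots" /tr "powershell.exe -File C:\Scripts\Rotate-IscsiSnapshots.ps1") to get scheduled snapshots without the wizard.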
Also, you may use SMI-S and WMI if you want to work in, say, C# or C++ rather than PowerShell.
Some more links on cmdlet usage (including samples). See:
PowerShell cmdlets for the Microsoft iSCSI Target 3.3 (included in Windows Storage Server 2008 R2)
http://blogs.technet.com/b/josebda/archive/2010/09/29/powershell-cmdlets-for-the-microsoft-iscsi-target-3-3-included-in-windows-storage-server-2008-r2.aspx
(that's for Windows Server 2008 R2 and up)
Managing iSCSI Target Server through Storage Cmdlets
http://blogs.technet.com/b/filecab/archive/2013/09/28/managing-iscsi-target-server-through-storage-cmdlets.aspx
(the most recent one, covering Windows Server 2012 R2)
Hope this helped a bit. Good luck :)

Similar Messages

  • NAS iSCSI target

    Hi
    Couple of questions regarding NAS boxes as iSCSI targets.
    I currently have one NAS box connected to DPM (2012 R2) by iSCSI (two LUNs; one target each). This is now running out of space, and I have another NAS box that I want to use in the same way.
    If I connect the new NAS as a new target from the DPM server, once I've initialised the disk will it just become part of the storage pool, with the backups automatically "growing" into it?
    This is 19 TB and I want to use half for DPM and half as general storage; will this work if I set up one LUN for the DPM server to target, and then another LUN to be targeted later by another 2012 server?
    Finally, the NAS can do HDD hibernation when not in use. Will DPM wake it up when it wants to do a backup, or should I disable this?
    Thanks
    Richard

    Hi,
    Q1) <snip>If I connect the new NAS as a new target from the DPM server, once I've initialised the disk will it just become part of the storage pool, with the backups automatically "growing" into it?>snip<
    A1) After you initialize it in Windows Disk Management, convert it to GPT (if over 2 TB), then add that disk to the storage pool in the DPM console. DPM will then be able to use it for new protected data sources, or grow DPM volumes into the new free space on that disk.
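    For reference, a minimal PowerShell sketch of A1, assuming the Storage module plus the DPM Management Shell on the DPM server; the disk number, the server name and the IsInStoragePool property are assumptions, so check them against your environment:
    # Bring the new iSCSI disk online and initialize it as GPT (needed over 2 TB).
    Get-Disk -Number 2 | Set-Disk -IsOffline $false    # disk number is a placeholder
    Initialize-Disk -Number 2 -PartitionStyle GPT
    # Add every disk not yet in the DPM storage pool (DPM 2012 R2 cmdlets).
    Connect-DPMServer -DPMServerName 'DPMSERVER' | Out-Null    # hypothetical name
    $newDisks = Get-DPMDisk -DPMServerName 'DPMSERVER' | Where-Object { -not $_.IsInStoragePool }
    Add-DPMDisk -DPMDisk $newDisks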
    Q2) <snip>This is 19 TB and I want to use half for DPM and half as general storage; will this work if I set up one LUN for the DPM server to target, and then another LUN to be targeted later by another 2012 server?>snip<
    A2) Correct.
    Q3) <snip>Finally, the NAS can do HDD hibernation when not in use. Will DPM wake it up when it wants to do a backup, or should I disable this?>snip<
    A3) I would disable that feature - it may affect VSS snapshots in a negative way.
    Regards, Mike J. [MSFT]

  • Unable to expand/extend partition after growing SAN-based iSCSI target

    Hello, all. I have an odd situation regarding how to expand iSCSI-based partitions.
    Here is my setup:
    I use the GlobalSAN iSCSI initiator on 10.6.x server (Snow Leopard).
    The iSCSI LUN is formatted with the GPT partition table.
    The filesystem is Journaled HFS+
    My iSCSI SAN has the ability to non-destructively grow a LUN (iSCSI target).
    With this in mind, I wanted to experiment with growing a LUN/target on the SAN and then expanding the Apple partition within it using disk utility. I have been unable to do so.
    Here is my procedure:
    1) Eject the disk (iSCSI targets show up as external hard drives)
    2) Disconnect the iSCSI target using the control panel applet (provided by GlobalSAN)
    3) Grow the LUN/target on the SAN.
    4) Reconnect the iSCSI initiator
    5) Expand/extend the partition using Disk Utility to consume the (newly created) free space.
    It works until the last step. When I reconnect to the iSCSI target after expanding it on the SAN, it shows up in Disk Utility as being larger than it was (so far, so expected). When I go to expand the partition, however, it errors out saying that there is not enough space.
    Investigating further, I went to the command line and performed
    "diskutil resizeVolume <identifier> limits"
    to determine the limits of the partition. The limits did NOT reflect the newly created space.
    My suspicion is that the original partition map, since it was created as 100% of the volume, does not allow room for growth despite the fact that the disk suddenly (and, to the system, unexpectedly) became larger.
    Is this assumption correct? Is there any way around this? I would like to be able to expand my LUNs/targets (since the SAN can grow with the business), but this has no value if I cannot also extend the partition table to use the new space.
    If anyone has any insight, I would greatly appreciate it. Thank you!

    I have exactly the same problem that you describe above. My iSCSI LUN was near capacity, so I extended it from 100 GB to 150 GB. No problem so far.
    Disk Utility shows the iSCSI device as 150 GB, but I cannot extend the volume to the new size. It gives me the same error (in Dutch).
    Please, someone help us out!

  • My Windows 2012 iSCSI Target service needs to be restarted every time the server starts.

    Every time I restart my Windows 2012 server, the iSCSI Target service is unavailable to the clients, even though it appears to be running.
    I have to restart the service before the clients can connect to it.
    I tried to change the startup type to Delayed, but I get error 87: the parameter is incorrect.
    I've tried to delete and recreate the VHD and ISCSI Target.
    I've tried to uninstall the Role, and reinstall it. 
    Anybody have any additional ideas to try and troubleshoot this?
    Thank you
    James Roper

    The service is not starting correctly, even though it does start. If I restart the service, everything works correctly. Here is the event from the iSCSI Target log:
    Log Name:      Microsoft-Windows-iSCSITarget-Service/Admin
    Source:        Microsoft-Windows-iSCSITarget-Service
    Date:          5/8/2013 3:11:10 PM
    Event ID:      10
    Task Category: None
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      SRV-ISCSI1
    Description:
    The Microsoft iSCSI Software Target service could not bind to network address 10.5.4.31, port 3260. The operation failed with error code 10049. Ensure that no other application is using this port.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-iSCSITarget-Service" Guid="{13953C6E-C594-414E-8BA7-DEB4BA1878E3}" />
        <EventID>10</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-05-08T22:11:10.899843800Z" />
        <EventRecordID>38</EventRecordID>
        <Correlation />
        <Execution ProcessID="976" ThreadID="1448" />
        <Channel>Microsoft-Windows-iSCSITarget-Service/Admin</Channel>
        <Computer>SRV-ISCSI1</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="IpAddress">10.5.4.31</Data>
        <Data Name="dwPort">3260</Data>
        <Data Name="Error">10049</Data>
      </EventData>
    </Event>
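    Error 10049 is WSAEADDRNOTAVAIL: at boot the service tries to bind 10.5.4.31 before the NIC has finished acquiring that address, which is why a manual restart later succeeds. Since the Services MMC returned error 87, here is a hedged workaround sketch from an elevated prompt, assuming WinTarget is the service name behind "Microsoft iSCSI Target Server":
    # Set delayed auto-start from the command line (the space after '=' is required).
    sc.exe config WinTarget start= delayed-auto
    # Verify the new start type.
    sc.exe qc WinTarget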

  • Unable to use device as an iSCSI target

    My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block devices back the systems and existing data on a large RAID partition is exported as well. I was able to successfully export the block files created with dd by adding them as backing-store entries in targets.conf:
    include /etc/tgt/temp/*.conf
    default-driver iscsi

    <target iqn.2012-09.net.domain:vm.fsrv>
        backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>

    <target iqn.2012-09.net.domain:vm.wsrv>
        backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>

    <target iqn.2012-09.net.domain:lan.storage>
        backing-store /dev/md0
    </target>
    but the last one, with /dev/md0, only creates the controller and not the disk.
    The RAID device is mounted; I don't know whether that matters, but unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0, as well as another device (sda) with and without the partition number; all had the same result.
    If anyone has been successful exporting a device (specifically a multi-disk device) I'd be really interested in knowing how. Also, if anyone knows how, or if it's even possible, to use a directory as the backing/direct store, I'd like to know that as well; my attempts there have been unsuccessful too.
    I will preempt anyone asking why I'm not using some other technology, e.g. NFS, CIFS, ZFS, etc., by saying that this is largely academic. I want to compare the performance of a virtualized file server that receives its content served over both NFS and iSCSI, and the NFS part is easy.
    Thanks.

    Mass storage only looks at the memory expansion.
    Did you have a micro SD card in it?
    What OS on the PC are you running?

  • ZFS root problem after iSCSI target experiment

    Hello all.
    I need help with this situation... I've installed Solaris 10u6, patched it, and created a branded full zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
    After setting up the iSCSI discovery address of my iSCSI target, Solaris hung, and the only way out was to send a break from the service console. Then I got these messages during boot:
    SunOS Release 5.10 Version Generic_138888-01 64-bit
    /dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
    Reading ZFS config: done.
    Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
    (6/6)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    ---- There are many services affected by this error; unfortunately one of them is system-log, so I cannot find any relevant information about why this happens.
    bash-3.00# svcs -xv
    svc:/system/filesystem/local:default (local file system mounts)
    State: maintenance since Fri Jan 23 14:25:42 2009
    Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
    See: http://sun.com/msg/SMF-8000-KS
    See: /var/svc/log/system-filesystem-local:default.log
    Impact: 32 dependent services are not running:
    svc:/application/psncollector:default
    svc:/system/webconsole:console
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    svc:/application/graphical-login/cde-login:default
    svc:/system/iscsitgt:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/system/dumpadm:default
    svc:/system/fmd:default
    svc:/system/sysidtool:net
    svc:/network/rpc/bind:default
    svc:/network/nfs/nlockmgr:default
    svc:/network/nfs/status:default
    svc:/network/nfs/mapid:default
    svc:/application/sthwreg:default
    svc:/application/stosreg:default
    svc:/network/inetd:default
    svc:/system/sysidtool:system
    svc:/system/postrun:default
    svc:/system/filesystem/volfs:default
    svc:/system/cron:default
    svc:/application/font/fc-cache:default
    svc:/system/boot-archive-update:default
    svc:/network/shares/group:default
    svc:/network/shares/group:zfs
    svc:/system/sac:default
    [ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
    WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    [ Jan 23 14:25:42 Method "start" exited with status 95 ]
    Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
    NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
    root          68G  18.5G  49.5G  27%  ONLINE  -
    storedgeD2   404G  45.2G   359G  11%  ONLINE  -
    I would appreciate any help.
    thanks in advance,
    Berrosch

    OK, I've tried to install s10u6 to the default rpool and move the root user to the /rpool directory (which is nonsense of course; it was just for testing purposes) and everything went OK.
    Another experiment was with the root pool named 'root' and the root user in /root; everything went OK as well.
    The next try was with the root pool 'root', the root user in /root, and enabling the iSCSI initiator:
    # svcs -a |grep iscsi
    disabled 16:31:07 svc:/network/iscsi_initiator:default
    # svcadm enable iscsi_initiator
    # svcs -a |grep iscsi
    online 16:34:11 svc:/network/iscsi_initiator:default
    and voila! the problem is here...
    Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
    (5/5)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    Seems to be a bug in the iSCSI implementation, perhaps some special handling of the name 'root' in the source code or something like it...
    Martin

  • Error when trying to install the iSCSI target - Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1'

    Hi,
    I'm just attempting to set up an iSCSI target on a freshly installed Windows 2012 R2 box, but I get the following error when attempting to create an iSCSI virtual disk via the wizard, after a successful installation of the iSCSI target role.
    The full error is:
    Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1' in directory 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget\en-GB\', or in any parent culture.
    I tried to uninstall, then reinstall the role but no go.
    The server locale and UI were all updated to en-GB, but this folder does not appear to exist in this location. Rather, the folder I can see is:
    'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget\en-US\'
    I'm going to attempt to copy the 'en-US' folder to 'en-GB' to see what happens, but I would like to know why this has occurred in the first place. Other roles (such as AD DS, AD CS and IIS) have installed on other machines with no issue.
    Many thanks
    Chris

    Hi Chris,
    The error "Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1'" occured, because the file 'ImportExportIscsiTargetConfiguration.psd1' can't be loaded under the folder en-GB with current culture.
    I recommend you can copy this .psd1 file under  'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget'. Essentially if PowerShell can’t find the specified data file for the current culture it will “fallback” to the top-level data
    file in this case.
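    A minimal sketch of that workaround (the module path comes from the error message above):
    # Copy the culture-specific data file to the top-level module folder so
    # PowerShell's culture fallback can find it.
    $module = 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget'
    Copy-Item -Path (Join-Path $module 'en-US\ImportExportIscsiTargetConfiguration.psd1') -Destination $module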
    For more detailed information, please refer to this article:
    Windows PowerShell 2.0 String Localization
    If there is anything else regarding this issue, please feel free to post back.
    Best Regards,
    Anna Wang
    TechNet Community Support

  • Server 2012 R2 iSCSI Target - Multiple targets per iSCSI Virtual Disk with CHAP

    Scenario I am trying to achieve is this:
    Windows Server 2012 R2 serves as the iSCSI target, configured with 1 iSCSI virtual disk.
    2 Hyper-V servers connect to this target with the iSCSI initiator and should have multiple targets for that iSCSI virtual disk using CHAP.
    **These 2 are nodes in a failover cluster; this iSCSI disk is added as a CSV.
    The issue I have is that you can only have 1 target per iSCSI virtual disk.
    Both Hyper-V servers can connect to this LUN without issue when I add both initiator IDs to the target, but once I enable CHAP, you can only put one initiator ID in the "Name" field, so I can only connect from 1 Hyper-V server.
    Do you know of a way around this?

    From my understanding, "chaptest" is a single target; my goal was to make 2 targets to the same iSCSI virtual disk.
    So if you right-click the iSCSI virtual disk that "chaptest" is assigned to, click "Assign iSCSI Virtual Disk...", select "New iSCSI Target" and proceed with the wizard, it removes the "chaptest" target and adds the newly created one.
    My goal was to have 2 targets to 1 iSCSI VD, but seeing your screenshot, with 2 initiators connected, that goal doesn't seem needed anymore.
    I was under the impression that the "User name" = the iSCSI initiator IQN, which had to be unique. That is why I thought I would need 2 targets.
    Thanks
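    For anyone finding this later, a minimal sketch of the single-target setup that works, assuming the IscsiTarget module on the target server; the target name and IQNs are placeholders. The CHAP "User name" is an arbitrary string shared by all initiators, not an initiator IQN:
    Import-Module IscsiTarget
    # One target can carry several initiator IDs.
    Set-IscsiServerTarget -TargetName 'chaptest' -InitiatorIds @(
        'IQN:iqn.1991-05.com.microsoft:hyperv1.contoso.com',
        'IQN:iqn.1991-05.com.microsoft:hyperv2.contoso.com')
    # Enable CHAP with one shared credential (the secret must be 12-16 characters).
    $chap = Get-Credential -Message 'CHAP user name and secret'
    Set-IscsiServerTarget -TargetName 'chaptest' -EnableChap $true -Chap $chap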

  • Reboot issue - console message: svc:/network/iscsi/target:default: Method or service exit timed out

    Hi,
    When I reboot the system I see this message on the console, and the system does not reboot.
    svc.startd[10]: svc:/network/iscsi/target:default: Method or service exit timed out. Killing contract 131.
    can someone help me to solve this?

    There are still a few issues with Solaris 11.1, iSCSI LUNs and rebooting. I know they are working on fixes.
    A possible workaround is to use the reboot command and NOT the init command. At least when you use the reboot command, the system will reboot and not hang.
    If this is not your problem, then it may be a different problem.
    Andrew

  • Failed iSCSI target creation on NSS324

    Hello,
    I configured an NSS324 with 4 drives of 2 TB in RAID 5. I had no problem there.
    But when I tried to create an iSCSI target (one LUN of my total capacity: 5.36 TB), it took more than 15 hours and then I got this error: [iSCSI] Failed to create iSCSI target. (error code=5).
    Can you help me? Can I create a LUN of my whole capacity?
    Thanks a lot
    Sev

    Please use code tags for your config and other terminal output for better readability. Still, it looks ok to me.
    I have targetcli-fb on one of my Arch boxes, but it's an old version (2.1.fb35-1) built back in June (official packages are up to date). Discovery and login from another Arch box works. I don't have time to troubleshoot further, but if you haven't found a solution by Monday I can update and maybe be of more use.

  • iSCSI target rewriting sparse backing store

    Hi all,
    I have a particular problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use a sparse file instead of a whole ZFS filesystem as the iSCSI backing store.
    However, as soon as the sparse file is used as the iSCSI target backing store, the Solaris OS (the iscsitgt process) decides to rewrite the entire sparse file and make it non-sparse. Note this all happens without any iSCSI initiator (client) having accessed the target.
    My question is: why is the sparse file being rewritten at that time?
    I can expect a write at iSCSI initiator connect time, but why at iSCSI target create time?
    Here are the steps:
    1. Create the sparse file, note the actual size,
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    1+0 records in
    1+0 records out
    # du -sk .
    2
    # ll sparse_file.dat
    -rw-r--r--   1 root     root     4296015872 Feb  7 10:12 sparse_file.dat
    2. Create the iscsi target using that file as backing store:
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    3. Above command returns immediately, everything seems ok at this time
    4. But after a couple of seconds, disk activity increases, and zpool iostat shows:
    # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read  write   read  write
    mypool  5.04G   144G      0    298      0  35.5M
    mypool  5.20G   144G      0    347      0  38.0M
    and so on, until the write over the previously sparse 4 GB is done.
    5. Note the real size now:
    # du -sk .
    4193252 .
    Note all of the above was happening with no iSCSI initiators connected to that node or target. The Solaris OS did it by itself, and I can see no reason why.
    I would like those files to stay sparse, at least until I use them as iSCSI targets, and I would prefer those files to grow as my initiators (clients) fill them.
    If anyone can share some thoughts on this, I'd appreciate it.
    Thanks,
    Robert

    Problem solved.
    The Solaris iSCSI target daemon configuration file has to be updated with:
    <thin-provisioning>true</thin-provisioning>
    so that iscsitgtd does not initialize the backing-store files of an iSCSI target. This is only valid for iSCSI targets that have files as their backing store.
    After creating iSCSI targets with a file (sparse or not) as the backing store, there is no I/O activity whatsoever, and that's what I wanted.
    FWIW, This is how the config file looks now.
    # more /etc/iscsi/target_config.xml
    <config version='1.0'>
    <thin-provisioning>true</thin-provisioning>
    </config>
    #

  • iSCSI target on cluster node

    Hi
    I'm trying to set up a cluster in a lab environment. I have two physical servers and would like to use them as cluster nodes; on one of these nodes I would like to install the iSCSI Target Server role to share disks to the cluster itself. Is this possible?
    I ask because I did all the configuration, but after installing the cluster the iSCSI target server doesn't work anymore.
    Thanks

    Bad news: you cannot do it with Microsoft built-in solutions, because you do indeed need physical shared storage to make the Microsoft iSCSI target clustered. Something like Robert Smit's blog here:
    Clustering Microsoft iSCSI Target
    https://robertsmit.wordpress.com/2012/06/26/clustering-iscsi-target-on-windows-2012-step-by-step/
    ...or here:
    MSFT iSCSI Target in HA
    https://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    ...or very detailed walk thru here:
    MSFT iSCSI Target in High Availability Mode
    https://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Good news: you can take a third-party solution from various companies (below) and create HA iSCSI volumes on just a pair of nodes. See:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san-free
    (this setup is FREE of charge; you just need to be an MCT, MVP or MCP to obtain your free 2-node key)
    SteelEye has a similar one here:
    SteelEye #SANLess Clusters
    http://us.sios.com/products/datakeeper-cluster/
    DataCore SANsymphony-V
    http://www.datacore.com/products/SANsymphony-V.aspx
    You can also spawn VMs running FreeBSD/HAST or Linux/DRBD to build a very similar setup yourself (two-node setups should be active-passive to avoid split-brain; the Windows solutions above all maintain their own pacemaker and heartbeats to run active-active on just a pair of nodes).
    Good luck and happy clustering :)

  • OVS servers restart when iSCSI target restarts

    Long story short, I've been working on my new lab at home off and on. Tonight I was working on my lab to get to the point where I would have OVMM as a virtual machine, as the dedicated host I have as a temporary measure only has two gigs of RAM vs. the recommended 8 gigs.
    More to the point:  I was working on the storage portion of my setup and I got my iSCSI target discovered (FreeNAS 9.3) and I was able to build my cluster of OVSs.  I noticed earlier that my FreeNAS server had some pending updates and I figured it would be a good idea to perform these updates before I started using the iSCSI storage for active virtual machines.  Naturally, when the updates are done on my FreeNAS server, it reboots.
    What I don't understand is why both of my OVSs restarted at the same time after the FreeNAS server went down for a reboot.  Was there a hidden gotcha?  Did I overlook something in the documentation?
    A side effect of the unexpected reboot is that my OVS(2) did not come back online with OVMM and I could not ping my management interface for that OVS. I did several Google searches, tried some manual up/downs of the interfaces, and made one attempt to manually configure bond0 with an IP address to try to get some sort of response. Needless to say, none of those were good ideas and they didn't help. Therefore, I'm re-installing OVS on server 2 now.
    If anyone has any ideas as to how I could have fixed OVS(2) I'm more than willing to take notes in case this happens again.
    Thank you for any and all help,
    Brendan
    P.S. If this has any influence with my situation, I am running LACP bonds and all IP addresses are configured on the VLAN interfaces.  I have not had any issues configuring my switch and with the exception of OVS(2) not wanting to reply after the unexpected restart I had everything working.  An added gotcha is that the storage VLAN interface on OVS(2) continued to reply after the restart.

    Hi Brendan,
    the situation you encountered is expected for a single-headed iSCSI target. If you present OVM with a LUN - regardless of whether it's iSCSI or FC - OVM creates an OCFS2 volume on it when you choose that LUN to be used as a storage repository. This is because OCFS2 is a cluster-aware file system and thus the only supported fs for a cluster setup. Now, although OCFS2 is really easy to set up - compared to other cluster file systems - it has some mechanics that will cause your nodes to fence (i.e. reboot) if a node loses contact with the OCFS2 volume. Each node writes little keep-alive fragments onto the OCFS2 volume; if you reboot your iSCSI target this ceases, and the node will fence itself, hoping that some minor glitch may be gone after a reboot.
    There are a couple of ways to avoid this with iSCSI, but their respective setups are not easy, and the hoops to jump through are probably too many for a lab setup. E.g. I am running two RAC nodes as iSCSI targets, which in turn have a high-redundancy disk setup behind them, consisting of 3 separate storage servers. So, in my setup I am able to survive:
    - two storage node failures
    - one RAC node failure (which provides the iSCSI target for my OVS)
    Regarding your OVS2: as you have already re-installed it, there is now no way to tell what went wrong, as you'd need access to the console to poke around a bit while OVS2 is in the situation where it doesn't respond to pings on its management interface.
    Cheers,
    budy

  • How to create iSCSI target using the entire drive?

    This sounds silly, but after setting up the DL4100 in RAID 5, I could not assign the entire 11.89 TB to an iSCSI target... Only integer numbers seem to be valid input, and the choice of TB or GB offered no help because I could not enter 11890 GB either?! What am I missing?

    Thanks to all who have responded....
    Here's my take: 4 TB x 4 in RAID 5 = 11.89 TB, but iSCSI creation will leave 0.89 TB unallocatable to iSCSI due to the GUI disabling non-integer values (i.e. no decimal points). That is 890 GB of storage that is supposedly used for firmware upgrades, logs, etc.?!
    Snooping around, I noticed that the validation is only performed via JavaScript, and a quick re-POST of the parameters to the unit can trigger modification/creation of non-integer iSCSI volume sizes. Please note that you can only grow volume sizes, not shrink them! Below is a walk-through of how to do this:
    Disclaimer: *WARNING: USE AT YOUR OWN RISK*
    1) Create the iSCSI volume as per normal but at the integer value less than what you intend (eg: if you wanted 1.5TB, create 1TB) then wait for the iSCSI volume to be created.
    2) Use a web browser with debugging turned on and capturing traffic. For my example I am using Firefox and hit F12 to fire up the debug tool. Pick "Network" and you will see Firefox start picking up all traffic to the DL4100.
    3) Go into the iSCSI volume and choose "Details" to modify it. Put the current size as the modified size and click "Apply". Look in the list of messages to locate a POST message "iscsi_mgr.cgi" that has a Request body with a "size" parameter and select that to be resent.
    4) In this copy of the same message, look in the Request Body at the list of parameters being passed back to the unit. You should find a parameter called "size". This value is sent to the unit in GB. Change it to the value you desire (e.g. 1 TB appears as "1000", so change it to "1500" for 1.5 TB) and then re-POST the message back to the unit.
    5) Wait for the update to transact and verify that your iSCSI volume has indeed been resized to a "non-Integer" TB value.
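    For steps 3-4, the replay can also be scripted. A heavily hedged sketch: the unit's address and CGI path below are assumptions, and the real request body (cookies and all other parameters) must be copied from the captured POST; only the "size" parameter is documented above:
    # Replay the captured POST with a non-integer size.
    $body = @{
        size = 1500    # desired size in GB (1.5 TB)
        # ...plus every other parameter captured from the original request...
    }
    Invoke-WebRequest -Uri 'http://dl4100/cgi-bin/iscsi_mgr.cgi' -Method Post -Body $body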
    That's it! I hope this helps others who have been trapped by this limitation. Please be mindful not to allocate all your drive space, since as Bill_S has mentioned, some space is required by the system for its own housekeeping operations.
    Good Luck!

  • How to sync the data between two iSCSI target servers

    Hi experts:
    I have two HP DL380 G8 servers, and I plan to install the Server 2012 R2 iSCSI target as storage. I know the iSCSI storage can be set up as highly available too, but after some research I can't find out how to sync the data between the two iSCSI target servers. Can anybody help me?
    Thanks

    There are basically three ways to go:
    1) Get compatible software. The Microsoft iSCSI target cannot do what you want out of the box, but the good news is that third-party software (there are even free versions with a set of limitations) can. See:
    StarWind Virtual SAN [VSAN]
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    DataCore SANsymphony-V
    http://datacore.com/products/SANsymphony-V.aspx
    SteelEye DataKeeper
    http://us.sios.com/what-we-do/windows/
    All of them do basically the same thing: mirror a set of LUs between Windows hosts to emulate a high-performance, fault-tolerant virtual SAN. All of them do this in active-active mode (all nodes handle I/O), and at least StarWind and DataCore have sophisticated distributed cache implementations (RAM and flash).
    2) Get incompatible software (the MSFT iSCSI target) and run it in a generic Windows cluster. That requires a CSV, so physical shared storage (FC or SAS; iSCSI obviously makes zero sense here, as you could feed THAT iSCSI target directly to your block storage consumers). This is doable and is supported by MSFT, but it has numerous drawbacks. First of all it's SLOW, because a) the MSFT target does no caching and does not even use the file system cache (at all; the VHDX files it uses as containers are opened and I/O-ed in "pass-thru" mode), b) it's only active-passive (one node handles I/O at a time, with the other doing nothing in standby mode), and c) the I/O route is long (iSCSI initiator -> MSFT iSCSI target -> clustered block back end). For reference see:
    Configuring iSCSI Storage for High Availability
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    MSFT iSCSI Target Cluster
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    3) Re-think what you do. Instead of the iSCSI target from MSFT, you can use newer technologies like SoFS (obviously faster, but requiring a set of dedicated servers) or just a shared VHDX if you have a fault-tolerant SAS or FC back end and want to spawn a guest VM cluster. See:
    Scale-Out File Servers
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    With the Windows Server 2012 R2 release, virtual FC and the clustered MSFT target are both effectively deprecated, as shared VHDX is both faster and easier to set up and use if you have an FC or SAS block back end and need a guest VM cluster.
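    To illustrate option 3, a minimal sketch of attaching one shared VHDX to two guest-cluster VMs on Windows Server 2012 R2 Hyper-V; the VM names and the path are placeholders, and the VHDX must sit on a CSV or an SoFS share:
    # Attach the same VHDX to both guest-cluster nodes with persistent
    # reservations enabled (this is what makes the VHDX "shared").
    $vhdx = 'C:\ClusterStorage\Volume1\Shared\guest-csv.vhdx'   # placeholder
    foreach ($vm in 'GUEST-NODE1', 'GUEST-NODE2') {
        Add-VMHardDiskDrive -VMName $vm -Path $vhdx -SupportPersistentReservations
    }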
    Hope this helped a bit :)
