DFS & multiple iSCSI targets

Hiya,
A bit of a hypothetical question here.
I have a ReadyNAS Pro connected to my 2008 R2 server via iSCSI (I connect with iSCSI so that I can use MS shadow copies). This target holds my DFS namespace content and works well. I now have a 2012 R2 server and have added that as an additional namespace server; again, fine.
What I now want to do is add some failover capacity should the 2008 server go down or need a reboot, and I was wondering what the consequence would be of adding another iSCSI connection to the same target from the 2012 server. I realise that normally this would involve clustering to avoid corruption (no experience here), but because it's DFS, would this still apply?
TIA
Baz

You can have DFS running between a pair of Windows boxes, with each box keeping its content on its own SMB/NFS share or iSCSI volume, BUT you cannot safely have both access the same volume at block level. You could build a cluster between the Windows boxes using your ReadyNAS as shared storage (that would be a single point of failure, BTW) and then spawn a failover SMB share between the two boxes (they would have to run the same Windows version, however), or continue playing with DFS. You can read more here:
http://www.starwindsoftware.com/forums/starwind-f5/trying-clear-this-use-iscsi-instead-smb-t1392.html
(it has nothing to do with StarWind directly; it's a generic SAN question).
Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • Server 2012 R2 iSCSI Target - Multiple targets per iSCSI Virtual Disk with CHAP

    Scenario I am trying to achieve is this:
    Windows Server 2012 R2 serves as iSCSI Target configured to have 1 iSCSI Virtual Disk
    2 Hyper-V servers connecting to this target with the iSCSI Initiator, with multiple targets for that iSCSI Virtual Disk, using CHAP
    **These 2 are nodes in a failover cluster; this iSCSI disk is added as a CSV.
    The issue I have is that you can only have 1 target per iSCSI Virtual Disk.
    Both Hyper-V servers can connect to this LUN without issue when I add both initiator IDs to the target, but once I enable CHAP you can only put one initiator ID in the "Name" field, so I can only connect from 1 Hyper-V server.
    Do you know of a way around this?

    From my understanding, "chaptest" is a single target, my goal was to make 2 targets to the same iSCSI virtual disk.
    So if you were to right-click the iSCSI virtual disk that "chaptest" is assigned to, click "Assign iSCSI Virtual Disk...", then select "New iSCSI Target" and proceed with the wizard, it removes the "chaptest" target and adds the newly created one.
    My goal was to have 2 targets to 1 iSCSI VD, but seeing your screenshot, with 2 initiators connected, that goal doesn't seem needed anymore.
    I was under the impression that the "User name" = the iscsi initiator IQN name, which had to be unique. That is why I thought I would need 2 targets.
    Thanks
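    For what it's worth, CHAP itself does not care how many targets front the disk; it is a per-session challenge-response check (RFC 1994, reused by iSCSI). A minimal sketch of how an initiator computes the response; the secret and challenge bytes below are made-up illustration values:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier byte || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target verifies by recomputing the same hash with its own copy of the secret.
resp = chap_response(0x01, b"target-secret", b"\x9a\x3f\x12")
print(resp.hex())
```

    The point is that the secret is per initiator-target pair, not per target count, which matches the observation above that one target with multiple initiator IDs is sufficient.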

  • Multiple Hyper-V 2012 R2 servers to access a single iSCSI target

    A couple of years ago the community helped me configure a CentOS file server to connect to my VMware and Hyper-V servers for backup. At the time I created 2 LVM devices: one connected to ESXi via NFS, and one connected to Hyper-V via iSCSI. This has worked GREAT! I have since converted almost all of my production environment to Hyper-V, so I had to rebuild the arrays and make the larger one iSCSI rather than NFS. I just noticed that the file structure of the connected Hyper-V drives is not the same. I have 3 Hyper-V servers connected to the same iSCSI target, which resides on the CentOS file server. When you view the connected drives, the same target shows different files from different Hyper-V hosts. Prior to rebuilding the arrays, the iSCSI target worked as intended. I suspect that I missed a step during the rebuild. The drives were...
    This topic first appeared in the Spiceworks Community

    Hello,
    I have installed VMM 2012 R2, and it is not allowing me to add and manage Server 2012 R2 hosts, although it manages Server 2008 hosts perfectly well.
    It gives following error when tried to add Server 2012 R2 hosts:
    Error (2916)
    VMM is unable to complete the request. The connection to the agent windows2012.test.com was lost.
    WinRM: URL: [http://windows2012.test.com:5985], Verb: [INVOKE], Method: [Associate], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/AgentManagement]
    Unknown error (0x80338012)
    Recommended Action
    Ensure that the WS-Management service and the agent are installed and running and that a firewall is not blocking HTTPS traffic.
    This problem can also be caused by WMI service crash. Ensure that KB 982293 (http://support.microsoft.com/kb/982293) is installed on the machine if it is running Windows Server 2008 R2.
    If the error persists, reboot windows2012.test.com and then try the operation again.
    Any suggestions please?

  • ISCSI target weirdness

    Hello!
    If I hadn't experienced the issue I'm about to describe, I would hardly have believed it could ever happen...
    I'm starting setting up a test cluster (all hosts = WinServer 2012 R2): Host3 = iSCSI target, Host1 and Host2 - cluster nodes.
    1) I create three iSCSI disks on Host3: Disk1-Q (5Gb),
    FS (60 Gb), Mail (60 Gb). All of them are "part" of the single target.
    2) ... then make an iSCSI connection to Host3 from Host1 and Host2.
    3) On Host1 I open Disk Management and create three volumes: Disk1-Q (X),
    FS (Y), Mail (Z).
    4) I format disk X (Disk1-Q) and give it the name "QUORUM".
    5) Now I must open Disk Management on Host2 and bring these disks online... but having connected to Host2, I see that all three disks are already online and UNformatted.
    Here go my "funny" tests:
    6) On Host2 I format THE SAME disk X (Disk1-Q) and give it another name, "QUORUM2" - the operation succeeds.
    7) I create the folder "hhh" on disk X (Disk1-Q) while connected to Host1, and the folder "kkk" on Disk1-Q while connected to Host2. Each node shows only its "own" folder on Disk1-Q...
    8) a) On Host1 I copied one ISO image (3.6 GB) to disk X (Disk1-Q), and
       b) on Host2 I copied another ISO image (3.1 GB) to disk X (Disk1-Q).
    The second copy operation succeeded too!!! Now my 5 GB disk X (iSCSI disk Disk1-Q) contains two ISO images totaling 6.7 GB!
    ...I know that's impossible...is it magic?
    ...but looking at the pictures above what can you say?
    Thank you in advance,
    Michael

    You cannot do what you're trying to do the way you're doing it :) To make a long story short: you connected a single NTFS-formatted volume to more than one writer. They both update data and metadata without telling each other what they are doing, and that results in a corrupted volume. You
    absolutely need a separate arbiter to take care of distributed locks. It can be CSV (a set of filters on top of NTFS), or it can be a clustered file system (trust me, you don't need this, as it's expensive). Or layer an SMB share on top of your block volume (iSCSI
    or otherwise, it does not matter) and connect from multiple clients. For more details you can read the link below (ignore StarWind, as the issues are the same for any SAN). See:
    SAN Vs. NAS
    http://www.starwindsoftware.com/forums/starwind-f5/trying-clear-this-use-iscsi-instead-smb-t1392.html
    Hope this helped a bit ;)
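    The disappearing "hhh"/"kkk" folders make sense once you picture each host caching the volume's metadata independently. A toy Python model (not real NTFS, just the write-back pattern) of the last-writer-wins clobbering:

```python
# Two hosts mount the same NTFS volume over iSCSI. Each keeps its own cached
# copy of the file table and flushes the whole table back, unaware of the other.
disk = {"files": []}

def mount(d):
    return {"files": list(d["files"])}   # private in-memory metadata cache

def flush(d, cache):
    d["files"] = list(cache["files"])    # whole-table write-back: last writer wins

host1, host2 = mount(disk), mount(disk)
host1["files"].append("hhh")             # folder created on Host1
host2["files"].append("kkk")             # folder created on Host2
flush(disk, host1)
flush(disk, host2)                       # silently overwrites Host1's update
print(disk["files"])                     # ['kkk'] - the 'hhh' folder is lost
```

    A cluster-aware layer (CSV, a clustered file system, or a single SMB server in front of the volume) exists precisely to serialize these metadata updates.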

  • Unable to expand/extend partition after growing SAN-based iSCSI target

    Hello, all. I have an odd situation regarding how to expand iSCSI-based partitions.
    Here is my setup:
    I use the GlobalSAN iSCSI initiator on 10.6.x server (Snow Leopard).
    The iSCSI LUN is formatted with the GPT partition table.
    The filesystem is Journaled HFS+
    My iSCSI SAN has the ability to non-destructively grow a LUN (iSCSI target).
    With this in mind, I wanted to experiment with growing a LUN/target on the SAN and then expanding the Apple partition within it using disk utility. I have been unable to do so.
    Here is my procedure:
    1) Eject the disk (iSCSI targets show up as external hard drives)
    2) Disconnect the iSCSI target using the control panel applet (provided by GlobalSAN)
    3) Grow the LUN/target on the SAN.
    4) Reconnect the iSCSI initiator
    5) Expand/extend the partition using Disk Utility to consume the (newly created) free space.
    It works until the last step. When I reconnect to the iSCSI target after expanding it on the SAN, it shows up in Disk Utility as being larger than it was (so far, so expected). When I go to expand the partition, however, it errors out saying that there is not enough space.
    Investigating further, I went to the command line and ran
    "diskutil resizeVolume <identifier> limits"
    to determine what the limits on the partition were. The limits did NOT reflect the newly created space.
    My suspicion is that the original partition map, since it was created as 100% of the volume, does not allow room for growth despite the fact that the disk suddenly (and, to the system, unexpectedly) became larger.
    Is this assumption correct? Is there any way around this? I would like to be able to expand my LUNs/targets (since the SAN can grow with the business), but this has no value if I cannot also extend the partition table to use the new space.
    If anyone has any insight, I would greatly appreciate it. Thank you!

    I have exactly the same problem that you describe above. My iSCSI LUN was near capacity, so I extended it from 100 GB to 150 GB. No problem so far.
    Disk Utility shows the iSCSI device as 150 GB, but I cannot extend the volume to the new size. It gives me the same error (in Dutch).
    Please someone help us out !

  • My Windows 2012 ISCSI Target service needs to be restarted every time the server starts.

    Every time I restart my Windows 2012 server, the ISCSI Target Service is unavailable to the clients even though it appears to be running.
    I have to restart the service, then the clients can connect to it. 
    I tried to change the startup type to Automatic (Delayed Start), but I get error 87: the parameter is incorrect.
    I've tried to delete and recreate the VHD and ISCSI Target.
    I've tried to uninstall the Role, and reinstall it. 
    Anybody have any additional ideas to try and troubleshoot this?
    Thank you
    James Roper

    The service is not starting correctly, even though it does start. If I restart the service, everything works correctly. Here is the event from the iSCSI Target log:
    Log Name:      Microsoft-Windows-iSCSITarget-Service/Admin
    Source:        Microsoft-Windows-iSCSITarget-Service
    Date:          5/8/2013 3:11:10 PM
    Event ID:      10
    Task Category: None
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      SRV-ISCSI1
    Description:
    The Microsoft iSCSI Software Target service could not bind to network address 10.5.4.31, port 3260. The operation failed with error code 10049. Ensure that no other application is using this port.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-iSCSITarget-Service" Guid="{13953C6E-C594-414E-8BA7-DEB4BA1878E3}" />
        <EventID>10</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-05-08T22:11:10.899843800Z" />
        <EventRecordID>38</EventRecordID>
        <Correlation />
        <Execution ProcessID="976" ThreadID="1448" />
        <Channel>Microsoft-Windows-iSCSITarget-Service/Admin</Channel>
        <Computer>SRV-ISCSI1</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="IpAddress">10.5.4.31</Data>
        <Data Name="dwPort">3260</Data>
        <Data Name="Error">10049</Data>
      </EventData>
    </Event>
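    No reply survives in this thread, but the error code itself is telling: Winsock error 10049 is WSAEADDRNOTAVAIL ("the requested address is not valid in its context"), which usually means 10.5.4.31 was not yet assigned to any NIC when the service started at boot, not that the port was busy. The POSIX equivalent of the failure is easy to demonstrate; the address below is a reserved TEST-NET address used purely for illustration:

```python
import errno, socket

# 203.0.113.1 is in TEST-NET-3 (RFC 5737) and is not assigned to a local NIC,
# so bind() fails the same way the iSCSI target service did at boot.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = None
try:
    s.bind(("203.0.113.1", 3260))
except OSError as e:
    err = e.errno
finally:
    s.close()

print(err == errno.EADDRNOTAVAIL)  # True: "cannot assign requested address"
```

    That would explain why a manual restart (after the network is fully up) always succeeds, and points at making the service depend on the network address being available rather than at a port conflict.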

  • Unable to use device as an iSCSI target

    My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block devices back the systems and existing data on a large RAID partition is exported as well. I was able to create the block files with dd and export them successfully by adding them as backing-stores in the targets.conf file:
    include /etc/tgt/temp/*.conf
    default-driver iscsi
    <target iqn.2012-09.net.domain:vm.fsrv>
    backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>
    <target iqn.2012-09.net.domain:vm.wsrv>
    backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>
    <target iqn.2012-09.net.domain:lan.storage>
    backing-store /dev/md0
    </target>
    but the last one, with /dev/md0, only creates the controller and not the disk.
    The RAID device is mounted; I don't know whether or not that matters. Unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0, as well as another device (sda) with and without the partition number; all had the same result.
    If anyone has been successful exporting a device (specifically a multi-disk array), I'd be really interested in knowing how. Also, if anyone knows how, or whether it's even possible, to use a directory as the backing/direct store, I'd like to know that as well; my attempts there have been unsuccessful too.
    I will preempt anyone asking why I'm not using some other technology (e.g. NFS, CIFS, ZFS, etc.) by saying that this is largely academic. I want to compare the performance of a virtualized file server that receives its content served by both NFS and iSCSI, and the NFS part is easy.
    Thanks.

  • ZFS root problem after iscsi target experiment

    Hello all.
    I need help with this situation... I've installed Solaris 10u6, patched it, and created a branded full zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
    After setting up the iSCSI discovery address of my iSCSI target, Solaris hung, and the only way out was to send a break from the service console. Then I got these messages during boot:
    SunOS Release 5.10 Version Generic_138888-01 64-bit
    /dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
    Reading ZFS config: done.
    Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
    (6/6)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    ---- There are many affected services from this error, unfortunately one of them is system-log, so I cannot find any relevant information why this happens.
    bash-3.00# svcs -xv
    svc:/system/filesystem/local:default (local file system mounts)
    State: maintenance since Fri Jan 23 14:25:42 2009
    Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
    See: http://sun.com/msg/SMF-8000-KS
    See: /var/svc/log/system-filesystem-local:default.log
    Impact: 32 dependent services are not running:
    svc:/application/psncollector:default
    svc:/system/webconsole:console
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    svc:/application/graphical-login/cde-login:default
    svc:/system/iscsitgt:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/system/dumpadm:default
    svc:/system/fmd:default
    svc:/system/sysidtool:net
    svc:/network/rpc/bind:default
    svc:/network/nfs/nlockmgr:default
    svc:/network/nfs/status:default
    svc:/network/nfs/mapid:default
    svc:/application/sthwreg:default
    svc:/application/stosreg:default
    svc:/network/inetd:default
    svc:/system/sysidtool:system
    svc:/system/postrun:default
    svc:/system/filesystem/volfs:default
    svc:/system/cron:default
    svc:/application/font/fc-cache:default
    svc:/system/boot-archive-update:default
    svc:/network/shares/group:default
    svc:/network/shares/group:zfs
    svc:/system/sac:default
    [ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
    WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    [ Jan 23 14:25:42 Method "start" exited with status 95 ]
    Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
    NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
    root          68G  18.5G  49.5G  27%  ONLINE  -
    storedgeD2   404G  45.2G   359G  11%  ONLINE  -
    I would appreciate any help.
    thanks in advance,
    Berrosch

    OK, I've tried to install s10u6 with the default rpool and move the root user's home to the /rpool directory (which is nonsense of course; it was just for testing purposes), and everything went OK.
    Another experiment was with the root pool named 'root' and the root user in /root; everything went OK as well.
    The next try was with the root pool 'root', the root user in /root, and enabling the iscsi initiator:
    # svcs -a |grep iscsi
    disabled 16:31:07 svc:/network/iscsi_initiator:default
    # svcadm enable iscsi_initiator
    # svcs -a |grep iscsi
    online 16:34:11 svc:/network/iscsi_initiator:default
    and voila! the problem is here...
    Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
    (5/5)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    It seems to be a bug in the iSCSI implementation, perhaps some special handling of the name 'root' in the source code or something like it...
    Martin

  • Error when trying to install the iSCSI target - Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1'

    Hi,
    I'm attempting to set up an iSCSI target on a freshly installed Windows 2012 R2 box, but I get the following error when attempting to create an iSCSI virtual disk via the wizard, after a successful
    installation of the iSCSI Target role.
    The full error is:
    Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1' in directory 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget\en-GB\', or in any parent culture.
    I tried to uninstall, then reinstall the role but no go.
    The Server Locale and UI was all updated to en-GB but this folder does not appear to exist in this location. Rather, the folder I can see is:
    'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget\en-US\'
    I'm going to attempt to copy the 'en-US' folder to 'en-GB' to see what happens, but I would like to know why this has occurred in the first place. Other roles (such as AD DS, AD CS and
    IIS) have installed on other machines with no issue.
    Many thanks
    Chris

    Hi Chris,
    The error "Cannot find the Windows PowerShell data file 'ImportExportIscsiTargetConfiguration.psd1'" occurred because the file 'ImportExportIscsiTargetConfiguration.psd1' can't be loaded from the en-GB folder under the current culture.
    I recommend copying this .psd1 file to 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\IscsiTarget'. Essentially, if PowerShell can't find the specified data file for the current culture, it will "fall back" to the top-level data
    file in this case.
    For more detailed information, please refer to this article:
    Windows PowerShell 2.0 String Localization
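    The fallback behaviour described above mirrors classic message-catalog lookup: try the requested culture, then its parent, then the untranslated source strings. A hedged analogy using Python's gettext (the catalog name and directory here are invented; no catalog is actually installed, so the lookup falls all the way through):

```python
import gettext

# Mimic "parent culture then fallback": ask for en_GB, fall back to en_US, and
# finally to the untranslated source string when no catalog exists at all.
t = gettext.translation(
    "iscsitarget",                 # hypothetical message catalog name
    localedir="locale",            # hypothetical directory; nothing is installed there
    languages=["en_GB", "en_US"],
    fallback=True,                 # like PowerShell falling back to the top-level .psd1
)
print(t.gettext("Target created"))  # "Target created" - the source string survives
```

    The bug in the thread is simply that the iSCSI Target role shipped only an en-US data file, so an en-GB system needs the top-level (or copied) file for the fallback to land somewhere.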
    If there is anything else regarding this issue, please feel free to post back.
    Best Regards,
    Anna Wang
    TechNet Community Support

  • One data source and assign multiple data target in BI?

    Hi all,
    Is it possible to assign one data source to multiple data targets in BI? In BW 3.x, one data source could be assigned to only one InfoSource. I am a bit confused about this; please let me know.
    Regards.
    hari

    Okay, I must have misunderstood your message; I was thinking of BI 7 and data targets like cubes and DSOs.
    In 3.x, assign your DataSource to a single InfoSource. Then assign that InfoSource to multiple data targets by creating update rules and assigning the InfoSource to each/all of them.
    This way, you shouldn't need multiple InfoSources per DataSource unless you have a special situation that calls for it.
    Brian

  • Reg reboot issue - console message- svc:/network/iscsi/target:default: Method or service exit timed out

    Hi,
    When I reboot the system I see this message on the console, and the system does not actually reboot.
    svc.startd[10]: svc:/network/iscsi/target:default: Method or service exit timed out. Killing contract 131.
    can someone help me to solve this?

    There are still a few issues with Solaris 11.1, iSCSI LUNs, and rebooting. I know they are working on fixes.
    A possible workaround is to use the reboot command and NOT the init command. At least with the reboot command, the system will reboot rather than hang.
    If this is not your problem then it may be a different problem.
    Andrew

  • Multiple FPGA targets under one cRIO controller

    Hi !
    I was reading the cRIO System Configuration Information (CRI) Reference Library article (http://www.ni.com/example/51852/en/) and there Figure 9 shows a cRIO Controller with multiple FPGA targets. How can this be accomplished?
    In my case, when I try to add a 2nd FPGA target under my cRIO-9076, I get a message that only one can be associated with the controller.
    Any ideas ?
    Solved!
    Go to Solution.

    The CRI Library claims support back to LabVIEW 8.5.1, which leads me to believe this screenshot was taken in that version. The RIO Scan Interface/FPGA Scan Engine (RSI) were introduced in LabVIEW 8.6 and NI-RIO 3.0.x. In order to include this support, the notion of a chassis in the LV project was introduced (notice there is no chassis under the controller in the screenshot). To better facilitate RSI and the Scan Engine and provide a more accurate representation of what is actually available in a system, you can only add one chassis per controller. This allows the RSI to load the correct controllers for deployment.
    In LV 8.5.1, you can add multiple targets to an integrated controller/FPGA system (like the cRIO-9072) even though there is no way that could happen in real life, so this isn't really that desirable. What you can still do is add multiple FPGA targets (even from cRIO chassis) under the My Computer target in your project. This will still allow you to communicate with the FPGA target, but any VIs will be running on your PC system, not the cRIO controller.
    Donovan

  • Failed Creation ISCSI Target on NSS324

    Hello,
    I configured an NSS324 with 4 drives of 2 TB in RAID 5. I had no problems there.
    But when I tried to create an iSCSI target (one LUN of my total capacity: 5.36 TB), it took more than 15 hours and I got this error: [iSCSI] Failed to create iSCSI target. (error code=5).
    Can you help me? Can I create a LUN of my whole capacity?
    Thanks a lot
    Sev

    Please use code tags for your config and other terminal output for better readability. Still, it looks ok to me.
    I have targetcli-fb on one of my Arch boxes, but it's an old version (2.1.fb35-1) built back in June (official packages are up to date). Discovery and login from another Arch box works. I don't have time to troubleshoot further, but if you haven't found a solution by Monday I can update and maybe be of more use.

  • Iscsi target rewriting sparse backing store

    Hi all,
    I have a particular problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use a sparse file instead of a whole ZFS filesystem as the iSCSI backing store.
    However, as soon as the sparse file is used as the iSCSI target's backing store, the Solaris OS (the iscsitgt process) decides to rewrite the entire sparse file and make it non-sparse. Note this all happens without any iSCSI initiator (client) having accessed the target.
    My question is: why is the sparse file being rewritten at that time?
    I could expect a write at iSCSI initiator connect time, but why at iSCSI target create time?
    Here are the steps:
    1. Create the sparse file, note the actual size,
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    1+0 records in
    1+0 records out
    # du -sk .
    2
    # ll sparse_file.dat
    -rw-r--r--   1 root     root     4296015872 Feb  7 10:12 sparse_file.dat
    2. Create the iscsi target using that file as backing store:
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    3. Above command returns immediately, everything seems ok at this time
    4. But after couple of seconds, disk activity increases, and zpool iostat shows
    # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read  write   read  write
    mypool  5.04G   144G      0    298      0  35.5M
    mypool  5.20G   144G      0    347      0  38.0M
    and so on, until the write over previously sparse 4G is over:
    5. Note the real size now:
    # du -sk .
    4193252 .
    Note that all of the above happened with no iSCSI initiators connected to that node or target. The Solaris OS did it by itself, and I can see no reason why.
    I would like to have those files sparse, at least until I use them as iscsi targets, and I would prefer those files to grow as my initiators (clients) are filling them.
    If anyone can share some thoughts on this, I'd appreciate it
    Thanks,
    Robert

    Problem solved.
    The Solaris iSCSI target daemon configuration file has to be updated with:
    <thin-provisioning>true</thin-provisioning>
    so that iscsitgtd does not initialize the iSCSI target backing-store files. This is only valid for iSCSI targets that have files as backing stores.
    After creating iSCSI targets with a file (sparse or not) as the backing store, there is no I/O activity whatsoever, and that's what I wanted.
    FWIW, This is how the config file looks now.
    # more /etc/iscsi/target_config.xml
    <config version='1.0'>
    <thin-provisioning>true</thin-provisioning>
    </config>
    #
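    As a side note, the sparseness being discussed is easy to reproduce and inspect without dd; a small Python sketch, where the 4 GiB/1 MiB sizes mirror the dd example above (the exact allocated byte count depends on the filesystem):

```python
import os, tempfile

# Recreate dd's "seek far, write one block" trick: the skipped range becomes a hole.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.seek(4096 * 1024 * 1024)      # jump 4 GiB out, like dd seek=4096 bs=1024k
    f.write(b"\0" * (1024 * 1024))  # write the final 1 MiB block

st = os.stat(path)
print(st.st_size)          # apparent size: 4 GiB + 1 MiB
print(st.st_blocks * 512)  # bytes actually allocated: roughly 1 MiB on ext4/ZFS
os.unlink(path)
```

    Comparing st_size with st_blocks * 512 (the same thing ll vs. du -sk showed in the steps above) is the quickest way to confirm whether the target daemon has quietly filled in the holes.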

  • Iscsi target on cluster node

    Hi
    I'm trying to set up a cluster in a lab environment. I have two physical servers and would like to use them as cluster nodes. On one of these nodes I would also like to install the iSCSI Target Server role to share disks to the cluster itself. Is this possible?
    I ask because I did all the configuration, but after installing the cluster the iSCSI target server doesn't work anymore.
    thanks

    Bad news: you cannot do it with Microsoft's built-in solutions, because you do indeed need physical shared storage to make the Microsoft iSCSI target clustered. Something like on Robert Smit's blog here:
    Clustering Microsoft iSCSI Target
    https://robertsmit.wordpress.com/2012/06/26/clustering-iscsi-target-on-windows-2012-step-by-step/
    ...or here:
    MSFT iSCSI Target in HA
    https://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    ...or very detailed walk thru here:
    MSFT iSCSI Target in High Availability Mode
    https://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Good news: you can take a third-party solution from one of various companies (below) and create HA iSCSI volumes on just a pair of nodes. See:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san-free
    (this setup is FREE of charge, you just need to be an MCT, MVP or MCP to obtain your free 2-node key)
    Also SteelEye has similar one here:
    SteelEye #SANLess Clusters
    http://us.sios.com/products/datakeeper-cluster/
    DataCore SANsymphony-V
    http://www.datacore.com/products/SANsymphony-V.aspx
    You can also spin up VMs running FreeBSD/HAST or Linux/DRBD to build a very similar setup yourself. (Two-node setups should be active-passive to avoid split-brain; the Windows solutions above all maintain their own pacemaker and heartbeats to run active-active on just a pair of nodes.)
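    The active-passive advice comes down to quorum arithmetic: with only two votes, a network split leaves neither side with a strict majority, so neither side can safely keep writing. A toy illustration:

```python
def has_quorum(votes_up: int, total_votes: int) -> bool:
    # Strict majority: a partition may keep serving only if it holds more than
    # half of all votes, so two equal halves can never both stay active.
    return votes_up > total_votes // 2

print(has_quorum(1, 2))  # False: a split 2-node pair has no majority on either side
print(has_quorum(2, 3))  # True: a witness (third vote) breaks the tie
```

    This is why two-node products either add a witness/heartbeat mechanism of their own or simply keep one node passive.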
    Good luck and happy clustering :)
