OEL 4.5 iSCSI Target notes wanted

I'm about to build an iSCSI target based on an OEL 4.5 / dual Opteron / 8 GB / 2.5 TB SATA machine. The distro is flexible, but I would prefer to stick with OEL 4.5 or OEL 5.0.
I'd appreciate pointers and links to good documentation, howtos, notes, etc. that you have encountered. This is a learning opportunity for me.

Hans,
we can start with an installation HowTo :-)
1. Install the kernel-devel package
2. Download "iSCSI Enterprise Target" from http://sourceforge.net/project/showfiles.php?group_id=108475
3. tar xvzf iscsitarget-0.4.15.tar.gz
4. cd iscsitarget-0.4.15
5. make
6. # make install
7. Modify /etc/ietd.conf and set mandatory parameters:
- Target (iSCSI qualified name aka IQN - in format iqn.yyyy-mm.reversed domainname:identifier)
- LUN (storage)
Assume that your domain is acme.com, your shared device is /dev/sda, and no authentication is used. The config will then be as follows:
Target iqn.2008-01.com.acme:storage.disk1
        Lun 0 Path=/dev/sda
8. # chkconfig iscsi-target on
9. # /etc/init.d/iscsi-target start
10. "man ietadm" "man ietd" "man 5 ietd.conf" and cup of coffee :-)
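As a sanity check on step 7, the IQN format can be sketched in shell. The domain, identifier, and date are taken from the example config above; the snippet itself is just an illustration, not part of any iSCSI tooling:

```shell
# Sketch of assembling an IQN per the format above:
# iqn.<yyyy-mm>.<reversed domain>:<identifier>
domain="acme.com"
ident="storage.disk1"
yyyymm="2008-01"
# reverse the dot-separated domain components: acme.com -> com.acme
rev=$(echo "$domain" | awk -F. '{ for (i = NF; i > 1; i--) printf "%s.", $i; print $1 }')
iqn="iqn.${yyyymm}.${rev}:${ident}"
echo "$iqn"   # -> iqn.2008-01.com.acme:storage.disk1
```

The result matches the Target line in the config above.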
A Quick Guide to iSCSI on Linux - http://www.cuddletech.com/articles/iscsi/index.html
Setting Up an iSCSI Environment - http://www.howtoforge.com/iscsi_on_linux
Of course, you may also consider using Openfiler: http://www.openfiler.com

Similar Messages

  • ISCSI target setup fails: command "iscsitadm" not available?

    Hello,
I want to set up an iSCSI target; however, it seems I don't have iscsitadm available on my system, only iscsiadm.
What should I do?
Is this still valid in terms of setup procedure?
http://alessiodini.wordpress.com/2010/10/24/iscsi-nice-to-meet-you/
    Thanks

    Ok,
    here you go using COMSTAR:
    pkg install storage-server
    pkg install -v SUNWiscsit
    http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html
    svcs \*stmf\*
    svcadm enable svc:/system/stmf:default
    zfs create -V 250G tank-esata/macbook0-tm
    sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
    sbdadm list-lu
    stmfadm list-lu -v
    stmfadm add-view 600144f00800271b51c04b7a6dc70001
    svcs \*scsi\*
    itadm create-target
    devfsadm -i iscsi
    reboot
    Solaris 11 Express iSCSI manual:
    http://dlc.sun.com/pdf/821-1459/821-1459.pdf
    and that for reference
    http://nwsmith.blogspot.com/2009/07/opensolaris-2009-06-and-comstar-iscsi.html
    Windows iSCSI initiator
    http://www.microsoft.com/downloads/en/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en
    works after manually adding the Server's IP (no auto detect)

  • Unable to expand/extend partition after growing SAN-based iSCSI target

Hello, all. I have an odd situation regarding how to expand iSCSI-based partitions.
    Here is my setup:
    I use the GlobalSAN iSCSI initiator on 10.6.x server (Snow Leopard).
    The iSCSI LUN is formatted with the GPT partition table.
    The filesystem is Journaled HFS+
    My iSCSI SAN has the ability to non-destructively grow a LUN (iSCSI target).
    With this in mind, I wanted to experiment with growing a LUN/target on the SAN and then expanding the Apple partition within it using disk utility. I have been unable to do so.
    Here is my procedure:
    1) Eject the disk (iSCSI targets show up as external hard drives)
    2) Disconnect the iSCSI target using the control panel applet (provided by GlobalSAN)
    3) Grow the LUN/target on the SAN.
    4) Reconnect the iSCSI initiator
    5) Expand/extend the partition using Disk Utility to consume the (newly created) free space.
It works until the last step. When I reconnect to the iSCSI target after expanding it on the SAN, it shows up in Disk Utility as being larger than it was (so far, as expected). When I go to expand the partition, however, it errors out saying that there is not enough space.
Investigating further, I went to the command line and performed
"diskutil resizeVolume <identifier> limits"
to determine what the limit on the partition was. The limits did NOT reflect the newly created space.
    My suspicion is that the original partition map, since it was created as 100% of the volume, does not allow room for growth despite the fact that the disk suddenly (and, to the system, unexpectedly) became larger.
    Is this assumption correct? Is there any way around this? I would like to be able to expand my LUNs/targets (since the SAN can grow with the business), but this has no value if I cannot also extend the partition table to use the new space.
    If anyone has any insight, I would greatly appreciate it. Thank you!

I have exactly the same problem you describe above. My iSCSI LUN was near capacity, so I extended it from 100G to 150G. No problem so far.
Disk Utility shows the iSCSI device as 150G, but I cannot extend the volume to the new size. It gives me the same error (in Dutch).
Please, someone help us out!

  • Unable to use device as an iSCSI target

My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block devices back the systems and existing data on a large RAID partition is exported as well. I'm able to successfully export the block files created with dd, added as backing-store entries in the targets.conf file:
    include /etc/tgt/temp/*.conf
    default-driver iscsi
    <target iqn.2012-09.net.domain:vm.fsrv>
    backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>
    <target iqn.2012-09.net.domain:vm.wsrv>
    backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>
    <target iqn.2012-09.net.domain:lan.storage>
    backing-store /dev/md0
    </target>
    but the last one with /dev/md0 only creates the controller and not the disk.
The RAID device is mounted; I don't know whether or not that matters. Unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0 as well as another device (sda), with and without the partition number; all had the same result.
If anyone has been successful exporting a device (specifically a multi-disk device) I'd be really interested in knowing how. Also, if anyone knows how, or whether it's even possible, to use a directory as the backing/direct store, I'd like to know that as well; my attempts there have been unsuccessful too.
I will preempt anyone asking why I'm not using some other technology (e.g. NFS, CIFS, ZFS) by saying that this is largely academic. I want to compare the performance of a virtualized file server that receives its content served over both NFS and iSCSI, and the NFS part is easy.
    Thanks.
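One thing worth ruling out (my assumption, not something the post confirms): tgtd generally cannot attach a backing store that is already open elsewhere, and a mounted /dev/md0 qualifies. A quick shell check before exporting (device path taken from the post; Linux /proc/mounts assumed):

```shell
# Check whether a block device is currently mounted before exporting it via tgtd.
# An in-use device is a plausible reason the target controller appears
# but the backing-store LUN does not.
dev="/dev/md0"
if grep -qs "^$dev " /proc/mounts; then
    state="mounted"
else
    state="not mounted"
fi
echo "$dev is $state"
```

If it reports mounted, unmount it (or export it via a different mechanism) before adding it as a backing-store.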


  • ISCSI disk not available for storage

I am trying to create a lab to demonstrate a simple clustering environment. Best practices are not an issue here.
I have a lone Domain Controller that is also running Hyper-V. I am hosting 2 VMs, which I call Cluster1 and Cluster2. The VMs share the NIC with the DC and are members of the domain.
After turning on the iSCSI initiator service on both VMs, I created an iSCSI target on the DC. When I created the target, the DC saw both of the iSCSI initiators on the VMs and created the target without incident. The target now points to a .vhdx file on the DC. The disk initialized without incident on both VMs.
I was able to add the disk and create a CSV without incident in Failover Cluster Manager. It shows up as online and healthy. When I look at the iSCSI target on the DC, it shows up as healthy and connected. I ran validation against the cluster, and the storage comes up perfect, no warnings.
When I try to add the file server role (File Server for general use) to the clustered VMs, no storage is available to add to the role.
The only indication that anything is wrong is that on Cluster2 the disk is shown as offline. If I try to bring it online, I get the following error:
"The specified disk or volume is managed by the Microsoft Failover Clustering component. The disk must be in cluster maintenance mode and the cluster resource status must be online to perform this operation"
I have tried everything mentioned in the error message, to no avail. In Cluster Manager, it shows that Cluster1 is the owner of the resource.
Why isn't this working? I know a .vhdx file has to be marked as shared if two machines are going to access it, but I don't see how I can do that; or perhaps it's already shared?
More to the point, where could I place a .vhdx file on the DC so that I could share it between the two VMs?
I have 2 more servers at my disposal, and I could hypothetically go out and buy a cheap NAS that supports iSCSI, but that's missing the point. I want to run it all off one box so it's portable and easy to demonstrate.
Thank you in advance.

Hi,
Thanks for posting.
Did you format your iSCSI disk? If you didn't, it needs to be formatted before use with a File Server cluster.
For more information, please have a look:
https://technet.microsoft.com/en-us/library/cc731844%28v=ws.10%29.aspx
Nirmal Madhawa Thewarathanthri

  • Iscsi target rewriting sparse backing store

    Hi all,
I have a particular problem when trying to use a sparse file residing on ZFS as the backing store for an iSCSI target. For the sake of this post, let's say I have to use a sparse file instead of a whole ZFS filesystem as the iSCSI backing store.
However, as soon as the sparse file is used as the iSCSI target backing store, the Solaris OS (iscsitgt process) decides to rewrite the entire sparse file and make it non-sparse. Note this all happens without any iSCSI initiator (client) having accessed this iSCSI target.
My question is: why is the sparse file being rewritten at that time?
I can expect a write at iSCSI initiator connect time, but why at iSCSI target create time?
    Here are the steps:
    1. Create the sparse file, note the actual size,
    # dd if=/dev/zero of=sparse_file.dat bs=1024k count=1 seek=4096
    1+0 records in
    1+0 records out
    # du -sk .
    2
    # ll sparse_file.dat
    -rw-r--r--   1 root     root     4296015872 Feb  7 10:12 sparse_file.dat
    2. Create the iscsi target using that file as backing store:
    # iscsitadm create target --backing-store=$PWD/sparse_file.dat sparse
    3. Above command returns immediately, everything seems ok at this time
    4. But after couple of seconds, disk activity increases, and zpool iostat shows
    # zpool iostat 3
                   capacity     operations    bandwidth
    pool         used  avail   read  write   read  write
    mypool  5.04G   144G      0    298      0  35.5M
    mypool  5.20G   144G      0    347      0  38.0M
    and so on, until the write over previously sparse 4G is over:
    5. Note the real size now:
    # du -sk .
4193252 .
Note all of the above was happening with no iSCSI initiators connected to that node or target. The Solaris OS did it by itself, and I can see no reason why.
    I would like to have those files sparse, at least until I use them as iscsi targets, and I would prefer those files to grow as my initiators (clients) are filling them.
    If anyone can share some thoughts on this, I'd appreciate it
    Thanks,
    Robert
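The sparse-file behaviour from step 1 can be reproduced on any Linux box; the snippet below uses a scratch directory rather than the pool path from the post, and GNU stat is assumed:

```shell
# Reproduce step 1: create a file with a ~4 GB apparent size but only
# one real 1 MB block written, then compare apparent size vs allocation.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/sparse_file.dat" bs=1024k count=1 seek=4096 2>/dev/null
apparent=$(stat -c %s "$tmp/sparse_file.dat")                  # bytes the file claims
actual_kb=$(du -sk "$tmp/sparse_file.dat" | awk '{print $1}')  # KB really allocated
echo "apparent: $apparent bytes, allocated: ${actual_kb} KB"
rm -rf "$tmp"
```

The apparent size comes out as 4296015872 bytes (4097 MB, matching the `ll` output above), while the allocated size stays near 1 MB until something rewrites the file.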

    Problem solved.
The Solaris iSCSI target daemon configuration file has to be updated with:
<thin-provisioning>true</thin-provisioning>
so that iscsitgtd does not initialize the iSCSI target backing-store files. This is only valid for iSCSI targets that have files as their backing store.
After creating iSCSI targets with a file (sparse or not) as backing store, there is no I/O activity whatsoever, and that's what I wanted.
    FWIW, This is how the config file looks now.
    # more /etc/iscsi/target_config.xml
    <config version='1.0'>
    <thin-provisioning>true</thin-provisioning>
    </config>
    #

  • OVS servers restart when iSCSI target restarts

Long story short, I've been working on my new lab at home off and on. Tonight I was working to get to the point where I would have OVMM as a virtual machine, since the dedicated host I'm using as a temporary measure only has 2 GB of RAM vs. the recommended 8 GB.
    More to the point:  I was working on the storage portion of my setup and I got my iSCSI target discovered (FreeNAS 9.3) and I was able to build my cluster of OVSs.  I noticed earlier that my FreeNAS server had some pending updates and I figured it would be a good idea to perform these updates before I started using the iSCSI storage for active virtual machines.  Naturally, when the updates are done on my FreeNAS server, it reboots.
    What I don't understand is why both of my OVSs restarted at the same time after the FreeNAS server went down for a reboot.  Was there a hidden gotcha?  Did I overlook something in the documentation?
    A side effect of the unexpected reboot is the fact that my OVS(2) did not come back online with OVMM and I could not ping my management interface for that OVS.  I did several Google searches and tried some manual up/downs of the interfaces and tried one effort to manually configure bond0 to an IP address to try and get some sort of response.  Needless to say, none of those were good ideas and didn't help.  Therefore, I'm re-installing OVS on server 2 now.
    If anyone has any ideas as to how I could have fixed OVS(2) I'm more than willing to take notes in case this happens again.
    Thank you for any and all help,
    Brendan
    P.S. If this has any influence with my situation, I am running LACP bonds and all IP addresses are configured on the VLAN interfaces.  I have not had any issues configuring my switch and with the exception of OVS(2) not wanting to reply after the unexpected restart I had everything working.  An added gotcha is that the storage VLAN interface on OVS(2) continued to reply after the restart.

    Hi Brendan,
the situation you encountered is expected for a single-headed iSCSI target. If you present OVM with a LUN - regardless of whether it's iSCSI or FC - OVM creates an OCFS2 volume on it when you choose that LUN to be used as a storage repository. This is because OCFS2 is a cluster-aware file system and thus the only supported fs for a cluster setup. Now, although OCFS2 is really easy to set up - compared to other cluster file systems - it has some mechanics that will cause your nodes to fence (i.e. reboot) in case a node loses contact with the OCFS2 volume. Each node writes little keep-alive fragments onto the OCFS2 volume, and if you reboot your iSCSI target, this heartbeat ceases and OCFS2 will fence the node, hoping the problem is some minor glitch that may be gone after a reboot.
There are a couple of ways to make the target redundant with iSCSI, but their respective setups are not easy, and the hoops to jump through are probably too many for a lab setup. E.g. I am running two RAC nodes as iSCSI targets, which in turn have a high-redundancy disk setup behind them, consisting of 3 separate storage servers. So, in my setup I am able to survive:
- two storage node failures
- one RAC node failure (which provides the iSCSI target for my OVS)
Regarding your OVS2: as you have already re-installed it, there is now no way to tell what went wrong, as you'd need console access to poke around a bit while OVS2 is in the situation where it doesn't respond to pings on its management interface.
    Cheers,
    budy
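Not from the post above, but if fencing on a brief target reboot is the pain point, OCFS2's disk-heartbeat timeout is tunable via the o2cb configuration. A sketch of the relevant knobs in /etc/sysconfig/o2cb (values are illustrative only, not a recommendation; a longer threshold delays fencing but also delays detection of real failures, and the cluster stack must be restarted to apply):

```shell
# /etc/sysconfig/o2cb -- illustrative values, not a recommendation.
# A node fences roughly (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 seconds after
# losing its disk heartbeat; the default threshold is 31 (~60 s).
O2CB_ENABLED=true
O2CB_HEARTBEAT_THRESHOLD=61     # ~120 s of lost disk heartbeat before fencing
O2CB_IDLE_TIMEOUT_MS=30000      # network idle timeout between nodes
```

Whether the window is long enough to ride out a FreeNAS update reboot depends entirely on how long the target stays down.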

  • How to create iSCSI target using the entire drive?

This sounds silly, but after setting up the DL4100 in RAID 5, I could not assign the entire 11.89 TB to an iSCSI target... Only integer numbers seem to be valid input, and the choice of TB or GB offered no help because I could not enter 11890 GB either?! What am I missing?

    Thanks to all who have responded....
Here's my take: 4 TB x 4 in RAID 5 = 11.89 TB, but iSCSI volume creation will leave 0.89 TB unallocable to iSCSI, due to the GUI disabling non-integer values (i.e. no decimal points). That is 890 GB of storage that is supposedly used for firmware upgrades, logs, etc?!
Snooping around, I noticed that the validation is only performed in JavaScript, and a quick re-POST of the params to the unit can trigger modification/creation of non-integer iSCSI volume sizes. Please note that you can only grow volume sizes, not shrink them! Below is a walk-through of how to do this:
    Disclaimer: *WARNING: USE AT YOUR OWN RISK*
    1) Create the iSCSI volume as per normal but at the integer value less than what you intend (eg: if you wanted 1.5TB, create 1TB) then wait for the iSCSI volume to be created.
    2) Use a web browser with debugging turned on and capturing traffic. For my example I am using Firefox and hit F12 to fire up the debug tool. Pick "Network" and you will see Firefox start picking up all traffic to the DL4100.
    3) Go into the iSCSI volume and choose "Details" to modify it. Put the current size as the modified size and click "Apply". Look in the list of messages to locate a POST message "iscsi_mgr.cgi" that has a Request body with a "size" parameter and select that to be resent.
     4) In this copy of the same message, look in Request Body and the list of parameters being passed back to the unit. You should find one of the parameters called "size". This value is sent to the unit in GB... Change the value to a GB value that you desire (eg: 1TB would appear as "1000", so you can change it to "1500" for 1.5TB) and then re-POST this POST message back to the unit.
    5) Wait for the update to transact and verify that your iSCSI volume has indeed been resized to a "non-Integer" TB value.
That's it! I hope this helps others who have been trapped by this limitation. Please be mindful not to allocate all your drive space, since as Bill_S has mentioned, some space is required by the system for its own housekeeping operations.
    Good Luck!
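The size arithmetic in step 4 can be sketched as follows. The GB unit of the "size" field and the 1.5 TB example come from the walk-through above; awk is only used here for the decimal math:

```shell
# Convert a desired fractional-TB volume size into the integer GB value
# that goes into the re-POSTed "size" parameter (per the walk-through,
# the unit expects GB: 1 TB appears as "1000").
want_tb="1.5"
size_gb=$(awk -v t="$want_tb" 'BEGIN { printf "%d", t * 1000 }')
echo "size=$size_gb"   # -> size=1500
```

The resulting number is what you substitute into the captured POST body before resending it.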

  • How to sync the data between the two iSCSI target server

    Hi experts:
I have two HP DL380 G8 servers and plan to install the Server 2012 R2 iSCSI target as storage. I know the iSCSI storage can be set up as highly available too, but after some research I can't find out how to sync the data between the two iSCSI target servers. Can anybody help me?
    Thanks

    There are basically three ways to go:
1) Get compatible software. The Microsoft iSCSI target cannot do what you want out of the box, but the good news is that third-party software (there are even free versions with a set of limitations) can. See:
    StarWind Virtual SAN [VSAN]
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    DataCore SANxxx
    http://datacore.com/products/SANsymphony-V.aspx
    SteelEye DataKeeper
    http://us.sios.com/what-we-do/windows/
All of them do basically the same thing: mirror a set of LUs between Windows hosts to emulate a high-performance, fault-tolerant virtual SAN. All of them do this in active-active mode (all nodes handle I/O), and at least StarWind and DataCore have sophisticated distributed cache implementations (RAM and flash).
2) Get incompatible software (the MSFT iSCSI target) and run it in a generic Windows cluster. That requires CSV, so physical shared storage (FC or SAS; iSCSI obviously makes zero sense here, as you could feed THAT iSCSI target directly to your block storage consumers). This is doable and supported by MSFT, but has numerous drawbacks. First of all it's SLOW, because a) the MSFT target does no caching and does not even use the file system cache (at all; the VHDX files it uses as containers are opened and I/O-ed in "pass-thru" mode), b) it's active-passive only (one node handles I/O at a time, with the other doing nothing in standby mode), and c) the I/O route is long (iSCSI initiator -> MSFT iSCSI target -> clustered block back end). For reference see:
    Configuring iSCSI Storage for High Availability
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    MSFT iSCSI Target Cluster
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
3) Re-think what you're doing. Instead of the iSCSI target from MSFT, you can use newer technologies like SoFS (obviously faster, but it requires a set of dedicated servers) or just a shared VHDX if you have a fault-tolerant SAS or FC back end and want to spawn a guest VM cluster. See:
    Scale-Out File Servers
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
With the Windows Server 2012 R2 release, virtual FC and the clustered MSFT target are both effectively deprecated features, as shared VHDX is both faster and easier to set up and use if you have an FC or SAS block back end and need a guest VM cluster.
    Hope this helped a bit :)

  • Ix4-300D as iSCSI target cause alert disk is full

I am using an ix4-300d with RAID 5 as an iSCSI target.
Once I create the largest possible volume using all capacity, the ix4-300d sends a "disk is full" warning. Because we monitor this iSCSI target via e-mail alerts, I want to eliminate this warning message. There is no configuration point for it in the web management GUI.
I guess other people using this unit as an iSCSI target have the same concern. If you know something, please help me with this. Thanks in advance.
Alert message:
The amount of free space on your Lenovo ix4-300d is below 10% of your total capacity.

    Thank you for your reply.
I gave up on using the last 11% of disk space as an iSCSI target for VMware ESXi, because catching alert messages is more important in a production environment. I had hoped for an interface to disable certain types of alerts.
I am not sure about using the remaining 11% of space for NFS NAS or something similar, because the iSCSI target volume will contain VM guest OS volumes, which require very stable operation.

  • Nas Iscsi target

    Hi
A couple of questions regarding NAS boxes as iSCSI targets.
I currently have one NAS box connected to DPM (2012 R2) via iSCSI (two LUNs; one target each). This is now running out of space, and I have another NAS box that I want to use in the same way.
If I connect the new NAS as a new target from the DPM server, once I've initialised the disk will it just become part of the storage pool, with the backups automatically "growing" into it?
This NAS is 19 TB and I want to use half for DPM and half as general storage; will this work if I set up one LUN for the DPM server to target, and then another LUN to be targeted later by another 2012 server?
Finally, the NAS can do HDD hibernation when not in use. Will DPM wake it up when it wants to do a backup, or should I disable this?
Thanks
Richard

    Hi,
Q1) <snip>If I connect the new nas as a new target from the DPM server; once I've initialised the disk will it just become part of the storage pool and the backups automatically "grow" into it.</snip>
A1) After you initialize it in Windows Disk Management, convert it to GPT (if over 2 TB), then in the DPM console add that disk to the storage pool. DPM will then be able to use it for new protected data sources or grow DPM volumes into the new free space on that disk.
Q2) <snip>This is 19 TB and I want to use half for DPM and half as general storage; will this work if I set up one lun for the DPM server to target; and then another lun to be targeted later by another 2012 Server?</snip>
A2) Correct.
Q3) <snip>Finally; the nas can do HDD hibernation when not in use. Will DPM wake it up when it want's to do a backup; or should I disable this?</snip>
A3) I would disable that feature - it may affect VSS snapshots in a negative way.

  • ISCSI Target Scheduled Snapshots

    I know there are cmdlets to snapshot a target but is there a built in way to schedule snapshots?
    Alternatively, is there a cmdlet to list snapshots for a target so I can create a script to run on a schedule to remove old snapshots and create new ones.

Yes, you can use the Schedule Snapshot Wizard to take automatic snapshots. See:
Creating and Managing Snapshots and Schedules
http://technet.microsoft.com/en-us/library/gg232620(v=ws.10).aspx
Creating and Scheduling Snapshots of Virtual Disks
http://technet.microsoft.com/en-us/library/gg232610(v=ws.10).aspx
Alternatively, you can use cmdlets for these tasks (PowerShell gives you more control over the creation/deletion of snapshots). See:
iSCSI Target Cmdlets in Windows PowerShell
http://technet.microsoft.com/en-us/library/jj612803.aspx
The ones of interest to you are the Checkpoint-IscsiVirtualDisk and Remove-IscsiVirtualDiskSnapshot cmdlets.
You may also use SMI-S and WMI if you want to use, say, C# or C++ rather than PowerShell.
Some more cmdlet links (including samples). See:
PowerShell cmdlets for the Microsoft iSCSI Target 3.3 (included in Windows Storage Server 2008 R2)
http://blogs.technet.com/b/josebda/archive/2010/09/29/powershell-cmdlets-for-the-microsoft-iscsi-target-3-3-included-in-windows-storage-server-2008-r2.aspx
(that's for Windows Server 2008 R2 and up)
Managing iSCSI Target Server through Storage Cmdlets
http://blogs.technet.com/b/filecab/archive/2013/09/28/managing-iscsi-target-server-through-storage-cmdlets.aspx
(the most recent one, covering Windows Server 2012 R2)
Hope this helped a bit. Good luck :)

  • ISCSI Target Issues

I am running MS Server 2012 and am only using it as an iSCSI target server. I run VMware vSphere 5.5 as the initiator. Every night around 12:30 am, my server restarts many of its services, one of them being the iSCSI target service. No errors are produced, just informational logs about the services stopping and starting again. However, when they stop, my vSphere client and the VMs on the iSCSI target lose their connection. I just want them to run all the time; these are production systems.
When I come in the next day I just do a refresh in the iSCSI target GUI and everything reconnects. Then I have to go to the vSphere client and answer the guest question about why they lost contact with the targets, and the VMs all restart without a problem. Does anyone know about this problem and how to resolve it?

No, but I think 2012 has a built-in virus checker; I'll check that. Nope. But this is the info log published:
At 12:31 I got "The Microsoft iSCSI Software Target service entered the stopped state." Then when I came in this morning and hit refresh at 6:01 I got "The Microsoft iSCSI Software Target service entered the running state." Both were the event ID below. Sometimes it goes back to the running state automatically, but sometimes I have to do it manually; either way, when it goes into the stopped state my vSphere client makes me answer the "guest" question to bring it back live on the VMware side. Checked antivirus, no problems there. Again, these are not errors; they are calling these basic service operations, as shown below. How can shutting down production systems be a normal operation? There must be some way around that, or who would want to use this? The nice thing is how easy it is to set up a target, but this is a problem. People use our systems around the clock, so... any other thoughts?
    Event ID 7036 — Basic Service Operations
    Updated: December 11, 2007
    Applies To: Windows Server 2008
    Service Control Manager transmits control requests to running services and driver services. It also maintains status information about those services, and reports configuration changes and state changes.
    Event Details
    Product: Windows Operating System
    ID: 7036
    Source: Service Control Manager
    Version: 6.0
    Symbolic Name: EVENT_SERVICE_STATUS_SUCCESS
    Message: The %1 service entered the %2 state.
    Resolve
    This is a normal condition. No further action is required.
    Related Management Information
    Basic Service Operations
    Core Operating System

  • Baremetal recovery from iSCSI target

I am experimenting with iSCSI targets (OpenFiler) for network backup of Windows 2008 Server and Windows 7. Backups are working fine, but now that I've gotten to restore testing, I realize I cannot see the iSCSI storage for a bare-metal restore. Of course, the volumes on the iSCSI target are available for restore from a working system.
Is it possible to access the iSCSI volumes during a bare-metal disaster recovery?

I just went through this. It was rather simple using Windows 2012 as the target (hosting the iSCSI disks) and Windows 2008 R2 as the initiator (using the iSCSI disk).
Since your initiator server is dead and can't access the backup volume through WinPE or WinRE, disable the disk on the target server.
Go to the directory that has the VHD and right-click to mount the volume.
Find the drive letter it mounted as and click Share. Adjust permissions accordingly.
Follow Fred's steps:
3) Boot your new hardware from the 2K8 install disk.
4) At the Install Windows screen, select your regional settings and Next.
5) Select Repair your computer.
6) System Recovery Options should be blank for BMR; click Next.
7) Select Windows Complete PC Restore.
8) At the alert window, select Cancel.
9) Select Restore a different backup and Next.
10) Click Advanced and Search for a backup on the network.
11) Select Yes to connect to the network.
12) Enter the UNC path to the computer and share above and OK.
13) Enter your credentials and OK.
14) Select the backup and Next.
15) Select the backup time to restore to and Next.
16) Next.
17) Finish (note the drive you restore to will need to be at least the same size as the drive the computer was imaged from).
18) Confirm that you want to format the new drive and OK.
    Then as Corey said, don't reboot the restored machine.
    On the target server open explorer, right click on the drive letter for the VHD and click eject.
    Go Back and reboot the server.
I think this would work using 2008 R2 / 2008 R2; however, I didn't test that.
    The input box is messed up so I can't see half of what I'm typing.

Can I transfer data from my G4 to my iMac over FireWire without them syncing or messing either one up - do not want to modify either one...!


Boot the G4 Mac into Target Disk Mode...
http://support.apple.com/kb/HT1661
That should make the G4 look like a big FireWire drive to the iMac.
