ISCSI Volume Resizing

I have resized the volume, but both the current production server and a test server show the volume at its old size under Device Manager. Why do 2008 and 2008 R2 servers show the old size, and why can't I expand it?
The device is a PX12-400r with the most recent firmware.

I agree with storageman that those statements aren't what someone should hear from support.
I should be able to get these questions answered by support:
1. Why is the added space not showing up in multiple 2008/2008 R2 servers' Disk Management as unallocated space (a drive that could be expanded)?
2. Is there a way I can get around this issue (a workaround)?
3. If we both followed the same procedure, then why am I getting a different result?
4. Why aren't these steps documented in the User Manual?
5. Are there limitations to the expand-iSCSI-drive process? (For example: if nearly all the space was previously allocated, a final drive was created with the remaining space, and the previous drive(s) were then removed to free up space, is the drive unable to be expanded properly?)
6. Has the software on the device been vetted enough to determine the answers to the above?
This is a question for moving forward:
How am I supposed to migrate all the data to another iSCSI drive (which would be the recreated LUN currently in use) when I do not have enough space on the current system to give it 8TB?
If I had known I would be in this predicament, I would have just created another 8TB iSCSI drive and migrated to it, but by taking the extra 2TB into the current drive I cannot do that. And I cannot take the space away now that it has been given. To me this seems like a poor design for the Enterprise level, as well as a poor answer.
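For what it's worth, on the Windows side a bus rescan is usually required before Disk Management will even see a grown LUN as unallocated space. A minimal sketch, with the caveat that the disk and volume numbers below are placeholders you must confirm with `list disk` / `list volume` first:

```shell
# Write a diskpart script that rescans the bus and then extends the volume.
# Disk 2 / volume 5 are assumptions -- substitute your own numbers.
cat > extend_lun.txt <<'EOF'
rescan
select disk 2
select volume 5
extend
EOF
# Then, from an elevated prompt on the 2008/2008 R2 initiator:
# diskpart /s extend_lun.txt
```

If the extra space still does not appear after `rescan`, logging the initiator off and back on to the target forces Windows to re-read the LUN capacity.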
Jason

Similar Messages

  • Has anyone used ONLY an iSCSI volume with XSAN?

    I'm trying to get an iSCSI volume available as a LUN for XSAN.  It seems that without FC hardware attached to a machine, I can't even start the service...  What is your experience?

    Since you don't have fibre channel hardware, you won't be able to enable Xsan in System Preferences. You will need to load the Xsan launch daemon manually.
    launchctl load -w /System/Library/LaunchDaemons/com.apple.xsan.plist
    You should then be able to get a volume set up in Xsan Admin. I would only do this for testing though. Apple only supports fibre channel storage with Xsan.

  • Drive does not appear in ISCSi volumes

    Hi,
I have configured a Windows Server 2012 R2 OS with 2 disks. The server is a VM. Both disks are SCSI. However, the data disk does not show in the iSCSI setup.
    Screenshot of the part in question is here:
    https://weikingteh.files.wordpress.com/2012/12/image13.png
    Any help appreciated

Sorry for the delay. However, I still cannot confirm the exact status.
I assume drive S is not the data drive. Please open Disk Management to see if your data disk is listed with a drive letter. Also test whether you can access it correctly.
Also, though you mentioned these 2 disks are SCSI disks, are they physical disks on the (physical) host computer, or are they 2 VHD files created on a hard disk and connected in Hyper-V?

  • ISCSI volume lost, have the .img file but no access to data. Help please, WD support worthless!

Basically, I have an iSCSI target which I cannot access after re-installing Windows, because the CHAP "secret" I thought I knew was wrong. Although I was told by WD there was no way to access the .img file, I was able to pull the image over an SSH connection to the EX2 from a hidden folder. So now I have that file stored locally. Of course, it would be too easy to just be able to mount that file, and I understand why that is. However, through research, I am seeing that it may in fact be possible somehow. Can anyone please help me with the resources that I have? I have a lot of info that I really do not want to lose, and I don't have the funds to pay a data recovery company thousands of dollars as WD support suggested. Is the .img file encrypted, or is it just a filesystem that Windows won't recognize? Please help; I would really appreciate any info. Thank you.

If the iSCSI target is protected with a username and password, it would not do any good even with a Linux tool to open it; as WD tech support suggested, finding a recovery company is your best bet. If you still have access to your NAS, you should be able to change the iSCSI target's CHAP setting to NONE. Then, from Windows, try to access your iSCSI target again without a password. bda714 wrote:
After being told multiple times by WD support that there was nothing I could possibly do to access my LUN, I have gotten to a point that I believe is close to where I am trying to be. Along the way, much of what WD's support techs presented as rock-solid information was proven wrong. At this point I feel I am better off on my own, hopefully with some help from the community.
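If the image really is just an unencrypted filesystem, a Linux loop device can often mount it read-only. A sketch, assuming the file is named `lun.img` and holds either a bare NTFS filesystem or a single partition; written out as a script to review before running it as root:

```shell
# Sketch: loop-mount a recovered iSCSI .img read-only on Linux.
cat > mount_img.sh <<'EOF'
#!/bin/sh
# Attach read-only; --partscan exposes any partitions as /dev/loopNp1 etc.
dev=$(losetup --find --show --read-only --partscan lun.img)
mkdir -p /mnt/recover
# Try the first partition, falling back to the whole device.
mount -o ro "${dev}p1" /mnt/recover || mount -o ro "$dev" /mnt/recover
EOF
```

If the mount refuses, `file -s` on the loop device hints at whether it is a recognizable filesystem or opaque (possibly encrypted) data.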

  • /var on iscsi volume - journalctl empty

    Hi All,
    I have a following setup - NAS exporting two partitions mounted on my Archlinux box one mounted on /home another on /var. Here is fstab piece:
    UUID=bdbc3fe4-7205-496f-9ef3-95c9b5ed97f8 /home ext4 defaults,nofail,_netdev           0      0
    UUID=ffde5a51-b160-4657-9d67-ce170542b5a8 /var ext4 defaults,nofail,_netdev           0      0
The problem is that some services (the journal, for example) are still using the /var located on the root partition, so if I run "journalctl -b" after everything is booted, I get empty output:
    # journalctl -b
    -- Logs begin at Thu, 1970-01-01 03:00:03 MSK, end at Thu, 2013-01-03 02:21:22 MSK. --
logger and syslog, though, seem to be doing it the right way: I can see new messages coming into /var/log/everything.log
    Any idea if I should restart some service for journal to be in right place?
    Thanks

Ok, so looking into this more: it seems the only process using the "new" /var is named, which is started after the network is set up. That makes a lot of sense, since the network needs to come up first, then open-iscsi kicks in, and only then is the new /var mounted. So now the question is: what is the right way to do what I want, i.e. have an iSCSI-mounted /var?
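On a systemd-based setup, one way to express that ordering explicitly is in fstab itself. This is a sketch only: `x-systemd.requires` needs a reasonably recent systemd, and the service name assumes open-iscsi's `iscsid.service`:

```
# Order the /var mount after the iSCSI initiator; nofail avoids hanging boot.
UUID=ffde5a51-b160-4657-9d67-ce170542b5a8  /var  ext4  defaults,nofail,_netdev,x-systemd.requires=iscsid.service  0  0
```

With the mount ordered correctly, restarting systemd-journald (or enabling persistent journal storage in journald.conf) should land new journal entries on the iSCSI-backed /var.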

  • ZFS tries to mount SAN volume before ISCSI is running

    I am running Solaris 10 x86 U7. It is actually a VMWare guest (ESX4) on a Sun X4170 server- although I do not believe that that is relevant. I have a Sun 2510 iSCSI SAN appliance. I have an iSCSI volume with a ZFS Pool that is mounted on the server. All was fine until yesterday when I installed the following patches:
    142934-02 SunOS 5.10_x86: failsafe patch
    142910-17 SunOS 5.10_x86: kernel patch
    144489-02 SunOS 5.10_x86: kernel patch
    142912-01 (as a dependency requirement for one of the others.)
    I had installed the patches in run level 1 , then switched to run level S to allow the patch install to finish.
Now, when I restart, the ZFS volume on the SAN is marked as offline. /var/adm/messages shows the following:
Nov 7 00:26:30 hostname iscsi: [ID 114404 kern.notice] NOTICE: iscsi discovery failure - SendTargets (ip.ad.dr.ess)
    I can mount the SAN ZFS pool with
# zpool clear ZFSPOOL1
# zfs mount -a
    For iscsi device discovery, I am using send targets (not static or iSNS.) I am not using CHAP authentication.
It seems to me this may merely be a timing issue between services and not fundamentally an iSCSI issue. Can I tell the OS to wait for a minute after starting the iSCSI service before continuing with ZFS mounts and autofs shares? Can I tell the OS to delay mounting non-OS ZFS pools?
    Thanks
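Rather than a fixed delay, one way to encode that wait is an SMF dependency, so that local filesystem mounting requires the iSCSI initiator service to be up first. A sketch, written out as a script for review; the FMRIs are assumptions for Solaris 10, so verify them with `svcs -a | grep -i iscsi` before running as root:

```shell
# Sketch: make svc:/system/filesystem/local depend on the iSCSI initiator.
cat > add_iscsi_dep.sh <<'EOF'
#!/bin/sh
FS=svc:/system/filesystem/local
svccfg -s $FS addpg iscsi dependency
svccfg -s $FS setprop iscsi/entities = fmri: svc:/network/iscsi_initiator:default
svccfg -s $FS setprop iscsi/grouping = astring: require_all
svccfg -s $FS setprop iscsi/restart_on = astring: none
svccfg -s $FS setprop iscsi/type = astring: service
svcadm refresh $FS
EOF
```

This only orders the services; an already-faulted pool may still need the one-time `zpool clear` shown above after the dependency is in place.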

Here is what I tried. I installed BatchMod and Xupport on each of the internal system disk, the backup internal system disk, and the external system disk. BatchMod could not find the folders automount or Network.
    Booting from external disk, I made hidden files visible using Xupport, then deleted automount > Servers, automount > Static on internal disk and backup disk. The folder Network had no files or folder named "Server". Booting from internal disk, the desktop tried to mount server volumes. Examining the internal disk automount folder showed aliases for "Servers" and "static". Get Info said they pointed to originals "Servers" and "static" in folder /automount but these items do not appear in the Finder.
    Sometimes icons, not aliases, for "Network", "Servers", and "static" appear on all three desktops on login. Trying to eject these icons by dragging to Trash or highlighting and clicking File > Eject has no effect. Examining Users > Username > Desktop does not show these items. Sometimes ".DS_Store" appears on desktop and in folder Users > Username > Desktop.
    Next I deleted user accounts so that all system disks are single user. Booted up on External disk and deleted automount > Servers, automount > Static on internal disk and internal backup disk or their aliases, whichever appeared in Finder. Booting up on internal disk results in... desktop trying to mount server volumes.
    Will try an archive and install on internal disk.

  • ISCSI issue - can't detect the volume

    Hi,
    I configured iSCSI LUN on the NSS324 and put one of the NAS interface in to the iSCSI VLAN that has jumbo packet size support. Attached are my config screenshot.
    From the NAS, I can ping my vmware server's iSCSI interface at 9000 byte packet size so the underline network is working.
    [~] # ping -s 9000 172.24.200.11
    PING 172.24.200.11 (172.24.200.11): 9000 data bytes
    9008 bytes from 172.24.200.11: icmp_seq=0 ttl=64 time=1.5 ms
I tried auto discovery of the iSCSI volume, and manual discovery too, but neither seems to detect my iSCSI LUN.
    Any suggestions on troubleshooting efforts?
    Thanks in advance,
    Sam
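One thing worth checking in the ping test above: with a 9000-byte MTU, the largest unfragmented ICMP payload is 9000 minus the 20-byte IP header and the 8-byte ICMP header. A `ping -s 9000` can succeed even over a standard-MTU path because it silently fragments. A sketch of the stricter test (Linux-style flags; `-M do` sets don't-fragment):

```shell
# Max ICMP payload for a 9000-byte MTU: subtract IP (20) and ICMP (8) headers.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"          # 8972
# The unfragmented jumbo test, from the NAS or the ESX host:
# ping -M do -s "$payload" 172.24.200.11
```

If the don't-fragment ping fails while the plain one works, some hop (switch port, vSwitch, or NIC) is not actually passing jumbo frames.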

    Sam,
Please make sure you're running the latest firmware, which adds functionality to the iSCSI implementation.
Also, when creating the LUN you might have to give the NSS a little time to allocate the space you just created. Depending on how much space you allocate, I have had the NSS take about 10 minutes before VMserver was able to connect to my target. If you still are not able to connect, it could be the iSCSI initiator. If you are using CHAP authentication, you might want to create an iSCSI target without security for testing. If you're able to connect without CHAP, then you know where the problem is.
Now you want to make sure the iSCSI is connected (Raid Management --> ISCSI --> see if the initiator is connected to the target).
Once all this is verified and you still can't connect via iSCSI, you might give us a call so we can take a look.
    Please call 1-866-606-1866 and open a case with Small Business Support Center for help.
    Thanks,
    Jason Bryant,
    Cisco Support Engineer

  • Sun 2510 ISCSI mapping volumes to hosts

    I am setting up a Sun 2510 ISCSI SAN system. I have created two logical volumes. I am currently testing this with a Solaris 10 client and Windows 2008 Client. CHAP authentication to the target is required. I configured mutual authentication for the Windows server but not for the Solaris server.
By default, volumes are mapped to the default storage domain. In this case, both clients can see the disks. But I would like to reduce the chance that I accidentally use one client to format a disk intended for another client. I can create a host entry on the 2510 with the CAM software. The name can't include any "." characters, so I figure it should be the hostname of the client, not the fully qualified domain name, iSCSI address, or IP address. When I map the volume to that client name, the client can no longer see the iSCSI volume.
    As far as I know, this should not require client mutual authentication or even any CHAP authentication. Does the client send its host name to the SAN server or does the SAN server try to resolve it by some means?
    Help is appreciated.
    Thanks

Yes, the data are redistributed. Although this operation can be done online, if you have running IO the process can take a large number of hours depending on the ongoing IO workload.
    Regards

  • ASM Volumes on thin-provisioned ISCSI dirtying whole volumes

    Hi, we've got some EBS test instances going and are testing auto-extend (in reasonable chunks) on a thin-provisioned ISCSI volume for the 11g database tier. Something doesn't seem right though: typically I see about 55GB in use by datafiles but about 95% of the thin-provisioned volume is marked dirty.
    Obviously, this sort of negates the point of using thin provisioning at all, but I can't help but think there's something else at work here. Does anyone have experience with this situation and if so what parameters can be set to make what we're trying to do actually work right?

    Billy, thank you for the helpful reply. In this case, none of that is news to me - we wear a lot of hats where I work and in this case I not only put together the systems the ASM instance is running on, but the ASM instance itself (DB is at 11.1.0.7 with the 6851110 ASM patch recently applied). System is OEL 5 x86_64, the ISCSI volume is thin-provisioned, no filesystem ever written to it, raw disk other than a partition table, holds DATA and FRA, configured with autoextend on.
    What I'm trying to get at here is why a 500GB volume which is holding only about 55GB of data would have almost all its blocks marked dirty - as far as the filer is concerned, something has touched nearly every block on the volume at one point or another. Now if I'd written a 500GB datafile to it - which as you say is pointless where ASM is concerned - I'd understand how the blocks got dirtied, but in this case that never happened - it's somewhere in the way ASM behaves (as far as I can see now anyway) that has caused the data to be written all over the physical blocks on the drive, even if only momentarily.
So getting back to my original question: is there a way in ASM to ensure that writes are done in a contiguous fashion so that the apparent disk usage (from the SAN point of view) more closely matches the actual amount of data stored? I'm not seeking a 1:1 relationship here, but we're close to 1:10, and I think that's only because that's how big I sized the volume in the first place. As far as it being iSCSI, the only relevance (and the whole point of my question) is that it's how I happened to attach the volume, which was intended to be thin-provisioned and only allocate space when needed. If I'd made the volume 250GB (and I can test this theory if need be), it would likely dirty 250GB of blocks and still store the same 55GB of data. My hope is that there's a way to get ASM to play nicer on thin-provisioned volumes than what I'm currently seeing.
    Thanks again.

  • Hyper-V cluster Backup causes virtual machine reboots for common Cluster Shared Volumes members.

    I am having a problem where my VMs are rebooting while other VMs that share the same CSV are being backed up. I have provided all the information that I have gather to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
The following hotfixes were additionally installed per MS guidance (either during the original build or when troubleshooting a stability issue in Jan 2013):
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in different project and HST6 will eventually become HST4
    Validation of cluster comes with warning for following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
       HST3 was not rebuilt from scratch due to time constraints and it actually goes back to Pre-SP1 and has a larger list of updates that others are lacking and hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
     Under Storage and Cluster Shared VOlumes in Failover Cluster Manager, all disks show online and there is no negative effect of the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
VMs in each CSV are spread out amongst all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When backup kicks off for a VM, all VMs on same CSV reboot without warning. This normally happens within seconds of the backup starting
What I have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup scheduled so that of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem. It only made it more clear. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of their VMs reboot and come back up.
 I then thought maybe I was still overloading the network, so I decided to disable all of the scheduled backups
 and run them manually. Kicking off a backup on a single VM will, in most cases, cause the reboot of common
 CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a Demo of Veeam and installed it onto my cluster.
  Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on same CSV rebooted
 Ok, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best troubleshooting
 site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html)
 I have tested almost every process on their list and I will lay out the results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the build in MS VSS writer listed (I am using software VSS)
  4. When I run the VSSADMIN Resize ShadowStorage command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out here and most of the stuff "errors"
       Only a few of the DLL's actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
  11. This is part of step 5, so I already did this. It seems to be a rehash of various other strategies
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in event viewer. Below are representative errors I have received from various Nodes of my cluster:
I have several of the below spread out over all hosts except for HST6:
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error.
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate anything assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
Name                               Role  Metric
Cluster Network 1                     3   10000
Cluster Network 2 - HeartBeat         1    1300
Cluster Network 3 - iSCSI             0   10100
Cluster Network 4 - LiveMigration     1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
Should I modify my metrics further, or are the current values sufficient?
I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs, and I had VMs jumping hosts on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV, so I would assume it would use (based on my metrics above) Cluster Network 4 and then Cluster Network 2 for CSV communications, and would fail back to Cluster Network 1 if both 2 and 4 were down/inaccessible.
As to the iSCSI getting a second NIC, I would love to, but management wants separation of our VMs by subnet and role, hence the 4 VSwitch NICs. I would have to look at adding an additional quad-port NIC to my servers, and I would have to use half-height cards for 2 of my 5 servers for that to work.
But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and see nothing on the network card (the reboots apparently happen before any real data has even started to pass), and still the problem occurs.
As to BackupChain, I have been working with the vendor, and they are telling me the issue is with VSS. If you go to this page (http://backupchain.com/Hyper-V-Backup-Software.html) they say they support CSVs. Their tech support has been very helpful but, unfortunately, nothing has fixed the problem.
What is annoying is that not every backup causes the problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots. But most every other backup job will trigger the VMs on the common CSV to reboot.
I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence I brought it up. I understand on the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator when the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer

  • How to create iSCSI target using the entire drive?

This sounds silly, but after setting up the DL4100 in RAID 5, I could not assign the entire 11.89TB to an iSCSI target... Only integer numbers seem to be valid input, and the choice of TB or GB offered no help because I could not enter 11890 GB either?! What am I missing?

    Thanks to all who have responded....
Here's my take: 4TB x4 in RAID 5 = 11.89TB, but iSCSI creation will leave 0.89TB unallocable to iSCSI because the GUI disallows non-integer values (i.e., no decimal points). That is 890GB of storage that is supposedly used for firmware upgrades, logs, etc?!
Snooping around, I noticed that the validation is only performed via JavaScript, and a quick re-POST of the params to the unit can trigger modification/creation of non-integer iSCSI volume sizes. Please note that you can only grow volume sizes, not shrink them! Below is a walk-through of how to do this:
    Disclaimer: *WARNING: USE AT YOUR OWN RISK*
    1) Create the iSCSI volume as per normal but at the integer value less than what you intend (eg: if you wanted 1.5TB, create 1TB) then wait for the iSCSI volume to be created.
    2) Use a web browser with debugging turned on and capturing traffic. For my example I am using Firefox and hit F12 to fire up the debug tool. Pick "Network" and you will see Firefox start picking up all traffic to the DL4100.
    3) Go into the iSCSI volume and choose "Details" to modify it. Put the current size as the modified size and click "Apply". Look in the list of messages to locate a POST message "iscsi_mgr.cgi" that has a Request body with a "size" parameter and select that to be resent.
     4) In this copy of the same message, look in Request Body and the list of parameters being passed back to the unit. You should find one of the parameters called "size". This value is sent to the unit in GB... Change the value to a GB value that you desire (eg: 1TB would appear as "1000", so you can change it to "1500" for 1.5TB) and then re-POST this POST message back to the unit.
    5) Wait for the update to transact and verify that your iSCSI volume has indeed been resized to a "non-Integer" TB value.
That's it! I hope this helps others who have been trapped by this limitation. Please be mindful not to allocate all your drive space since, as Bill_S has mentioned, some space is required by the system for its own housekeeping operations.
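The manual re-POST in steps 3-4 could also be scripted once you have the captured request. A heavily hedged sketch: the endpoint name `iscsi_mgr.cgi` and the `size` parameter come from the walkthrough above, but the URL path, session cookie, and the other form fields are placeholders you must copy from your own captured request:

```shell
# Size is sent to the unit in GB: 1.5 TB -> 1500.
size_gb=$(awk 'BEGIN { printf "%d", 1.5 * 1000 }')
echo "$size_gb"    # 1500
# Replay the captured POST with only the size changed (fields are placeholders):
# curl -sk -b "<your-session-cookie>" \
#      -d "<captured-fields>&size=${size_gb}" \
#      "https://<nas-address>/<captured-path>/iscsi_mgr.cgi"
```

Same disclaimer as above applies: this bypasses the GUI's validation entirely, so double-check the value before sending it.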
    Good Luck!

  • Recover a VHD file from a formatted volume

    Apologies if this is not the right forum.
    So I was preparing a new server to host virtual machines. This is a Dell PE r710, Dual quad-core Xeon processors, 48 GB RAM, 2x 72 GB SAS drives. The server has Hyper-V Server 2008 R2 installed. iSCSI volumes hosted in a Dell EqualLogic SAN will host the
    actual virtual machines. We have 5 of these with similar setups, 3 of which are setup in a cluster. We have 2 set aside for additional projects. These will not be clustered as they will be hosting Lync virtual machines. Anyway we have 1 server setup and working
with 2 VMs. The 2nd server will initially host 2 additional VMs. My goal was to format the iSCSI volume on the 2nd server and prepare it to host VMs. Except I accidentally logged into the 1st server (it's one of those days; I was interrupted as I was typing the server name into Remote Desktop and it auto-populated the first server instead of the 2nd) and began formatting the volume. Needless to say, Lync stopped working and the VMs crashed. I immediately recognized my mistake and stopped the format, but obviously it was too late.
    I was hoping I could recover the data on that volume. We have a trial of this software called GetDataBack NTFS edition. It can recognize the volume ok and it can scan it and see files. Unfortunately it does not see actual VHD files. Instead what
    happens is that it sees the VHD files as separate volumes/partitions/etc and recovers the data within the VHD files. Not sure if it should do that, waiting on a response from Runtime. Kind of cool in a way if we ever needed to recover data within a VHD file.
    Anyway, does anyone know of software that can recover data from a volume but not scan the contents of the vhd file and include that data in the recovery? By this I mean I want to recover the actual VHD files, not the data within. Perhaps Virtual
    Machine Manager can do something? We do have it installed and manage our VM's with it. It would save us the trouble of re-deploying Lync. Apparently there are a lot of ADSI edits needed to undo the changes Lync made to AD. Even migrating users back requires some
    ADSI edits. Trying to find those online someplace just in case.
    We did not have backups yet of these 2 VMs as we just recently got them set up and have moved a few users over (some of us from IT). We still had some configuration/tweaking/adjustments to make before migrating the remainder of our OCS users. I know, I know,
    backup backup backup. The SAN we have can even do snapshots of iSCSI volumes, but we didn't even think of enabling it yet.
    Regardless of what happens with the VM's, I enabled Windows Server Backup and will immediately set up nightly backups once the VM's are operational again. I will also have our SAN admin enable snapshots, and just for laughs I will also enable snapshots on
    the individual VM's. We're also currently testing out Data Protection Manager. Kind of wish we already had it implemented....sigh...
    Thanks,
    Banging my head against a board with a nail in it...
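    For what it's worth, carving the raw volume for VHD container boundaries (rather than the filesystems inside them) is conceptually simple: every VHD ends with a 512-byte footer whose first 8 bytes are the cookie "conectix", and dynamic VHDs keep a copy of that footer at offset 0 as well. A rough sketch of such a scanner, assuming you have a raw image of the volume (this is an illustration of the idea, not any specific recovery product's behavior):

    ```python
    # Scan a raw disk/volume image for VHD footer cookies ("conectix").
    # Fixed VHDs carry the footer only at the end of the file; dynamic VHDs
    # also keep a copy at the start, so each hit marks a candidate VHD boundary.

    COOKIE = b"conectix"

    def find_vhd_cookies(path, chunk_size=4 * 1024 * 1024):
        """Return the byte offsets of every 'conectix' cookie in the image."""
        offsets = []
        overlap = len(COOKIE) - 1  # carry a tail so matches spanning chunks are found
        pos = 0
        prev_tail = b""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                data = prev_tail + chunk
                start = 0
                while True:
                    i = data.find(COOKIE, start)
                    if i < 0:
                        break
                    # data[0] sits at file offset pos - len(prev_tail)
                    offsets.append(pos - len(prev_tail) + i)
                    start = i + 1
                prev_tail = data[-overlap:]
                pos += len(chunk)
        return offsets
    ```

    Pairing up the offsets found this way (start-of-dynamic-VHD copy vs. end-of-file footer) would let you carve candidate VHD files out of the image without ever interpreting the NTFS structures inside them.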

    Hi,
    There are lots of data recovery tools you can use to recover the data. You can also try calling a professional data recovery company to check whether
    they can recover the data for you. Thanks for your understanding.
    Best Regards,
    Vincent Hu

  • Is it possible to resize a mac partition from windows?

    Here's what's going on. I originally wanted to use Disk Utility to do this, which is clearly the easy way to do things, but my computer is so slow that Disk Utility immediately becomes unresponsive upon wanting to resize -any- partition. The windows side works fine though, it isn't slow as crap. Is there a way to resize the mac partition from windows (HFS+)? Any possible way? Even crazy ones? I'm open to suggestions.
    Oh yeah, and in case you want to know...my Mac Mini just randomly started getting really, really slow, so slow in fact, that now my old iMac from 1999 is faster (it doesn't even have ANY RAM!). I tried resetting the PRAM/NVRAM, multiple times, booting into safe mode takes forever, but I left it for a while and it finished. Almost worked, the Finder showed up quicker than usual, but then it reverted back to its slow self again once I opened Disk Utility. I tried booting into single user mode and typing /sbin/fsck -fy, and it's so slow that it couldn't even do that! It got stuck at checking libraries or something, for hours, and I could still type things in, but it wouldn't recognize any commands (i.e. fsck -fy). I also cannot replace the RAM because it is an older Mac Mini and those things are hard as f*** to repair, trust me, I've tried. The weirdest part of all of this has to be that this is the computer that we DON'T download many apps onto, so it most likely isn't bloatware, adware, a virus or something else. We download most stuff to our MacBook, and go to the sketchy websites on it as well (it really should be the other way around, the Mac Mini would be around 3x less expensive to replace, it's just that it's the family computer). And that's always slow, but this surpassed it (btw, the laptop is a year or so older). I do not have the installation disk.
    Can someone help? Mainly about the first thing...please. The second thing is hopeless, I will need to get a new computer pretty soon, and these Macs don't last as long as they used to, I guess; we'd most likely get a quality Windows PC to replace it, this has been a disappointment. I'm just trying to get a little more life out of it with Linux, maybe a year or two. That's what I've done with my old iMac. Runs GREAT. Especially for a 13 year old computer. The thing is, my folks don't want me to completely eliminate the Mac partition even though it's virtually unusable.

    I'm not really understanding what the end goal was. But in case anyone else comes across the thread, neither the built-in Windows volume resizing utility nor any 3rd party Windows resizing utility should be used to resize either the Windows partition or the Mac OS partition. Invariably they all manipulate the NTFS volume in such a way that the secondary GPT is corrupted, the primary GPT is rendered invalid, and only the MBR is altered correctly. It's a recipe for data loss.
    Conversely, while Disk Utility can resize a Mac OS volume, it cannot resize an NTFS volume. With Lion and Mountain Lion, upon using Boot Camp Assistant there are already four partitions: EFI System, Mac OS, Recovery HD, and Windows. There is no more room for a fifth partition in the MBR partition scheme, and if you try to add one, Disk Utility will apparently let you and render Windows unbootable (because Disk Utility adds the fifth partition to the GPT, but can't add it to the MBR, and instead removes the hybrid MBR needed for Windows support and replaces it with a PMBR.)
    So in effect this is not to be done except with 3rd party utilities that understand both the GPT and the hybrid MBR and keep them in sync.
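    The protective-vs-hybrid distinction described above can be checked programmatically: an MBR holds four 16-byte partition entries starting at byte 446, with the partition-type byte at offset 4 of each entry. A protective MBR (PMBR) carries a single entry of type 0xEE covering the disk, while a hybrid MBR mixes an 0xEE entry with real types such as 0x07 (NTFS). A small sketch, assuming you already have the first 512-byte sector as bytes:

    ```python
    # Classify a 512-byte boot sector as a protective MBR, hybrid MBR, or plain
    # MBR by inspecting the partition-type byte of each of the 4 entries at 446.

    def classify_mbr(sector: bytes) -> str:
        assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "not a valid MBR"
        types = [sector[446 + 16 * i + 4] for i in range(4)]
        used = [t for t in types if t != 0x00]
        if used == [0xEE]:
            return "protective"   # GPT-only disk as legacy tools should see it
        if 0xEE in used:
            return "hybrid"       # GPT disk with real MBR entries (Boot Camp)
        return "plain"            # classic MBR partitioning
    ```

    Running this against the first sector before and after a resize would show exactly the failure mode described: a Windows resizer replacing a "hybrid" result with "protective" (or vice versa) is what renders one of the two operating systems unbootable.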

  • Share a virtual volume

    Hello everybody. I've got a My Cloud EX2. I attached an iSCSI volume to a Win7 initiator. It's way faster than the samba share. After including it in a virtual volume, it got a share label and it shows up in the Windows network as well. Unfortunately, it does not show up in the WD cloud. Is it somehow possible to share it in the private cloud and make the contents subject to streaming and sharing? Regards, Alexander.

    Hi there, welcome to the community.
    Let me see if I understand: you want to allow remote access to an iSCSI target?

  • ZFS iscsi share failing after ZFS storage reboot

    Hello,
    ZFS with VDI 3.2.1 works fine until I reboot the ZFS server.
    After the ZFS server reboot, the desktop providers are not able to access the virtual disks via iSCSI.
    This is what I see when listing the targets:
    Target: vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    iSCSI Name: iqn.1986-03.com.sun:02:e5012bea-487c-40ff-81d5-af89405c7121
    Alias: vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
    LUN: 0
    GUID: 600144f04cf383010000144f201a2c00
    VID: SUN
    PID: SOLARIS
    Type: disk
    Size: 20G
    Backing store: /dev/zvol/rdsk/vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3
    Status: No such file or directory
    zfs list -Hrt volume vdi | grep d711b8c1a1d3
    vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3 0 774G 8.69G -
    The vdi/0fe93683-91ca-4faf-a4db-d711b8c1a1d3 was cloned from a template prior to the reboot.
    The workaround is to delete and re-create all pools after the ZFS server reboot.
    I use the VDI broker as the ZFS server, which mounts an iSCSI volume from a Linux box. The following shows an error in sharing a ZFS snapshot via iSCSI after a reboot. Not sure why. The same seems to happen with zfs clone as shown above.
    vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460 114M 765G 114M -
    vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460@version1 0 - 114M -
    zfs set shareiscsi=off vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
    zfs set shareiscsi=on vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460
    cannot share 'vdi/ad03deb8-214f-4b8a-bd51-8dc8f819c460@version1': iscsitgtd failed request to share
    I'm a bit desperate as I cannot find a solution. VDI 3.2.1 is really good, but the above behaviour of deleting and re-creating vdi pools after reboots is not sustainable.
    Thanks
    Thierry.
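    A quick way to see which targets lost their backing store after the reboot (before deleting and re-creating anything) is to parse the target listing for the telltale "Status: No such file or directory" line. A rough sketch against output shaped like the listing above (the exact formatting of iscsitadm's verbose listing may vary):

    ```python
    # Parse an "iscsitadm list target -v"-style listing and report the targets
    # whose backing store is missing ("Status: No such file or directory").

    def stale_targets(listing: str):
        """Return the names of targets whose LUN status reports a missing file."""
        stale, current = [], None
        for line in listing.splitlines():
            line = line.strip()
            if line.startswith("Target:"):
                current = line.split("Target:", 1)[1].strip()
            elif line.startswith("Status:") and "No such file" in line and current:
                stale.append(current)
        return stale
    ```

    Cross-checking that list against `zfs list -Hrt volume vdi` would confirm whether the zvol device nodes really disappeared across the reboot or whether only the iscsitgtd share state was lost.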

