Resetting ocfs2 /dev/mapper references? ... storage pool repository

So, I am using 3 disks from the SAN that were previously exported to another OVS server. The current setup is a freshly installed OVS server and OVM Manager. OVM is reading the disk paths that are written on the disks and will not allow me to proceed with new pool creation. How do I get rid of it? I don't want to delete the storage on the SAN side and reprovision.
From ovs-agent.log:
[2011-09-23 15:59:20 26901] DEBUG (OVSCommons:123) create_pool_filesystem: ('lun', '/dev/mapper/350002ac000a206ef', '9df421903d48fcc1', '0004fb0000050000cabdb766eeb75c85', '', '0004fb00000100003c815b9182723afd', '0004fb00000200009df421903d48fcc1')
[2011-09-23 15:59:20 26901] ERROR (OVSCommons:142) catch_error: /dev/mapper/350002ac000a206ef is already a pool filesystem.

lol, I overlooked the Delete FileSystem button... but I am curious to know what is done in the background:
[2011-09-23 16:23:54 29394] DEBUG (StoragePluginManager:36) storage_plugin_destroyFileSystem(oracle.ocfs2.OCFS2.OCFS2Plugin)
[2011-09-23 16:23:54 29394] DEBUG (caller:347) Started worker process 29395
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [{'capability.clone.asynchronous': False, 'filesystem.api-version': ['1', '2', '7'], 'capability.snapclone': True, 'capability.resize': True, 'help.extra-info.file': 'None', 'capability.splitclone.open': False, 'filesystem.backing-device.type': 'device', 'capability.snapclone.asynchronous': False, 'help.extra-info.filesystem': 'None', 'capability.clone': True, 'capability.snapshot': True, 'capability.splitclone': False, 'capability.clone.online': True, 'capability.snapshot.custom-name': True, 'capability.access-control.max-entries': 0, 'type': 'ifs', 'capability.snapshot.asynchronous': False, 'capability.snapclone.online': True, 'filesystem.backing-device.multi': False, 'vendor': 'Oracle', 'description': 'Oracle OCFS2 File system Storage Connect Plugin', 'capability.splitclone.asynchronous': False, 'capability.splitclone.online': False, 'capability.storage-name-required': False, 'capability.snapshot.online': True, 'filesystem.type': 'LocalFS', 'help.extra-info.server': 'None', 'name': 'Oracle OCFS2 File system', 'capability.clone.custom-name': True, 'capability.access-control': False, 'capability.resize.asynchronous': False, 'capability.resize.online': True, 'api-version': ['1', '2', '7'], 'filesystem.name': 'ocfs2'}, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:357) Stopped worker process 29395
[2011-09-23 16:23:54 29394] DEBUG (caller:347) Started worker process 29396
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
[2011-09-23 16:23:54 29394] DEBUG (caller:357) Stopped worker process 29396
[2011-09-23 16:23:54 29394] DEBUG (OVSCommons:131) storage_plugin_destroyFileSystem: call completed.
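From the log alone I'd guess (not confirmed against the plugin source) that storage_plugin_destroyFileSystem ends up wiping the filesystem signature from the LUN so it is no longer detected as a pool filesystem. A minimal hand-rolled equivalent, assuming the OCFS2 superblock sits in the first MiB of the volume, would be something like:

```shell
# Hedged sketch of what "Delete FileSystem" likely boils down to: zeroing the
# start of the LUN so the old OCFS2 superblock / pool signature is gone.
# DESTRUCTIVE - only run against a device whose contents you no longer need.
wipe_pool_signature() {
    # $1: block device, e.g. /dev/mapper/350002ac000a206ef
    # Zero the first MiB, where the OCFS2 superblock lives.
    dd if=/dev/zero of="$1" bs=1M count=1 conv=notrunc 2>/dev/null
}

# Usage (DESTRUCTIVE):
#   wipe_pool_signature /dev/mapper/350002ac000a206ef
```

This is only my reading of the behaviour; the actual plugin may clean up additional cluster/pool metadata as well.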

Similar Messages

  • OVM 3.1.1 Repository Being Mapped with Different /dev/mapper/ address

    Hello,
    I'm running OVM 3.1 on a Dell T420 server with onboard storage. The RAID 5 virtual disk that was housing my repository was inadvertently deleted. I was able to recreate a new VD, and the server can now see the original ocfs2 partition that the repository lived on. However, it obtained a new /dev/mapper/ address and OVM Manager will not mount the repository. Is there a way to rename the /dev/mapper address to the old address that OVM Manager is looking for? Thanks!

    No, the ID in /dev/mapper is generated from the SCSI page-83 identifier, which is created on the storage array and cannot be changed. You can find some howtos on the net that deal with such an issue, but it's not for the faint-hearted. A lot of information is in this post, which deals with how to get an existing storage repo up on a remote site, but some of the concepts also apply to your situation:
    https://oraclenz.wordpress.com/2013/06/13/oracle-vm-3-1-disaster-recovery-ha/
    That one helped me as well…
    https://community.oracle.com/thread/3596198
    Cheers,
    budy

  • Libvirt: Unable to define LVM storage pool

    Hello,
    I'm trying to define an LVM storage pool for my virtual machines using KVM/libvirt. The configuration looks like this:
    <pool type="logical">
    <name>vol0</name>
    <source>
    <device path="/dev/md0"/>
    </source>
    <target>
    <path>/dev/vol0</path>
    </target>
    </pool>
    The problem is that this LVM group is already active (other VMs are running using volumes inside this group) and 'virsh pool-start vol0' wants me to deactivate it. Is there any way to start the pool without deactivating the volume group?
    virsh pool-start vol0
    error: internal error '/sbin/vgchange -an vol0' exited with non-zero status 5 and signal 0: Can't deactivate volume group "vol0" with 14 open logical volume(s)
    Furthermore, I'm a bit worried that libvirt might recreate the volume group and therefore delete all its content during the build process.
    Would appreciate any advice
    Regards,
    Jonas
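    One workflow that reportedly avoids touching an existing, in-use VG (hedged - behaviour varies across libvirt versions) is to only pool-define and pool-start, never pool-build, since pool-build is the step that (re)creates the volume group and can destroy its contents:

```shell
# Write the pool definition to a file and register it WITHOUT building it.
# The virsh calls are shown as comments because they need a libvirt host;
# the path /tmp/vol0-pool.xml is just an illustrative choice.
cat > /tmp/vol0-pool.xml <<'EOF'
<pool type="logical">
  <name>vol0</name>
  <source>
    <device path="/dev/md0"/>
  </source>
  <target>
    <path>/dev/vol0</path>
  </target>
</pool>
EOF
# virsh pool-define /tmp/vol0-pool.xml   # register only - does not touch the VG
# virsh pool-start vol0
# virsh pool-autostart vol0
```

    Whether pool-start still insists on cycling the VG depends on the libvirt version, so treat this as a sketch to try, not a guaranteed fix.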

    maahes wrote:
    did so now, only now I'm getting a slightly different error: could not find udevd no such file or directory. I checked both grub.cfg's and my mkinitcpio.conf and there's no listing for udevd ....which I've never heard of, so I assumed it was a typo?
    For clarification: udev is in the mkinitcpio.
    I'm not sure whether I yet have a good intuition for how you have your machine set up, but I suspect you need to include a cryptdevice flag to the kernel in your grub config. The file isn't found because the kernel doesn't know your root directory needs decrypting first.
    My setup is an LVM over LUKS over LVM sandwich. To boot into my system, the grub.cfg contains the line:
    linux /vmlinuz-linux root=/dev/mapper/cryptvg-root cryptdevice=/dev/mapper/vg-crypt:root rootfstype=ext4 pcie_aspm=force acpi_osi=Linux acpi_backlight=vendor i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 ro
    Now, most of those flags don't have anything to do with your problem, but note the cryptdevice. It tells the kernel it's dealing with an encrypted filesystem sitting in a logical volume called crypt on a volume group called vg. The bit after the colon tells the kernel to associate this encrypted filesystem with /dev/mapper/root.
    As for how to fix your system, I'm afraid I still feel a bit fuzzy about how your LVM and encrypted layers relate to each other, whether you have LVM over LUKS, or LUKS over LVM, or something else. Was there a particular how-to that you followed?

  • After upgrading to 8.1 Pro from 8.0 Pro, my Storage Spaces and Storage Pool are all gone.

    Under 8.0 I had three 4-terabyte drives set up as a Storage Pool in Storage Spaces. I had five storage-space drives using this pool. I had been using them for months with no problems. After upgrading to 8.1 (which gave no errors) the drives no longer exist. Going into "Storage Spaces" in the control panel, I do not see my existing storage pool or storage drives. I'm prompted to create a new Pool and Storage Spaces. If I click "Create Pool" it does not list the three drives I used previously as available to add.
    Device Manager shows all three drives as present and OK.  
    Disk Management shows Disks 0,1,2,6.  The gap in between 2 and 6 is where the 3,4,5 storage spaces drives were.  
    Nothing helpful in the event log or the services.
    I've downloaded the ReclaiMe Storage Spaces recovery tool and it sees all of my logical drives with a "good" prognosis for recovery.  I've not gone down that road yet though because it requires separate physical drives to copy everything to
    and they want $299 for the privilege.
    Does anyone have any ideas?  I'm thinking of doing a fresh 8.1 install to another drive to see if it can see it or reinstalling 8.1 to the existing drive in the hope that it will just suddenly work.  Or possibly going back to 8.0.
    Thanks for your help!
    Steve

    Hi,
    “For parity spaces you must backup your data and delete the parity spaces. At this point, you may upgrade or perform a clean install of Windows 8. After the upgrade or clean installation is complete, you may recreate parity spaces and
    restore your data.”
    I’d like to share the following article with you for reference:
    Storage Spaces Frequently Asked Questions
    (Pay attention to this part: "How do I prepare Storage Spaces for upgrading from the Windows 8 Consumer Preview to Windows 8 Release Preview?")
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_prepare_Storage_Spaces_for_upgrading_from_the_Windows_8_Consumer_Preview_to_Windows_8_Release_Preview
    Regards,
    Yolanda

  • VirtualDisk on Windows Server 2012 R2 Storage Pool stuck in "Warning: In Service" state and all file transfers to and from is awfully slow

    Greetings,
    I'm having some trouble with my Windows Storage Pool and my VirtualDisk running on a Windows Server 2012 R2 installation. It consists of 8x Western Digital RE-4 2TB drives + 2x Western Digital Black Edition 2TB drives and have been configured in a single-disk
    parity setup and the virtual disk is running fixed provisioning (max size) and is formatted with ReFS.
    It's been running solid for months besides some awful write-speeds at times, it seems like the write performance running ReFS compared to NTFS is not that good.
    I was recommended to add SSDs for journaling in order to boost write performance. Sadly I seem to have screwed up this part: you need to do this through PowerShell, and it needs to be done before creating the virtual disk. I managed to add my SSD to the Storage Pool and then remove it.
    This seems to have caused some awkward issues. I'm not quite sure why, as the virtual disk is "fixed", so adding the SSD to the Storage Pool shouldn't really do anything, right? But after I did this my virtual disk has been stuck in "Warning: In Service" and it seems frozen. It's been 4-5 days, it's still the same, and the performance is currently horrible. Moving 40GB of data off the virtual disk took me about 20 hours or so. Launching files under 1MB off the virtual disk takes several minutes, etc. It's pretty much useless.
    The GUI is not providing any useful information about what's going on. What does "Warning: In Service" actually imply? How am I supposed to know how long this is supposed to take? Running Get-VirtualDisk in PowerShell does not provide any useful information either. I did try a repair through the Server Manager GUI; it goes to about 21% within 2-3 hours but then drops back down to 10%. I have had the repair running for days, but it won't go past 21% without dropping back down again. Running the repair through PowerShell yields the same results, but if I detach the virtual disk and then try to repair through PowerShell (the GUI won't let me repair detached virtual disks) it just runs for a split second and then closes.
    After doing some "Googling" I've seen people mention that the repair is not able to finish unless there is at least as much free space in the Storage Pool as the largest drive in the pool holds, so I added a 4TB drive (due to running fixed provisioning I had used all the space in the pool), but the repair is still not able to go past 21%.
    As I am running "fixed provisioning", I guess adding an extra drive to the pool doesn't make much difference, as it's not available to the virtual disk? So I went ahead and deleted 3 TB of data on the virtual disk; now I've got about 4 TB of free space on the virtual disk, so there should be plenty of room for Windows Server 2012 R2 to rebuild the parity or whatever it's trying to do. But it's still the same: the repair won't move past 21%, the virtual disk is still stuck in "Warning: In Service" mode, and the performance keeps being horrible, so taking a backup will take forever at these speeds...
    Am I missing something here? All the drives in the pool are working fine; I have verified this using various bootable tools. So why is this happening, and what can I do to get the virtual disk running at full state again? Why doesn't the GUI prompt you with any kind of usable information?
    Best regards, Thomas Andre

    Hi,
    Please run chkdsk /f /r command on the virtual disk to have a try. In the meantime, run the following commands in PowerShell to share the output.
    get-virtualdisk -friendlyname <name> | get-physicaldisk | fl
    get-virtualdisk -friendlyname <name> |fl
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Server 2012 R2 Storage Pool Disk Identification Method

    Hi all,
    I'm currently using Server 2012 R2 Essentials with a Storage Space consisting of seven 3TB disks. The disks are connected to an LSI MegaRAID controller which does not support JBOD, so each disk is configured as a single-disk RAID0. The disks are connected to the controller using SAS breakout cables (SATA to SFF-8087).
    I am considering moving my server into a new chassis. The new chassis has a SAS Backplane for drive attachment which means I would be re-cabling to use SFF-8087 to SFF-8087 cables instead and in doing so, the channel and port assignment on the LSI MegaRAID
    will change.
    I know that the LSI card will have no problem identifying the disk as the same disk when it's connected to a different port or channel on the controller, but is the same true for the Storage Space?
    How does Storage Spaces track the identity of the individual disks?
    Just to be clear, the hardware configuration otherwise will not be changing. Motherboard, CPU, RAID controller etc will all be the same, it will just be moving everything into a new chassis.

    Hi,
    If the disks are still recognized as the same, the storage space should be recognized as well.
    You could test the replacement and see if the storage pools are recognized. If not, you can still change back to the original devices and the storage pools will go back to working. Then we may need to find a way to migrate your data. Personally I think it will work directly.
    Note: backing up important files is always recommended.
    If you have any feedback on our support, please send to [email protected]

  • Is it possible to mount a physical disk (/dev/mapper/ disk) on one of my Oracle VM server

    I have a physical disk that I can see from multipath -ll  that shows up as such
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    That particular disk is visible in the OVMM GUI as a physical disk that I can present to one of my VMs, but currently it's not presented to any of them.
    I have about 50 physical LUNs that my Oracle VM server can see. I believe I can see all of them from fdisk -l, but "dm-115" (which is from the multipath output above) doesn't show up.
    This disk has 3 usable partitions on it, plus a Swap.
    I want to mount the 3rd partition temporarily on the OVM server itself, but I receive:
    # mount /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt
    mount: you must specify the filesystem type
    If I present the disk to a VM and then try to mount the /dev/xvdx3 partition, it of course works. (x3 represents the 3rd partition on whatever letter position the disk shows up as.)
    Is this possible?

    It's more a matter of the correct syntax. I cannot seem to figure out how to translate the /dev/mapper path above into what fdisk -l shows. Perhaps if I knew how fdisk and multipath can be cross-referenced I could mount the partition.
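    One way to cross-reference them: on most systems the entries under /dev/mapper are symlinks to the kernel ../dm-N nodes, so resolving the symlink gives the same dm name that multipath -ll prints. A small sketch (the directory is a parameter only so the logic can be tried outside a real multipath host):

```shell
# List each device-mapper map name alongside the kernel dm-N node it points at,
# e.g. "3600c0ff00012f4878be35c5401000000 -> dm-115".
# Assumes /dev/mapper entries are symlinks, which is the common case.
list_dm_maps() {
    local dir=${1:-/dev/mapper}
    local link
    for link in "$dir"/*; do
        [ -L "$link" ] || continue
        printf '%s -> %s\n' \
            "$(basename "$link")" \
            "$(basename "$(readlink -f "$link")")"
    done
}

# Usage on a real host:
#   list_dm_maps
```

    With the dm-N name in hand, the matching fdisk entry is the /dev/dm-N disk of the same size.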
    I had already tried what you suggested. Here is the output if I present the disk to a VM and then mount the 3rd partition.
    # fdisk -l
    Disk /dev/xvdh: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvdh1   *           1          13      104391   83  Linux
    /dev/xvdh2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/xvdh3            2103       27783   206282632+  83  Linux
    /dev/xvdh4           27784       30394    20972857+   5  Extended
    /dev/xvdh5           27784       30394    20972826   83  Linux
    # mount /dev/xvdh3 /mnt  <-- no error
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda3            197G  112G   75G  60% /
    /dev/xvda5             20G 1011M   18G   6% /var
    /dev/xvda1             99M   32M   63M  34% /boot
    tmpfs                 2.0G     0  2.0G   0% /dev/shm
    /dev/xvdh3            191G   58G  124G  32% /mnt  <-- mounted just fine
    Its ext3 partition
    # df -T
    /dev/xvdh3
    ext3   199822096  60465024 129042944  32% /mnt
    Now if I go to my vm.cfg file, I can see the disk that is presented.
    My disk line contains
    disk = [...'phy:/dev/mapper/3600c0ff00012f4878be35c5401000000,xvdh,w', ...]
    Multipath shows that disk and says "dm-115" but that does not translate on fdisk
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    I have around 50 disks on this server, and fdisk -l shows me many of the same size as the one I'm after.
    # fdisk -l
    Disk /dev/sdp: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdp1   *           1          13      104391   83  Linux
    /dev/sdp2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdp3            2103       27783   206282632+  83  Linux
    /dev/sdp4           27784       30394    20972857+   5  Extended
    /dev/sdp5           27784       30394    20972826   83  Linux
    Disk /dev/sdab: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdab1   *           1          13      104391   83  Linux
    /dev/sdab2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdab3            1319       27783   212580112+  83  Linux
    /dev/sdab4           27784       30394    20972857+   5  Extended
    /dev/sdab5           27784       30394    20972826   83  Linux
    Disk /dev/sdac: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdac1   *           1          13      104391   83  Linux
    /dev/sdac2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdac3            2103       27783   206282632+  83  Linux
    /dev/sdac4           27784       30394    20972857+   5  Extended
    /dev/sdac5           27784       30394    20972826   83  Linux
    Disk /dev/sdad: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdad1   *           1          13      104391   83  Linux
    /dev/sdad2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdad3            1319       27783   212580112+  83  Linux
    /dev/sdad4           27784       30394    20972857+   5  Extended
    /dev/sdad5           27784       30394    20972826   83  Linux
    Disk /dev/sdae: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdae1   *           1          13      104391   83  Linux
    /dev/sdae2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdae3            2103       27783   206282632+  83  Linux
    /dev/sdae4           27784       30394    20972857+   5  Extended
    /dev/sdae5           27784       30394    20972826   83  Linux
    Disk /dev/sdaf: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdaf1   *           1          13      104391   83  Linux
    /dev/sdaf2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdaf3            2103       27783   206282632+  83  Linux
    /dev/sdaf4           27784       30394    20972857+   5  Extended
    /dev/sdaf5           27784       30394    20972826   83  Linux
    Disk /dev/sdag: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdag1   *           1          13      104391   83  Linux
    /dev/sdag2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdag3            2103       27783   206282632+  83  Linux
    /dev/sdag4           27784       30394    20972857+   5  Extended
    /dev/sdag5           27784       30394    20972826   83  Linux
    Disk /dev/dm-13: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-13p1   *           1          13      104391   83  Linux
    /dev/dm-13p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-13p3            2103       27783   206282632+  83  Linux
    /dev/dm-13p4           27784       30394    20972857+   5  Extended
    /dev/dm-13p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-25: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-25p1   *           1          13      104391   83  Linux
    /dev/dm-25p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-25p3            1319       27783   212580112+  83  Linux
    /dev/dm-25p4           27784       30394    20972857+   5  Extended
    /dev/dm-25p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-26: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-26p1   *           1          13      104391   83  Linux
    /dev/dm-26p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-26p3            2103       27783   206282632+  83  Linux
    /dev/dm-26p4           27784       30394    20972857+   5  Extended
    /dev/dm-26p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-27: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-27p1   *           1          13      104391   83  Linux
    /dev/dm-27p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-27p3            1319       27783   212580112+  83  Linux
    /dev/dm-27p4           27784       30394    20972857+   5  Extended
    /dev/dm-27p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-28: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-28p1   *           1          13      104391   83  Linux
    /dev/dm-28p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-28p3            2103       27783   206282632+  83  Linux
    /dev/dm-28p4           27784       30394    20972857+   5  Extended
    /dev/dm-28p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-29: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-29p1   *           1          13      104391   83  Linux
    /dev/dm-29p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-29p3            2103       27783   206282632+  83  Linux
    /dev/dm-29p4           27784       30394    20972857+   5  Extended
    /dev/dm-29p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-30: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-30p1   *           1          13      104391   83  Linux
    /dev/dm-30p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-30p3            2103       27783   206282632+  83  Linux
    /dev/dm-30p4           27784       30394    20972857+   5  Extended
    /dev/dm-30p5           27784       30394    20972826   83  Linux
    If I could translate the /dev/mapper address into the correct fdisk device, I think I could then mount it.
    If I try the same command as before with the -t option it gives me this error.
    # mount -t ext3 /dev/mapper/3600c0ff00012f48791975b5401000000p3 /mnt
    mount: special device /dev/mapper/3600c0ff00012f48791975b5401000000p3 does not exist
    I know I am close here, and I feel it should be possible; I am just missing something.
    Thanks for any help
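    The thread doesn't show a resolution, but the missing piece is usually that partition device nodes for a multipath map are not created automatically: the p3 node won't exist until the partition mappings are made. kpartx (from the device-mapper-multipath tooling) can create them; note the partition-suffix naming convention is an assumption here, as some setups use -part3 rather than p3:

```shell
# Sketch: create partition mappings for a multipath map, then mount one.
# kpartx reads the partition table on the map and creates p1..pN child maps:
#   kpartx -a /dev/mapper/3600c0ff00012f4878be35c5401000000
#   mount -t ext3 /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt
# and to tear them down again afterwards:
#   kpartx -d /dev/mapper/3600c0ff00012f4878be35c5401000000

# Helper that composes the node kpartx would create (pN suffix assumed):
part_node() {
    printf '/dev/mapper/%sp%s\n' "$1" "$2"
}

part_node 3600c0ff00012f4878be35c5401000000 3
# -> /dev/mapper/3600c0ff00012f4878be35c5401000000p3
```

    Only do this while the disk is NOT presented to a running VM, or you risk mounting a filesystem that the guest has open.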

  • Creation of new storage pool on iomega ix12-300r failed

    I have a LenovoEMC ix12-300r (iomega version).
    IX12-300r serial number: 2JAA21000A
    There is at present one storagepool (SP0) consisting of 8 drives (RAID5).
    HDD 1-8 (existing SP0): ST31000520AS CC38
    I have aquired 4 new Seagate ST3000DM001 drives as per recommendation on this forum:
    https://lenovo-na-en.custhelp.com/app/answers/detail/a_id/33028/kw/ix12%20recommended%20hdd 
    I want to make a new storage pool with these 4 drives:
    HDD9-12 (new HDD and SP1): ST3000DM001-1CH166 CC29
    I have used diskpart to clean all 4 drives and the IX12-300r can see the drives just fine.
    When I try to make a new storage pool, naming it SP1 as the only storage pool is named SP0, I get an error: "Storage Pool Creation Failed"
    Please advise as to how I can get these drives up and running.
    Regards
    Kristen Thiesen
    adena IT

    I have pulled the 8 hdd from storage pool 0.
    Then I rebooted with the 4 new hdd in slot 9 - 12.
    Result: http://device IP/manage/restart.html, with the message: "Confirmation required. You must authorize overwriting existing data to start using this device. Are you sure you want to overwrite existing data? [yes] / [no]"
    I then answered yes four times, anticipating that each new drive needs an acceptance, but the dialog just keeps popping up...
    Then I shut down the device and repositioned the 4 new drives to slots 1-4, but the same thing happened...
    Any suggestions?

  • Win 8.1 Storage Pool not allowing "add drive" nor allow expand capacity

    Have one Storage Space within one Storage Pool (parity mode) containing 4 identical hard drives. Used for data storage, it appears to be functioning normally and has filled 88% of capacity (i.e. 88% x 2/3 of physical capacity in parity mode). The only other storage on this new PC is an SSD used for the OS (Win 8.1 Pro) and application software.
    "Manage Storage Spaces" displays this warning message to add drives:
    "Warning: Low capacity; add 3 drives"
    After clicking "add drives", it displays:
    "No drives that work with Storage Spaces are available. Make sure that the drives that you want to use are connected."
    However, I had connected another two identical hard drives via SATA cables, and "Disk Management" displays these two drives as available.
    In summary: "Manage Storage Spaces" does not find these drives as available although they show correctly in Disk Management.
    BTW - I removed the pre-existing partitioning on the 'new' drives, so now they show only as "unallocated" in "Disk Management". (I did likewise before the Storage Pool found the 4 original drives.)
    Perhaps the problem is that the total nominal capacity of the Storage Space must be increased before I can add more drives?
    Microsoft says that the capacity of Storage Pools can be increased but cannot be decreased, but the computer displays no "Change" button by which this can be done. There is supposed to be a "Change" button, but it is not displaying for me. So "Manage Storage Spaces" offers me no option to manage the "size" of the pool.
    Only five options are displayed:
    Create a storage space     (ie. from the small amount remaining unused in the Pool)
    Add drives     (.... as explained already)
    Rename pool    (only renames the storage space)
    Format        (ie. re-format and so lose all current data)
    Delete         (ie. delete the storage space and so lose all current data)
    Using Google, I find nothing bearing on this problem except the most basic instructions to set up a storage space! Can you help?
    The problem is that the Storage Pool is not displaying a "button" to increase capacity, and when I click "add drives" it finds no hard drives available.

    Hi,
    I would suggest you launch Device Manager, then expand Disk drives. Right-click the disk listed as "Disk drive" and select Uninstall. On the Action menu, click Scan for hardware changes to reinstall the disk.
    Please also take a look at this link (see the part "How do I increase pool capacity?"):
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_increase_pool_capacity 
    According to the link, to extend a parity space, the pool would need the appropriate number of columns available to accommodate the layout of the disk.
    Yolanda Zhu
    TechNet Community Support

  • 2012 New Cluster Adding A Storage Pool fails with Error Code 0x8007139F

    Trying to set up a brand-new cluster (first node) on Server 2012. The hardware passes the cluster validation tests and consists of a Dell 2950 with an MD1000 JBOD enclosure holding a mix of 7.2K RPM SAS and 15K SAS drives. There is no RAID card or any other
    storage fabric, just a SAS adapter and an external enclosure.
    I can create a regular storage pool just fine and access it with no issues on the same box when I don't add it to the cluster. However when I try to add it to the cluster I keep getting these errors on adding a disk:
    Error Code: 0x8007139F if I try to add a disk (The group or resource is not in the correct state to perform the requested operation)
    When adding the Pool I get this error:
    Error Code 0x80070016 The Device Does not recognize the command
    Full Error on adding the pool
    Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role 'b645f6ed-38e4-11e2-93f4-001517b8960b' failed. The error code was '0x16' ('The device does not recognize the command.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    If I try to just add the raw disks to the storage, without using a pool, almost every one of them fails with "Incorrect function", except for one (a 7.2K RPM SAS drive). I cannot see any difference between it and the other disks. Any
    ideas? The error codes aren't helpful. I imagine there's something in the drive configuration or hardware I am missing here, I just don't know what, considering validation passes and I am meeting the listed prerequisites.
    If I can provide any more details that would assist, please let me know. Kind of at a loss here.

    Hi,
    You mentioned you use a Dell MD1000 as storage; the MD1000 is Direct Attached Storage (DAS).
    Windows Server clusters do support DAS storage; failover clusters include improvements to the way the cluster communicates with storage, improving the performance of a storage area network (SAN) or direct attached storage (DAS).
    But the PERC 5/6 RAID controller used with the MD1000 may not support cluster technology. I looked for an official article on this, and found that its next generation, the MD1200, uses the PERC H800 RAID controller, which still does not support cluster technology.
    You may contact Dell to check that.
    For more information, please refer to the following articles:
    Technical Guidebook for PowerVault MD1200 and MD 1220
    http://www.dell.com/downloads/global/products/pvaul/en/storage-powervault-md12x0-technical-guidebook.pdf
    Dell™ PERC 6/i, PERC 6/E and CERC 6/I User’s Guide
    http://support.dell.com/support/edocs/storage/RAID/PERC6/en/PDF/en_ug.pdf
    Hope this helps!
    Lawrence
    TechNet Community Support

  • T-code that resets the counts in the storage bins

    hello,
    Is there any T-code that resets the counts in the storage bins?
    As we do our weekly counts, it keeps a running total of how many bins are completed and allows us to track the warehouse's progress.
    Regards
    Goutham

    Of course, very sorry to waste your time,
    I didn't even notice, LOL.
    But your question is an easy one, so I am sure someone will respond with a good answer for you.
    Our program name is RLREOLPQ;
    not sure if the program is custom or just a t-code.
    gl
    Edit: may be a bad sign for you; maybe there is no standard t-code, if we had to create a custom one.
    Edited by: Arakish on Apr 1, 2010 10:32 PM

  • LVM /dev/mapper is empty, + systemd boot hangs

    Hi!
    1st problem:
    I have 2 vgs with 1 lv each -- called Sys/Linux and Sys2/Var. Booting the kernel with root=/dev/mapper/Sys-Linux works, but after mounting the root fs, /dev/mapper is empty:
    $ ls /dev/mapper
    control
    2nd problem:
    I changed UUID= to /dev/dm-1 for /var for testing purposes, but systemd will time out on mounting /var. Note: after entering the emergency console, I can just type mount /var and it will mount.
    The system was quite old and I changed to systemd-only just now, because I upgraded the filesystem package (which removes /bin, /sbin, /usr/sbin and required me to remove outdated packages). ((Yes, I'm late.))
    I hope someone has some pointers; please tell me which config files you need to see, if any, to help out.
    Thanks,
    Andy
    Last edited by Black Sliver (2013-08-12 20:14:40)

    [2013-08-12 19:33] [ALPM-SCRIPTLET] >>> Updating module dependencies. Please wait ...
    [2013-08-12 19:33] [ALPM-SCRIPTLET] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'default'
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Starting build: 3.10.5-1-ARCH
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [base]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [udev]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [autodetect]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [block]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> ERROR: file not found: `/usr/sbin/dmsetup'
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [resume]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [shutdown]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Generating module dependencies
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Creating lzma initcpio image: /boot/initramfs-linux.img
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> WARNING: errors were encountered during the build. The image may not be complete.
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'fallback'
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Starting build: 3.10.5-1-ARCH
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [base]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [udev]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [block]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: bfa
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: aic94xx
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: smsmdtv
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> ERROR: file not found: `/usr/sbin/dmsetup'
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [resume]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] -> Running build hook: [shutdown]
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Generating module dependencies
    [2013-08-12 19:33] [ALPM-SCRIPTLET] ==> Creating lzma initcpio image: /boot/initramfs-linux-fallback.img
    [2013-08-12 19:34] [ALPM-SCRIPTLET] ==> WARNING: errors were encountered during the build. The image may not be complete.
    Note that the initial mounting of root=/dev/mapper/Sys-Linux works. Can this be the problem anyway? I rebuilt the initrd; I'll try to boot it tomorrow.
    [2013-08-12 19:42] [PACMAN] Running 'pacman -R initscripts sysvinit' # NOTE: this was required for the filesystem upgrade to work (/sbin)
    [2013-08-12 19:42] [ALPM] warning: /etc/rc.conf saved as /etc/rc.conf.pacsave
    [2013-08-12 19:42] [ALPM] warning: /etc/inittab saved as /etc/inittab.pacsave
    These are not required with systemd?
    Everything else is just upgrade/install/remove, or uninteresting. Or do you want to see everything?
    Last edited by Black Sliver (2013-08-12 22:48:32)
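The `==> ERROR: file not found: '/usr/sbin/dmsetup'` lines in the log above are the likely culprit: if dmsetup is absent, the mkinitcpio lvm2 hook cannot copy it into the initramfs, LVM activation fails early, and /dev/mapper stays empty apart from `control`. A quick diagnostic, as a hedged sketch (the suggested remedy in the message assumes Arch's device-mapper/lvm2 packaging):

```python
import shutil

def check_dmsetup():
    """Report whether dmsetup is on PATH; the lvm2 initramfs hook needs it."""
    path = shutil.which("dmsetup")
    if path is None:
        return ("dmsetup missing: reinstall device-mapper/lvm2, "
                "then rebuild the initramfs (e.g. mkinitcpio -p linux)")
    return "dmsetup found at " + path

print(check_dmsetup())
```

Either way, the initramfs must be rebuilt after the binary is back, since the images above were created with the hook silently incomplete.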

  • Slow performance Storage pool.

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with an LSI 9200-8e 6Gb/s SAS HBA connected to 12 ST91000640SS disks. Heavy problems with "bursts".
    Using the ARC-1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).
    Inserting an ARC-1882X RAID card increases speed by a factor of 5 to 10.
    Hence hardware RAID on the same hardware is 5 to 10 times faster!
    We noticed that Resource Monitor becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks
    that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally
    (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, in the prior configuration the performance of virtual disks was limited by some of the underlying disks having this poor performance. This may
    also have caused the unresponsiveness of the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is: check the entire physical disk stack under the configuration you are using for SS first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups presented to the OS (for
    16 disks across two controllers), and (b) the loss of a little capacity to RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx,
    which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14
    disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different
    sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
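The 8-column parity arithmetic in point (a) can be sketched numerically. This is my own back-of-envelope model of the layout the author describes (1 parity slab per 7 data slabs), not Storage Spaces' actual allocator:

```python
# With C columns in a parity layout, each slab row holds C-1 data slabs
# and 1 parity slab, so usable capacity is (C-1)/C of the striped space.
def parity_usable_tb(num_disks, disk_tb, columns=8):
    """Usable TB for equal-sized disks under a rotating-parity layout."""
    raw = num_disks * disk_tb
    return raw * (columns - 1) / columns

print(parity_usable_tb(8, 1.0))   # 8x 1TB, 8 columns -> 7.0 TB usable
print(parity_usable_tb(14, 1.0))  # 14 disks still pay 1/8 to parity -> 12.25
```

The model also shows why multiples of 8 disks are the clean case: with 14 disks the rotating parity still burns exactly 1/8 of the space, but the slab rows no longer line up evenly with the disks.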
    (b) Use disks as similar sized as possible in the SS pool.
    This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there
    are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal
    sized) or JBOD (for different sized) before presenting them to SS. 
    It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6
    disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each.
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is
    still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize virtual disk access across multiple 8-column groups that are on different physical disks, and that you need to work around this by striping virtual disks together. Effectively you are creating a RAID50: a Windows
    RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem to be any advantage in not doing this, as with the 8-column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column
    width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.
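The striping claim in (c) is just additive throughput across independent 8-column groups. The per-group numbers below are the author's measurements; the additive model itself is my simplification, and assumes the pools share no physical disks:

```python
# One 8-column parity vdisk tops out (on this hardware) at ~490 MB/s reads
# (~70 MB/s per disk across 7 data columns) and ~55 MB/s writes. Striping
# N such vdisks in Disk Management scales both roughly linearly.
def striped_throughput(n_groups, read_per_group=490, write_per_group=55):
    """Return (read, write) MB/s for n striped single-pool virtual disks."""
    return n_groups * read_per_group, n_groups * write_per_group

print(striped_throughput(2))  # -> (980, 110), matching the figures above
```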
    (d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to
    54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore
    card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x 250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note: for some reason this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec writes (better
    than 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB of the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!

  • How to move a virtual disk's physical allocation within a Storage Pool

    I have a pool of 3x 500GB where one of the physical drives is having intermittent issues. Currently there is only one parity virtual disk, 300GB fixed, across 3 columns. I want to replace the bad drive with a good one. The old way (pre-2012) was: replace the
    disk, repair the RAID 5, resync, done. These basic steps are not working.
    So far I have added a 4th 500GB drive to the pool. After searching and failing to find a way to move the data non-destructively, I decided to just pull the data cable on the disk I wanted to replace. After refresh/rescan, the disconnected drive shows "lost
    communication" and the virtual disk (after trying to repair) shows "unknown" (but the volume on that disk is accessible in Explorer).  When I try to remove the physical disk in Server Manager, I get "The selected physical disk cannot
    be removed". Reading the error message, I see that the replacement disk cannot contain any part of a virtual disk. The replacement disk that I just added appears to have some space allocated (possibly because I have tried this same procedure a couple
    of times already?). When I look at the parity disk properties/health, it shows all four physical disks under "physical disks in use".
    I have deleted and recreated a lot of storage pools lately while trying to understand how they work but I would like to avoid that this time. The data on the virtual disk in question is highly deduplicated and it took quite a while to get it that way. Since
    I can't find a way to copy/mirror the disk while keeping it fully deduplicated, I would need 3x the space to copy it all off, or a lot of time to load up and deduplicate a new virtual disk.
    I have several questions:
    1. How can a 3 column parity disk use parts of four physical disks? And can that be fixed without recreating the virtual disk?
    2. When creating a virtual disk (for example a 3 column disk in a pool that has four or more physical drives), is there a way to specify which physical disks to use?
    3. I understand that after a physical disk failure, the recovery process will move a virtual disk's allocation to a replacement disk, but can a virtual disk's allocation be moved manually among physical disks within the same storage pool
    using a PS script?
    4. Can a deduplicated virtual disk be moved/mirrored/backed up without expanding the data?
    Any help is appreciated.

    I'm still fighting with storage pools myself, need to do more tests, and have a lot of questions of my own, but here is what I have understood so far.
    You may define the physical disks used for a virtual disk via PowerShell;
    for a list of all commands, see:
    http://technet.microsoft.com/en-us/library/hh848705(v=wps.620).aspx
    The specific command for assigning physical disks to an already existing virtual disk:
    Example 4: Manually assigning physical disks to a virtual disk
    This example gets two physical disks that have already been added to the storage pool and designated as ManualSelect disks,
    PhysicalDisk3 and PhysicalDisk4, and assigns them to the virtual disk
    UserData.
    PS C:\> Add-PhysicalDisk -VirtualDiskFriendlyName UserData -PhysicalDisks (Get-PhysicalDisk -FriendlyName PhysicalDisk3, PhysicalDisk4)
    http://technet.microsoft.com/en-us/library/hh848702(v=wps.620).aspx
    If you haven't seen it yet, you may also check out http://blogs.technet.com/b/yungchou/archive/2011/12/06/free-ebooks.aspx

  • Adding drives to storage pool with same unique id

    I have seen a lot of discussion about using storage pools with RAID controllers that report the same unique ID across multiple drives.
    I have yet to find a solution to my problem, which is that I can't add 2 drives to a storage pool because they share the same unique ID. Is there a way I can get around this?
    Thanks, Brendon

    Thanks for your reply.
    However, Storage Spaces uses the UniqueId that the RAID/SATA controller reports for the drive. In my case, this is the output from PowerShell:
    PS C:\Users\tfs> get-physicaldisk | ft FriendlyName, uniqueid
    FriendlyName                                                uniqueid
    PhysicalDisk1                                               2039374232333633
    PhysicalDisk2                                               2039374232333633
    PhysicalDisk10                                              SCSI\Disk&Ven_Hitachi&Prod_HDS722020ALA330\4&37df755d&0&...
    PhysicalDisk8                                               SCSI\Disk&Ven_WDC&Prod_WD10EACS-00D6B0\4&37df755d&0&0300...
    PhysicalDisk6                                               SCSI\Disk&Ven_WDC&Prod_WD10EADS-00M2B0\4&37df755d&0&0100...
    PhysicalDisk7                                               SCSI\Disk&Ven_&Prod_ST2000DL003-9VT1\4&37df755d&0&020000...
    PhysicalDisk0                                               2039374232333633
    PhysicalDisk4                                               SCSI\Disk&Ven_&Prod_ST3000DM001-9YN1\5&10a0425f&0&010000...
    PhysicalDisk3                                               SCSI\Disk&Ven_Hitachi&Prod_HDS723030ALA640\5&10a0425f&0&...
    PhysicalDisk9                                               SCSI\Disk&Ven_&Prod_ST31500341AS\4&37df755d&0&040000:sho...
    PhysicalDisk5                                               SCSI\Disk&Ven_WDC&Prod_WD1001FALS-00J7B\4&37df755d&0&000...
    As you can see, I have 3 drives with the same UniqueId. This I cannot change, and this is what I am looking for a workaround for.
    If you have any thoughts, that would be great.
    Thanks in advance,
    Brendon
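Brendon's clash is easy to see by grouping the listing above by UniqueId. A small sketch (disk names and IDs abridged from his PowerShell output; the truncated SCSI path is kept as-is):

```python
from collections import defaultdict

# FriendlyName -> UniqueId pairs, abridged from the get-physicaldisk output.
disks = {
    "PhysicalDisk0": "2039374232333633",
    "PhysicalDisk1": "2039374232333633",
    "PhysicalDisk2": "2039374232333633",
    "PhysicalDisk10": "SCSI\\Disk&Ven_Hitachi&...",
}

by_id = defaultdict(list)
for name, uid in disks.items():
    by_id[uid].append(name)

# Storage Spaces refuses to pool two disks reporting the same UniqueId,
# so any group with more than one member is the problem described above.
clashes = {uid: names for uid, names in by_id.items() if len(names) > 1}
print(clashes)
```

Since the ID comes from the controller's firmware, the fix has to happen below Windows (different controller, firmware update, or vendor tool), which is why no Storage Spaces setting works around it.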

Maybe you are looking for

  • Client Open close logs

    We open up the client and close the client in our System We wanted to know if we can find the log for the same like who opened the client , time etc. In other words is the client opening and closing stored anywhere in SAP tables

  • Clarifications on company code merger

    Dear SAP Experts We need to merge two company codes i.e XXXX & YYYY into one new entity(Thailand). Legally both entities will be closed by 31st Mar 2011 and merged into newly registered entity on 1st April 2011 (FY V3 - April to March). Their existin

  • How to enhance call list to show entire list

    Hi In SAP standard, only calls witihin the maintained calling hours are displayed in web-ui. For example: If the call list is generated for all customers to be called on tuesday, then customer A will show up 9AM if the calling hours are maintained be

  • Problems with task in data acquisition

    Hallo, I'm trying to acquire current measures and then converting it in different values (temperature, pressure and flow). For this reason I've created a specific task in which every current measure is converted in the apposite measure and obviously

  • "Valid from", "Valid to" And Priority Significance/Relation

    Hello All, When we create a Ticket in Service Desk, we specify the Priority but regardless of the Priority the "Valid To"(End customer requirement) of the Ticket is always 3 days after the Date of creation of the Ticket ("Valid from"). Why so? Then w