Cloning a physical disk

I have a production Oracle VM 3.0.3 environment. One of my VMs uses a physical disk as its main boot disk, holding most of the Linux OS. I need to create a copy of this machine and would like to clone the physical disk 1) to make a new VM off of it and 2) as a potential ongoing backup. Ideally I could do this once a week.
What features are there in Oracle VM (either 3.0.3 or even 3.1.1) that would allow me to copy this disk without having to take the VM down? Since it's one of our main production servers, I cannot take it down each time I want to take a snapshot or clone the disk. Any ideas?
Thanks for any info!

The lack of a tape drive may hurt you in the long run if you update a lot of data on your existing mirror. You can use dd to do what you want, or use the meta* commands or VxVM if that is what the current environment uses. Note that if you update lots of data on your mirror, having the old boot volume usable may not help much, or it will result in a lot of rebuild time anyway.
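If you go the dd route, here is a minimal sketch, assuming the VM's boot LUN is visible on the Oracle VM server as a multipath device and that a spare LUN of at least the same size is available (the device names below are placeholders, not taken from your environment). Keep in mind that copying the disk while the VM is running gives at best a crash-consistent image, so an array-level snapshot/clone, or briefly quiescing the VM, is the safer option:
# dd if=/dev/mapper/<source_boot_lun_wwid> of=/dev/mapper/<target_lun_wwid> bs=1M conv=noerror,sync
The copy can then be attached to a new VM by listing it in that VM's vm.cfg, e.g. disk = ['phy:/dev/mapper/<target_lun_wwid>,xvda,w'], and for the weekly-backup case the same dd command can be run from cron, overwriting the target LUN each time.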

Similar Messages

  • Is it possible to mount a physical disk (/dev/mapper/ disk) on one of my Oracle VM servers

    I have a physical disk that I can see from multipath -ll, which shows up as follows:
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    That particular disk is visible in the OVM Manager GUI as a physical disk that I can present to one of my VMs, but currently it's not presented to any of them.
    I have about 50 physical LUNs that my Oracle VM server can see.  I believe I can see all of them from fdisk -l, but "dm-115" (which is from the multipath output above) doesn't show up.
    This disk has 3 usable partitions on it, plus a swap partition.
    I want to mount the 3rd partition temporarily on the OVM server itself, and I receive:
    # mount /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt
    mount: you must specify the filesystem type
    If I present the disk to a VM and then try to mount the /dev/xvdX3 partition, it of course works. (X3 represents the 3rd partition on whatever letter the disk shows up as.)
    Is this possible?

    It's more a question of the correct syntax. I cannot seem to figure out how to translate the /dev/mapper path above into what fdisk -l shows. Perhaps if I knew how fdisk and multipath output can be cross-referenced, I could mount the partition.
    I had already tried what you suggested. Here is the output if I present the disk to a VM and then mount the 3rd partition.
    # fdisk -l
    Disk /dev/xvdh: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvdh1   *           1          13      104391   83  Linux
    /dev/xvdh2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/xvdh3            2103       27783   206282632+  83  Linux
    /dev/xvdh4           27784       30394    20972857+   5  Extended
    /dev/xvdh5           27784       30394    20972826   83  Linux
    # mount /dev/xvdh3 /mnt  <-- no error
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda3            197G  112G   75G  60% /
    /dev/xvda5             20G 1011M   18G   6% /var
    /dev/xvda1             99M   32M   63M  34% /boot
    tmpfs                 2.0G     0  2.0G   0% /dev/shm
    /dev/xvdh3            191G   58G  124G  32% /mnt  <-- mounted just fine
    It's an ext3 partition:
    # df -T
    /dev/xvdh3
    ext3   199822096  60465024 129042944  32% /mnt
    Now if I go to my vm.cfg file, I can see the disk that is presented.
    My disk line contains
    disk = [...'phy:/dev/mapper/3600c0ff00012f4878be35c5401000000,xvdh,w', ...]
    Multipath shows that disk as "dm-115", but that name does not translate into the fdisk output:
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    I have around 50 disks on this server, and fdisk -l shows me many of the same size:
    # fdisk -l
    Disk /dev/sdp: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdp1   *           1          13      104391   83  Linux
    /dev/sdp2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdp3            2103       27783   206282632+  83  Linux
    /dev/sdp4           27784       30394    20972857+   5  Extended
    /dev/sdp5           27784       30394    20972826   83  Linux
    Disk /dev/sdab: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdab1   *           1          13      104391   83  Linux
    /dev/sdab2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdab3            1319       27783   212580112+  83  Linux
    /dev/sdab4           27784       30394    20972857+   5  Extended
    /dev/sdab5           27784       30394    20972826   83  Linux
    Disk /dev/sdac: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdac1   *           1          13      104391   83  Linux
    /dev/sdac2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdac3            2103       27783   206282632+  83  Linux
    /dev/sdac4           27784       30394    20972857+   5  Extended
    /dev/sdac5           27784       30394    20972826   83  Linux
    Disk /dev/sdad: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdad1   *           1          13      104391   83  Linux
    /dev/sdad2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdad3            1319       27783   212580112+  83  Linux
    /dev/sdad4           27784       30394    20972857+   5  Extended
    /dev/sdad5           27784       30394    20972826   83  Linux
    Disk /dev/sdae: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdae1   *           1          13      104391   83  Linux
    /dev/sdae2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdae3            2103       27783   206282632+  83  Linux
    /dev/sdae4           27784       30394    20972857+   5  Extended
    /dev/sdae5           27784       30394    20972826   83  Linux
    Disk /dev/sdaf: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdaf1   *           1          13      104391   83  Linux
    /dev/sdaf2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdaf3            2103       27783   206282632+  83  Linux
    /dev/sdaf4           27784       30394    20972857+   5  Extended
    /dev/sdaf5           27784       30394    20972826   83  Linux
    Disk /dev/sdag: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdag1   *           1          13      104391   83  Linux
    /dev/sdag2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdag3            2103       27783   206282632+  83  Linux
    /dev/sdag4           27784       30394    20972857+   5  Extended
    /dev/sdag5           27784       30394    20972826   83  Linux
    Disk /dev/dm-13: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-13p1   *           1          13      104391   83  Linux
    /dev/dm-13p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-13p3            2103       27783   206282632+  83  Linux
    /dev/dm-13p4           27784       30394    20972857+   5  Extended
    /dev/dm-13p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-25: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-25p1   *           1          13      104391   83  Linux
    /dev/dm-25p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-25p3            1319       27783   212580112+  83  Linux
    /dev/dm-25p4           27784       30394    20972857+   5  Extended
    /dev/dm-25p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-26: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-26p1   *           1          13      104391   83  Linux
    /dev/dm-26p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-26p3            2103       27783   206282632+  83  Linux
    /dev/dm-26p4           27784       30394    20972857+   5  Extended
    /dev/dm-26p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-27: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-27p1   *           1          13      104391   83  Linux
    /dev/dm-27p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-27p3            1319       27783   212580112+  83  Linux
    /dev/dm-27p4           27784       30394    20972857+   5  Extended
    /dev/dm-27p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-28: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-28p1   *           1          13      104391   83  Linux
    /dev/dm-28p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-28p3            2103       27783   206282632+  83  Linux
    /dev/dm-28p4           27784       30394    20972857+   5  Extended
    /dev/dm-28p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-29: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-29p1   *           1          13      104391   83  Linux
    /dev/dm-29p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-29p3            2103       27783   206282632+  83  Linux
    /dev/dm-29p4           27784       30394    20972857+   5  Extended
    /dev/dm-29p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-30: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-30p1   *           1          13      104391   83  Linux
    /dev/dm-30p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-30p3            2103       27783   206282632+  83  Linux
    /dev/dm-30p4           27784       30394    20972857+   5  Extended
    /dev/dm-30p5           27784       30394    20972826   83  Linux
    If I could translate the /dev/mapper name into the correct fdisk device, I think I could then mount it.
    If I try the same command as before with the -t option, it gives me this error:
    # mount -t ext3 /dev/mapper/3600c0ff00012f48791975b5401000000p3 /mnt
    mount: special device /dev/mapper/3600c0ff00012f48791975b5401000000p3 does not exist
    I know I am close here, and feel it should be possible, I am just missing something.
    Thanks for any help
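    One step that may be missing here (an assumption on my part, it is not confirmed in this thread) is that device-mapper multipath devices do not automatically get per-partition nodes the way /dev/sdX devices do; kpartx has to create them. Assuming kpartx is present on the OVM server, something along these lines should make the p3 device appear and let it mount:
    # kpartx -a /dev/mapper/3600c0ff00012f4878be35c5401000000
    # ls /dev/mapper/3600c0ff00012f4878be35c5401000000p*
    # mount -t ext3 /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt
    To cross-reference the WWID with the dm-N name that fdisk prints, dmsetup ls shows the major/minor pair for each mapping; the minor number is the N in dm-N. Remember to unmount and remove the partition mappings again (kpartx -d) before presenting the disk back to a VM.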

  • Physical disk for OS and read-only filesystem after live migration

    I am running OVM 3.1.1 connected to an EMC storage array.  I have a situation where physical disks are being used for the OS and binary filesystems (i.e. /u01, etc.) rather than virtual disks inside a repository.  When a VM is migrated from one host to another, I sometimes get a message stating that the size of the disk changed, and shortly afterwards the filesystem goes read-only and I have to reboot to fix the problem.  Is this an unsupported configuration, or do I have potential problems with my LUN mapping?  Has anyone seen this problem before?


  • What is the difference between Logical Disk and Physical Disk?

    Hi.
    When I run Performance Monitor, I get the Logical Disk Avg. Disk sec/Write counter and the Physical Disk Avg. Disk sec/Write counter.
    But I see different Avg. values and Max. values,
    even though the Logical Disk and Physical Disk have a one-to-one mapping.
    Why do I get this result?
    On the other hand, the Logical Disk Avg. Disk sec/Read and Physical Disk Avg. Disk sec/Read counters give me the same Avg. and Max. values.

    Physical Disk refers to an actual physical HDD (or array in a hardware RAID setup), whereas Logical Disk refers to a Volume that has been created on that disk.
    So if you have one disk with one volume created on it then the values are likely to be 1 to 1, but if you have multiple volumes on the disk, for instance a physical disk with C:\ and D:\ volumes running on it, then the logical disks relate to c:\ and d:\
    rather than the disk they're running on.
    See
    http://blogs.technet.com/b/askcore/archive/2012/03/16/windows-performance-monitor-disk-counters-explained.aspx for a more in depth explanation.

  • Physical disk not showing up in Disk Utility

    I'm in the process of selling my mac mini.
    I went to the disk utility and erased per the instructions here:
    http://support.apple.com/en-us/HT201065
    The only thing I did differently was I adjusted the security settings to do a pass of writing zeroes. 
    Then I went to reinstall OS X per the instructions and there was no drive available on the "select the disk where you want to install os x" screen.
    I tried to do a restore from backup and got the same symptom - no disk to restore to. Also doesn't show when I go to choose a startup disk.
    I went back to the Disk Utility and noticed some weirdness:
    - The Macintosh HD logical volume is showing in the upper left, but there's no physical disk showing as its parent! See attached screenshot - I can repair and verify the logical volume without any problems. It also says that the volume is online and has 1.1 TB as it should. 
    I've also tried resetting the NVRAM and PRAM, with no luck
    Any help is much appreciated!!!

    Hi PabloHoney1979,
    When I check my Mac I see the same Macintosh HD, Logical Volume Group with an indented partition under it. So it looks like you are looking at the physical disk without a partition on it.  Try the Partition tab and create a partition on the drive.
    Disk Utility 12.x: Partition a disk
    3.  In the Mac OS X Utilities window, select Disk Utility, and then click Continue.
    4.  Select the disk that you want to partition and click Partition.
    5.  Choose the number of partitions from the Volume Scheme pop-up menu.
    6.  Click each partition and type a name for it, choose a format, and type a size. You can also drag the divider between the partitions to change their sizes. If a partition’s name has an asterisk beside it, it’s shown larger than its actual size in order to display its name clearly.
    7.  If you’ll be using a partition as a Mac OS X startup disk, click Options, and choose the GUID partition scheme.
    8.  Click Apply.
    Take care,
    Nubz

  • Can't create a repository with a local physical disk

    Hi,
    I'm using Oracle VM Manager 3.0.3.
    I created a non-clustered server pool with one server. That server has 2 identical 500GB SATA internal drives and 1 750GB eSATA drive in AHCI mode. The 750GB eSATA drive is the primary boot drive and hosts the MBR with Windows 7. One of the 500GB SATA internal drives hosts Oracle VM Server. The other 500GB SATA drive has no partition, and I want to use it as a local OVM repository. When I boot the server I can select to boot Windows 7 or Oracle VM Server with no issue.
    After the OVM server boots, OVM Manager 3.0.3 can discover it along with 2 physical disks: the 500GB SATA internal drive (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750) I want to use as a repository and the eSATA 750GB drive. OVM Manager reports these physical disks as SAN type with no file system.
    When I create my local repository for the server pool, I select the physical disk SATA_WDC_WD5000BEKT-_WD-WX41A11X1750 and click the "Next" button. But the job always fails with the following details:
    Job Construction Phase
    begin()
    Appended operation 'File System Construct' to object '0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)'.
    Appended operation 'Repository Construct' to object '0004fb0000030000182872191a913bac (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [LocalFileServer] 0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)
    Operation: File System Construct
    Object (CREATED): [LocalFileSystem] 0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)
    Object (CREATED): [Repository] 0004fb0000030000182872191a913bac (MyLocalRepository)
    Operation: Repository Construct
    Object (IN_USE): [StorageElement] 0004fb00001800004e793d6f05fced03 (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)
    Job Running Phase at 12:18 on Tue, Jan 31, 2012
    Job Participants: [00:25:22:dc:0a:ee:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff (OracleVS01)]
    Actioner
    Starting operation 'File System Construct' on object '0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)'
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:868)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:864)
    ... 18 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
    ... 21 more
    FailedOperationCleanup
    Starting failed operation 'File System Construct' cleanup on object 'fs_MyLocalRepository'
    Complete rollback operation 'File System Construct' completed with direction=fs_MyLocalRepository
    Rollbacker
    Objects To Be Rolled Back
    Object (IN_USE): [LocalFileServer] 0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)
    Object (CREATED): [LocalFileSystem] 0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)
    Object (CREATED): [Repository] 0004fb0000030000182872191a913bac (MyLocalRepository)
    Object (IN_USE): [StorageElement] 0004fb00001800004e793d6f05fced03 (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=5063 method=addTransactionIdentifier accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=4954 method=createFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setFoundryContext accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=onPersistableCreate accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setRollbackLifecycleState accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setRefreshed accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setBackingDevices accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setUuid accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setPath accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setSimpleName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=addFileServer accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setStorageDevice accessLevel=6
    Class=StorageElementDbImpl vessel_id=4972 method=addLayeredFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=setSimpleName accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=createRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setName accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setFoundryContext accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=onPersistableCreate accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setRollbackLifecycleState accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setRefreshed accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setDom0Uuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setSharePath accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setFileSystem accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=addRepository accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setManagerUuid accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setVersion accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=addJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setSimpleName accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=setDescription accessLevel=6
    Class=InternalJobDbImpl vessel_id=5063 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=5063 method=setAssociatedHandles accessLevel=6
    Class=InternalJobDbImpl vessel_id=5063 method=setFailedOperation accessLevel=6
    Class=LocalFileServerDbImpl vessel_id=4954 method=nextJobOperation accessLevel=6
    Class=LocalFileSystemDbImpl vessel_id=5072 method=nextJobOperation accessLevel=6
    Class=RepositoryDbImpl vessel_id=5078 method=nextJobOperation accessLevel=6
    Class=StorageElementDbImpl vessel_id=4972 method=nextJobOperation accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:868)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
    at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
    at sun.reflect.GeneratedMethodAccessor867.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
    at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
    at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:864)
    ... 18 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATAWDC_WD5000BEKT-_WD-WX41A11X1750'
    Tue Jan 31 12:18:46 CST 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
    ... 21 more
    End of Job
    I don't understand why I always get the error message 'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750' since that disk is blank with no partition.
    Please help me.
    Daniel.

    Hi Avi,
    Blkid returns the following:
    /dev/sda1: LABEL="/boot" UUID="00b66c82-c4e2-4067-bbc4-f22899fce856" TYPE="ext3"
    /dev/sda2: LABEL="/" UUID="b6bc51a3-884a-4d9d-8bc1-8c68792cbe57" TYPE="ext3"
    /dev/sda3: TYPE="swap" LABEL="SWAP-sda3" UUID="58f3de58-a13b-4239-9894-8dffa0856a1a"
    /dev/sdb1: TYPE="ntfs"
    /dev/sdb2: TYPE="ntfs"
    /dev/sdc: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
    /dev/mapper/SATA_ST750LX003-1AC1_W2001LHFp2: TYPE="ntfs"
    /dev/mapper/SATA_ST750LX003-1AC1_W2001LHFp1: TYPE="ntfs"
    /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
    /dev/sdd1: SEC_TYPE="msdos" LABEL="CLE" UUID="827A-C3D5" TYPE="vfat"
    "CLE" is an attached USB jump drive.
    Thanks.
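    Looking at that blkid output, the likely culprit (an inference, not something confirmed in the thread) is that the WD disk still carries an old ocfs2 signature: it shows TYPE="ocfs2" with the same label and UUID as /dev/sdc, so the storage plugin refuses to format it even though there is no partition table. One way to clear it, assuming the disk really holds nothing you need (this is destructive):
    # zero the start of the disk where the stale ocfs2 superblock lives, then verify it is gone
    # dd if=/dev/zero of=/dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750 bs=1M count=100
    # blkid /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750
    After that, rescan the physical disks in OVM Manager and retry the repository creation.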

  • Getting only name of the physical disk from cluster group

    Hi experts,
    I want to know the name of my cluster disk, for example `Cluster Disk 2`, available in a cluster group. Can I write a command something like the one below?
    Get-ClusterGroup –Name FileServer1 | Get-ClusterResource   where  ResourceType is Physical Disk
    Thanks
    Sid

    Thanks Chaib... it's working.
    However, I tried this:
    Get-ClusterGroup ClusterDemoRole | Get-ClusterResource | fl name | findstr "physical Disk"
    which is also working.
    sid

  • How to determine physical disk size on solaris

    I would like to know whether there is a simple method available for determining physical hard disk sizes on Sun SPARC machines. On HP-based machines it is simple:
    1. run "ioscan -fnC disk" - to find all disk devices and their raw device target addresses, i.e. /dev/rdsk/c0t2d2
    2. run "diskinfo /dev/rdsk/c0t2d2" - to display the attributes of the physical disk, including the size in Kbytes.
    This simple process allows me to create simple scripts that I can use to automate the collation of audit data for a large number of HP machines.
    On Sun-based machines I've looked at the prtvtoc, format, and devinfo commands and have had no joy. Methods and suggestions will be much appreciated.

    OK,
    format should say something like the following. Type format:
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
    If this is not a Sun disk and you do not get the info,
    select the required disk, then select partition and then print. This will display what you need.
    Hope this helps
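    If you need this in a script across many machines, a non-interactive variation (just a sketch, assuming the disks are labelled) is to let format list the disks and then work out the size from prtvtoc, whose header reports bytes/sector, sectors/cylinder and accessible cylinders; multiplying the three gives the usable size:
    # echo | format
    # prtvtoc /dev/rdsk/c0t0d0s2
    On more recent Solaris releases, iostat -En typically also prints a Size field for each disk along with the vendor and model strings.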

  • Unable to determine Physical Disk from the VM manager : 3.2.1

    Unable to determine Physical Disk from the VM manager :
    OVM manager : 3.2.1.516
    VM Server : Available disks on vm server
    # fdisk -l
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 1 13 104391 83 Linux
    /dev/sda2 14 405 3148740 83 Linux
    /dev/sda3 406 536 1052257+ 82 Linux swap / Solaris
    Disk /dev/sdb: 80.0 GB, 80000000000 bytes
    255 heads, 63 sectors/track, 9726 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 13 104391 83 Linux
    /dev/sdb2 14 9328 74822737+ 83 Linux
    /dev/sdb3 9329 9459 1052257+ 82 Linux swap / Solaris
    Disk /dev/dm-0: 80.0 GB, 80000000000 bytes
    255 heads, 63 sectors/track, 9726 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/dm-0p1 * 1 13 104391 83 Linux
    /dev/dm-0p2 14 9328 74822737+ 83 Linux
    /dev/dm-0p3 9329 9459 1052257+ 82 Linux swap / Solaris
    Disk /dev/dm-1: 106 MB, 106896384 bytes
    255 heads, 63 sectors/track, 12 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/dm-2: 76.6 GB, 76618483200 bytes
    255 heads, 63 sectors/track, 9315 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk /dev/dm-2 doesn't contain a valid partition table
    Disk /dev/dm-3: 1077 MB, 1077511680 bytes
    255 heads, 63 sectors/track, 131 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk /dev/dm-3 doesn't contain a valid partition table
    #

    Thanks for your prompt response.
    I am aware that we can't use any free space on the disk where the VM Server is installed. Hence the additional storage added to the server:
    1. Installed VM Server on the 80GB HD (/dev/sdb)
    2. Later, added a 500GB disk just for storage (no installation, just the disk) (/dev/sda)
    The same configuration worked fine in 3.0; now with 3.2.1 I am not able to identify the additional disk that was added later.
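    One thing that stands out in the fdisk output above (this is only a guess, not a confirmed 3.2.1 behaviour) is that /dev/sda still carries a leftover Linux installation: boot, root and swap partitions. If the old partition table and filesystem signatures are what stop Manager from offering the disk, wiping them and rescanning may help, provided nothing on /dev/sda is still needed:
    # this destroys the partition table on /dev/sda - double-check the device name first
    # dd if=/dev/zero of=/dev/sda bs=1M count=10
    # blockdev --rereadpt /dev/sda
    Then rescan the server's physical disks from OVM Manager and check whether the 500GB disk shows up.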

  • Physical disk space after ORACLE reorganization with brtools

    Hi all,
    I perform a reorganization after archiving with brtools. DB02 shows that the free size in the tablespaces has increased, but the physical disk space has not changed.
    Can I make online changes with brtools to reclaim the physical disk space of the reorganized tablespaces?
    system ver.:  ECC 5.0,   Oracle 9.2.0.7.0, BRTOOLS 6.40 (52)
    Thanks for help!
    Edited by: Andrey Burakov on Jun 14, 2011 7:39 AM

    Andrey Burakov wrote:
    Hi all,
    > I perform a reorganization after archiving with brtools. DB02 shows that the free size in the tablespaces has increased, but the physical disk space has not changed.
    > Can I make online changes with brtools to reclaim the physical disk space of the reorganized tablespaces?
    > system ver.:  ECC 5.0,   Oracle 9.2.0.7.0, BRTOOLS 6.40 (52)
    > Thanks for help!
    >
    > Edited by: Andrey Burakov on Jun 14, 2011 7:39 AM
    Hi Andrey,
    You need to reorganize the whole tablespace, not only the tables, in order to release the free space back to the OS.
    The prerequisite for online reorganization is:
    Oracle 9.2 database or higher.
    The prerequisite for online conversion of the LONG-fields into LOB-fields:
    Oracle 10g database or higher.
    Online conversion is supported for SAP kernel 7.00 or higher. For SAP kernel 6.40,
    it is supported only in a restricted manner (see "Caution" in point III). Conversion is
    not supported for SAP systems with a kernel lower than 6.40.
    Check the note 646681 - Reorganizing tables with BRSPACE
    Best regards,
    Orkun Gedik
    Edited by: Orkun Gedik on Jun 14, 2011 9:10 AM

  • Physical disk IO size smaller than fragment block filesystem size ?

    Hello,
    In a default UFS filesystem we have an 8K block size (bsize) and a 1K fragment size (fsize). In this scenario I thought all filesystem I/O would be 8K (or greater), but never smaller than the fragment size (1K). Since a UFS fragment/block is always several adjacent sectors on disk (with a 512B sector), I expected all physical disk I/O, like the filesystem I/O, to be 1K or greater.
    But with the bitesize.d script from the DTrace Toolkit I can see I/O with a 512B size.
    What is wrong in my assumptions, or what is the explanation?
    Thank you very much in advance!

    rar wrote:
    Like Jim indicated to me in the unix.com forum...
    That cross-post thread happens to be:
    http://www.unix.com/unix-advanced-expert-users/215823-physical-disk-io-size-smaller-than-fragment-block-filesystem-size.html
    You could have pasted the URL to be polite ...
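    As a quick way to dig further, bitesize.d aggregates by process; a one-line variant (a sketch, assuming DTrace privileges on the box) aggregates physical I/O sizes per device, and adding stack() to the aggregation key would show which kernel path issues the 512-byte requests:
    # dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'
    That makes it easy to see where the sub-fragment-sized requests land and how frequent they are compared with ordinary file-data I/O.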

  • Cluster resource ' Disk Name' of type 'Physical Disk' in clustered role 'Role Name' failed.

    We have been observing issues with our file cluster (Windows Server 2012 R2 Std, clustered with 2 nodes) where the file server becomes unresponsive to SMB access requests and event ID 30809 is logged in Microsoft-Windows-SMBClient/Connectivity.
    When we try to fail over the role, the clustered disks fail to go offline with event ID 1069: Cluster resource 'Disk Name' of type 'Physical Disk' in clustered role 'Role Name' failed. We have to forcefully reboot the node that faces this issue.
    It then works properly for a week and we get the same issue again; this happens with all the disks in the different file server roles.
    Regards Ajinkya Ghare MCITP-Server Administrator | MCTS

    We didn't find anything in the cluster logs. In the WitnessClientAdmin logs we found errors related to a failed registration:
    Witness Client failed registration request for \\fileserver\sharename with error (The request is not supported.)
    Regards Ajinkya Ghare MCITP-Server Administrator | MCTS

  • Shared storage is not showing in the server pool physical disks

    Hi, I have added a couple of images to give you an exact picture of what I am doing, please have a look.
    OVM can see the shared storage from Openfiler:
    http://www.picpanda.com/viewer.php?file=u1pf8hre1oqufkx6vyr3.gif
    The OVM server pool (physical disk) can see the shared storage name:
    http://www.picpanda.com/viewer.php?file=0h2dn48x6gh5avr6fob.gif
    OVM does not show the physical disk from the shared storage:
    http://www.picpanda.com/viewer.php?file=c3njd7bg4exyjpymsit.gif
    If I remember correctly, when I tried with an NFS share from Openfiler it worked before, but this time I am trying with an iSCSI target.
    Is there anything I have to do to make it work?
    Thanks

    Fosiul wrote:
    if I remember correctly, when I tried with an NFS share from Openfiler it worked before, but this time I am trying with an iSCSI target... is there anything I have to do to make it work?
    Check to see if the physical disk appears under the default volume group for your SAN. If not, refresh the SAN and wait until it does. Also, make sure you have an admin server configured for that SAN and that your Oracle VM servers are in the default access group.

  • Is it safe to delete LUN/physical disk

    Hi all,
    I recently attached a FC storage system, which had been attached to my former OVM 2 cluster, to my OVM 3 cluster. This storage system provided a 5 TB RAID5 LUN. When I changed the zone on my FC network, the LUN became visible in OVM 3.
    Now I wanted to change the LUN from RAID5 to a 4x RAID1 LUN, providing less capacity but more IOPS. So I deleted the LUN and created a new one.
    OVMM now shows the old 5 TB LUN and the new 3 TB LUN, and I cannot seem to get rid of the "old" LUN. I have already restarted all my VM servers and none of them shows the 5 TB LUN any more, so why does OVMM still show this LUN?
    I also tried to rescan the physical disks in the hardware section, but the LUN sticks around… I am a bit reluctant to simply delete the nonexistent LUN.
    Any advice, anyone?
    Thanks

    budachst wrote:
    but I am a bit reluctant to delete the nonexistent LUN.
    Delete it. Oracle VM doesn't know that the LUN was purposefully deleted and is waiting for it to come back. It'll probably go into an Error state at some point as well. Keep in mind that the "Unmanaged" part of the Storage Array means that Oracle VM is not doing any pro-active management of that storage. You have to do that manually.

  • How to get information (using APIs) about physical disks on Solaris 8, 9?

    I want to extract the physical disk vendor name, model and all other information.
    Which Solaris APIs can be used to get this information?

    Hi,
    You can use:
    # iostat -En
    This will give you Vendor / Product / Revision / Serial Number of disks installed in the system relating to their logical name (c0t0d0 etc etc)
    HTH
    Tom
