Sun 7420 (Storage Pool creation)

We have two Sun 7420 storage controllers managing one disk shelf.
We have divided the existing disk shelf into two pools so that we can use both controllers in active-active mode.
Is there any way to configure both controllers in active-active mode without splitting the shelf into two pools?
The goal is to get the maximum bandwidth for a single share on a single pool.

No. With two controllers (cluster mode), each controller has to manage its own pool. In your case, with only one disk shelf, the best option is to build a single mirrored pool and run the S7420c in active/passive mode.

Similar Messages

  • Creation of new storage pool on iomega ix12-300r failed

    I have a LenovoEMC ix12-300r (iomega version).
    IX12-300r serial number: 2JAA21000A
    There is at present one storage pool (SP0) consisting of 8 drives (RAID5).
    HDD 1-8 (existing SP0): ST31000520AS CC38
    I have acquired 4 new Seagate ST3000DM001 drives as per the recommendation on this forum:
    https://lenovo-na-en.custhelp.com/app/answers/detail/a_id/33028/kw/ix12%20recommended%20hdd 
    I want to make a new storage pool with these 4 drives:
    HDD9-12 (new HDD and SP1): ST3000DM001-1CH166 CC29
    I have used diskpart to clean all 4 drives and the IX12-300r can see the drives just fine.
    When I try to make a new storage pool, naming it SP1 (the only existing storage pool is named SP0), I get an error: "Storage Pool Creation Failed"
    Please advise as to how I can get these drives up and running.
    Regards
    Kristen Thiesen
    adena IT

    I have pulled the 8 HDDs from storage pool 0.
    Then I rebooted with the 4 new HDDs in slots 9-12.
    Result: the device goes to http://<device IP>/manage/restart.html with the message: Confirmation required. You must authorize overwrite existing data to start using this device. Are you sure you want to overwrite existing data? [yes] / [no]
    I then answered yes four times, anticipating that each new drive needs an acceptance, but the dialog just keeps popping up...
    Then I shut down the device and repositioned the 4 new drives to slots 1-4, but the same thing happened...
    Any suggestions?

  • Setting Up Storage Pool - Invalid Parameter

    Hey all,
    I've run into an issue when trying to create a Storage Pool in Windows Server 2012 R2. I have two 4TB drives and I'm trying to create a pool out of them (the intent is to mirror), but when I go through the pool creation wizard I simply get the following error:
    "Could not create storage pool. Invalid Parameter"
    Does anyone have any hints on where to start diagnosing this? Both drives are wiped, both drives can be pooled, and both have different uniqueIDs.
    Thanks in advance!

    Hi,
    Which type of physical disks are you adding to the Storage Pool? Are they virtual hard drives (VHDs) created by Windows Azure? If so, the physical disk size advertised by Azure VHDs is not compatible with Windows Server 2012 R2 Storage Spaces.
    For more detailed information, you could refer to the thread below:
    Error creating new Storage Pool in 2012R2 "invalid parameter"
    If the Windows Server 2012 R2 machine is a virtual machine on Windows Azure, please also check whether there is any error message in the Event Log.
    Best Regards,
    Mandy 
    Thanks for the reply,
    The Windows Server 2012 R2 machine is my own physical server and isn't virtualized. I'm simply trying to create a new storage pool from two physical HDDs; they are Seagate 4TB disks. Both disks are listed in the storage pool creation wizard. I also checked the Event Log, but there's nothing there; the lack of proper error information is making this a bit challenging to figure out.
    I've also tried using PowerShell to do the work:
    New-StoragePool -FriendlyName POOL1 -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
    The same error results:
    New-StoragePool : Invalid Parameter
    At line:1 char:1
    + New-StoragePool -FriendlyName POOL1 -StorageSubSystemFriendlyName (Get-StorageSu ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidArgument: (StorageWMI:ROOT/Microsoft/...torageSubSystem) [New-StoragePool], CimException
        + FullyQualifiedErrorId : StorageWMI 5,New-StoragePool

  • Dynamic Connection Pool Creation Failing in a cluster

    Hi,
    I am trying to create a connection pool in a clustered environment. This connection pool is created lazily behind a Stateless Session Bean. We first attempt to determine whether the connection pool exists using JdbcServices.poolExists(someName), and create it if it does not exist. A failure occurs on creation because it looks like the connection pool might have been created by a bean on a different WebLogic VM instance. Is there any way to dynamically create a connection pool and make it visible to the whole cluster? Thanks in advance for any help.
    Michael Dolbear
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Checking existence of connection pool ContentConnectionPool requested by user guest>
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Creating connection pool ContentConnectionPool requested by user guest>
    weblogic.common.ResourceException: weblogic.management.MBeanCreationException:
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException: domain:Name=ContentConnectionPool,Type=JDBCConnectionPool
    at com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:134)
    at com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.java:2352)
    at com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:874)
    at weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBeanServerImpl.java:181)
    at weblogic.management.internal.Helper.createMBean(Helper.java:376)
    at weblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(RemoteMBeanServerImpl.java:278)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:635)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:621)
    at weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBeanHomeImpl.java:397)
    at weblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    --------------- nested within: ------------------
    weblogic.management.MBeanCreationException: - with nested exception:
    [javax.management.InstanceAlreadyExistsException: domain:Name=ContentConnectionPool,Type=JDBCConnectionPool]
    at weblogic.management.internal.Helper.createMBean(Helper.java:383)
    at weblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(RemoteMBeanServerImpl.java:278)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:635)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:621)
    at weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBeanHomeImpl.java:397)
    at weblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    - with nested exception:
    [javax.management.InstanceAlreadyExistsException: domain:Name=ContentConnectionPool,Type=JDBCConnectionPool
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException: domain:Name=ContentConnectionPool,Type=JDBCConnectionPool
    at com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:134)
    at com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.java:2352)
    at com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:874)
    at weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBeanServerImpl.java:181)
    at weblogic.management.internal.Helper.createMBean(Helper.java:376)
    at weblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(RemoteMBeanServerImpl.java:278)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:635)
    at weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.java:621)
    at weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBeanHomeImpl.java:397)
    at weblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    at weblogic.jdbc.common.internal.ConnectionPool.dynaStartup(ConnectionPool.java:472)
    at weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.java:727)
    at weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.java:709)
    at com.thc.ids.inf.framework.opf.rdbms.datastore.ConnectionPoolCreator.createConnectionPool(ConnectionPoolCreator.java:82)
    at com.thc.ids.inf.framework.opf.datastore.DataStoreRepository.createConnectionPoolIfNonExistent(DataStoreRepository.java:211)
    at com.thc.ids.inf.util.persistence.content.ConnectionPoolInitializer.createConnectionPoolIfNeeded(ConnectionPoolInitializer.java:48)
    at com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.createConnectionPoolIfNeeded(Unknown Source)
    at com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.getImage(Unknown Source)
    at com.thc.ids.inf.services.business.crs.ContentRetrievalService.getImage(Unknown Source)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.thc.ids.inf.util.reflection.MethodDescription.invokeMethod(MethodDescription.java:181)
    at com.thc.ids.inf.util.reflection.MethodInvocation.invoke(MethodInvocation.java:79)
    at com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean.invoke(ServiceBean.java:186)
    at com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl.invoke(ServiceBean_bjedmi_EOImpl.java:37)
    at com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.java:93)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Checking existence of connection pool ContentConnectionPool requested by user guest>
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Creating connection pool ContentConnectionPool requested by user guest>
    weblogic.common.ResourceException: weblogic.management.MBeanCreationException: [identical stack trace repeated for the second attempt]

    The only way to do it is using MBeans. You could search this newsgroup
    for "JDBCConnectionPoolMBean" to get an idea of how it could be done.
    Slava
    P.S. http://search.bea.com/weblogic/gonews
    "Mark Mortensen" <[email protected]> wrote in message
    news:[email protected]...
    Slava,
    I am working with Mike on this issue and wanted to add some more clarifications. We have a two-server cluster where one of the EJBs on one of the Managed Servers creates the connection pool. The problem comes in when a request comes to the second server in the cluster. The connection pool is created by the first server, but it is only assigned to the first server in the Targets section on the console. It isn't assigned to the cluster. Is there a way to programmatically assign the pool to the cluster instead of just the server that created the pool?
    -Mark
    "Michael Dolear" <[email protected]> wrote:
    Hi Slava,
    Here is what I am doing. The code is spread across a couple of classes.
    I am using
    what was described in BEA's doc on dynamic connection pool creation.
    I didn't
    see anything about MBean apis required:
    * Dynamically create a connection pool using
    aConnectionPoolProperties.
    Please
    see ConnectionPoolCreator
    * for the required properties that must be sent in.
    * @param aConnectionPoolProperties
    public synchronized void createConnectionPoolIfNonExistent(Properties
    aConnectionPoolProperties)
    throwsPersistenceFrameworkInitializationException
    ConnectionPoolCreator tempPoolCreator;
    Pool tempPool;
    tempPoolCreator = new ConnectionPoolCreator();
    tempPool =tempPoolCreator.getConnectionPool(aConnectionPoolProperties);
    if (tempPool == null)
    tempPoolCreator.createConnectionPool(aConnectionPoolProperties);
    >>
    * Create Connection pool given the properties that I have beenconfigured
    with
    * @return Pool
    public Pool createConnectionPool(Properties aConnectionProperties)
    throwsPersistenceFrameworkInitializationException
    JdbcServices tempServices;
    try
    tempServices = this.lookupJdbcServices();
    tempServices.createPool(aConnectionProperties);
    returntempServices.getPool(aConnectionProperties.getProperty(CONNECTION_POOL_NAME)
    catch (Exception e)
    PersistenceFrameworkUtils.logException(e);
    throw newPersistenceFrameworkInitializationException(e.getMessage());
    * Answer a connectionPool or null.
    * @return Pool
    public Pool getConnectionPool(Properties aConnectionProperties)
    throwsPersistenceFrameworkInitializationException
    JdbcServices tempServices;
    try
    tempServices = this.lookupJdbcServices();
    if (tempServices.poolExists(
    aConnectionProperties.getProperty(CONNECTION_POOL_NAME)))
    return tempServices.getPool(
    aConnectionProperties.getProperty(CONNECTION_POOL_NAME));
    else
    return null;
    catch (Exception e)
    PersistenceFrameworkUtils.logException(e);
    throw
    newPersistenceFrameworkInitializationException(e.getMessage());
    "Slava Imeshev" <[email protected]> wrote:
    Hi Michael,
    Could you show us the code? Without looking at the code
    I can only say that JdbcServices.poolExists(someName)
    returns true only in case the pool is up and running.
    If the connection pool MBean was created but not assigned
    a target, subsequent tries to create it would fail.
    Regards,
    Slava Imeshev
    "Michael Dolbear" <[email protected]> wrote in message
    news:[email protected]...
    Hi,
    I am trying to create a connection pool in a clustered environment.This
    connection
    pool is created lazily behind a Stateless Session Bean. We first
    attempt
    to determine
    whether, the connection pool exists usingJdbcServices.poolExists(someName), and
    create it if it does not exist. A failure occurs on creation becauseit
    looks
    like the connection pool might have been created by a bean on a
    different
    weblogic
    VM instance. Is there any way to dynamically create a connection pooland
    make
    it visible to the whole cluster? Thanks in advance for any help.
    Michael
    Dolbear
    Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Checking existence ofconnection pool
    Content
    ConnectionPool requested by user guest>
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Creating connection poolContentConnectionPoo
    l requested by user guest>
    weblogic.common.ResourceException:weblogic.management.MBeanCreationException:
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=JD
    BCConnectionPool
    at
    com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:1
    34
    at
    com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.
    ja
    va:
    2352)
    at
    com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:
    87
    4)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBe
    an
    Ser
    verImpl.java:181)
    atweblogic.management.internal.Helper.createMBean(Helper.java:376)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    --------------- nested within: ------------------
    weblogic.management.MBeanCreationException: - with nested exception:
    [javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=J
    DBCConnectionPool]
    atweblogic.management.internal.Helper.createMBean(Helper.java:383)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    - with nested exception:
    [javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=J
    DBCConnectionPool
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=JD
    BCConnectionPool
    at
    com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:1
    34
    at
    com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.
    ja
    va:
    2352)
    at
    com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:
    87
    4)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBe
    an
    Ser
    verImpl.java:181)
    atweblogic.management.internal.Helper.createMBean(Helper.java:376)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    at
    weblogic.jdbc.common.internal.ConnectionPool.dynaStartup(ConnectionPool.ja
    va
    :47
    2)
    at
    weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.jav
    a:
    727
    at
    weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.jav
    a:
    709
    at
    com.thc.ids.inf.framework.opf.rdbms.datastore.ConnectionPoolCreator.create
    Co
    nne
    ctionPool(ConnectionPoolCreator.java:82)
    at
    com.thc.ids.inf.framework.opf.datastore.DataStoreRepository.createConnecti
    on
    Poo
    lIfNonExistent(DataStoreRepository.java:211)
    at
    com.thc.ids.inf.util.persistence.content.ConnectionPoolInitializer.createC
    on
    nec
    tionPoolIfNeeded(ConnectionPoolInitializer.java:48)
    at
    com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.c
    re
    ate
    ConnectionPoolIfNeeded(Unknown Source)
    at
    com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.g
    et
    Ima
    ge(Unknown Source)
    at
    com.thc.ids.inf.services.business.crs.ContentRetrievalService.getImage(Unk
    no
    wn
    Source)
    at java.lang.reflect.Method.invoke(Native Method)
    at
    com.thc.ids.inf.util.reflection.MethodDescription.invokeMethod(MethodDescr
    ip
    tio
    n.java:181)
    at
    com.thc.ids.inf.util.reflection.MethodInvocation.invoke(MethodInvocation.j
    av
    a:7
    9)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean.invoke(ServiceBean.
    ja
    va:
    186)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl.invok
    e(
    Ser
    viceBean_bjedmi_EOImpl.java:37)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl_WLSke
    l.
    inv
    oke(Unknown Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.ja
    va
    :93
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Checking existence ofconnection
    pool Content
    ConnectionPool requested by user guest>
    <Mar 28, 2002 5:35:08 PM MST> <Info> <JDBC> <Creating connection poolContentConnectionPoo
    l requested by user guest>
    weblogic.common.ResourceException:weblogic.management.MBeanCreationException:
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=JD
    BCConnectionPool
    at
    com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:1
    34
    at
    com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.
    ja
    va:
    2352)
    at
    com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:
    87
    4)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBe
    an
    Ser
    verImpl.java:181)
    atweblogic.management.internal.Helper.createMBean(Helper.java:376)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    --------------- nested within: ------------------
    weblogic.management.MBeanCreationException: - with nested exception:
    [javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=J
    DBCConnectionPool]
    atweblogic.management.internal.Helper.createMBean(Helper.java:383)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    - with nested exception:
    [javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=J
    DBCConnectionPool
    Start server side stack trace:
    javax.management.InstanceAlreadyExistsException:domain:Name=ContentConnectionPool,Type=JD
    BCConnectionPool
    at
    com.sun.management.jmx.RepositorySupport.addMBean(RepositorySupport.java:1
    34
    at
    com.sun.management.jmx.MBeanServerImpl.internal_addObject(MBeanServerImpl.
    ja
    va:
    2352)
    at
    com.sun.management.jmx.MBeanServerImpl.registerMBean(MBeanServerImpl.java:
    87
    4)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.registerMBean(RemoteMBe
    an
    Ser
    verImpl.java:181)
    atweblogic.management.internal.Helper.createMBean(Helper.java:376)
    atweblogic.management.internal.Helper.createAdminMBean(Helper.java:291)
    at
    weblogic.management.internal.RemoteMBeanServerImpl.createAdminMBean(Remote
    MB
    ean
    ServerImpl.java:278)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    635)
    at
    weblogic.management.internal.MBeanHomeImpl.createAdminMBean(MBeanHomeImpl.
    ja
    va:
    621)
    at
    weblogic.management.internal.AdminMBeanHomeImpl.createAdminMBean(AdminMBea
    nH
    ome
    Impl.java:397)
    atweblogic.management.internal.AdminMBeanHomeImpl_WLSkel.invoke(Unknown
    Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    at
    weblogic.jdbc.common.internal.ConnectionPool.dynaStartup(ConnectionPool.ja
    va
    :47
    2)
    at
    weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.jav
    a:
    727
    at
    weblogic.jdbc.common.internal.ConnectionPool.createPool(ConnectionPool.jav
    a:
    709
    at
    com.thc.ids.inf.framework.opf.rdbms.datastore.ConnectionPoolCreator.create
    Co
    nne
    ctionPool(ConnectionPoolCreator.java:82)
    at
    com.thc.ids.inf.framework.opf.datastore.DataStoreRepository.createConnecti
    on
    Poo
    lIfNonExistent(DataStoreRepository.java:211)
    at
    com.thc.ids.inf.util.persistence.content.ConnectionPoolInitializer.createC
    on
    nec
    tionPoolIfNeeded(ConnectionPoolInitializer.java:48)
    at
    com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.c
    re
    ate
    ConnectionPoolIfNeeded(Unknown Source)
    at
    com.thc.ids.inf.services.business.crs.spi.oracle.OracleRetrievalProvider.g
    et
    Ima
    ge(Unknown Source)
    at
    com.thc.ids.inf.services.business.crs.ContentRetrievalService.getImage(Unk
    no
    wn
    Source)
    at java.lang.reflect.Method.invoke(Native Method)
    at
    com.thc.ids.inf.util.reflection.MethodDescription.invokeMethod(MethodDescr
    ip
    tio
    n.java:181)
    at
    com.thc.ids.inf.util.reflection.MethodInvocation.invoke(MethodInvocation.j
    av
    a:7
    9)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean.invoke(ServiceBean.
    ja
    va:
    186)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl.invok
    e(
    Ser
    viceBean_bjedmi_EOImpl.java:37)
    at
    com.thc.ids.inf.framework.service.J2EE.ejb.ServiceBean_bjedmi_EOImpl_WLSke
    l.
    inv
    oke(Unknown Source)
    atweblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:298)
    at
    weblogic.rmi.cluster.ReplicaAwareServerRef.invoke(ReplicaAwareServerRef.ja
    va
    :93
    at
    weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:267
    at
    weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java
    :2
    2)
    at
    weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)

  • FC disk not available after storage repository creation failure

    Hi,
    A storage repo creation job failed because of some network problems. After I fixed the network problem, I was able to create another storage repo with a different FC disk. But when I try to create the originally failed storage repo again, its FC disk is no longer available for selection. Could the FC disk have been left in a "half cooked" state? How can I clear that state and make the disk available again?
    The log of the failed job seems to suggest rollback was completed.
    Anthony
    Here is the log of the failed job:
    Job Construction Phase
    Job ID: 1379462348446
    begin()
    Appended operation 'File System Construct' to object '0004fb0000050000cf5be81d5dd0fdd4 (fs_Prod Repository 1)'.
    Appended operation 'Cluster File System Present' to object '6be2487d18f31cdf'.
    Appended operation 'Repository Construct' to object '0004fb000003000067df53f34a8d8bdb (Prod Repository 1)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (CREATED): [LocalFileSystem] 0004fb0000050000cf5be81d5dd0fdd4 (fs_Prod Repository 1)
    Operation: File System Construct
    Object (IN_USE): [LocalFileServer] 0004fb0000090000539b1f10fd72eb68 (Local FS DCOVM3S)
    Object (IN_USE): [StorageElement] 0004fb0000180000c4d302d5ef81c194 (Prod Repository 1)
    Object (CREATED): [Repository] 0004fb000003000067df53f34a8d8bdb (Prod Repository 1)
    Operation: Repository Construct
    Object (IN_USE): [LocalFileServer] 0004fb0000090000b6e26fd56c6daeaf (Local FS DCOVM2S)
    Object (IN_USE): [LocalFileServer] 0004fb00000900007a5bac864d14bcd3 (Local FS DCOVM1S)
    Object (IN_USE): [Cluster] 6be2487d18f31cdf
    Operation: Cluster File System Present
    Job Running Phase at 2013-09-18 09:59:08,446
    Job Participants: [4c:4c:45:44:00:59:4a:10:80:43:b1:c0:4f:44:32:53 (DCOVM1S)]
    Actioner
    09:59:58,063: Starting operation 'File System Construct' on object '0004fb0000050000cf5be81d5dd0fdd4 (fs_Prod Repository 1)'
    10:00:44,300: Completed operation 'File System Construct' with direction ==> DONE
    10:00:44,600: Starting operation 'Repository Construct' on object '0004fb000003000067df53f34a8d8bdb (Prod Repository 1)'
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_mount] failed for storage server [0004fb00000900007a5bac864d14bcd3] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]] OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1358)
    at com.oracle.ovm.mgr.action.StoragePluginAction.mountFileSystem(StoragePluginAction.java:1145)
    at com.oracle.ovm.mgr.op.virtual.RepositoryConstruct.createRepository(RepositoryConstruct.java:102)
    at com.oracle.ovm.mgr.op.virtual.RepositoryConstruct.action(RepositoryConstruct.java:51)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1156)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:356)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:333)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:865)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:244)
    at com.oracle.ovm.mgr.api.virtual.RepositoryProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:230)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:322)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1340)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:356)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:333)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:106)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:92)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:752)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:467)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:525)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:512)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:449)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:383)
    at com.oracle.ovm.mgr.action.StoragePluginAction.mountFileSystem(StoragePluginAction.java:1141)
    ... 28 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:803)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:508)
    ... 31 more
    FailedOperationCleanup
    Starting failed operation 'Repository Construct' cleanup on object 'Prod Repository 1'
    Complete rollback operation 'Repository Construct' cleanup on object 'Prod Repository 1'
    Rollbacker
    10:01:25,461: Starting rollbacker...
    Executing rollback operation 'File System Construct' on object '0004fb0000050000cf5be81d5dd0fdd4 (fs_Prod Repository 1)'
    Complete rollback operation 'File System Construct' completed with direction=DONE
    Executing rollback operation 'Repository Construct' on object '0004fb000003000067df53f34a8d8bdb (Prod Repository 1)'
    Complete rollback operation 'Repository Construct' completed with direction=DONE
    10:01:26,616: Rollbacker completed...
    Objects To Be Rolled Back
    Object (CREATED): [LocalFileSystem] 0004fb0000050000cf5be81d5dd0fdd4 (fs_Prod Repository 1)
    Object (IN_USE): [LocalFileServer] 0004fb0000090000539b1f10fd72eb68 (Local FS DCOVM3S)
    Object (IN_USE): [StorageElement] 0004fb0000180000c4d302d5ef81c194 (Prod Repository 1)
    Object (CREATED): [Repository] 0004fb000003000067df53f34a8d8bdb (Prod Repository 1)
    Object (IN_USE): [LocalFileServer] 0004fb0000090000b6e26fd56c6daeaf (Local FS DCOVM2S)
    Object (IN_USE): [LocalFileServer] 0004fb00000900007a5bac864d14bcd3 (Local FS DCOVM1S)
    Object (IN_USE): [Cluster] 6be2487d18f31cdf
    Write Methods Invoked
    09:59:48,867 class="InternalJobDbImpl" vessel_id=94519 method=addTransactionIdentifier accessLevel=6 owningTx=1379462388858
    09:59:49,473 class="LocalFileServerDbImpl" vessel_id=33833 method=createFileSystem accessLevel=6 owningTx=1379462388858
    09:59:49,601 class="LocalFileSystemDbImpl" vessel_id=94528 method=setName accessLevel=6 owningTx=1379462388858
    09:59:50,056 class="LocalFileSystemDbImpl" vessel_id=94528 method=setFoundryContext accessLevel=6 owningTx=1379462388858
    09:59:50,064 class="LocalFileSystemDbImpl" vessel_id=94528 method=onPersistableCreate accessLevel=6 owningTx=1379462388858
    09:59:50,068 class="LocalFileSystemDbImpl" vessel_id=94528 method=setLifecycleState accessLevel=6 owningTx=1379462388858
    09:59:50,077 class="LocalFileSystemDbImpl" vessel_id=94528 method=setRollbackLifecycleState accessLevel=6 owningTx=1379462388858
    09:59:51,335 class="LocalFileSystemDbImpl" vessel_id=94528 method=setRefreshed accessLevel=6 owningTx=1379462388858
    09:59:51,404 class="LocalFileSystemDbImpl" vessel_id=94528 method=setBackingDevices accessLevel=6 owningTx=1379462388858
    09:59:51,428 class="LocalFileSystemDbImpl" vessel_id=94528 method=setUuid accessLevel=6 owningTx=1379462388858
    09:59:51,442 class="LocalFileSystemDbImpl" vessel_id=94528 method=setPath accessLevel=6 owningTx=1379462388858
    09:59:51,453 class="LocalFileSystemDbImpl" vessel_id=94528 method=setSimpleName accessLevel=6 owningTx=1379462388858
    09:59:51,460 class="LocalFileSystemDbImpl" vessel_id=94528 method=addFileServer accessLevel=6 owningTx=1379462388858
    09:59:51,466 class="LocalFileSystemDbImpl" vessel_id=94528 method=setStorageDevice accessLevel=6 owningTx=1379462388858
    09:59:51,484 class="StorageElementDbImpl" vessel_id=55930 method=addLayeredFileSystem accessLevel=6 owningTx=1379462388858
    09:59:51,494 class="LocalFileSystemDbImpl" vessel_id=94528 method=setSimpleName accessLevel=6 owningTx=1379462388858
    09:59:51,508 class="LocalFileSystemDbImpl" vessel_id=94528 method=addJobOperation accessLevel=6 owningTx=1379462388858
    09:59:51,652 class="LocalFileServerDbImpl" vessel_id=34997 method=addFileSystem accessLevel=6 owningTx=1379462388858
    09:59:51,657 class="LocalFileSystemDbImpl" vessel_id=94528 method=addFileServer accessLevel=6 owningTx=1379462388858
    09:59:51,664 class="LocalFileServerDbImpl" vessel_id=35300 method=addFileSystem accessLevel=6 owningTx=1379462388858
    09:59:51,667 class="LocalFileSystemDbImpl" vessel_id=94528 method=addFileServer accessLevel=6 owningTx=1379462388858
    09:59:51,785 class="ClusterDbImpl" vessel_id=25957 method=addLocalFileSystem accessLevel=6 owningTx=1379462388858
    09:59:51,821 class="LocalFileSystemDbImpl" vessel_id=94528 method=setCluster accessLevel=6 owningTx=1379462388858
    09:59:51,832 class="LocalFileSystemDbImpl" vessel_id=94528 method=setAsset accessLevel=6 owningTx=1379462388858
    09:59:52,153 class="LocalFileSystemDbImpl" vessel_id=94528 method=createRepository accessLevel=6 owningTx=1379462388858
    09:59:52,261 class="RepositoryDbImpl" vessel_id=94533 method=setName accessLevel=6 owningTx=1379462388858
    09:59:52,559 class="RepositoryDbImpl" vessel_id=94533 method=setFoundryContext accessLevel=6 owningTx=1379462388858
    09:59:52,567 class="RepositoryDbImpl" vessel_id=94533 method=onPersistableCreate accessLevel=6 owningTx=1379462388858
    09:59:52,574 class="RepositoryDbImpl" vessel_id=94533 method=setLifecycleState accessLevel=6 owningTx=1379462388858
    09:59:52,582 class="RepositoryDbImpl" vessel_id=94533 method=setRollbackLifecycleState accessLevel=6 owningTx=1379462388858
    09:59:54,559 class="RepositoryDbImpl" vessel_id=94533 method=setRefreshed accessLevel=6 owningTx=1379462388858
    09:59:54,565 class="RepositoryDbImpl" vessel_id=94533 method=setDom0Uuid accessLevel=6 owningTx=1379462388858
    09:59:54,600 class="RepositoryDbImpl" vessel_id=94533 method=setSharePath accessLevel=6 owningTx=1379462388858
    09:59:54,606 class="RepositoryDbImpl" vessel_id=94533 method=setSimpleName accessLevel=6 owningTx=1379462388858
    09:59:54,621 class="RepositoryDbImpl" vessel_id=94533 method=setFileSystem accessLevel=6 owningTx=1379462388858
    09:59:54,627 class="LocalFileSystemDbImpl" vessel_id=94528 method=addRepository accessLevel=6 owningTx=1379462388858
    09:59:54,640 class="RepositoryDbImpl" vessel_id=94533 method=setManagerUuid accessLevel=6 owningTx=1379462388858
    09:59:54,647 class="RepositoryDbImpl" vessel_id=94533 method=setVersion accessLevel=6 owningTx=1379462388858
    09:59:54,668 class="RepositoryDbImpl" vessel_id=94533 method=addJobOperation accessLevel=6 owningTx=1379462388858
    09:59:54,844 class="RepositoryDbImpl" vessel_id=94533 method=setSimpleName accessLevel=6 owningTx=1379462388858
    09:59:54,866 class="RepositoryDbImpl" vessel_id=94533 method=setDescription accessLevel=6 owningTx=1379462388858
    09:59:55,052 class="InternalJobDbImpl" vessel_id=94519 method=setCompletedStep accessLevel=6 owningTx=1379462388858
    09:59:55,123 class="InternalJobDbImpl" vessel_id=94519 method=setAssociatedHandles accessLevel=6 owningTx=1379462388858
    10:00:44,248 class="LocalFileSystemDbImpl" vessel_id=94528 method=setSize accessLevel=6 owningTx=1379462388858
    10:00:44,276 class="LocalFileSystemDbImpl" vessel_id=94528 method=setFreeSize accessLevel=6 owningTx=1379462388858
    10:00:44,583 class="LocalFileSystemDbImpl" vessel_id=94528 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:00:45,381 class="RepositoryDbImpl" vessel_id=94533 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:00:45,390 class="InternalJobDbImpl" vessel_id=94519 method=setFailedOperation accessLevel=6 owningTx=1379462388858
    10:01:25,618 class="LocalFileSystemDbImpl" vessel_id=94528 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,621 class="LocalFileServerDbImpl" vessel_id=35300 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,629 class="StorageElementDbImpl" vessel_id=55930 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,636 class="RepositoryDbImpl" vessel_id=94533 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,639 class="LocalFileServerDbImpl" vessel_id=34997 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,642 class="LocalFileServerDbImpl" vessel_id=33833 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:25,645 class="ClusterDbImpl" vessel_id=25957 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:26,011 class="LocalFileSystemDbImpl" vessel_id=94528 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    10:01:26,607 class="RepositoryDbImpl" vessel_id=94533 method=nextJobOperation accessLevel=6 owningTx=1379462388858
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_mount] failed for storage server [0004fb00000900007a5bac864d14bcd3] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]] OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_mount] failed for storage server [0004fb00000900007a5bac864d14bcd3] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]] OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1358)
    at com.oracle.ovm.mgr.action.StoragePluginAction.mountFileSystem(StoragePluginAction.java:1145)
    at com.oracle.ovm.mgr.op.virtual.RepositoryConstruct.createRepository(RepositoryConstruct.java:102)
    at com.oracle.ovm.mgr.op.virtual.RepositoryConstruct.action(RepositoryConstruct.java:51)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1156)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:356)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:333)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:865)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:244)
    at com.oracle.ovm.mgr.api.virtual.RepositoryProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:230)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:322)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1340)
    at sun.reflect.GeneratedMethodAccessor393.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:356)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:333)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:106)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:92)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:752)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:467)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:525)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: DCOVM1S failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013] [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:512)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:449)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:383)
    at com.oracle.ovm.mgr.action.StoragePluginAction.mountFileSystem(StoragePluginAction.java:1141)
    ... 28 more
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/3 storage_plugin_mount oracle.ocfs2.OCFS2.OCFS2Plugin /OVS/Repositories/0004fb000003000067df53f34a8d8bdb true [], Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.OperationFailedEx:'Unable to mount file system "/dev/mapper/3600601600aa02c001cb0a6bfbcf4e211": mount.ocfs2: Cluster name is invalid while trying to join the group\r\n' [Wed Sep 18 10:00:45 EST 2013]
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:803)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:508)
    ... 31 more
    End of Job

    Deleting the LUN sounded like a good, clean way to do it, but unfortunately it did not work. After I recreated the LUN from the storage array, OVM is left with a "phantom" LUN which I cannot delete. Secondly, I now get an OCFS2 lock which I don't know how to clear.
    Please help.

  • Resetting ocfs2 /dev/mapper references ? ... storage pool repository

    So, I am using 3 disks from the SAN which were previously exported to another OVS server. The current setup is a freshly installed OVS server and OVM Manager. OVM reads the pool information that was written on the disks and will not allow me to proceed with new pool creation. How do I get rid of it? I don't want to delete the storage from the SAN side and reprovision.
    from ovs-agent.log
    [2011-09-23 15:59:20 26901] DEBUG (OVSCommons:123) create_pool_filesystem: ('lun', '/dev/mapper/350002ac000a206ef', '9df421903d48fcc1', '0004fb0000050000cabdb766eeb75c85', '', '0004fb00000100003c815b9182723afd', '0004fb00000200009df421903d48fcc1')
    [2011-09-23 15:59:20 26901] ERROR (OVSCommons:142) catch_error: /dev/mapper/350002ac000a206ef is already a pool filesystem.

    I overlooked the Delete FileSystem button ... but I am curious to know what is done in the background.
    [2011-09-23 16:23:54 29394] DEBUG (StoragePluginManager:36) storage_plugin_destroyFileSystem(oracle.ocfs2.OCFS2.OCFS2Plugin)
    [2011-09-23 16:23:54 29394] DEBUG (caller:347) Started worker process 29395
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [{'capability.clone.asynchronous': False, 'filesystem.api-version': ['1', '2', '7'], 'capability.snapclone': True, 'capability.resize': True, 'help.extra-info.file': 'None', 'capability.splitclone.open': False, 'filesystem.backing-device.type': 'device', 'capability.snapclone.asynchronous': False, 'help.extra-info.filesystem': 'None', 'capability.clone': True, 'capability.snapshot': True, 'capability.splitclone': False, 'capability.clone.online': True, 'capability.snapshot.custom-name': True, 'capability.access-control.max-entries': 0, 'type': 'ifs', 'capability.snapshot.asynchronous': False, 'capability.snapclone.online': True, 'filesystem.backing-device.multi': False, 'vendor': 'Oracle', 'description': 'Oracle OCFS2 File system Storage Connect Plugin', 'capability.splitclone.asynchronous': False, 'capability.splitclone.online': False, 'capability.storage-name-required': False, 'capability.snapshot.online': True, 'filesystem.type': 'LocalFS', 'help.extra-info.server': 'None', 'name': 'Oracle OCFS2 File system', 'capability.clone.custom-name': True, 'capability.access-control': False, 'capability.resize.asynchronous': False, 'capability.resize.online': True, 'api-version': ['1', '2', '7'], 'filesystem.name': 'ocfs2'}, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:357) Stopped worker process 29395
    [2011-09-23 16:23:54 29394] DEBUG (caller:347) Started worker process 29396
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:59) Result [None, None]
    [2011-09-23 16:23:54 29394] DEBUG (caller:357) Stopped worker process 29396
    [2011-09-23 16:23:54 29394] DEBUG (OVSCommons:131) storage_plugin_destroyFileSystem: call completed.

  • VirtualDisk on Windows Server 2012 R2 Storage Pool stuck in "Warning: In Service" state and all file transfers to and from it are awfully slow

    Greetings,
    I'm having some trouble with my Windows Storage Pool and my virtual disk on a Windows Server 2012 R2 installation. The pool consists of 8x Western Digital RE-4 2TB drives + 2x Western Digital Black Edition 2TB drives configured in a single-parity setup; the virtual disk uses fixed provisioning (maximum size) and is formatted with ReFS.
    It has been running solid for months apart from some awful write speeds at times; write performance with ReFS seems noticeably worse than with NTFS.
    I was advised to add SSDs for journaling in order to boost write performance. Sadly I seem to have screwed up this part: it has to be done through PowerShell, and it needs to be done before creating the virtual disk. I managed to add my SSD to the Storage Pool and then remove it.
    This seems to have caused some awkward issues, and I'm not quite sure why, since the virtual disk is "fixed", so adding the SSD to the Storage Pool shouldn't really do anything, right? But ever since I did this, my virtual disk has been stuck in "Warning: In Service". It's been 4-5 days and it's still the same, and the performance is currently horrible. Moving 40GB of data off the virtual disk took me about 20 hours or so, and opening files under 1 MB from the virtual disk takes several minutes. It's pretty much useless.
    The GUI is not providing any useful information about what's going on. What does "Warning: In Service" actually imply, and how am I supposed to know how long it is going to take? Running Get-VirtualDisk in PowerShell does not provide any useful information either. I did try to run a repair through the Server Manager GUI; it gets to about 21% within 2-3 hours but then drops back down to 10%. I have had the repair running for days, but it won't get past 21% without dropping back down again.
    Running the repair through PowerShell yields the same result, and if I detach the virtual disk and then try to repair it through PowerShell (the GUI won't let me repair a detached virtual disk), it just runs for a split second and then finishes.
    After some googling I've seen people mention that the repair cannot finish unless the Storage Pool has at least as much free space as the largest drive in the pool, so I added a 4TB drive (because, with fixed provisioning, I had used all the space in the pool), but the repair still won't go past 21%.
    Since I'm running fixed provisioning, I guess adding an extra drive to the pool doesn't make much difference, as it's not available to the virtual disk? So I went ahead and deleted 3 TB of data on the virtual disk; I now have about 4 TB free on the virtual disk, so there should be plenty of room for Windows Server 2012 R2 to rebuild the parity or whatever it's trying to do. But it's still the same: the repair won't move past 21%, the virtual disk is still stuck in "Warning: In Service" mode, and the performance remains horrible, so taking a backup will take forever at these speeds...
    Am I missing something here? All the drives in the pool are working fine; I have verified them using various bootable tools. So why is this happening, and what can I do to get the virtual disk back to a healthy state? Why doesn't the GUI give you any kind of usable information?
    Best regards, Thomas Andre

    Hi,
    Please run the chkdsk /f /r command on the virtual disk to have a try. In the meantime, run the following commands in PowerShell and share the output.
    get-virtualdisk -friendlyname <name> | get-physicaldisk | fl
    get-virtualdisk -friendlyname <name> |fl
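    In addition, the background rebuild that keeps a space in "In Service" can usually be watched from PowerShell. A minimal sketch, assuming the in-box Storage module on 2012 R2; "MyParityDisk" is a placeholder for your virtual disk's friendly name:
    # List any running storage jobs (the regeneration behind "In Service") and their progress
    Get-StorageJob | Format-List Name, JobState, PercentComplete, BytesProcessed, BytesTotal
    # Overall health of the virtual disk (replace the friendly name with yours)
    Get-VirtualDisk -FriendlyName "MyParityDisk" | Format-List FriendlyName, OperationalStatus, HealthStatus, DetachedReason
    # Kick off (or resume) the repair explicitly, then watch Get-StorageJob again
    Repair-VirtualDisk -FriendlyName "MyParityDisk"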
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • After upgrading to 8.1 Pro from 8.0 Pro, my Storage Spaces and Storage Pool are all gone.

    Under 8.0 I had three 4-terabyte drives set up as a Storage Pool in Storage Spaces, with five storage-space drives using this pool. I had been using them for months with no problems. After upgrading to 8.1 (which reported no errors), the drives no longer exist. Going into "Storage Spaces" in the Control Panel, I do not see my existing storage pool or storage drives; I'm prompted to create a new pool and storage spaces. If I click "Create Pool", it does not list the three drives I used previously as available to add.
    Device Manager shows all three drives as present and OK.  
    Disk Management shows Disks 0,1,2,6.  The gap in between 2 and 6 is where the 3,4,5 storage spaces drives were.  
    Nothing helpful in the event log or the services.
    I've downloaded the ReclaiMe Storage Spaces recovery tool and it sees all of my logical drives with a "good" prognosis for recovery.  I've not gone down that road yet though because it requires separate physical drives to copy everything to
    and they want $299 for the privilege.
    Does anyone have any ideas?  I'm thinking of doing a fresh 8.1 install to another drive to see if it can see it or reinstalling 8.1 to the existing drive in the hope that it will just suddenly work.  Or possibly going back to 8.0.
    Thanks for your help!
    Steve

    Hi,
    “For parity spaces you must backup your data and delete the parity spaces. At this point, you may upgrade or perform a clean install of Windows 8. After the upgrade or clean installation is complete, you may recreate parity spaces and
    restore your data.”
    I’d like to share the following article with you for reference:
    Storage Spaces Frequently Asked Questions
    (Pay attention to this part: How do I prepare Storage Spaces for upgrading from the Windows 8 Consumer Preview to Windows 8 Release Preview?)
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_prepare_Storage_Spaces_for_upgrading_from_the_Windows_8_Consumer_Preview_to_Windows_8_Release_Preview
    Regards,
    Yolanda

  • Server 2012 R2 Storage Pool Disk Identification Method

    Hi all,
    I'm currently using Server 2012 R2 Essentials with a Storage Space consisting of 7 3TB disks. The disks are connected to an LSI MegaRAID controller which does not support JBOD so each disk is configured as a single disk RAID0. The disks are connected to
    the controller using SAS Breakout Cables (SATA to SFF-8087).
    I am considering moving my server into a new chassis. The new chassis has a SAS Backplane for drive attachment which means I would be re-cabling to use SFF-8087 to SFF-8087 cables instead and in doing so, the channel and port assignment on the LSI MegaRAID
    will change.
    I know that the LSI card will have no problem identifying the disk as the same disk when it's connected to a different port or channel on the controller, but is the same true for the Storage Space?
    How does Storage Spaces track the identity of the individual disks?
    Just to be clear, the hardware configuration otherwise will not be changing. Motherboard, CPU, RAID controller etc will all be the same, it will just be moving everything into a new chassis.

    Hi,
    If the disks are still recognized as the same disks, the storage space should be recognized as well.
    You could test the move and see whether the storage pool is recognized. If not, you can still cable everything back to the original configuration and the storage pool will come back to work; then we would need to find another way to migrate your data. Personally, I think it will work directly.
    Note: backing up important files first is always recommended.
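    One way to make the comparison concrete is to record what Windows reports for each pool member before the move and compare it afterwards. A minimal sketch, assuming the in-box Storage cmdlets; the CSV path is just an example:
    # Snapshot what Windows reports for each disk in every non-primordial pool, so the same
    # list can be compared after re-cabling; Storage Spaces tracks pool membership via
    # metadata written on the disks, not via the controller port/channel numbering
    Get-StoragePool -IsPrimordial $false |
        Get-PhysicalDisk |
        Select-Object FriendlyName, UniqueId, SerialNumber, Size, OperationalStatus |
        Export-Csv -Path C:\Temp\pool-disks-before.csv -NoTypeInformation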
    If you have any feedback on our support, please send to [email protected]

  • Win 8.1 Storage Pool not allowing "add drive" nor allowing capacity to be expanded

    I have one Storage Space within one Storage Pool (parity mode) containing 4 identical hard drives. It is used for data storage, appears to be functioning normally, and has filled 88% of its capacity (i.e. 88% of 2/3 of the physical capacity, because of parity mode).
    The only other storage on this new PC is an SSD used for the OS (Win 8.1 Pro) and application software.
    In "Manage Storage Spaces"
    displays this warning message to add drives:
    <   Warning                               >
    <   Low capacity; add 3 drives   >
    After clicking "add drives", it displays:
    "No drives that work with Storage Spaces are available. Make sure that the drives that you want to use are connected.".
    However, I have connected another two identical hard drives via SATA cables, and "Disk Management" shows these two drives as available.
    In summary: "Manage Storage Spaces" does not see these drives as available, although they show up correctly in Disk Management.
    By the way, I removed the pre-existing partitioning on the 'new' drives, so they now show only as "unallocated" in "Disk Management". (I did likewise before the Storage Pool found the 4 original drives.)
    Perhaps the problem is that the total nominal capacity of the Storage Space has to be increased before more drives can be added? Microsoft says that the capacity of Storage Pools can be increased but cannot be decreased, yet the computer displays no "Change" button by which this can be done. There is supposed to be a "Change" button, but it is not displayed for me. So "Manage Storage Spaces" offers me no option to manage the "size" of the pool.
    only five options are displayed:
    Create a storage space     (ie. from the small amount remaining unused in the Pool)
    Add drives     (.... as explained already)
    Rename pool    (only renames the storage space)
    Format        (ie. re-format and so lose all current data)
    Delete         (ie. delete the storage space and so lose all current data)
    Using Google, I find nothing bearing on this problem except the most basic instructions for setting up a storage space!
    Can you help?
    The problem is that the Storage Pool is not displaying a button to increase capacity, and when I click "add drives" it finds no hard drives available.

    Hi,
    I would suggest you launch Device Manager and expand Disk drives. Right-click the disk listed as "Disk drive" and select Uninstall, then, on the Action menu, click Scan for hardware changes to reinstall the disk.
    Please also take a look at this link, in particular the part "How do I increase pool capacity?":
    http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx#How_do_I_increase_pool_capacity 
    According to the link, to extend a parity space, the pool would need the appropriate number of columns available to accommodate the layout of the disk.
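    As a hedged sketch, the same check can be done from PowerShell: the "add drives" dialog generally offers only disks whose CanPool property is True, and on 8.1 the CannotPoolReason property says why a disk is excluded. The pool name below is a placeholder:
    # Why are the two new disks not offered? CanPool / CannotPoolReason usually tell you
    Get-PhysicalDisk | Format-Table FriendlyName, CanPool, CannotPoolReason, BusType, Size -AutoSize
    # If they report CanPool = True, they can be added to the existing pool directly
    # ("Storage pool" is a placeholder; use the name shown by Get-StoragePool)
    $new = Get-PhysicalDisk -CanPool $true
    Add-PhysicalDisk -StoragePoolFriendlyName "Storage pool" -PhysicalDisks $new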
    Yolanda Zhu
    TechNet Community Support

  • 2012 New Cluster Adding A Storage Pool fails with Error Code 0x8007139F

    Trying to setup a brand new cluster (first node) on Server 2012. Hardware passes cluster validation tests and consists of a dell 2950 with an MD1000 JBOD enclosure configured with a bunch of 7.2K RPM SAS and 15k SAS Drives. There is no RAID card or any other
    storage fabric, just a SAS adapter and an external enclosure.
    I can create a regular storage pool just fine and access it with no issues on the same box when I don't add it to the cluster. However when I try to add it to the cluster I keep getting these errors on adding a disk:
    Error Code: 0x8007139F if I try to add a disk (The group or resource is not in the correct state to perform the requested operation)
    When adding the Pool I get this error:
    Error Code 0x80070016 The Device Does not recognize the command
    Full Error on adding the pool
    Cluster resource 'Cluster Pool 1' of type 'Storage Pool' in clustered role 'b645f6ed-38e4-11e2-93f4-001517b8960b' failed. The error code was '0x16' ('The device does not recognize the command.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    If I try to just add the raw disks to the cluster storage -- without using a pool or anything -- every one of them fails with "incorrect function" except for one (a 7.2K RPM SAS drive). I cannot see any difference between it and the other disks. Any ideas? The error codes aren't very helpful. I would imagine there's something in the drive configuration or hardware I am missing here; I just don't know what, considering the validation passes and I am meeting the listed prerequisites.
    If I can provide any more details that would assist, please let me know. I'm kind of at a loss here.

    Hi,
    You mentioned you use a Dell MD1000 as storage. The MD1000 is direct-attached storage (DAS), and Windows Server clusters do support DAS; failover clusters include improvements to the way the cluster communicates with storage, improving the performance of a storage area network (SAN) or direct-attached storage (DAS).
    However, the PERC 5/6 RAID controller used with the MD1000 may not support cluster technology. Looking at the official documentation, even its next generation, the MD1200 with the PERC H800 RAID controller, still does not appear to support cluster technology.
    You may contact Dell to check that.
    For more information, please refer to the following articles:
    Technical Guidebook for PowerVault MD1200 and MD 1220
    http://www.dell.com/downloads/global/products/pvaul/en/storage-powervault-md12x0-technical-guidebook.pdf
    Dell™ PERC 6/i, PERC 6/E and CERC 6/I User’s Guide
    http://support.dell.com/support/edocs/storage/RAID/PERC6/en/PDF/en_ug.pdf
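    As a hedged diagnostic sketch (not specific to the MD1000), it can also help to compare what the OS reports for the one disk that works against the ones that fail, since clustered pools are picky about bus type and about leftover metadata on the disks; note that CannotPoolReason is only populated on 2012 R2 and later:
    # Per-disk view: bus type, poolability, and health as the Storage subsystem sees them
    Get-PhysicalDisk |
        Sort-Object FriendlyName |
        Format-Table FriendlyName, BusType, CanPool, CannotPoolReason, HealthStatus, Size -AutoSize
    # Cluster-side view of which disks are eligible to be added as cluster disks at all
    Get-ClusterAvailableDisk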
    Hope this helps!
    TechNet Subscriber Support
    Lawrence
    TechNet Community Support

  • Slow performance Storage pool.

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with an LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. We see heavy problems with "bursts".
    Using an ARC 1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).
    Inserting an ARC 1882X RAID card increases speed by a factor of 5-10.
    Hence hardware RAID on the same hardware is 5-10 times faster!
    We noticed that Resource Monitor becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks
    that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally
    (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, on the prior configuration the performance of virtual disks was limited by some of the underlying disks having this poor performance. This may
    also have caused the unresponsiveness of the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is: check the entire physical disk stack, under the configuration you are using for SS, first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, and so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups to be presented to the OS (for 16 disks across two controllers), and (b) loss of a little capacity to RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx,
    which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14
    disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different
    sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
    (b) Use disks as similar sized as possible in the SS pool.
    This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there
    are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal
    sized) or JBOD (for different sized) before presenting them to SS. 
    It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6
    disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each.
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is
    still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize the virtual disk access across multiple 8-column groups that are on different physical disks, and that you need to work around this by striping virtual disks together (see the sketch at the end of this post). Effectively you are creating a RAID50 - a Windows RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem to be any advantage in not doing this, as with the 8-column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column
    width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.
    (d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to
    54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore
    card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note: for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec write (better than 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB in the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!
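    To illustrate the two-pool striping approach from point (c): a minimal sketch of carving two 8-disk pools and one 8-column parity space in each from PowerShell. The pool names and the way the 16 disks are split are placeholders, and the final RAID0 stripe across the two resulting virtual disks is still created afterwards in Disk Management, as described above.
    # Split 16 poolable disks into two pools of 8 and create one parity space per pool
    $sub   = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
    $disks = Get-PhysicalDisk -CanPool $true | Sort-Object FriendlyName
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $sub.FriendlyName -PhysicalDisks ($disks | Select-Object -First 8)
    New-StoragePool -FriendlyName "Pool2" -StorageSubSystemFriendlyName $sub.FriendlyName -PhysicalDisks ($disks | Select-Object -Skip 8 -First 8)
    # One 8-column parity virtual disk per pool; the two virtual disks are then striped together in Disk Management
    foreach ($pool in "Pool1", "Pool2") {
        New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "$pool-Parity" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize
    }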

  • How to move a virtual disk's physical allocation within a Storage Pool

    I have a pool of 3x500GB where one of the physical drives is having intermittent issues. Currently, there is only one parity virtual disk of 300GB, fixed, across 3 columns. I want to replace the bad drive with a good one. The old way (pre-2012) was to replace the disk, repair the RAID 5, resync and be done. These basic steps are not working.
    So far I have added a 4th 500GB drive to the pool. After searching and failing to find a way to move the data non-destructively, I decided to just pull the data cable on the disk I wanted to replace. After refresh/rescan, the disconnected drive shows "lost
    communication" and the virtual disk (after trying to repair) shows "unknown" (but the volume on that disk is accessible in Explorer).  When I try to remove the physical disk in Server Manager, I get "The selected physical disk cannot
    be removed". Reading the error message, I see that the replacement disk cannot contain any part of a virtual disk. The replacement disk that I just added appears to have some space allocated (possibly because I have tried this same procedure a couple
    of times already?). When I look at the parity disk properties/health, it shows all four physical disks under "physical disks in use".
    I have deleted and recreated a lot of storage pools lately while trying to understand how they work but I would like to avoid that this time. The data on the virtual disk in question is highly deduplicated and it took quite a while to get it that way. Since
    I can't find a way to copy/mirror the disk while keeping it fully deduplicated, I would need 3x the space to copy it all off, or a lot of time to load up and deduplicate a new virtual disk.
    I have several questions:
    1. How can a 3 column parity disk use parts of four physical disks? And can that be fixed without recreating the virtual disk?
    2. When creating a virtual disk (for example a 3 column disk in a pool that has four or more physical drives), is there a way to specify which physical disks to use?
    3. I understand that after a physical disk failure, the recovery process will move a virtual disk's allocation to a replacement disk, but can a virtual disk's allocation be moved manually among physical disks within the same storage pool
    using a PS script?
    4. Can a deduplicated virtual disk be moved/mirrored/backed up without expanding the data?
    Any help is appreciated.

    I'm still fighting with storage pools myself, need to do more testing and have a lot of questions of my own, but here is what I have understood so far.
    You can define which physical disks are used for a virtual disk through PowerShell; for a list of all commands see:
    http://technet.microsoft.com/en-us/library/hh848705(v=wps.620).aspx
    The specific command for assigning physical disks to an already existing virtual disk:
    Example 4: Manually assigning physical disks to a virtual disk
    This example gets two physical disks that have already been added to the storage pool and designated as ManualSelect disks,
    PhysicalDisk3 and PhysicalDisk4, and assigns them to the virtual disk
    UserData.
    PS C:\> Add-PhysicalDisk -VirtualDiskFriendlyName UserData -PhysicalDisks (Get-PhysicalDisk -FriendlyName PhysicalDisk3, PhysicalDisk4)
    http://technet.microsoft.com/en-us/library/hh848702(v=wps.620).aspx
    If you haven't seen it yet, you may also want to check out http://blogs.technet.com/b/yungchou/archive/2011/12/06/free-ebooks.aspx
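    For the original question about moving a virtual disk's allocation off a failing member, the sequence that is usually described is: mark the disk Retired, repair the virtual disk so its slabs are regenerated onto the other pool members, and only then remove it. A minimal sketch, with placeholder disk, virtual disk and pool names:
    # Stop Storage Spaces from allocating to the failing disk
    Set-PhysicalDisk -FriendlyName "PhysicalDisk2" -Usage Retired
    # Regenerate the affected slabs onto the remaining/new disks
    # (on 2012 R2, progress can be followed with Get-StorageJob)
    Repair-VirtualDisk -FriendlyName "ParityDisk"
    # Once the repair has finished and the disk no longer holds any allocation, remove it from the pool
    Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk2")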

  • Adding drives to storage pool with same unique id

    I have seen a lot of discussion about using storage pools with RAID controllers that report the same unique ID across multiple drives.
    I have yet to find a solution to my problem, which is that I can't add 2 drives to a storage pool because they share the same unique ID. Is there a way I can get around this?
    Thanks, Brendon

    Thanks for your reply, 
    However, Storage Spaces uses the UniqueId that the RAID / SATA controller reports for the drive. In my case this is the output from PowerShell:
    PS C:\Users\tfs> get-physicaldisk | ft FriendlyName, uniqueid
    FriendlyName                                                uniqueid
    PhysicalDisk1                                               2039374232333633
    PhysicalDisk2                                               2039374232333633
    PhysicalDisk10                                              SCSI\Disk&Ven_Hitachi&Prod_HDS722020ALA330\4&37df755d&0&...
    PhysicalDisk8                                               SCSI\Disk&Ven_WDC&Prod_WD10EACS-00D6B0\4&37df755d&0&0300...
    PhysicalDisk6                                               SCSI\Disk&Ven_WDC&Prod_WD10EADS-00M2B0\4&37df755d&0&0100...
    PhysicalDisk7                                               SCSI\Disk&Ven_&Prod_ST2000DL003-9VT1\4&37df755d&0&020000...
    PhysicalDisk0                                               2039374232333633
    PhysicalDisk4                                               SCSI\Disk&Ven_&Prod_ST3000DM001-9YN1\5&10a0425f&0&010000...
    PhysicalDisk3                                               SCSI\Disk&Ven_Hitachi&Prod_HDS723030ALA640\5&10a0425f&0&...
    PhysicalDisk9                                               SCSI\Disk&Ven_&Prod_ST31500341AS\4&37df755d&0&040000:sho...
    PhysicalDisk5                                               SCSI\Disk&Ven_WDC&Prod_WD1001FALS-00J7B\4&37df755d&0&000...
    As you can see, I have 3 drives with the same UniqueId. This I cannot change, and it is what I am looking for a workaround for.
    If you have any thoughts, that would be great.
    Thanks in advance,
    Brendon
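    A quick way to see exactly which disks collide before looking for a workaround (a sketch using only the in-box cmdlets; it reports the duplicates, it does not fix them):
    # Show only the disks whose UniqueId is shared with at least one other disk
    Get-PhysicalDisk |
        Group-Object -Property UniqueId |
        Where-Object { $_.Count -gt 1 } |
        ForEach-Object { $_.Group } |
        Format-Table FriendlyName, UniqueId, BusType -AutoSize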

  • Failover cluster storage pool cannot be added

    Hi.
    Environment: Windows Server 2012 R2 with Update.
    Storage: Dell MD3600F
    I created a LUN with 5GB of space and mapped it to both nodes of this cluster. It can be seen on both sides in Disk Management. I initialized it as a GPT-based disk without any partition.
    The New Storage Pool wizard can be completed by selecting this disk, with no error message and no event log entries.
    But after that, the pool is not visible under Pools, and the LUN is gone from Disk Management. The LUN cannot be shown again even after rescanning.
    This can be reproduced many times.
    In the same environment, many LUNs work well under Storage - Disks. It only fails when acting as a pool.
    What's wrong here?
    Thanks.

    Hi EternalSnow,
    Please refer to the following article on creating a clustered storage pool:
    http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
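    In outline, the approach in that article amounts to creating the pool against the cluster's storage subsystem rather than the local one, from disks that every node can reach. A minimal sketch (the names are placeholders; the "Clustered*" friendly-name pattern and the usual three-disk minimum for clustered pools are assumptions based on the commonly documented requirements):
    # Create the pool against the clustered storage subsystem so it can become a cluster resource
    $csub  = Get-StorageSubSystem -FriendlyName "Clustered*"
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "ClusterPool1" -StorageSubSystemFriendlyName $csub.FriendlyName -PhysicalDisks $disks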
    If you need any further information, please feel free to let us know.
    Best Regards,
    Elton Ji
