Failed to create resource - Error in Sun Cluster 3.2

Hi All,
I have a two-node cluster in place. When I try to create a resource, I get the following error.
Can anybody tell me why I am getting this? I am running Sun Cluster 3.2 on Solaris 10.
I have created a zpool called testpool.
clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
clrs: sun011:test011z - : no error
clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
clrs: (C891200) Failed to create resource "hasp-testpool-res".
Regards
Kumar

Thorsten,
testpool was created on one of the cluster nodes and is accessible from both nodes in the cluster. However, when it is imported on one node it cannot be accessed from the other node; if the other node needs access, we have to export testpool and import it on that node.
The storage LUNs allocated to testpool are accessible from all nodes in the cluster, and I am able to import and export testpool from every node.
Regards
Kumar
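For reference, a general check sequence for a zpool-backed SUNW.HAStoragePlus resource looks roughly like the following. This is only a sketch of the usual workflow (prove that each node can import the pool, leave it exported, create the resource, then let the cluster import it), not a confirmed fix for the validation error above.
zpool export testpool                              # on whichever node currently has the pool imported
zpool import testpool && zpool export testpool     # repeat on the other node to prove it can take the pool
clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
clrg online -eM test-rg                            # -M manages the group, -e enables its resources; HAStoragePlus imports the pool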

Similar Messages

  • "failed to create resource mgr" in trace file

    I got the error logs below when I opened a second Oracle connection within a transaction. Can anyone help?
    [01/06/2004-17:26:46:100][000004ec] kpntdbid::AllocNewSrvr(dbname) - allocated server 0x01ba68a0.
    [01/06/2004-17:26:48:444][000004ec] kpntsrvr::ServerAttach(dbname) - attached to Oracle successfully.
    [01/06/2004-17:26:48:444][000004ec] kpntdbid::AllocNewSrvr(dbname) - attach successful on 0x01ba68a0.
    [01/06/2004-17:26:48:444][000004ec] kpntdbid::GetSrvr(0x01ba68a0,0x00000000) - allocated a new server.
    [01/06/2004-17:26:48:444][000004ec] kpntsess::InitOCI(0x01bb6e08) - initialize successful.
    [01/06/2004-17:26:48:444][000004ec] kpntsess::SessionBegin(sa,****) - successfully initialized OCI handles.
    [01/06/2004-17:26:48:460][000004ec] kpntsess::SessionBegin(sa,****) - successful logon to Oracle.
    [01/06/2004-17:26:48:460][000004ec] kpntsess::resetExplicit() - rolledback txn.
    [01/06/2004-17:26:48:460][000004ec] kpntrmprx::completeMSDTCTxn() - tx guid: 6a265311-315c-44f8-9d00-55f9dfbefcc5.
    [01/06/2004-17:27:20:428][00000360] kpntrmprx::initialize() - failed to create resource mgr.
    [01/06/2004-17:27:20:428][00000360] kpntsenp::DeinitOCI() - error resetting attributes of OCI handle.
    [01/06/2004-17:27:32:662][00000360] kpntjobq::Deinitialize() - Posting kill to thread 2.
    [01/06/2004-17:27:32:662][00000360] kpntjobq::Deinitialize() - Posting kill to thread 1.
    [01/06/2004-17:27:32:662][00000360] kpntjobq::Deinitialize() - Posting kill to thread 0.

  • Failed to create resource (Security:090310) when deploying .ear

    I'm having an issue when starting a portal application in WebLogic (10.2). The stack trace yields:
    ####<05.mai.2008 kl 10.39 CEST> <Error> <Deployer> <no-osl-m323-srv-013-z1.> <ManagedServer_3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1209976779500> <BEA-149205> <Failed to initialize the application 'test-portalEar' due to error weblogic.application.ModuleException: Exception preparing module: EJBModule(netuix.jar)
    Unable to deploy EJB: AsyncProliferation from netuix.jar:
    Exception while attempting to deploy Security Policy: weblogic.security.service.ResourceCreationException: weblogic.security.spi.ResourceCreationException: [Security:090310]Failed to create resource
    weblogic.application.ModuleException: Exception preparing module: EJBModule(netuix.jar)
    Unable to deploy EJB: AsyncProliferation from netuix.jar:
    Exception while attempting to deploy Security Policy: weblogic.security.service.ResourceCreationException: weblogic.security.spi.ResourceCreationException: [Security:090310]Failed to create resource
    at weblogic.ejb.container.deployer.EJBModule.prepare(EJBModule.java:399)
    at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93)
    at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:360)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:56)
    at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:46)
    at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:615)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26)
    at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:191)
    at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:147)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:61)
    at weblogic.deploy.internal.targetserver.AppDeployment.prepare(AppDeployment.java:137)
    at weblogic.management.deploy.internal.DeploymentAdapter$1.doPrepare(DeploymentAdapter.java:39)
    at weblogic.management.deploy.internal.DeploymentAdapter.prepare(DeploymentAdapter.java:187)
    at weblogic.management.deploy.internal.AppTransition$1.transitionApp(AppTransition.java:21)
    at weblogic.management.deploy.internal.ConfiguredDeployments.transitionApps(ConfiguredDeployments.java:233)
    at weblogic.management.deploy.internal.ConfiguredDeployments.prepare(ConfiguredDeployments.java:165)
    at weblogic.management.deploy.internal.ConfiguredDeployments.deploy(ConfiguredDeployments.java:122)
    at weblogic.management.deploy.internal.DeploymentServerService.resume(DeploymentServerService.java:173)
    at weblogic.management.deploy.internal.DeploymentServerService.start(DeploymentServerService.java:89)
    at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)
    weblogic.ejb20.interfaces.PrincipalNotFoundException: Exception while attempting to deploy Security Policy: weblogic.security.service.ResourceCreationException: weblogic.security.spi.ResourceCreationException: [Security:090310]Failed to create resource
    at weblogic.ejb.container.internal.SecurityHelperWLS.deployPolicy(SecurityHelperWLS.java:357)
    at weblogic.ejb.container.internal.SecurityHelper.deployPolicy(SecurityHelper.java:306)
    at weblogic.ejb.container.internal.SecurityHelper.deployPolicy(SecurityHelper.java:294)
    I have searched the web for this exception and so far have found no answer or resource to help me. Also, other applications deployed in the same WebLogic domain work without any problems. Can anyone help me? Any tip will be appreciated :)

  • ORA-39083: Object type TRIGGER failed to create with error:

    I am getting these two errors when I import data using impdp.
    ORA-39083: Object type TRIGGER failed to create with error:
    ORA-00942: table or view does not exist
    I exported (expdp) data from the production DB; when I import (impdp) the dump file into the test DB, I get the above two errors.
    example:
    ORA-39083: Object type TRIGGER failed to create with error:
    ORA-00942: table or view does not exist
    Failing sql is:
    CREATE TRIGGER "NEEDLE"."CC_BCK_TRG" BEFORE INSERT OR UPDATE
    ON NIIL.cc_bck_mgmt REFERENCING NEW AS NEW OLD AS OLD FOR EACH ROW
    DECLARE
    w_date DATE;
    w_user VARCHAR2(10);
    BEGIN
    SELECT USER,SYSDATE INTO w_user,w_date FROM DUAL;
    IF INSERTING THEN
    :NEW.cretuser :=w_user;
    :NEW.cretdate :=w_date;
    END IF;
    IF UPDATING THEN
    :NEW.modiuser :=w_user;
    :NEW.modidate :=w_date;
    END IF;
    END;
    The status of the above trigger in the production DB is valid, and the source table also exists, yet I still get the error when I import.
    Please advise.

    Perhaps you don't have the table (impdp created the trigger before creating the table).
    Check whether the NIIL.cc_bck_mgmt table exists,
    and then create the trigger manually ;)
    Good luck.
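    As a rough sketch of that check from a shell (the connect string, dump file and directory below are placeholders, not values from this thread):
    # does the trigger's base table exist in the target database?
    echo "SELECT table_name FROM dba_tables WHERE owner = 'NIIL' AND table_name = 'CC_BCK_MGMT';" | sqlplus -s system/password@testdb
    # if it is missing, import the table on its own first, then re-run the CREATE TRIGGER statement shown above
    impdp system/password@testdb directory=DATA_PUMP_DIR dumpfile=prod.dmp tables=NIIL.CC_BCK_MGMT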

  • Creating a logical hostname in Sun Cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    When registering a logical hostname resource in a failover resource group, what exactly do I need to specify?
    For example, I have two nodes in the Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the cluster nodes)? Can I get clarification on this?

    Thanks, Thorsten, for your continued help.
    The output of clrs status abc_lg:
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    abc_lg node1 Offline Offline
    node2 Offline Offline
    The status is offline...
    The output of clresourcegroup status:
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    abc_rg node1 No Unmanaged
    node2 No Unmanaged
    You say that the resource should be enabled after creating it. I am using GDS and am just following the steps provided in the Developer's Guide to achieve high availability.
    I have 1) a logical hostname resource and
    2) an application resource in my failover resource group.
    When I bring the failover resource group online, what should the status of my failover resource group be, and what should the status of the resources in it be?
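    To make this concrete: a logical hostname resource manages a spare IP address with its own hostname entry in /etc/hosts on every node (distinct from the nodes' own addresses), and that address follows the resource group when it fails over. Given the Unmanaged state shown above, the remaining steps would look roughly like this sketch (abc_rg and abc_lg are the group and resource already created in this thread):
    clresourcegroup online -eM abc_rg    # -M takes the group out of the Unmanaged state, -e enables its resources
    clresourcegroup status abc_rg        # should now show Online on one node, Offline on the other
    clresource status abc_lg             # the logical hostname should be Online on the same node and its address pingable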

  • Failed to create component ID error

    Hi,
    We are using OSM 7.0.3 and O2A cartridges.
    In one of our business scenarios, CRM sends a revision order.
    While creating this order in OSM, it fails with the error message below:
    +"oracle.communications.ordermanagement.ws.InvalidOrderSpecificationFault: Failed to create and start the order due to java.lang.RuntimeException: com.mslv.oms.OMSException: encountered error starting orchestration caused by:Orchestration plan could not be generated due to unable to determine order component Id for order item: xxx"+
    I think something is missing in the incoming request, but I am not able to find it.
    To debug this issue I enabled debugging of the class "oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions".
    I got the entries below in the logs, but there is not much information available about
    1. 'osmfn:ancestors("CommunicationsSalesOrderLine","parentChildHierarchy","CommunicationsSalesOrderFulfillmentPIP")' and
    2. 'osmfn:ancestors("CommunicationsSalesOrderLine","relatedItemHierarchy","CommunicationsSalesOrderFulfillmentPIP")'
    so it is difficult to find out what is missing in the incoming request.
    Any idea which fields caused these functions to fail?
    <27-Dec-2012 11:35:35,579 GMT+03:00 AM> <INFO> <jboss.JBossOrderCacheManager> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <Loading order /2453 into order cache.>
    Error on line 33 of module with no systemId:
    Exception in extension function java.lang.RuntimeException: XPath function
    osmfn:ancestors('parentChildHierarchy') failed.
    <27-Dec-2012 11:35:36,928 GMT+03:00 AM> <ERROR> <rule.XQueryHelper> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <Exception in extension function java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.. . File[*module with no systemId* (actual location resolved to [*module with no systemId*])] Line[33] Column[-1]
    >
    ; SystemID: module with no systemId; Line#: 33; Column#: -1
    net.sf.saxon.trans.XPathException: Exception in extension function java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.
         at net.sf.saxon.functions.ExtensionFunctionCall.call(ExtensionFunctionCall.java:368)
         at net.sf.saxon.functions.ExtensionFunctionCall.iterate(ExtensionFunctionCall.java:224)
         at net.sf.saxon.value.MemoClosure.iterate(MemoClosure.java:89)
         at net.sf.saxon.expr.Literal.iterate(Literal.java:202)
         at net.sf.saxon.expr.FilterExpression.iterate(FilterExpression.java:1058)
         at net.sf.saxon.functions.Existence.effectiveBooleanValue(Existence.java:105)
         at net.sf.saxon.instruct.Choose.iterate(Choose.java:748)
         at net.sf.saxon.expr.LetExpression.iterate(LetExpression.java:306)
         at net.sf.saxon.instruct.Choose.iterate(Choose.java:754)
         at net.sf.saxon.expr.LetExpression.iterate(LetExpression.java:306)
         at net.sf.saxon.query.XQueryExpression.iterator(XQueryExpression.java:307)
         at net.sf.saxon.query.XQueryExpression.evaluateSingle(XQueryExpression.java:244)
         at oracle.communications.ordermanagement.rule.f.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.n.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.evaluateString(Unknown Source)
         at oracle.communications.ordermanagement.rule.a.e(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.ad.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.ab.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.execution.impl.b.a(Unknown Source)
         at com.mslv.oms.handler.completeorder.CompleteOrderHandlerEJB.a(Unknown Source)
         at com.mslv.oms.handler.completeorder.CompleteOrderHandlerEJB.process(Unknown Source)
         at com.mslv.oms.handler.a.processRequest(Unknown Source)
         at com.mslv.oms.handler.createorder.CreateOrderHandlerEJB.process(Unknown Source)
         at com.mslv.oms.handler.a.processRequest(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.a(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.processRequest(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.processRequest(Unknown Source)
         at oracle.communications.ordermanagement.ws.f.a(Unknown Source)
         at oracle.communications.ordermanagement.ws.a.a(Unknown Source)
         at oracle.communications.ordermanagement.ws.OrderManagementWSPortImpl.createOrder(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor1423.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at weblogic.wsee.component.pojo.JavaClassComponent.invoke(JavaClassComponent.java:112)
         at weblogic.wsee.ws.dispatch.server.ComponentHandler.handleRequest(ComponentHandler.java:84)
         at weblogic.wsee.handler.HandlerIterator.handleRequest(HandlerIterator.java:141)
         at weblogic.wsee.ws.dispatch.server.ServerDispatcher.dispatch(ServerDispatcher.java:114)
         at weblogic.wsee.ws.WsSkel.invoke(WsSkel.java:80)
         at weblogic.wsee.server.servlet.SoapProcessor.handlePost(SoapProcessor.java:66)
         at weblogic.wsee.server.servlet.SoapProcessor.process(SoapProcessor.java:44)
         at weblogic.wsee.server.servlet.BaseWSServlet$AuthorizedInvoke.run(BaseWSServlet.java:285)
         at weblogic.wsee.server.servlet.BaseWSServlet.service(BaseWSServlet.java:169)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:330)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.doIt(WebAppServletContext.java:3684)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3650)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2268)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2174)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1446)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.ancestors(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at net.sf.saxon.functions.ExtensionFunctionCall.invokeMethod(ExtensionFunctionCall.java:690)
         at net.sf.saxon.functions.ExtensionFunctionCall.call(ExtensionFunctionCall.java:343)
         ... 64 more
    Caused by: java.lang.RuntimeException: XPath function ancestors('parentChildHierarchy') failed. hierarchy[parentChildHierarchy] is not in scope
         ... 72 more
    java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.ancestors(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at net.sf.saxon.functions.ExtensionFunctionCall.invokeMethod(ExtensionFunctionCall.java:690)
         at net.sf.saxon.functions.ExtensionFunctionCall.call(ExtensionFunctionCall.java:343)
         at net.sf.saxon.functions.ExtensionFunctionCall.iterate(ExtensionFunctionCall.java:224)
         at net.sf.saxon.value.MemoClosure.iterate(MemoClosure.java:89)
         at net.sf.saxon.expr.Literal.iterate(Literal.java:202)
         at net.sf.saxon.expr.FilterExpression.iterate(FilterExpression.java:1058)
         at net.sf.saxon.functions.Existence.effectiveBooleanValue(Existence.java:105)
         at net.sf.saxon.instruct.Choose.iterate(Choose.java:748)
         at net.sf.saxon.expr.LetExpression.iterate(LetExpression.java:306)
         at net.sf.saxon.instruct.Choose.iterate(Choose.java:754)
         at net.sf.saxon.expr.LetExpression.iterate(LetExpression.java:306)
         at net.sf.saxon.query.XQueryExpression.iterator(XQueryExpression.java:307)
         at net.sf.saxon.query.XQueryExpression.evaluateSingle(XQueryExpression.java:244)
         at oracle.communications.ordermanagement.rule.f.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.n.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.a(Unknown Source)
         at oracle.communications.ordermanagement.rule.XQueryHelper.evaluateString(Unknown Source)
         at oracle.communications.ordermanagement.rule.a.e(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.j.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.ad.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.ab.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.execution.impl.b.a(Unknown Source)
         at com.mslv.oms.handler.completeorder.CompleteOrderHandlerEJB.a(Unknown Source)
         at com.mslv.oms.handler.completeorder.CompleteOrderHandlerEJB.process(Unknown Source)
         at com.mslv.oms.handler.a.processRequest(Unknown Source)
         at com.mslv.oms.handler.createorder.CreateOrderHandlerEJB.process(Unknown Source)
         at com.mslv.oms.handler.a.processRequest(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.a(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.processRequest(Unknown Source)
         at com.mslv.oms.jsp.processor.RequestProcessorSupport.processRequest(Unknown Source)
         at oracle.communications.ordermanagement.ws.f.a(Unknown Source)
         at oracle.communications.ordermanagement.ws.a.a(Unknown Source)
         at oracle.communications.ordermanagement.ws.OrderManagementWSPortImpl.createOrder(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor1423.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at weblogic.wsee.component.pojo.JavaClassComponent.invoke(JavaClassComponent.java:112)
         at weblogic.wsee.ws.dispatch.server.ComponentHandler.handleRequest(ComponentHandler.java:84)
         at weblogic.wsee.handler.HandlerIterator.handleRequest(HandlerIterator.java:141)
         at weblogic.wsee.ws.dispatch.server.ServerDispatcher.dispatch(ServerDispatcher.java:114)
         at weblogic.wsee.ws.WsSkel.invoke(WsSkel.java:80)
         at weblogic.wsee.server.servlet.SoapProcessor.handlePost(SoapProcessor.java:66)
         at weblogic.wsee.server.servlet.SoapProcessor.process(SoapProcessor.java:44)
         at weblogic.wsee.server.servlet.BaseWSServlet$AuthorizedInvoke.run(BaseWSServlet.java:285)
         at weblogic.wsee.server.servlet.BaseWSServlet.service(BaseWSServlet.java:169)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:330)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.doIt(WebAppServletContext.java:3684)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3650)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2268)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2174)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1446)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.lang.RuntimeException: XPath function ancestors('parentChildHierarchy') failed. hierarchy[parentChildHierarchy] is not in scope
         ... 72 more
    <Dec 27, 2012 11:35:36 AM GMT+03:00> <Error> <oms> <BEA-000000> <rule.XQueryHelper: Exception in extension function java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.. . File[*module with no systemId* (actual location resolved to [*module with no systemId*])] Line[33] Column[-1]
    net.sf.saxon.trans.XPathException: Exception in extension function java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.
         at net.sf.saxon.functions.ExtensionFunctionCall.call(ExtensionFunctionCall.java:368)
         at net.sf.saxon.functions.ExtensionFunctionCall.iterate(ExtensionFunctionCall.java:224)
         at net.sf.saxon.value.MemoClosure.iterate(MemoClosure.java:89)
         at net.sf.saxon.expr.Literal.iterate(Literal.java:202)
         at net.sf.saxon.expr.FilterExpression.iterate(FilterExpression.java:1058)
         Truncated. see log file for complete stacktrace
    Caused By: java.lang.RuntimeException: XPath function osmfn:ancestors('parentChildHierarchy') failed.
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.ancestors(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         Truncated. see log file for complete stacktrace
    Caused By: java.lang.RuntimeException: XPath function ancestors('parentChildHierarchy') failed. hierarchy[parentChildHierarchy] is not in scope
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.a(Unknown Source)
         at oracle.communications.ordermanagement.orchestration.generation.OrchestrationXQueryFunctions.ancestors(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         Truncated. see log file for complete stacktrace
    Regards,
    UJ

    Hi,
    I am facing exactly the same problem with OSM 7.0.3 while creating a new order.
    Have you managed to find a solution?
    Which OSM 7.0.3 build are you using?
    BR,
    Dimitris

  • ORA-20011: Approximate NDV failed: ORA-31000: Resource Error in Oracle 11g

    Hi Friends,
    I am using Oracle 11.2.0.1 on Linux (x64) and I am getting the following messages in the alert log / trace file.
    Please let me know the fix.
    Alert Log:
    GATHER_STATS_JOB encountered errors. Check the trace file.
    Errors in file /u01/app/oracle/diag/rdbms/oraht01/oraht01/trace/oraht01_j001_30852.trc:
    ORA-20011: Approximate NDV failed: ORA-31000: Resource '' is not an XDB schema document
    ORA-06512: at "XDB.XDB$ACL_PKG_INT", line 18
    Trace File entries:
    ----- Guard Frame Metadata Dump -----
    ----- Java Stack -----
    ORA-20011: Approximate NDV failed: ORA-31000: Resource '' is not an XDB schema document
    ORA-06512: at "XDB.XDB$ACL_PKG_INT", line 18
    *** 2013-01-21 22:00:36.397
    GATHER_STATS_JOB: GATHER_TABLE_STATS('"XDB"','"XDB$ACL"','""', ...)
    ORA-20011: Approximate NDV failed: ORA-31000: Resource '' is not an XDB schema document
    ORA-06512: at "XDB.XDB$ACL_PKG_INT", line 18
    Regards,
    DB

    Hi Friends,
    Thanks for the info, but the referenced thread and the MetaLink notes 1290722.1 and 1305127.1 do not help me.
    Trace File :
    *** 2013-01-21 22:00:29.760
    *** SESSION ID:(28.36753) 2013-01-21 22:00:29.760
    *** CLIENT ID:() 2013-01-21 22:00:29.760
    *** SERVICE NAME:(SYS$USERS) 2013-01-21 22:00:29.760
    *** MODULE NAME:(DBMS_SCHEDULER) 2013-01-21 22:00:29.760
    *** ACTION NAME:(ORA$AT_OS_OPT_SY_1144) 2013-01-21 22:00:29.760
    SQL> SELECT JOB_NAME,STATUS,ADDITIONAL_INFO FROM DBA_SCHEDULER_JOB_RUN_DETAILS WHERE JOB_NAME = 'ORA$AT_OS_OPT_SY_1144';
    JOB_NAME
    STATUS
    ADDITIONAL_INFO
    ORA$AT_OS_OPT_SY_1144
    SUCCEEDED
    The job shows as SUCCEEDED, but I want to avoid this error appearing in the alert log.
    Regards,
    DB

  • "didadm: unable to determine hostname" error on Sun Cluster 4.0 - Solaris 11

    I am trying to install Sun Cluster 4.0 on Solaris 11 (x86-64).
    The iSCSI shared quorum disks are available in /dev/rdsk/. I ran:
    devfsadm
    cldevice populate
    But I don't see DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued, I get the following error. Has anyone seen the same error?
    - didadm: unable to determine hostname.
    I found that in Cluster 3.2 there was Bug 6380956: didadm should exit with an error message if it cannot determine the hostname.
    The Sun Cluster command didadm (didadm -l in particular) requires the hostname to function correctly. It uses the standard C library function gethostname to achieve this.
    Early in the cluster boot, prior to the service svc:/system/identity:node coming online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with the shared quorum disk DIDs?

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform or is it some guest VM? (That might contribute to the problems).
    Next, after you installed Solaris 11, did the system boot cleanly and all the services come up? (svcs -x). If it did boot cleanly, what did 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above host name commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
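    As a concrete illustration (a sketch only; interpret the output rather than expecting it to change anything), those checks could be run on each node like this:
    svcs -x                                  # any services in maintenance?
    svcs svc:/system/identity:node           # gethostname() returns an empty string until this service is online
    uname -n                                 # the node's own hostname
    getent hosts `uname -n`                  # does it resolve through the configured name services?
    cldevice populate; cldevice list -v      # once the hostname resolves, re-check the DID namespace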
    Regards,
    Tim
    ---

  • "Failed to load resource" error

    Hello,
    Can anyone help me?
    Some of my deployed applications have become unable to run at all.
    The error message in Chrome development tool's console is "Failed to load resource: the server responded with a status of 404 (Not Found)"
    Until last week these applications worked fine, but since last week (I am not sure exactly when) they do not work.
    So I tried to make a simple application like a helloworld servlet. It works correctly on the local server.
    Then I uploaded it to my hanatrial.ondemand.com server and ran it, but it does not work and shows the error above.
    Additional info:
    I restarted this application. The application state on the Java application dashboard in the HCP console became "Could not read status of application benefits: Internal Server Error (500)".
    Does anyone have an idea how to find the reason?
    Thanks and BR,
    Masashi

    Hi all,
    Now it works fine. I am not sure what caused this issue.
    Yesterday I tried to run the applications that were failing with the "resource error".
    But now I tried again, and they worked well.
    What I did was just delete the data source bindings, then deploy and run again.
    I am not sure this is what caused the issue, because I tried the same actions last week and at that time nothing changed.
    Anyway, I am going to run them for demo purposes.
    Thanks and BR,
    Masashi

  • Live Preview - failed to load resource error

    Peter-
    I was having problems with the Live preview and tried your debugging method.
    I received the following in Chrome:
    Failed to load resource: http://127.0.0.1:9222/json
    Is there a security setting I need to change?
    Thanks!
    Josh

    Josh -- What OS are you on?
    Thanks!
    - Peter

  • Resource Failover on Sun Cluster

    Hi:
    I am a newbie with Solaris Cluster (I have worked with VCS for 4 years) and I am evaluating SC as an alternative to VCS.
    I am testing a two-node cluster (SF V880, 4 CPUs, 16 GB RAM). I have created a failover resource group with two resources:
    - A logical hostname
    - A HAStoragePlus resource (5 file systems)
    I have enabled the monitoring and managing of the resource group. In order to test the switch of the resource group I have executed:
    clresourcegroup switch -n xxxx app1_rg and works fine
    If I reboot one server (with the resource group online), the resource group is relocated to the other member of the cluster.
    I have found a problem (I suppose it will be a configuration error) when I try to force a failure in the resources. For example, if I umount all the file systems of the HAStoragePlus resource, the cluster doesn't detect this failure (the same happens when I unplumb the network interface).
    Could somebody help me with this?
    Thanks in advance (sorry for my bad English).

    Hi,
    It is not a configuration error, but a matter of expectations. The HAStoragePlus resource does not monitor the file system status, so the behaviour is as expected. This is not much of a problem, because an application probe will detect that the underlying file system is gone anyway. But because many people have expressed the desire for file system monitoring, there are discussions underway to implement it; it is not available right now.
    The network resource is different. Unplumbing is not a valid test to insert a network error. The logical host monitors the status of the underlying IPMP group, and unplumbing does not change that. If you want to test a network error, you have to physically remove the cables.
    Cheers
    Detlef

  • Sharing resources among resource groups in Sun Cluster 3.1

    Hi all,
    Is it possible to share a resource among resource groups? For example:
    lh: resource of type Logical Hostname =lh-res
    /orahome: Oracle binaries and configuration files = orahome-res
    /oradata1: Data for instance 1 = oradata1-res
    /oradata2: Data for instance 2 = oradata2-res
    rg1 ( resource group for Oracle instance 1) ora1-rg = lh + orahome-res + oradata1-res
    rg2 (resource group for Oracle instance 2) ora2-rg = lh + orahome-res + oradata2-res
    Thanks,
    Enrique

    Hi Enrique,
    If lh represents the same address and the same resource name, then the answer is no, it is not possible: a resource can belong to only one resource group.
    If it did work and both resource groups were running on different nodes, you would create duplicate IP address errors, which cannot be your intent.
    Which behavior do you want to achieve?
    Detlef
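    If the goal is two Oracle instances that can fail over independently, the usual pattern is to give each resource group its own logical hostname. A rough sketch with hypothetical hostnames ora1-lh and ora2-lh (it assumes /oradata1 and /oradata2 have vfstab entries on both nodes, and it does not address sharing /orahome):
    clresourcegroup create ora1-rg
    clreslogicalhostname create -g ora1-rg -h ora1-lh ora1-lh-res
    clresource create -g ora1-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/oradata1 oradata1-res
    clresourcegroup create ora2-rg
    clreslogicalhostname create -g ora2-rg -h ora2-lh ora2-lh-res
    clresource create -g ora2-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/oradata2 oradata2-res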

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#   Created resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#  Created SUNW.HAStoragePlus resource with AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0# By default everything works fine.
    But if I need to switch the RG to the second node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0# If I change the state to logging, everything works.
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0# How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially support (IIRC) - rather it is something that has been developed in the open source community. As such it will not be documented in the Sun main SC documentation set. Furthermore, support and or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Error while creating Resource using GDS

    Hi
    I am trying to create a resource using GDS and it is throwing the following error:
    clresource: (C189917) VALIDATE on resource egateq00-haegate_reg-res, resource group egateq00-rg, exited with non-zero exit status.
    clresource: (C720144) Validation of resource egateq00-haegate_reg-res in resource group egateq00-rg on node uhegateq02 failed.
    clresource: (C891200) Failed to create resource "egateq00-haegate_reg-res".
    This is the command I executed:
    clresource create -g egateq00-rg -t SUNW.gds \
    -p Scalable=false -p Start_timeout=120 -p Stop_timeout=120 -p Probe_timeout=30 \
    -p Port_list="23001/tcp" -p Start_command="/egateq00/scripts/reg_START.sh" -p Stop_command="/egateq00/scripts/reg_STOP.sh" \
    -p Probe_command="/egateq00/scripts/reg_PROBE.sh" -p Child_mon_level=1 -p Network_resources_used=egateq00-lh-res -p Failover_enabled=FALSE \
    -p Stop_signal=15 egateq00-haegate_reg-res
    The log under /var/cluster/logs/DS says the following:
    07/01/2008 17:56:43 uhegateq02 START-INFO> scha_resource_open failed [14]. Keeping the old Log_level value
    07/01/2008 17:56:43 uhegateq02 START-ERROR> Cannot access the start command </egateq00/scripts/reg_START.sh> : <No such file or directory>
    07/01/2008 18:13:23 uhegateq02 START-INFO> scha_resource_open failed [14]. Keeping the old Log_level value
    07/01/2008 18:13:23 uhegateq02 START-ERROR> Cannot access the start command </egateq00/scripts/reg_START.sh> : <No such file or directory>
    However, I can open these scripts and run them from anywhere. I have also tested these scripts and they all work fine. They are all set to chmod 777, so everyone should have execute permission.
    I am not returning any value from the Start and Stop scripts; is that why it is failing?
    Thanks
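    One quick sanity check, since VALIDATE can run on any node in the resource group's node list: confirm that the script paths are visible and executable on every node, not just the one you tested from (node names taken from this thread):
    for node in uhegateq01 uhegateq02; do
        echo "== $node =="
        ssh $node 'ls -l /egateq00/scripts/reg_START.sh /egateq00/scripts/reg_STOP.sh /egateq00/scripts/reg_PROBE.sh'
    done
    # if /egateq00 is a failover file system it is mounted on only one node at a time,
    # so the scripts would not be visible on whichever node VALIDATE happens to run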

    Hi
    I disabled PMF as described at http://blogs.sun.com/TF/entry/disabling_pmf_action_script_with. This is what I did:
    1) Added the following lines at the top of my Start script:
    # parse the -R <resource> and -G <resource group> arguments passed in via Start_command (step 2 below)
    while getopts 'R:G:' opt
    do
    case "${opt}" in
    R) RESOURCE=${OPTARG};;
    G) RESOURCEGROUP=${OPTARG};;
    esac
    done
    # give PMF a short-lived process to track, then tell it to stop monitoring this resource's tag
    sleep 60 &
    /usr/cluster/bin/pmfadm -s ${RESOURCEGROUP},${RESOURCE},0.svc
    2) While creating the resource, I used the property Start_command="/egateq00/scripts/reg_START.sh -R %RS_NAME -G %RG_NAME"
    Now, after doing this, my RG is no longer getting lost. Also, in the messages file I no longer see the "Start script failed to stay up" errors.
    However, my application is not starting either.
    This is what the messages file says:
    Jul 3 16:43:32 uhegateq01 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <gds_validate> completed successfully for resource <egateq00-haegat
    e-reg-res>, resource group <egateq00-rg>, node <uhegateq01>, time used: 0% of timeout <300 seconds>
    Jul 3 16:43:32 uhegateq01 Cluster.CCR: [ID 973933 daemon.notice] resource egateq00-haegate-reg-res added.
    Jul 3 16:43:32 uhegateq01 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <gds_svc_start> for resource <egateq00-haegate-reg-res>,
    resource group <egateq00-rg>, node <uhegateq01>, timeout <120> seconds
    Jul 3 16:43:32 uhegateq01 Cluster.RGM.rgmd: [ID 252072 daemon.notice] 50 fe_rpc_command: cmd_type(enum):<1>:cmd=</opt/SUNWscgds/bin/gds_svc_star
    t>:tag=<egateq00-rg.egateq00-haegate-reg-res.0>: Calling security_clnt_connect(..., host=<uhegateq01>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, .
    Jul 3 16:43:35 uhegateq01 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <gds_svc_start> completed successfully for resource <egateq00-haega
    te-reg-res>, resource group <egateq00-rg>, node <uhegateq01>, time used: 2% of timeout <120 seconds>
    Jul 3 16:43:35 uhegateq01 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <gds_monitor_start> for resource <egateq00-haegate-reg-re
    s>, resource group <egateq00-rg>, node <uhegateq01>, timeout <300> seconds
    Jul 3 16:43:35 uhegateq01 Cluster.RGM.rgmd: [ID 252072 daemon.notice] 50 fe_rpc_command: cmd_type(enum):<1>:cmd=</opt/SUNWscgds/bin/gds_monitor_
    start>:tag=<egateq00-rg.egateq00-haegate-reg-res.7>: Calling security_clnt_connect(..., host=<uhegateq01>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1
    , ...)Jul 3 16:43:35 uhegateq01 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <gds_monitor_start> completed successfully for resource <egateq00-h
    aegate-reg-res>, resource group <egateq00-rg>, node <uhegateq01>, time used: 0% of timeout <300 seconds>
    4) Also, in /var/cluster/logs/DS, I see that the Start script started successfully:
    07/03/2008 16:43:32 uhegateq01 START-INFO> Start succeeded. [egateq00/scripts/reg_START.sh -R egateq00-haegate-reg-res -G egateq00-rg]
    5) Also, in /var/cluster/logs/DS, I see the Probe script returning 0, but this is weird because it should return a non-zero value. When I run the Probe script from the command line, it returns a non-zero value when the application is down:
    07/03/2008 16:43:35 uhegateq01 PROBE-INFO> The GDS monitor (gds_probe) has been started
    07/03/2008 16:44:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:44:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:45:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:45:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:46:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:46:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:47:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:47:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:48:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:48:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:49:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:49:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:50:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:50:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:51:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:51:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:52:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:52:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:53:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:53:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:54:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:54:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:55:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:55:35 uhegateq01 PROBE-INFO> The probe result is 0
    07/03/2008 16:56:35 uhegateq01 PROBE-INFO> Probe has been executed with exit code 0 [egateq00/scripts/reg_PROBE.sh]
    07/03/2008 16:56:35 uhegateq01 PROBE-INFO> The probe result is 0
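    For reference, GDS treats a probe exit status of 0 as healthy, so a probe that always exits 0 will never trigger a restart even when the application is down. A minimal probe sketch (myapp is a placeholder process name, not this thread's actual reg_PROBE.sh):
    #!/bin/sh
    # exit 0   -> application healthy
    # exit 100 -> complete failure; after enough failures GDS restarts or fails over the service
    if pgrep -f myapp >/dev/null 2>&1; then
        exit 0
    else
        exit 100
    fi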

  • Failing to create HA NFS storage on a shared 3310 HW RAID, Cluster 3.2

    Hi,
    I'm testing clustering on a couple of V240s, running identical Solaris 10 10/08 and Sun Cluster 3.2. In trying things, I may have messed up the cluster, so I may want to back it out and start over. Is that possible, or do I need to install Solaris fresh?
    But first, the problem. I have the array connected to both machines and working. I mount one LUN on /global/nfs using the device /dev/did/dsk/d4s0. Then I ran the commands:
    # clrt register SUNW.nfs
    # clrt register SUNW.HAStoragePlus
    # clrt list -v
    Resource Type Node List
    SUNW.LogicalHostname:2 <All>
    SUNW.SharedAddress:2 <All>
    SUNW.nfs:3.2 <All>
    SUNW.HAStoragePlus:6 <All>
    # clrg create -n stnv240a,stnv240b -p PathPrefix=/global/nfs/admin nfs-rg
    I enabled them just now so:
    # clrg status
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    nfs-rg stnv240a No Online
    stnv240b No Offline
    Then:
    # clrslh create -g nfs-rg cluster
    # clrslh status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    cluster stnv240a Online Online - LogicalHostname online.
    stnv240b Offline Offline
    I'm guessing that 'b' is offline because it's the backup.
    Finally, I get:
    # clrs create -t HAStoragePlus -g nfs-rg -p AffinityOn=true -p FilesystemMountPoints=/global/nfs nfs-stor
    clrs: stnv240b - Invalid global device path /dev/did/dsk/d4s0 detected.
    clrs: (C189917) VALIDATE on resource nfs-stor, resource group nfs-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource nfs-stor in resource group nfs-rg on node stnv240b failed.
    clrs: (C891200) Failed to create resource "nfs-stor".
    On stnv240a:
    # df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/did/dsk/d4s0 49G 20G 29G 41% /global/nfs
    and on stnv240b:
    # df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/did/dsk/d4s0 49G 20G 29G 41% /global/nfs
    Any help? Like I said, this is a test setup. I've started over once, so I can start over again if I did something irreversible.

    I still have the issue. I reinstalled from scratch and installed the cluster. Then I did the following:
    $ vi /etc/default/nfs
    GRACE_PERIOD=10
    $ ls /global//nfs
    $ mount /global/nfs
    $ df -h
    Filesystem size used avail capacity Mounted on
    /dev/global/dsk/d4s0 49G 20G 29G 41% /global/nfs
    $ clrt register SUNW.nfs
    $ clrt register SUNW.HAStoragePlus
    $ clrt list -v
    Resource Type Node List
    SUNW.LogicalHostname:2 <All>
    SUNW.SharedAddress:2 <All>
    SUNW.nfs:3.2 <All>
    SUNW.HAStoragePlus:6 <All>
    $ clrg create -n stnv240a,stnv240b -p PathPrefix=/global/nfs/admin nfs-rg
    $ clrslh create -g nfs-rg patience
    clrslh: IP Address 204.155.141.146 is already plumbed at host: stnv240b
    $ grep cluster /etc/hosts
    204.155.141.140 stnv240a stnv240a.mns.qintra.com # global - cluster
    204.155.141.141 cluster cluster.mns.qintra.com # cluster virtual address
    204.155.141.146 stnv240b stnv240b.mns.qintra.com patience patience.mns.qintra.com # global v240 - cluster test
    $ clrslh create -g nfs-rg cluster
    $ clrs create -t HAStoragePlus -g nfs-rg -p AffinityOn=true -p FilesystemMountPoints=/global/nfs nfs-stor
    clrs: stnv240b - Failed to analyze the device special file associated with file system mount point /global/nfs: No such file or directory.
    clrs: (C189917) VALIDATE on resource nfs-stor, resource group nfs-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource nfs-stor in resource group nfs-rg on node stnv240b failed.
    clrs: (C891200) Failed to create resource "nfs-stor".
    Now, on the second machine (stnv240b), /dev/global does not exist, but the file system mounts anyway. I guess that's cluster magic?
    $ cat /etc/vfstab
    /dev/global/dsk/d4s0 /dev/global/dsk/d4s0 /global/nfs ufs 1 yes global
    $ df -h /global/nfs
    Filesystem size used avail capacity Mounted on
    /dev/global/dsk/d4s0 49G 20G 29G 41% /global/nfs
    $ ls -l /dev/global
    /dev/global: No such file or directory
    I followed the other thread and ran devfsadm and scgdevs.
    One other thing I noticed: both nodes mount the global devices file system on node@1:
    /dev/md/dsk/d6 723M 3.5M 662M 1% /global/.devices/node@1
    /dev/md/dsk/d6 723M 3.5M 662M 1% /global/.devices/node@1
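    As a general sketch of how to re-check the global device namespace after a mismatch like this (these commands only rebuild and report; they are not guaranteed to fix the node@1 duplication shown above):
    devfsadm -Cv                  # rebuild /devices and /dev on each node
    scgdevs                       # or: cldevice populate - update the global device namespace
    cldevice list -v              # every shared disk should show a DID path from both nodes
    cldevice check                # compare the kernel device view with the DID database
    df -h /global/.devices/node@1 /global/.devices/node@2   # normally one entry per node, each on a different metadevice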
