Question about cluster node NodeWeight property

Hi,
I have a three-node (A/B/C) Windows 2008 R2 SP1 cluster named testCluster, with KB2494036 installed on all three nodes. Suppose node A is the active node.
I set node C's NodeWeight property to 0, while nodes A and B keep the default (NodeWeight=1). I also added a shared disk Q for the cluster quorum.
So what I want to know is: if node B and node C are both down, does the cluster testCluster go down because quorum is lost, or does it stay up?
At first I thought testCluster should stay up, because the cluster has 2 votes (node A and the quorum disk); node B is down and node C doesn't take part in voting. But in my test, testCluster went down due to loss of quorum.
Does anybody know the reason? Thanks.

Hello mark.gao,
Let me see if I understand your steps correctly. You created your cluster with three nodes, so the quorum model at the beginning should be "Node Majority", which gives you three votes, one per node.
Then you removed the vote for node C and added a disk as witness for the cluster quorum; at this point you have two out of the three votes from the original "Node Majority" configuration.
Question:
Did you at some point change the quorum model to "Node and Disk Majority"?
Maybe this is the issue: you are still on "Node Majority", and when nodes B and C are down you have only one vote, from node A, so there is no quorum to keep the service online. The disk can only act as a voting witness under "Node and Disk Majority".
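You can verify both the quorum model and the per-node votes from PowerShell; a minimal sketch (the witness disk resource name "Cluster Disk Q" is an assumption, check yours in Failover Cluster Manager):

    Import-Module FailoverClusters

    # Which quorum model is the cluster actually using?
    Get-ClusterQuorum -Cluster testCluster

    # Which nodes carry a vote? (NodeWeight needs KB2494036 on 2008 R2 SP1)
    Get-ClusterNode -Cluster testCluster | Format-Table Name, NodeWeight

    # If the intent was Node and Disk Majority with disk Q as the witness:
    Set-ClusterQuorum -Cluster testCluster -NodeAndDiskMajority "Cluster Disk Q"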
Windows Server 2012 also gives us the awesome option to configure Dynamic Quorum:
Dynamic quorum management
In Windows Server 2012, as an advanced quorum configuration option, you can choose to enable dynamic quorum management by cluster. When this option is enabled, the cluster dynamically manages
the vote assignment to nodes, based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node rejoins the cluster. By default, dynamic quorum management is enabled.
Note
With dynamic quorum management, the cluster quorum majority is determined by the set of nodes that are active members of the cluster at any time. This is an important distinction from the cluster quorum in Windows Server 2008 R2, where the quorum
majority is fixed, based on the initial cluster configuration.
With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node. By dynamically adjusting the quorum majority requirement, the cluster can sustain
sequential node shutdowns to a single node.
The cluster-assigned dynamic vote of a node can be verified with the DynamicWeight common property of the cluster node by using the Get-ClusterNode Windows PowerShell cmdlet. A value of 0 indicates that the node does not have a quorum vote. A value of 1 indicates that the node has a quorum vote.
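For example, on a Windows Server 2012 cluster (a one-line sketch):

    # Compare the configured vote with the cluster-assigned dynamic vote
    Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight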
The vote assignment for all cluster nodes can be verified by using the Validate Cluster Quorum validation test.
Additional considerations
Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote. 
Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
https://technet.microsoft.com/en-us/library/jj612870.aspx#BKMK_dynamic
Hope this info helps you reach your goal. :D
5ALU2 !

Similar Messages

  • Question about cluster node majority voting

    We've been having problems with a DB instance crashing regularly. This weekend when it crashed, it seems to have taken down the node it was on with it, or perhaps that was a separate incident...
    Right now I have 3 nodes in the cluster. 2 nodes are running 3 instances (2 on 1). The 3rd node is in a state where the OS is mostly unusable and the Cluster service will not start.
    Event Log:
    "The failover cluster database could not be unloaded. If restarting the cluster service does not fix the problem, please restart the machine."
    Cluster Log from that machine:
    00003768.000067a0::2014/01/06-03:28:05.393 INFO  -----------------------------+ LOG BEGIN +-----------------------------
    00003768.000067a0::2014/01/06-03:28:05.393 INFO  [CS] Starting clussvc as a service
    00003768.000067a0::2014/01/06-03:28:05.394 INFO  [CS] cluster service logging level is 2
    00003768.00004c30::2014/01/06-03:28:05.521 DBG   [NETFTAPI] received NsiInitialNotification
    00003768.00004c30::2014/01/06-03:28:05.523 DBG   [NETFTAPI] received NsiInitialNotification
    00003768.000031f4::2014/01/06-03:28:05.588 DBG   [NETFTAPI] received NsiAddInstance  for 169.254.3.47
    00003768.00004eb4::2014/01/06-03:28:05.590 ERR   [DM] Error while restoring (refreshing) the hive: STATUS_INVALID_PARAMETER(c000000d
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] mscs::DmAgent::Start: STATUS_INVALID_PARAMETER(c000000d' because of 'Load(NOTHROW(), securityAttributes, discardError )'
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] Node 3: failed to unload cluster hive, error 87.
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   Hive unload failed (status = 87)
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   FatalError is Calling Exit Process.
    This is a 3-node cluster set to Node Majority; I don't have an available drive letter for a witness disk. Since the cluster service won't start, I'm not certain how the cluster is still running, but I am thankful that it is.
    A reboot might fix everything, but I'm very worried that if I reboot the server and the cluster service still fails to start, it may prevent the entire cluster from starting, and we won't be able to run the instances on the other 2 nodes.
    Does the 3rd server still count toward the odd number of votes, even if the cluster service won't start? If I reboot and the cluster service still fails to start, will the cluster itself be able to stay in an UP state and run the DB instances on the other nodes?
    I already need to open an MS Support incident on the DB instance crashing, so I'd rather not have to open a 2nd one just to answer this hopefully simple question.
    Thanks in advance!
    Mark

    I'll answer it here, since it matters fundamentally to SQL High Availability.
    There are a couple of entities you are conflating here, leading to much confusion. There is a difference between the Cluster and the cluster service.
    The cluster service will run on a node once the Failover Cluster feature is installed on that node. The cluster service will run even if a cluster has not been created. It may generate errors and not participate in a Cluster if it cannot talk to the other nodes, but it will not shut down.
    The Cluster itself requires a quorum, that is, a majority of votes, in order to operate. With three nodes, you should choose the Node Majority quorum model, which sounds like what you have. Any two votes will count, so the third node being offline does not matter. You can safely restart the cluster service on the failed node, and even restart the node. Note that with the third node down, you have no redundancy. (Windows 2012 and 2012 R2 have dynamic quorum, which adjusts the quorum count based on the last "settled" quorum vote, but that doesn't apply here.)
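    A minimal sketch of that distinction, run from any node where the service is up:

        Get-Service -Name ClusSvc                      # the local cluster service itself
        Get-ClusterNode | Format-Table Name, State     # membership as the Cluster sees it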
    I am concerned by your statement that you are out of drive letters. With three instances, you should have plenty of drive letters left. I suggest investigating Mount Points (see the sketch below); you only need one drive letter per instance when using Mount Points.
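    A sketch of the idea, assuming Windows Server 2012 or later for the Storage cmdlets (disk/partition numbers and paths are placeholders; mountvol.exe does the same job on older versions):

        # Mount a data volume into an empty NTFS folder instead of a new drive letter
        New-Item -ItemType Directory -Path "E:\SQLData01"   # E: is the instance's one letter
        Add-PartitionAccessPath -DiskNumber 4 -PartitionNumber 2 -AccessPath "E:\SQLData01"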
    Geoff N. Hiten Principal Consultant Microsoft SQL Server MVP

  • A question about cluster

    Hi all,
    I have a question about the usage of clusters. I created a cluster
    which includes three indicators on the front panel, but I can't use
    the Unbundle function on the diagram. If I change the indicators to controls,
    it works. I didn't find anything in the manual saying that the Unbundle
    function only accepts clusters of controls as input.
    Thanks for your help!
    Regards,
    Tao

    This is because of the way dataflow programming works. The manual probably
    doesn't mention it explicitly.
    LabVIEW has data sources (controls, inputs) and data sinks (indicators,
    outputs). Unbundle accepts a cluster as input and outputs the individual
    pieces of data. Similarly, Bundle accepts individual pieces of data as
    input and outputs a cluster. So logically your cluster must be a control
    (input) for unbundling, and an indicator (output) for bundling.
    If I'm misunderstanding the question or oversimplifying the problem email me
    or post again.
    -joey
    "tsong" wrote in message news:b11154$qmq$[email protected]..
    > Hi all,
    >
    > I have a question about the usage of cluster. I create a cluaster
    > which includes three indicators on the front panel, but I can'
    t use
    > unbundle function on the diagram. If I change the indicator to control,
    > it works. I didn't find on the manual that the unbundle function only
    > accept cluster of control as input.
    >
    > Thanks for your help!
    >
    > Regards,
    >
    > Tao
    >
    >
    >

  • Question about Cluster/DSync/Load Balance

    According to the iPlanet admin doc, the primary server is
    the "manager" for data sync. Is there any impact on
    load balancing when the iAS runs as primary or backup?
    Will the primary kxs get the request first and do the dispatching?
    Thanks.
    Heng

    First of all, let's discuss load balancing.
    The type of load balancing you are using determines which process manages it. If you are using response time (per-server or per-component response time) or round robin (regular or weighted), the web connector does the load balancing. If you are using User Defined (iAS-based) load balancing, then the kxs process becomes involved with load balancing of requests, since the "Load Balancing System" is part of the kxs process.
    Now for Dsync and how it impacts load balancing.
    When a server is in a sync primary or sync backup role, it is doing more work. For the sync primary, the extra work is making sure the backup has the latest Dsync data and processing requests from the other servers in the cluster about the distributed data. All state/session information is updated/created/deleted on the sync primary; when this happens, the sync primary immediately updates the sync backup(s) with the new information. As you can guess, managing the Dsync information and making the updates to the sync backups causes extra processing on the sync primary, so this will impact the overall performance of the machine (whether in server load or in response time). All lookups of state/session information are done on the sync primary only, so the more lookups/updates you have, the more impact on the server.
    The sync backup(s) also have the extra work of managing their copy of the Dsync data, which will impact server performance, but to a lesser degree than on the sync primary.
    Ultimately, the extra overhead involved does have an impact on load balancing, due to the extra load on the sync primary and sync backups.
    Hope that helps,
    Chris Buzzetta

  • Questions about multiple nodes and licenses

    Nico, what would be the right forum to ask the licensing questions I stated above? Jimit

    I can't and won't discuss license issues, but for the last question (mixing nodes with different operating systems) there's a fairly easy answer why this can lead to all sorts of trouble.
    Windows and Linux/Unix usually work with different code pages, making it hard to interchange flat files. Text lines in Windows flat files are usually terminated by the two characters Carriage Return followed by Line Feed (0x0D followed by 0x0A), whereas under Linux/Unix text lines are always terminated by the Line Feed character (0x0A) only. Not all Windows and Linux/Unix programs can handle this difference without trouble.
    Very often, Windows and Linux/Unix machines run with different 8-bit code pages on the nodes themselves. This means that all sorts of diacritics (such as the German umlauts ä, ö, ü, ß) may be processed by programs on the two platforms in different ways.
    Shell scripts, batch files, and all sorts of operating system commands are very much incompatible between these systems, making it almost impossible (ok, just extremely difficult) to write scripts that run in both system worlds.
    Last but not least, Integration Services in a server grid must be of the same operating system and run with the same code page, meaning that these two nodes will never be able to be part of the same server grid.
    That's just what came to my mind within a few seconds of thinking. Regards, Nico
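    As a small illustration of the line-ending point, a PowerShell sketch that normalizes CRLF to LF before a flat file is handed to a Unix node (the paths are placeholders):

        $text = [System.IO.File]::ReadAllText("C:\exchange\flatfile.txt")
        [System.IO.File]::WriteAllText("C:\exchange\flatfile.unix.txt", ($text -replace "`r`n", "`n"))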

  • A question about cluster of indicators

    Hi,
    Here is what I want to achieve:
    Three indicators; use a cluster to change the displayed
    number
    Here is what I have done:
    1). Create three indicators on the front panel
    2). Put them in a cluster
    3). Create a local variable and change its attribute to read
    4). Unbundle the cluster local variable
    5). Now I can't wire any value to the output elements
    of the Unbundle function. (It seems all the indicators have
    become data sources.)
    How can I solve this problem?
    Thanks a lot for your help!
    Regards,
    Tao

    The issue is that a read local variable IS a data source. If you want to write to a control programmatically (promise me you are only going to do this in your user interface code) you have to use a write local variable. In your case, you need to bundle the three control values before writing the output of the bundler to the local variable.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Jdev 11g:  question about dvt:shapeAttributes and property "alt"

    Hi,
    Jdev 11.1.1.0.2
    I use <dvt:shapeAttributes> within a declarative component.
    This code works fine:
              <dvt:shapeAttributesSet>
                <dvt:shapeAttributes component="GAUGE_INDICATOR"
                                     alt="Your Tooltiptext"/>
              </dvt:shapeAttributesSet>
    The tooltip text should come from an attribute (parameter) of the declarative component:
              <dvt:shapeAttributesSet>
                <dvt:shapeAttributes component="GAUGE_INDICATOR"
                                     alt="#{attrs.tooltipLed1}"/>
              </dvt:shapeAttributesSet>
    But this gives a runtime error:
    29.06.2009 11:43:20 oracle.adfinternal.view.faces.config.rich.RegistrationConfigurator handleError
    SCHWERWIEGEND: Server Exception during PPR, #9
    javax.el.MethodNotFoundException: Method not found: {}.tooltipLed1(oracle.dss.dataView.ComponentHandle)
         at com.sun.el.util.ReflectionUtil.getMethod(ReflectionUtil.java:143)
         at com.sun.el.parser.AstValue.invoke(AstValue.java:154)
         at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:283)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.RenderUtils.invokeMethodExpression(RenderUtils.java:271)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.RenderUtils.getDHTMLHandler(RenderUtils.java:285)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.RenderUtils.getAlt(RenderUtils.java:312)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.ImageViewRendererUtils.renderImagemap(ImageViewRendererUtils.java:1636)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.ImageViewRendererUtils.encodeBeginIMG(ImageViewRendererUtils.java:373)
         at oracle.adfinternal.view.faces.bi.renderkit.imageView.RichImageViewRenderer.encodeAll(RichImageViewRenderer.java:775)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeChild(CoreRenderer.java:304)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer._encodeChild(PanelGroupLayoutRenderer.java:372)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer.access$300(PanelGroupLayoutRenderer.java:30)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer$EncoderCallback.processComponent(PanelGroupLayoutRenderer.java:621)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer$EncoderCallback.processComponent(PanelGroupLayoutRenderer.java:540)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:111)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:187)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:153)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer.encodeAll(PanelGroupLayoutRenderer.java:292)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeChild(CoreRenderer.java:304)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer._encodeHorizontalChild(PanelGroupLayoutRenderer.java:438)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer.access$100(PanelGroupLayoutRenderer.java:30)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer$EncoderCallback.processComponent(PanelGroupLayoutRenderer.java:598)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer$EncoderCallback.processComponent(PanelGroupLayoutRenderer.java:540)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:111)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:187)
         at org.apache.myfaces.trinidad.component.UIXComponent.processFlattenedChildren(UIXComponent.java:153)
         at oracle.adfinternal.view.faces.renderkit.rich.PanelGroupLayoutRenderer.encodeAll(PanelGroupLayoutRenderer.java:292)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeChild(CoreRenderer.java:304)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeAllChildren(CoreRenderer.java:321)
         at oracle.adfinternal.view.faces.renderkit.rich.DeclarativeComponentRenderer.encodeAll(DeclarativeComponentRenderer.java:61)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.encodeEnd(ContextSwitchingComponent.java:133)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeChild(CoreRenderer.java:304)
         at oracle.adfinternal.view.faces.renderkit.rich.table.BaseColumnRenderer.renderDataCell(BaseColumnRenderer.java:1072)
         at oracle.adfinternal.view.faces.renderkit.rich.table.BaseColumnRenderer.encodeAll(BaseColumnRenderer.java:101)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeChild(CoreRenderer.java:304)
         at oracle.adfinternal.view.faces.renderkit.rich.TableRenderer.renderDataBlockRows(TableRenderer.java:1714)
         at oracle.adfinternal.view.faces.renderkit.rich.TableRenderer._renderSingleDataBlock(TableRenderer.java:1424)
         at oracle.adfinternal.view.faces.renderkit.rich.TableRenderer._handleDataFetch(TableRenderer.java:836)
         at oracle.adfinternal.view.faces.renderkit.rich.TableRenderer.encodeAll(TableRenderer.java:393)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.component.UIXCollection.encodeEnd(UIXCollection.java:533)
         at org.apache.myfaces.trinidad.render.RenderUtils.encodeRecursive(RenderUtils.java:70)
         at oracle.adfinternal.view.faces.util.rich.InvokeOnComponentUtils$RenderCallback.invokeContextCallback(InvokeOnComponentUtils.java:97)
         at org.apache.myfaces.trinidad.component.UIXCollection.invokeOnComponent(UIXCollection.java:1030)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at oracle.adf.view.rich.component.fragment.UIXRegion.invokeOnComponent(UIXRegion.java:551)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.invokeOnComponent(ContextSwitchingComponent.java:153)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.invokeOnComponent(ContextSwitchingComponent.java:153)
         at oracle.adf.view.rich.component.fragment.UIXPageTemplate.invokeOnComponent(UIXPageTemplate.java:208)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponent.invokeOnComponent(UIComponent.java:731)
         at javax.faces.component.UIComponentBase.invokeOnComponent(UIComponentBase.java:664)
         at oracle.adfinternal.view.faces.util.rich.InvokeOnComponentUtils.renderChild(InvokeOnComponentUtils.java:29)
         at oracle.adfinternal.view.faces.streaming.StreamingDataManager._pprComponent(StreamingDataManager.java:577)
         at oracle.adfinternal.view.faces.streaming.StreamingDataManager.execute(StreamingDataManager.java:442)
         at oracle.adfinternal.view.faces.renderkit.rich.DocumentRenderer._encodeStreamingResponse(DocumentRenderer.java:2125)
         at oracle.adfinternal.view.faces.renderkit.rich.DocumentRenderer.encodeAll(DocumentRenderer.java:788)
         at oracle.adf.view.rich.render.RichRenderer.encodeAll(RichRenderer.java:1050)
         at org.apache.myfaces.trinidad.render.CoreRenderer.encodeEnd(CoreRenderer.java:224)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:764)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.__encodeRecursive(UIXComponentBase.java:1352)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.encodeAll(UIXComponentBase.java:784)
         at javax.faces.component.UIComponent.encodeAll(UIComponent.java:942)
         at com.sun.faces.application.ViewHandlerImpl.doRenderView(ViewHandlerImpl.java:273)
         at com.sun.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:204)
         at view.backing.CustomViewHandler.renderView(CustomViewHandler.java:57)
         at javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:189)
         at org.apache.myfaces.trinidadinternal.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:188)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._renderResponse(LifecycleImpl.java:652)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:243)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:203)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:266)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:181)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:85)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:279)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._invokeDoFilter(TrinidadFilterImpl.java:239)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:196)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:139)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.security.jps.wls.JpsWlsFilter$1.run(JpsWlsFilter.java:85)
         at java.security.AccessController.doPrivileged(Native Method)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:257)
         at oracle.security.jps.wls.JpsWlsSubjectResolver.runJaasMode(JpsWlsSubjectResolver.java:250)
         at oracle.security.jps.wls.JpsWlsFilter.doFilter(JpsWlsFilter.java:100)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:65)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:149)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    From the documentation I can see the property "alt" takes a reference to a backing bean.
    http://www.oracle.com/technology/products/adf/adffaces/11/doc/dvt/tagdoc/dvt_shapeAttributes.html
    It's not a problem to write this backing bean to access the value of attrs.tooltipLed1, but just for my understanding:
    Why is it possible to pass a literal string for the property "alt" but not the EL expression "#{attrs.tooltipLed1}"?
    regards
    Peter

    Hello Branislav,
    attrs is just the parameter interface to the declarative component and contains the parameters tooltipLed1, tooltipLed2, ...
        <af:xmlContent>
          <component xmlns="http://xmlns.oracle.com/adf/faces/rich/component">
            <display-name>ledBar</display-name>
            <component-class>component.LedBar</component-class>
            <attribute>
              <attribute-name>tooltipLed1</attribute-name>
              <attribute-class>java.lang.String</attribute-class>
            </attribute>
            <attribute>
              <attribute-name>tooltipLed2</attribute-name>
              <attribute-class>java.lang.String</attribute-class>
            </attribute>
            <component-extension>
              <component-tag-namespace>component</component-tag-namespace>
              <component-taglib-uri>/tannStandardComponents</component-taglib-uri>
            </component-extension>
          </component>
        </af:xmlContent>
    regards
    Peter

  • Question about context node filling

    Hi all gurus; I'm struggling with a simple task and hope that someone can help.
    Shortly: I defined a data structure as follows:
    DATA: BEGIN OF ls_struct,
            vendor  TYPE bbp_bp_orga,
            cptable TYPE zebp_contpers_t, "this is a Table Type!
          END OF ls_struct,
          lt_struct LIKE TABLE OF ls_struct.
    So, there's a table lt_struct whose line is built up from:
    - a "flat" field;
    - an internal table (made up of some fields).
    I need to TRANSPORT this information from one view to another using a common context node shared via the ComponentController.
    I then tried to create such a node in the context, but there's something I must have done wrong:
    Here's my sketch:
    CP_FOR_BIDDERS has no Dictionary structure; here's the subnode CPTABLE:
    I then tried to store the values in these nodes in my methods:
    IF lt_struct IS NOT INITIAL.
      DATA lo_nd_cp_for_bidders TYPE REF TO if_wd_context_node.
      lo_nd_cp_for_bidders = wd_context->get_child_node( name = wd_this->wdctx_cp_for_bidders ).
      " set_initial_elements and index are left at their defaults
      lo_nd_cp_for_bidders->bind_table( new_items = lt_struct ).
    ENDIF.
    However, when I then GET the values from the node, I can see only the VENDOR values, while the associated internal table is always blank.
    What am I missing?
    Thanks in advance

    Hi Matteo,
    You also need to bind the CPTABLE node for each CP_FOR_BIDDERS element. One example is below; you can also loop through the table of elements for node CP_FOR_BIDDERS, fetch each element's CPTABLE node, and bind the CPTABLE data to each one.
        DATA lo_nd_cptable TYPE REF TO if_wd_context_node.
        lo_nd_cptable = wd_context->path_get_node( path = `CP_FOR_BIDDERS.CPTABLE` ).
        lo_nd_cptable->bind_table( new_items            = lt_cptable_data
                                   set_initial_elements = abap_true ).
    Cheers,
    Amy

  • Question about adding an Extra Node to SOFS cluster

    Hi, I have a fully functioning SOFS cluster with two nodes. It uses SAN FC storage, not SAS JBODs, and it's running about 100 VMs in production at the moment.
    Both my nodes currently sit on one blade chassis, but for resiliency I want to add another node from a blade chassis in our secondary, smaller on-site DC.
    I've done plenty of cluster node upgrades before on SQL and Hyper-V, but never with a SOFS cluster.
    I have the third node fully prepared: it can see the disks (the FC LUNs) on the SAN (using PowerPath and Disk Manager), and all the roles are installed.
    So in theory I can just add this node in Cluster Manager and it should all be good. My question is: has anyone else done this, is there anything else I should be aware of, and what's the best way to check that the new node will function and be able to take the file server role without issues? I know I can run a validation when adding the node; I presume this is the best option?
    I cannot find much information on the web about expanding a SOFS cluster.
    Any advice or information would be gratefully received!!
    cheers
    Mark

    Hi Mark,
    Sorry for the delay in replying.
    As you said, there is not much information related to adding a node to a SOFS cluster.
    The only article I could find is related to System Center (VMM):
    How to Add a Node to a Scale-Out File Server in VMM
    http://technet.microsoft.com/en-us/library/dn466530.aspx
    However, adding a node to a SOFS cluster should be as simple as the preparation you have already done. You can try it and see the result (see the sketch below).
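    A minimal PowerShell sketch of the sequence (cluster, node, and role names are placeholders):

        # Validate the prepared node together with the existing members, then add it
        Test-Cluster -Cluster SOFS01 -Node Node1, Node2, NewNode
        Add-ClusterNode -Cluster SOFS01 -Name NewNode
        # Afterwards, try a planned move of the file server role to the new node
        Move-ClusterGroup -Cluster SOFS01 -Name "SOFS-Role" -Node NewNode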
    If you have any feedback on our support, please send to [email protected]

  • Question about DBCA generated script to create a RAC database on a 2-node cluster

    A question about creating a two-node RAC database (11g) after installing and configuring 11g Clusterware. I used DBCA to generate a script to create a RAC database. I set the
    environment variable ORACLE_SID=RAC, and the creation script creates instances RAC1 and RAC2. My understanding is that each instance will run on one node, but there should only be one database with the name 'RAC'. Please advise.

    You are getting your terminology mixed up.
    You only have one database. Take a look; there is one set of datafiles on shared storage.
    You have 2 instances which are accessing one database.
    Database name is RAC. Instance names are RAC1, RAC2, etc, etc.
    Also, if you look at the listener configuration, and if your tnsnames is set up properly, then connecting to RAC will connect you to either one of the instances, whereas connecting to RAC1 will connect you to that specific instance.
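    For illustration, a minimal tnsnames.ora sketch of that behavior (the VIP host names are placeholders):

        RAC =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (LOAD_BALANCE = yes)
              (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
              (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)))
            (CONNECT_DATA = (SERVICE_NAME = RAC)))

        RAC1 =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = RAC)(INSTANCE_NAME = RAC1)))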

  • BizTalk Enterprise Passive Cluster Node Licensing Question

    Can anyone confirm whether you need to license a passive BizTalk Enterprise cluster node if no BizTalk components are running on it? If so, is there an official reference I can refer to? The PUR has a section on Running Instances stating that components must be
    in memory to require a license, but no BizTalk components would be in memory if all services are stopped. All the references I have read about passive nodes state they must be licensed, though.
    Nikolai Blackie Adaptiv Integration

    The reason I am asking this question is that there is a site with a small cluster with all hosts and SSO under cluster management; on the effectively passive node there are no actually running instances of any BizTalk components. Not strictly HA, but certainly
    quicker than DR. Personally I would have just used VM failover, but that design decision was made a long time ago.
    http://msdn.microsoft.com/en-us/library/aa578057.aspx
    This is a relatively grey area in terms of licensing the configuration, and depending on how you interpret the PUR, non-licensable passive nodes appear to be valid under the documented terms.
    It would just be great if there were something somewhere that said outright either that all BizTalk servers in a cluster must have an assigned server license, or that cluster nodes with no running components are not licensable =)
    Nikolai Blackie Adaptiv Integration

  • A question about Job scheduling in cluster

    hi all
    I have a WebLogic cluster and want to use the built-in CommonJ support to do some scheduling work. The PDF version of the document "Timer and Work Manager API (CommonJ) Programmer's Guide" says something like this on page 7: "The TimerListener class must be present in the server system classpath." Does it mean that I should not put it in WEB-INF/classes? Instead, should I jar it and put the jar somewhere inside wls_home/server/lib or ext?
    thanks a lot :-]

    hi mchellap,
    here is another question about timers in the cluster:
    1) I implemented a serializable TimerListener which I want to make cluster-aware
    2) I put the JNDI entry "timer/MyTimer" in web.xml, mapped to commonj.timers.TimerManager
    3) I created a data source on the cluster in the console, with the tables created in the DB
    After the cluster is started, the job (which prints out "new Date()" to the console every 40 seconds) works very well.
    I am expecting something in the DB table, but there is nothing, not even an exception. Anything wrong here?
    thanks a lot
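    For reference, the JNDI entry from step 2 above typically looks like this in web.xml (a sketch; the resource name comes from the post, and the sharing scope follows the CommonJ spec):

        <resource-ref>
          <res-ref-name>timer/MyTimer</res-ref-name>
          <res-type>commonj.timers.TimerManager</res-type>
          <res-auth>Container</res-auth>
          <res-sharing-scope>Unshareable</res-sharing-scope>
        </resource-ref>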

  • Question about Local Variables (Multiple answers welcomed!)

    A couple of questions about local variables:
    1. Programmers always say: "Do not abuse local variables." I'd like to know when and where local variables are most efficiently used.
    2. If I have to create a couple of local variables, is there any way to "clone" them without going through the repetitive "create/local variable" mouse clicks each time? (When I try to copy and paste, it creates a new variable instead of a copy of the one I am trying to reproduce.)
    3. Which is faster in execution when updating a value: a) writing to a property node's Value property, or b) writing through a local variable?
    Everyone’s input is welcomed, so if this question is already answered, please
    feel free to add additional comments/answers!

    1. Use local variables in user interface code and nowhere else. The only exception is using a local variable of a cluster output to define the datatype for a Bundle by Name node.
    2. You can drag-copy them, then right-click to get a menu of all the currently defined controls and indicators on the VI.
    3. B. The problem with A is that it forces a thread switch to the user interface thread, which can take time if you aren't already in it, and it's a very convoluted process under the hood. NI's advice: never update indicator values through a property node unless you absolutely, positively can't figure out some other way of doing it.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Question about the programming of a legend

    Hello everybody,
    I have a question about programming a waveform chart's legend. I
    already asked about legend programming here in this forum three (03)
    months ago.
    I was satisfied, but I've just noticed that this code
    (see old_legend_test.llb with main.vi as the main function) behaves a
    little differently from what I expected.
    Therefore I have a new question: I want to know whether it
    is possible, using LabVIEW programming, to plot and show, on a waveform
    chart, a signal whose active plot index is greater than zero (0) without
    being obliged to also plot and show a signal with active plot index zero (0)
    or below the desired index.
    Let me give an example of what I mean. Say I have 4 signals (signals 0, 1, 2 and 3),
    and each signal corresponds respectively to a channel (Chan1, Chan2,
    Chan3, Chan4). I want to control the legend (active plot, plot name and
    plot color) programmatically. Is it possible with LabVIEW to plot signal
    1 or 2 or 3 or (1,3) or (2,3) or (1,2,3), or any other possible combination,
    without activating the signals with lower active plot indices?
    Please see the attached LabVIEW files
    (new_legend_test.llb with main.vi as the main function). When I try to
    control the selected input values I can read them back, but I don't
    understand why they have no effect on the legend of my waveform chart.
    Could somebody explain what I'm doing wrong, or show me how to get a
    correct legend with the desired plots? Thanks in advance for your assistance.
    N.B.
    Both attached files were saved with LabVIEW 2009.
    Sincerely, PrinceJack
    Attachments:
    old_legend_test.llb ‏65 KB
    new_legend_test.llb ‏65 KB

    Hi princejack,
    Thanks for posting on the National Instruments forum.
    The behavior you see is completely normal. You can control the number of rows displayed in the legend, and these rows are linked to the data you send to your graph. Thus, if you have 3 arrays of data, say chan1, chan2 and chan3, you can choose which data to display in your graph using the property nodes (Active Plot and Plot Visible). But for the legend, since you send 3 plots, there is an array of plot names [chan1, chan2, chan3], and you can display 0, 1, 2 or 3 rows of this array; you cannot control the order within this array. So, to be able to change this array, you have to send only the data you need to your graph. I'm not sure my explanations are clear, so I have implemented a simple example showing this.
    Benjamin R.
    R&D Software Development Manager
    http://www.fluigent.com/
    Attachments:
    GraphLegend.vi ‏85 KB

  • The question about the HA installation on ECC6.0

    Hi Experts,
    We are about to implement a project with an HA environment on ECC 6.0 in the near future, involving just the ABAP stack. After reading the Installation Guide, I still have several questions about the procedure for an HA installation.
    From the guide document, I understand the following steps for realizing HA for ECC 6.0:
    1. Run SAPinst to install the central services instance (ASCS) using the virtual host name on the primary cluster node, host A.
    2. Prepare the standby node, host B, making sure that it meets the hardware and software requirements and has all the necessary file systems, mount points, and (if required) Network File System (NFS), as described in Preparing for Switchover.
    3. Set up the user environment on the standby node, host B. For more information, see Creating Operating System Users and Groups Manually. Make sure that you use the same user and group IDs as on the primary node. Create the home directories of users and copy all files from the home directories on the primary node.
    4. Configure the switchover software and test that switchover functions correctly.
    5. Install the database instance on the primary node, host A.
    6. Install the central instance with SAPinst on the primary node, host A.
    7. If required, install additional dialog instances with SAPinst to replicate the SAP system services that are not single points of failure (SPOFs). These nodes do not need to be part of the cluster.
    My question is: does the standby node (host B in the above context) need its own installation of the ASCS, database instance, and central instance?
    If host B does not need its own database instance installation, what happens to the whole system when the primary cluster node (host A in the above context) crashes completely, for example because of a power failure?

    Hi Rong,
    I would try to explain it in simple words...
    1. You don't need to install the ASCS on node B. You install it using a VIRTUAL HOSTNAME, which represents the cluster, not an individual node. The VIRTUAL HOSTNAME is assigned to the cluster package, so whichever node owns the package will have the VIRTUAL HOSTNAME. (It switches with a cluster switchover.)
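    As a side note, the ASCS installation from step 1 of your list is what makes this work: it is typically started with SAPinst bound to the virtual hostname (a sketch; the hostname is a placeholder):

        ./sapinst SAPINST_USE_HOSTNAME=<ascs_virtual_hostname>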
    2. It is actually cluster package configuration magic. When node A is active, the cluster package owner is node A, so all mount points (which are on SAN disk) are mounted on node A. When you switch the cluster over, those packages are mounted on node B.
    Sometimes a single cluster package is used (which includes the mount points for the SAP instance plus the database directories). You can also use two cluster packages, separating the SAP and database directory structures.
    Only OS-related directories should be on a server's local disk. All other application-related mount points should be on SAN disk configured in the "Cluster Package" (for example /sapmnt, /usr/sap, /oracle, etc.).
    You only need identical users and their environment settings on both nodes.
    In simple words: when the primary node fails or crashes, only the users and their environment settings on that node are lost. On the second node, because of the identical users and profiles, the same settings are available to bring up the SAP system. All your SAP and database data is intact, as it is on the SAN disk.
    I hope your confusion is cleared up now...
    Regards.
    Rajesh Narkhede
