Redundant cluster indicators

Hello,
I have 95 round Boolean LED indicators in a cluster on the front panel.
I created my LEDs one by one, so that each has a distinct label, from "Boolean 1" to "Boolean 95".
I built a chaser (running-light pattern) that I am sure works correctly (see the VI I posted on the forum for another question: here).
I simply added the "Array To Cluster" function before my indicator cluster of 95 Booleans, to visualize the state of my outputs.
However, I have found no way, on the diagram or from the cluster box, to check the order in which my Booleans are arranged.
I cannot get at the details of the cluster.
This would not be a problem if the LEDs lit in order from Boolean 1 to Boolean 95, but that is not the case.
Some of my LEDs even light several times.
I hope I have been clear enough, and that you can tell me how to see the details of my cluster, or suggest another procedure.
Solved!
Go to Solution.

Hello,
To find out the order in which the controls are arranged:
Right-click the cluster frame » Reorder Controls In Cluster.
Autosizing can also be useful...
Regards,
Da Helmut
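The underlying pitfall can be modeled outside LabVIEW: "Array To Cluster" assigns array element i to the cluster element at cluster order i, regardless of the labels on the LEDs. A minimal Python sketch of that semantics (names and the single swap are illustrative, not the poster's actual VI):

```python
# Model a chaser over 95 outputs, displayed through a cluster whose
# internal order does not match the label order.
N = 95

def chaser_frame(step, n=N):
    """One frame of the chaser: only output `step % n` is lit."""
    return [i == step % n for i in range(n)]

# Cluster order as it might end up if LEDs were created or reordered
# arbitrarily: the element at cluster position p carries label order[p].
cluster_order = list(range(N))
cluster_order[0], cluster_order[1] = cluster_order[1], cluster_order[0]  # one swap

def array_to_cluster(frame, order):
    """'Array To Cluster' semantics: array index p -> cluster position p."""
    # The LED labeled "Boolean {order[p]+1}" lights when frame[p] is True.
    return {f"Boolean {order[p] + 1}": frame[p] for p in range(len(frame))}

lit = [label for label, on in array_to_cluster(chaser_frame(0), cluster_order).items() if on]
# With cluster positions 0 and 1 swapped, step 0 lights "Boolean 2", not "Boolean 1".
```

This reproduces the reported symptom: the chaser is correct, but the LEDs light out of label order because cluster order, not label, decides which LED receives which array element.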

Similar Messages

  • Error (examples)

    Hello everyone,
    A small question from the beginner that I am.
    I am tackling the subject of error handling,
    and I realize it is an important one.
    I do not have a clear picture of what an "error" is and when one can occur.
    I have plenty of documentation ... but one thing is missing ... examples of VIs that generate an error.
    Nothing beats an example you can analyze, modify, and turn inside out.
    Without a few examples, the idea stays abstract.
    Could one of you please send me 1 or 2 simple code examples that generate an error?
    For instance, a While or For loop with the minimum code needed to generate an error.
    So that I can "see" and "touch" the thing!
    Thanks to all,
    Solved!
    Go to Solution.

    Hello Ouadji,
    In LabVIEW, an error is represented by a cluster of 3 elements:
    a Boolean: it indicates whether an error was generated (true) or not (false)
    a numeric integer: it gives the error code when an error is present
    a string: it provides a description of the generated error
    You can therefore generate your own error if you wish.
    It is indeed possible to create an "error cluster" constant; manipulate that constant however you like to generate your error.
    Here is a small VI that generates an error. Not knowing exactly what type of error you want for your analyses, I made a very simple file I/O program with a While loop.
    The error occurs because the designated file "fail.ex" cannot be found.
    By wiring an "error cluster" indicator to the error wire, you will see more detail on the values of the cluster's elements.
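The same three-element structure can be sketched in Python (a model of the LabVIEW error cluster, not its API; the file name mirrors the "fail.ex" example above, and the helper names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ErrorCluster:
    # Mirrors LabVIEW's error cluster: status, code, source/description.
    status: bool = False   # True when an error occurred
    code: int = 0          # error code (0 = no error)
    source: str = ""       # description of the error

def open_file(path):
    """File I/O step that fills the error cluster instead of raising."""
    try:
        with open(path) as f:
            return f.read(), ErrorCluster()
    except FileNotFoundError:
        # Error 7 is LabVIEW's "File not found" code.
        return None, ErrorCluster(True, 7, f"Open File: {path} not found")

data, err = open_file("fail.ex")
# err.status is True and err.source describes the missing file.
```

Wiring the error wire into an indicator, as suggested above, is the LabVIEW equivalent of inspecting `err.status`, `err.code`, and `err.source` here.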
    Hoping this helps!
    Regards,
    Vincent.O
    National Instruments France

  • How do I add indication to a cluster

    I have a couple of radio buttons in a cluster and I'd like to add indication to the individual buttons within the cluster. How is this done?

    Disregard this post, I figured out how to do it in the help section.

  • Unable to create cluster, hangs on forming cluster

     
    Hi all,
    I am trying to create a 2-node cluster on two x64 Windows Server 2008 Enterprise Edition servers. I am running the setup from the Failover Cluster MMC and it seems to run fine right up to the point where the snap-in says creating cluster. Then it seems to hang on "forming cluster" and a message pops up saying "The operation is taking longer than expected". A counter comes up, and when it hits 2 minutes the wizard cancels and another message appears: "Unable to successfully cleanup".
    The validation runs successfully before I start trying to create the cluster. The hardware involved is an HP EVA 6000 and two Dell 2950s.
    I have included the report generated by the create cluster wizard below and the error from the event log on one of the machines (the error is the same on both machines).
    Is there anything I can do to give me a better indication of what is happening, so I can resolve this issue or does anyone have any suggestions for me?
    Thanks in advance.
    Anthony
    Create Cluster Log
    ==================
    Beginning to configure the cluster <cluster>.
    Initializing Cluster <cluster>.
    Validating cluster state on node <Node1>
    Searching the domain for computer object 'cluster'.
    Creating a new computer object for 'cluster' in the domain.
    Configuring computer object 'cluster' as cluster name object.
    Validating installation of the Network FT Driver on node <Node1>
    Validating installation of the Cluster Disk Driver on node <Node1>
    Configuring Cluster Service on node <Node1>
    Validating installation of the Network FT Driver on node <Node2>
    Validating installation of the Cluster Disk Driver on node <Node2>
    Configuring Cluster Service on node <Node2>
    Waiting for notification that Cluster service on node <Node2>
    Forming cluster '<cluster>'.
    Unable to successfully cleanup.
    To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
    Event Log
    =========
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          29/08/2008 19:43:14
    Event ID:      1570
    Task Category: None
    Level:         Critical
    Keywords:     
    User:          SYSTEM
    Computer:      <NODE 2>
    Description:
    Node 'NODE2' failed to establish a communication session while joining the cluster. This was due to an authentication failure. Please verify that the nodes are running compatible versions of the cluster service software.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{baf908ea-3421-4ca9-9b84-6689b8c6f85f}" />
        <EventID>1570</EventID>
        <Version>0</Version>
        <Level>1</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2008-08-29T18:43:14.294Z" />
        <EventRecordID>4481</EventRecordID>
        <Correlation />
        <Execution ProcessID="2412" ThreadID="3416" />
        <Channel>System</Channel>
        <Computer>NODE2</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">node2</Data>
      </EventData>
    </Event>
    ====
    I have also since tried creating the cluster with the firewall, with no success.
    I have tried creating the node from the other cluster, and this did not work either.
    I tried creating a cluster with just a single node, and this did create a cluster. However, I could not join the other node, and the network name resource did not come online either. The below is from the event logs.
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          01/09/2008 12:42:44
    Event ID:      1207
    Task Category: Network Name Resource
    Level:         Error
    Keywords:     
    User:          SYSTEM
    Computer:      Node1.Domain
    Description:
    Cluster network name resource 'Cluster Name' cannot be brought online. The computer object associated with the resource could not be updated in domain 'Domain' for the following reason:
    Unable to obtain the Primary Cluster Name Identity token.
    The text for the associated error code is: An attempt has been made to operate on an impersonation token by a thread that is not currently impersonating a client.
    The cluster identity 'CLUSTER$' may lack permissions required to update the object. Please work with your domain administrator to ensure that the cluster identity can update computer objects in the domain.

    I am having the exact same issue... but these are on freshly created virtual machines... no group policy or anything...
    I am 100% unable to create a virtual Windows Server 2012 failover cluster using two virtual Fibre Channel adapters to connect to the shared storage.
    I've tried using the GUI and PowerShell, I've tried adding all available storage or not adding it, and I've tried renaming the server and changing all the IP addresses....
    To reproduce:
    1. Create two identical Server 2012 virtual machines
    (My config: 4 CPUs, 4GB-8GB dynamic memory, 40GB HDD, two network cards (one for private, one for mgmt), two fiber cards to connect one to each VSAN.)
    2. Update both VMs to current Windows updates
    3. Add the Failover Clustering role, reboot, and try to create the cluster.
    The cluster passed all validation tests perfectly, but then it gets to "forming cluster" and times out =/
    Any assistance would be greatly appreciated.

  • Saving cluster of different data types to a file

    Hi,
    I use LV 8.6 SDK. I need to save clusters of different data types to a file on disk, row by row.
    To be specific: I have a program that performs various investigations on a signal collected by DAQmx. Each time the quality of the signal is not within specified boundaries, I get an indication. It is a cluster of a time stamp, a string, a dbl, and a Boolean. The program is supposed to run for a few weeks in a row, so there can be a lot of these indications. I expect around 200,000 rows a week (altogether, divided into several groups).
    I thought about TDMS, but I am not able to save such a cluster, and I would like to save it as TDMS because I could divide the data into different groups. I also thought about a database, but that would be the first time I use a DB, and I really do not have time to learn that now.
    I know it is possible to change some of the data types to others, e.g. Boolean to 0-1, but I need a string and a time stamp there.
    Can someone advise me which data format I should use? Which one is best in this situation?
    Thanks in advance
    handre

    If you do not need to access the data from another application (other than LabVIEW), you can just save it as a binary file.
    It is the best choice (for me).
    I made an example with one cluster. You can replace that with an array of clusters of that data type.
    Attachments:
    Example_VI_BD.png ‏2 KB
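For reference, the binary-file approach can be sketched outside LabVIEW. This Python sketch (record layout and names are assumptions, not the attached VI) writes one (timestamp, string, double, Boolean) row per record, length-prefixing the string so rows of varying size can be read back:

```python
import io
import struct

def write_row(buf, ts, msg, value, flag):
    """Append one record: f64 timestamp, u32 string length + bytes, f64, u8 bool."""
    data = msg.encode("utf-8")
    buf.write(struct.pack("<dI", ts, len(data)))
    buf.write(data)
    buf.write(struct.pack("<d?", value, flag))

def read_row(buf):
    """Read one record back in the same layout."""
    ts, n = struct.unpack("<dI", buf.read(12))
    msg = buf.read(n).decode("utf-8")
    value, flag = struct.unpack("<d?", buf.read(9))
    return ts, msg, value, flag

buf = io.BytesIO()  # stands in for a file opened in binary mode
write_row(buf, 1600000000.5, "signal out of bounds", 3.14, True)
buf.seek(0)
# read_row(buf) round-trips to (1600000000.5, "signal out of bounds", 3.14, True)
```

At roughly 200,000 rows a week this stays small and fast; the grouping the poster wanted from TDMS could be approximated with one such file per group.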

  • Multiple Senior Cluster Members?

    Hi Guys,
    We've had a few nodes kicked out of one of our production clusters all with messages similar to this:
    ERROR 2008-04-21 18:17:05.753 Oracle Coherence GE 3.3.1/389p1 <Error> (thread=Cluster, member=29): Received cluster heartbeat from the senior Member(Id=2, Timestamp=2008-04-18 11:07:21.948, Address=172.21.205.151:8089, MachineId=29847, Location=process:17367@trulxfw0006,member:trulxfw0006-2) that does not contain this Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5); stopping cluster service.
    DEBUG 2008-04-21 18:17:05.753 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=29): Service Cluster left the cluster
    The logs on the senior member (2) are interesting though:
    DEBUG 2008-04-21 18:17:05.601 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service Management with senior member 2
    DEBUG 2008-04-21 18:17:05.602 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service WriteQueueSync with senior member 3
    DEBUG 2008-04-21 18:17:05.602 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service WriteQueueAsync with senior member 3
    DEBUG 2008-04-21 18:17:05.603 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service DistributedCache with senior member 3
    DEBUG 2008-04-21 18:17:05.603 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service ServiceControl with senior member 3
    DEBUG 2008-04-21 18:17:05.604 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left service InvocationService with senior member 2
    DEBUG 2008-04-21 18:17:05.607 Oracle Coherence GE 3.3.1/389p1 <D5> (thread=Cluster, member=2): Member 29 left Cluster with senior member 2
    I wasn't aware that there could be multiple senior members. Is this indicative of something bad going on?
    Thanks, Paul
    PS Metalink is not behaving so I can't raise it there.

    Hi Jon,
    I've done some more digging into the logs for the whole cluster.
    Of the five nodes that left the cluster, three have the same reason:
    trulxfw0002/180-primary-0.log.1:ERROR 2008-04-21 17:05:19.763 Oracle Coherence GE 3.3.1/389p1 <Error> (thread=Cluster, member=34): This node appears to have partially lost the connectivity: it receives responses from MemberSet(Size=2, BitSetCount=2, ids=[8, 32]) which communicate with Member(Id=22, Timestamp=2008-04-18 11:07:29.911, Address=172.21.205.149:8092, MachineId=29845, Location=process:11858@trulxfw0004,member:trulxfw0004-5), but is not responding directly to this member; that could mean that either requests are not coming out or responses are not coming in; stopping cluster service.
    trulxfw0003/180-primary-7.log.3:ERROR 2008-04-21 00:40:02.153 Oracle Coherence GE 3.3.1/389p1 <Error> (thread=Cluster, member=28): This node appears to have partially lost the connectivity: it receives responses from MemberSet(Size=2, BitSetCount=3, ids=[35, 38]) which communicate with Member(Id=5, Timestamp=2008-04-18 11:07:21.992, Address=172.21.205.151:8090, MachineId=29847, Location=process:17351@trulxfw0006,member:trulxfw0006-1), but is not responding directly to this member; that could mean that either requests are not coming out or responses are not coming in; stopping cluster service.
    trulxfw0006/180-primary-0.log.6:ERROR 2008-04-18 23:13:33.896 Oracle Coherence GE 3.3.1/389p1 <Error> (thread=Cluster, member=1): This node appears to have partially lost the connectivity: it receives responses from MemberSet(Size=2, BitSetCount=3, ids=[14, 42]) which communicate with Member(Id=28, Timestamp=2008-04-18 11:07:34.381, Address=172.21.205.148:8091, MachineId=29844, Location=process:17152@trulxfw0003,member:trulxfw0003-7), but is not responding directly to this member; that could mean that either requests are not coming out or responses are not coming in; stopping cluster service.
    Member 29 had been marked as paused during its lifetime by various other nodes; however, these were not frequent at the time of eviction:
    grep -R "member:trulxfw0003-5" * | grep "failed to respond" | grep "18:17"
    grep -R "member:trulxfw0003-5" * | grep "failed to respond" | grep "18:16"
    trulxfw0004/180-primary-6.log.8:DEBUG 2008-04-18 18:16:26.337 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=23): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    trulxfw0005/180-primary-1.log.7:DEBUG 2008-04-21 18:16:37.315 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=10): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    [xflow@lonrs00342 machines]$ grep -R "member:trulxfw0003-5" * | grep "failed to respond" | grep "18:15"
    trulxfw0002/180-primary-7.log.8:DEBUG 2008-04-18 18:15:44.161 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=37): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    trulxfw0006/180-primary-7.log.9:DEBUG 2008-04-18 18:15:51.477 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=4): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    grep -R "member:trulxfw0003-5" * | grep "failed to respond" | grep "18:14"
    trulxfw0002/180-primary-0.log.41:DEBUG 2008-04-18 22:18:14.220 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=34): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    [xflow@lonrs00342 machines]$
    grep -R "member:trulxfw0003-5" * | grep "failed to respond" | grep "18:13"
    trulxfw0002/180-primary-0.log.43:DEBUG 2008-04-18 18:13:41.083 Oracle Coherence GE 3.3.1/389p1 <D6> (thread=PacketPublisher, member=34): Member(Id=29, Timestamp=2008-04-18 11:07:34.491, Address=172.21.205.148:8092, MachineId=29844, Location=process:17098@trulxfw0003,member:trulxfw0003-5) has failed to respond to 17 packets; declaring this member as paused.
    How closely does a paused declaration correlate with a vote for eviction?
    The last node which was kicked out had a similar pattern to trulxfw0003-5 above, however it had significantly more paused declaration messages in the minutes preceding eviction.
    We have a listener on the Cluster service which detects the local node leaving the cluster and kills itself. Is this a good idea, or is it safer to let Coherence sort itself out?
    Thanks, Paul

  • Basic problem with creation of HDInsight cluster with sql-server meta db

    We have been testing HDInsight capabilities in the last few days. Since the introduction of 3.2, I can no longer create an HDInsight Hadoop cluster from the old portal https://manage.windowsazure.com with the metadata in an external SQL Server (the new portal does not yet support HDInsight). Every way of trying fails with an error and no indication of what the error is. Can anyone help?
    Thanks

    Hi Medz-pigh,
    Can you tell me the stage where it fails? I mean, did it get stuck when providing metadata info on the portal, or did it hang during deployment? If you can tell us more, it will help.
    Thanks and Regards
    Sudhir Rawat

  • Hyper-V cluster Backup causes virtual machine reboots for common Cluster Shared Volumes members.

    I am having a problem where my VMs are rebooting while other VMs that share the same CSV are being backed up. I have provided all the information that I have gathered to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
    Following hotfixes additional installed by MS guidance (either while build or when troubleshooting stability issue in Jan 2013)
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in different project and HST6 will eventually become HST4
    Validation of cluster comes with warning for following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
       HST3 was not rebuilt from scratch due to time constraints and it actually goes back to Pre-SP1 and has a larger list of updates that others are lacking and hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
     Under Storage and Cluster Shared VOlumes in Failover Cluster Manager, all disks show online and there is no negative effect of the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
     VMs in each CSV are spread out amonst all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When backup kicks off for a VM, all VMs on same CSV reboot without warning. This normally happens within seconds of the backup starting
    What we have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup scheduled so that of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem. It only made it more clear. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of their VMs reboot and come back up.
     I then thought maybe I was overloading the network still so I decided to disable all of the scheduled backup
     and run it manually. Kicking off a backup on a single VM, in most cases, will cause the reboot of common
     CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a demo of Veeam and installed it onto my cluster.
      Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on the same CSV rebooted.
     Ok, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best troubleshooting
     site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html)
     I have tested almost every process on their list and I will lay out the results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the build in MS VSS writer listed (I am using software VSS)
      4. When I run VSSADMIN Resize ShadowStorge command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out here and most of the stuff "errors"
       Only a few of the DLL's actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
      11. This is part of step 5 so I already did this. This seems to be a rehash of various other strategies
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in event viewer. Below are representative errors I have received from various Nodes of my cluster:
    I have various of the below spread out over all hosts except for HST6
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
    Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
    Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error.
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate any assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
    Name                                 Role  Metric
    Cluster Network 1                       3   10000
    Cluster Network 2 - HeartBeat           1    1300
    Cluster Network 3 - iSCSI               0   10100
    Cluster Network 4 - LiveMigration       1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
    Should I modify my metrics further, or are the current values sufficient?
    I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs, and I had VMs jumping between hosts on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV, so I would assume it would use (based on my metrics above) Cluster Network 4 and then Cluster Network 2 for CSV communications, and would fail back to Cluster Network 1 if both 2 and 4 were down/inaccessible.
    As for iSCSI getting a second NIC, I would love to, but management wants separation of our VMs by subnet and role, hence the 4 VSwitch NICs. I would have to look at adding an additional quad-port NIC to my servers, and I would have to use half-height cards in 2 of my 5 servers for that to work.
    But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and see nothing on the network card (it causes the reboots before any real data has even started to pass, apparently), and still the problem occurs.
    As for BackupChain, I have been working with the vendor and they are telling me the issue is with VSS. They also say they support CSVs; if you go to this page (http://backupchain.com/Hyper-V-Backup-Software.html) they say they support CSVs. Their tech support has been very helpful, but unfortunately nothing has fixed the problem.
    What is annoying is that not every backup causes a problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots, but most every other backup job will trigger the VMs on the common CSV to reboot.
    I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence I brought it up. I understand about the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator if the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively high ratio of JVMs to cores, about 2 to 1.
    - We're using a simple benchmarking client that issues a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, I know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
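    As an aside, the "50MB per partition" sizing rule above is easy to sketch. The snippet below is illustrative only (the function names are mine, not Coherence API); it also rounds the count up to a prime, which Coherence's sizing guidance recommends for partition counts, and reproduces a figure like the 1973 used here:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality check, fine for small partition counts."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def partition_count(total_storage_mb: int, mb_per_partition: int = 50) -> int:
    """Smallest prime >= (total potential storage / 50MB-per-partition)."""
    n = -(-total_storage_mb // mb_per_partition)  # ceiling division
    while not is_prime(n):
        n += 1
    return n

# ~96 GB of total potential cluster storage yields the 1973 figure above.
print(partition_count(98_600))  # → 1973
```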
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg), there is only a very slight degradation. There is no apparent pause, and the maximum operation times against the cluster barely exceed ~100 ms.
    - When later removing that node from the cluster (kill the JVM, triggering the coherence supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!
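    Until the rebalancing pause itself is eliminated, one generic client-side mitigation is to bound each cache operation with a timeout and fall back. The sketch below is illustrative only (the names and the fallback policy are mine, not Coherence API), shown in Python with a plain dict standing in for the cache:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_pool = ThreadPoolExecutor(max_workers=4)

def get_with_timeout(cache_get, key, timeout_s=0.1, default=None):
    """Run a blocking cache get, but give up after timeout_s seconds
    (e.g. while partitions are being redistributed)."""
    future = _pool.submit(cache_get, key)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        return default

# Illustrative use with a plain dict standing in for the cache:
store = {"k": 42}
print(get_with_timeout(store.get, "k"))  # → 42
```

    Whether serving a stale or default value during redistribution is acceptable depends entirely on the application, of course.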

  • Hyper-V Failover Cluster virtual guests suddenly reboots

    The environment is Server 2012 R2 using dual clusters--a Hyper-V Failover Cluster running guest application virtual machines and a Scale-Out File Server Cluster using Tiered Storage Spaces which are used to supply SMB3 shares
    for Quorum and CSV. Has anyone had this problem?

    Anything relevant in the host or guest event logs? I would also check the cluster event logs to see if there are any indications there as well.
    Does the guest go down hard or gracefully reboot?
    Need more info.
    Andy Syrewicze
    Come talk more about Hyper-V and the Microsoft Server Stack at
    Syrewiczeit.com and the Altaro Hyper-V Hub!
    Posts are my own and in no way reflect the views of my employer or any other entity for which I produce technical content.

  • How to make waveform chart compatible to access cluster of data

    I am new to LV and to the concept of clusters.
    I have a sample program for a two-channel oscilloscope.
    The waveform control has a pink border, but when I try to do the same thing, my control comes up with an orange border.
    What property do I need to change to make my waveform control compatible with a cluster of input signals?

    The color of a wire/control/indicator is indicative of the data type.
    The pink color you are seeing means that your data is in a cluster.
    The orange color you see means your control is either double or single precision float.
    A cluster of data can be sent to an XY graph, but not a waveform chart/graph.
    To use a cluster for an XY graph, you should have a 1D array of X data clustered with a 1D array of Y data.
    I'm assuming this is not what your cluster is right now.
    You probably have a cluster with both channels of data from your oscilloscope.
    Use the 'unbundle' function to extract both arrays of data.
    Then use 'Build Array' on the two 1D arrays. Pass the resulting 2D array to a waveform chart, and you should be all set.
    Cory K
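    For readers outside LabVIEW, the transform Cory describes (unbundle the cluster into two 1D arrays, then build one 2D array) can be sketched in Python-like terms; the cluster contents here are made up for illustration:

```python
# Hypothetical stand-in for the two-channel oscilloscope cluster.
cluster = {"channel_0": [0.0, 0.1, 0.2], "channel_1": [1.0, 1.1, 1.2]}

# 'Unbundle': extract each 1D array of samples from the cluster.
ch0 = cluster["channel_0"]
ch1 = cluster["channel_1"]

# 'Build Array': stack the two 1D arrays into a single 2D array,
# the shape a waveform chart/graph accepts.
waveform_2d = [ch0, ch1]
print(waveform_2d)
```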

  • Servers in cluster go down : twice with different reasons

              Hi All,
              Our environment is deployed as a cluster with two servers. The configuration being
              WL server 5.1 with SP8 on Solaris. Everything was dandy until the cluster went down
              twice in a period of 5 days, each time for a different reason. Any help in this regard
              is highly appreciated.
              Thanks-
              vijay
              First failure:
              The first server in the cluster went down after emitting the "listen failed"
              message a couple of thousand times. Immediately after, the second server shut
              down with the same error. I understand that, in answer to a similar posting in the
              group, it was suggested to increase the file descriptor count to 1024 or all the way
              up to 4096, but would doing so solve the problem entirely?
              Fri Jul 05 09:38:31 GMT 2002:<E> <ListenThread> Listen failed, failure count:
              '2148'
              java.net.SocketException: Too many open files
                   at java.net.PlainSocketImpl.socketAccept(Native Method)
                   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:421)
                   at java.net.ServerSocket.implAccept(ServerSocket.java:243)
                   at java.net.ServerSocket.accept(ServerSocket.java:222)
                   at weblogic.t3.srvr.ListenThread.run(ListenThread.java:277)
              Fri Jul 05 09:38:31 GMT 2002:<A> <ListenThread> ListenThread.run() failed:
              java.lang.IllegalArgumentException: timeout value is negative
                   at java.lang.Thread.sleep(Native Method)
                   at weblogic.t3.srvr.ListenThread.run(ListenThread.java:307)
              Failure two:
              This time around, the first server hit the following exception when looking up a
              JMS queue in JNDI, preceded by the exception below. Following it, the second
              server timed out with the same exception.
              Mon Jul 08 16:41:18 GMT 2002:<I> <WebAppServletContext-SmartChain> MovementControllerServlet:
              init
              Mon Jul 08 17:59:14 GMT 2002:<E> <HttpSessionContext> Unexpected error in HTTP session
              timeout callback
              weblogic.cluster.replication.NotFoundException: unregister unable to find object
              1549818748307493344
                   at weblogic.cluster.replication.ReplicationManager.find(ReplicationManager.java:596)
                   at weblogic.cluster.replication.ReplicationManager.unregister(ReplicationManager.java:644)
                   at weblogic.servlet.internal.session.ReplicatedSession.invalidate(ReplicatedSession.java:259)
                   at weblogic.servlet.internal.session.ReplicatedSessionContext.invalidateSession(ReplicatedSessionContext.java:131)
                   at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.invalidateSessions(SessionContext.java:502)
                   at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.trigger(SessionContext.java:479)
                   at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java:197)
                   at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java:191)
                   at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java:60)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
              --------------- nested within: ------------------
              weblogic.utils.NestedError: Can't unregister an ROID that does not exist - with nested
              exception:
              [weblogic.cluster.replication.NotFoundException: unregister unable to find object
              1549818748307493344]
                   at weblogic.servlet.internal.session.ReplicatedSession.invalidate(ReplicatedSession.java:267)
                   at weblogic.servlet.internal.session.ReplicatedSessionContext.invalidateSession(ReplicatedSessionContext.java:131)
                   at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.invalidateSessions(SessionContext.java:502)
                   at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.trigger(SessionContext.java:479)
                   at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java:197)
                   at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java:191)
                   at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java:60)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
              Mon Jul 08 18:03:28 GMT 2002:<I> <Cluster> Timed out server
              Mon Jul 08 18:03:28 GMT 2002:<I> <ConflictHandler> ConflictStop smartchain.SecurityFactory:com.smartchain.datacenter.eam.SecurityFactoryImpl
              (from [email protected]:[7001,7001,7002,7002,7001,-1])
              Mon Jul 08 18:03:28 GMT 2002:<I> <ConflictHandler> ConflictStop smartchain.LoginManager:com.smartchain.datacenter.eam.LoginManagerImpl
              (from [email protected]:[7001,7001,7002,7002,7001,-1])
              Mon Jul 08 18:03:49 GMT 2002:<I> <RJVM> Signaling peer -2914132898719179429S10.7.68.69:[7001,7001,7002,7002,7001,-1]
              gone: weblogic.rjvm.PeerGoneException:
              - with nested exception:
              [java.io.EOFException]
              javax.naming.NameNotFoundException: 'javax.jms.SEDEM'; remaining name 'SEDEM'
                   at weblogic.jndi.toolkit.BasicWLContext.resolveName(BasicWLContext.java:745)
                   at weblogic.jndi.toolkit.BasicWLContext.lookup(BasicWLContext.java:133)
                   at weblogic.jndi.toolkit.BasicWLContext.lookup(BasicWLContext.java:574)
                   at javax.naming.InitialContext.lookup(InitialContext.java:350)
                   at com.smartchain.datacenter.sedem.servlet.BLEventsHandleServlet.dispatch(BLEventsHandleServlet.java:240)
                   at com.smartchain.datacenter.sedem.servlet.BLEventsHandleServlet.service(BLEventsHandleServlet.java:176)
                   at javax.servlet.http.HttpServlet.service(HttpServlet.java:865)
                   at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:106)
                   at weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:907)
                   at weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:851)
                   at weblogic.servlet.internal.ServletContextManager.invokeServlet(ServletContextManager.java:252)
                   at weblogic.socket.MuxableSocketHTTP.invokeServlet(MuxableSocketHTTP.java:364)
                   at weblogic.socket.MuxableSocketHTTP.execute(MuxableSocketHTTP.java:252)
                   at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
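    As context for the "Too many open files" error above: on Unix-like systems (including Solaris), a process can inspect its file-descriptor ceiling and raise the soft limit up to the hard limit on its own; going further requires root or OS tuning. An illustrative Python sketch:

```python
import resource

# "Too many open files" means the process exhausted its soft FD limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit as far as the hard limit.
new_soft = max(soft, 4096 if hard == resource.RLIM_INFINITY else min(4096, hard))
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```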
              

              Hi All
              I'm also having a similar problem. The server went down a few times with a
              similar error.
              We are using WL 5.1 SP12 with the file descriptor limit set to 4096 on Solaris.
              I'm very desperate to find a solution to this.
              Thanks in advance.
              Herman Wijaya
              Rajesh Mirchandani <[email protected]> wrote:
              >You should try increasing the FDs to 4096 and see if the same problem
              >can be reproduced.
              >
              >Kumar Allamraju wrote:
              >
              >> vijay singh wrote:
              >>
              >> > Hi All,
              >> >
              >> > Our environment is deployed as a cluster with two servers. The
              >configuration being
              >> > WL server 5.1 with SP8 on Solaris. Everything was dandy until the
              >cluster went down
              >> > twice in a period of 5 days each time with a different reason. Any
              >help in this regard
              >> > is highly appreciated.
              >> >
              >> > Thanks-
              >> >
              >> > vijay
              >> >
              >> > First fall :
              >> > ------------
              >> >
              >> > The first server in the cluster went down after spitting out the
              >"listen failed"
              >> > message a couple of thousand times. Immediately following it the
              >second server shut
              >> > down with the same error. I understand that in answer to a similar
              >posting in the
              >> > group it was suggested to increase the file descriptor count to 1024
              >or all the way
              >> > upto 4096, but doing so would it solve the problem in total.
              >> >
              >> > Fri Jul 05 09:38:31 GMT 2002:<E> <ListenThread> Listen failed,
              >failure count:
              >> > '2148'
              >> > java.net.SocketException: Too many open files
              >> > at java.net.PlainSocketImpl.socketAccept(Native Method)
              >> > at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:421)
              >> > at java.net.ServerSocket.implAccept(ServerSocket.java:243)
              >> > at java.net.ServerSocket.accept(ServerSocket.java:222)
              >> > at weblogic.t3.srvr.ListenThread.run(ListenThread.java:277)
              >>
              >> This is indeed a fairly harmless failure. Whenever a
              >> ServerSocket.accept() fails, we increment a count and wait for a period
              >> of time (starting with 10secs) that increases on successive failures.
              >> Although, the failures are harmless, they need to be monitored - if
              >> there are too many of them, it is indicative of something that is not
              >> quite right with the network [or in the tuning of the native OS for
              >TCP].
              >>
              >> You also might want to monitor netstat output and see how long the
              >> sockets are being kept in CLOSE_WAIT state. Sometimes tuning one of
              >> those TCP/IP parameters might help.
              >>
              >> >
              >> > Fri Jul 05 09:38:31 GMT 2002:<A> <ListenThread> ListenThread.run()
              >failed:
              >> > java.lang.IllegalArgumentException: timeout value is negative
              >> > at java.lang.Thread.sleep(Native Method)
              >> > at weblogic.t3.srvr.ListenThread.run(ListenThread.java:307)
              >> >
              >>
              >> I guess this might have been fixed in one of the latest SP's of 51.
              >> I would suggest that you try with latest SP (SP12) and let us know
              >if
              >> you still this problem.
              >>
              >> > Fall two:
              >> > ---------
              >> >
              >> > This time around the first server had the following exception when
              >looking up the
              >> > JNDI for a JMS queue specifically but preceeding with the following
              >exception. And
              >> > following it the second server timed with the same exception.
              >> >
              >> > Mon Jul 08 16:41:18 GMT 2002:<I> <WebAppServletContext-SmartChain>
              >MovementControllerServlet:
              >> > init
              >> > Mon Jul 08 17:59:14 GMT 2002:<E> <HttpSessionContext> Unexpected
              >error in HTTP session
              >> > timeout callback
              >> > weblogic.cluster.replication.NotFoundException: unregister unable
              >to find object
              >> > 1549818748307493344
              >> > at weblogic.cluster.replication.ReplicationManager.find(ReplicationManager.java:596)
              >> > at weblogic.cluster.replication.ReplicationManager.unregister(ReplicationManager.java:644)
              >> > at weblogic.servlet.internal.session.ReplicatedSession.invalidate(ReplicatedSession.java:259)
              >> > at weblogic.servlet.internal.session.ReplicatedSessionContext.invalidateSession(ReplicatedSessionContext.java:131)
              >> > at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.invalidateSessions(SessionContext.java:502)
              >> > at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.trigger(SessionContext.java:479)
              >> > at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java:197)
              >> > at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java:191)
              >> > at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java:60)
              >> > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
              >> > --------------- nested within: ------------------
              >> > weblogic.utils.NestedError: Can't unregister an ROID that does not
              >exist - with nested
              >> > exception:
              >> > [weblogic.cluster.replication.NotFoundException: unregister unable
              >to find object
              >> > 1549818748307493344]
              >> > at weblogic.servlet.internal.session.ReplicatedSession.invalidate(ReplicatedSession.java:267)
              >> > at weblogic.servlet.internal.session.ReplicatedSessionContext.invalidateSession(ReplicatedSessionContext.java:131)
              >> > at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.invalidateSessions(SessionContext.java:502)
              >> > at weblogic.servlet.internal.session.SessionContext$SessionInvalidator.trigger(SessionContext.java:479)
              >> > at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java:197)
              >> > at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java:191)
              >> > at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java:60)
              >> > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
              >> >
              >> > Mon Jul 08 18:03:28 GMT 2002:<I> <Cluster> Timed out server
              >> > Mon Jul 08 18:03:28 GMT 2002:<I> <ConflictHandler> ConflictStop smartchain.SecurityFactory:com.smartchain.datacenter.eam.SecurityFactoryImpl
              >> > (from [email protected]:[7001,7001,7002,7002,7001,-1])
              >> > Mon Jul 08 18:03:28 GMT 2002:<I> <ConflictHandler> ConflictStop smartchain.LoginManager:com.smartchain.datacenter.eam.LoginManagerImpl
              >> > (from [email protected]:[7001,7001,7002,7002,7001,-1])
              >> > Mon Jul 08 18:03:49 GMT 2002:<I> <RJVM> Signaling peer -2914132898719179429S10.7.68.69:[7001,7001,7002,7002,7001,-1]
              >> > gone: weblogic.rjvm.PeerGoneException:
              >> > - with nested exception:
              >> > [java.io.EOFException]
              >> > javax.naming.NameNotFoundException: 'javax.jms.SEDEM'; remaining
              >name 'SEDEM'
              >> > at weblogic.jndi.toolkit.BasicWLContext.resolveName(BasicWLContext.java:745)
              >> > at weblogic.jndi.toolkit.BasicWLContext.lookup(BasicWLContext.java:133)
              >> > at weblogic.jndi.toolkit.BasicWLContext.lookup(BasicWLContext.java:574)
              >> > at javax.naming.InitialContext.lookup(InitialContext.java:350)
              >> > at com.smartchain.datacenter.sedem.servlet.BLEventsHandleServlet.dispatch(BLEventsHandleServlet.java:240)
              >> > at com.smartchain.datacenter.sedem.servlet.BLEventsHandleServlet.service(BLEventsHandleServlet.java:176)
              >> > at javax.servlet.http.HttpServlet.service(HttpServlet.java:865)
              >> > at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:106)
              >> > at weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:907)
              >> > at weblogic.servlet.internal.ServletContextImpl.invokeServlet(ServletContextImpl.java:851)
              >> > at weblogic.servlet.internal.ServletContextManager.invokeServlet(ServletContextManager.java:252)
              >> > at weblogic.socket.MuxableSocketHTTP.invokeServlet(MuxableSocketHTTP.java:364)
              >> > at weblogic.socket.MuxableSocketHTTP.execute(MuxableSocketHTTP.java:252)
              >> > at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:129)
              >> >
              >>
              >> See the above <I> message "Timed out server" which is not a good sign.
              >> Either GC is running for more than 30 secs or this server is not
              >> receiving heartbeats from the other servers causing the other server
              >be
              >> dropping out of the cluster view. You may want to enable verbosegc
              >and
              >> monitor the heap usage and GC times.
              >>
              >> --
              >> Kumar
              >
              >--
              >Rajesh Mirchandani
              >Developer Relations Engineer
              >BEA Support
              >
              >
              

  • How come you can't put a Tab in a Cluster?

    Hello,
    How come you can't put a Tab Control in a Cluster?
    It looks like users have been asking this question
    for years, all the way back to LV version 6.1.
    Someday, please, pretty please, NI.
    Please. Maybe by version 9.5 of LV?
    Kevin.

    Ben wrote:  This idea just weirds me out!
    Kevin:  Sorry Ben, but think how I must feel.
    Ben wrote:  I understand the cosmetic and GUI interaction part of it OK.  It's the "how would it code?" part that sends my head spinning.
    Kevin:  We don't need to worry about that.  We'll let the NI LabVIEW gurus worry about that.  They know how to program for Tabs and they know how to program for Clusters.  Now they just need to put them together.
    Ben wrote:  Would the tab control be available via bundle by name and unbundle?
    Kevin:  Yes.  It would simply return the value indicating which page is selected.
    Ben wrote:  If the cluster was an indicator and the tab value was "page1" would the user be able to select page2?
    Kevin:  Of course, the block diagram would simply pass the enum indicating which page to display.
    Ben wrote:  On the block diagram, would there be a relationship between the tab and the elements on each tab?
    Kevin:  None.  Or no more so than the relationship of any element on a panel or front panel.
    Ben wrote:  On the FP the tab control has a "pages" property and from those we can get an array of refs to the controls on the pages. Would we still be able to that within a cluster.
    Kevin:  Yes.  No difference.
    Ben wrote:  Aside from the above Q's I would guess that the tab in a cluster would only allow greater abuse of clusters i.e. "Super Clusters".
    Kevin:  It is not my intent to abuse any Super Cluster in any way, shape, or form.  On the contrary, it would help spread out some clutter from one's front panels and move that clutter into tabs within a cluster.  The beauty of it is that it would be automatic and handled by LabVIEW.
    Ben wrote:  There was also until recently, limitations on the size of the type descriptors. Very complex FP's and large clusters would be hit by the problem that the type descriptor need more than the 16 bit allocated.
    Kevin:  I would never suggest even coming close to putting more than 65535 items in a cluster in any fashion.
    Ben wrote:  I do not pretend to know why NI did or did not allow this. In the end I can only share my guess and hope someone who does know will post.
    Kevin:  Yes, somebody from NI might know something about why we can't put a Tab in a Cluster.
    Ben wrote:
    Related story:
    I had often been mystified by why I could not create a waveform datatype block diagram constant. While at NI Week talking to Jeff Kodosky, I asked him (thinking I would get a good under-the-hood explanation of the data type used, blah, blah, blah...). He said, "I'm not sure. It was probably an oversight."   I noticed that in LV 8, you can now create a waveform data type constant.
    Ben
    Kevin:
    Maybe it was an oversight.  Maybe we can have Tabs in Clusters in 64-bit LabVIEW 9.
    Kevin's Disclaimer:  The variation in case usage, while referring to LabVIEW, should not be construed as disrespect, nor should it be interpreted as an indication of Kevin's LabVIEW programming skill, experience, or knowledge.

  • Why virtual interfaces added to ManagementOS not visible to Cluster service?

    Hello All, 
    I'm starting this new thread since the previous one was answered by our friend Udo. My problem, in short, is the following; a diagram will be enough to explain what I'm trying to achieve. I've set up this lab to learn Hyper-V clustering with 2 nodes, running Hyper-V
    Server 2012. Both nodes have 3x physical NICs; 1 in each node is dedicated to managing the node. The remaining two are used to create a NIC team. Atop that NIC team, a virtual switch is created with -AllowManagementOS
    $False. Next I created and added the following virtual interfaces to the host partition, and plugged them into the virtual switch created atop the teamed interface. These virtual interfaces should serve the various networks needed.
    For the SAN I'm running a Linux VM which hosts an iSCSI target, and the clustering service has no problem with that. All tests pass OK.
    The problem is that when those virtual interfaces are added to the hosts, they do not appear as available networks
    to the cluster service; instead it only shows the management NIC as the available network to leverage.
    This is making it difficult to understand how to set up a cluster of 2x Hyper-V Server nodes. Can someone help, please?
    Regards,
    Shahzad.

    Shahzad,
    I've read this thread a couple of times and I don't think I'm clear on the exact question you're asking.
    When the clustering service goes out to look for "Networks", what it does is scan the IP addresses on each node. Every time it finds an IP in a unique subnet, that subnet is listed as a network. It can't see virtual switches and doesn't care about
    virtual vs. teamed vs. physical adapters or anything like that. It's just looking at IP addresses. This is why I'm confused when you say, "it won't show virtual interfaces available as networks". "Networks" in this context are IP subnets.
    I'm not aware of any context where a singular interface would be treated like a network.
    If you've got virtual adapters attached to the management operating system
    and have assigned IPs to them, the cluster should have discovered those networks. If you have multiple adapters on the same node using IPs in the same subnet, that network will only appear once and the cluster service will only use
    one adapter from that subnet on that node. The one it picked will be visible on the "Network Connections" tab at the bottom of Failover Cluster Manager when you're on the Networks section.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."
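    Eric's description of how the cluster service derives "networks" from node IPs can be sketched with Python's standard ipaddress module. The node names and addresses below are illustrative, not taken from any real configuration:

```python
import ipaddress
from collections import defaultdict

def cluster_networks(node_addrs):
    """Group interface addresses by IP subnet -- one 'network' per
    unique subnet, no matter how many adapters land in it."""
    networks = defaultdict(set)
    for node, addrs in node_addrs.items():
        for addr in addrs:
            iface = ipaddress.ip_interface(addr)
            networks[iface.network].add(node)
    return networks

nodes = {
    "Node1": ["131.107.0.50/24", "10.0.0.50/24"],
    "Node2": ["131.107.0.150/24", "10.0.0.150/24"],
}
for net, members in sorted(cluster_networks(nodes).items(), key=str):
    print(net, sorted(members))
```

    Note that in this model two adapters on one node in the same subnet still produce a single "network", matching Eric's point about deduplication.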
    Hello Eric and friends, 
    Eric, much appreciated about your interest about the issue and yes I agree with you when you said... "When the clustering service goes out to look for "Networks",
    what it does is scan the IP addresses on each node. Every time it finds an IP in a unique subnet, that subnet is listed as a network. It can't see virtual switches and doesn't care about virtual vs. teamed vs. physical adapters or anything like that. It's
    just looking at IP addresses. This is why I'm confused when you say, "it won't show virtual interfaces available as networks". "Networks" in this context are IP subnets. I'm not aware of any context where a singular interface would be treated
    like a network."
    By networks I meant to say subnets. Let me explain what I've configured so far:
    Node 1 & Node 2 installed with 3x NICs. All 3 NICs/node plugged into same switch. 
    Node1:  131.107.0.50/24
    Node2:  131.107.0.150/24
    A Core Domain controller VM running on Node 1:   131.107.0.200/24 
    A JUMPBOX (WS 2012 R2 Std.) VM running on Node 1: 131.107.0.100/24
    A Linux SAN VM running on Node 2: 10.1.1.100/8 
    I planned to configure the following networks:
    (1) Cluster traffic:  10.0.0.50/24     (IP given to virtual interface for Cluster traffic in Node1)
         Cluster traffic:  10.0.0.150/24   (IP given to virtual interface for Cluster traffic in Node2)
    (2) SAN traffic:      10.1.1.50/8      (IP given to virtual interface for SAN traffic in Node1)
         SAN traffic:      10.1.1.150/8    (IP given to virtual interface for SAN traffic in Node2)
    Note: The cluster service has no problem accessing the SAN VM (10.1.1.100) over this network; it validates the SAN settings and comes back OK. This is an indication that the virtual interface is
    working fine.
    (3) Migration traffic:   172.168.0.50/8     (IP given to virtual interface for
    Migration traffic in Node1)
         Migration traffic:   172.168.0.150/8    (IP given to virtual interface for
    Migration traffic in Node2)
    All these networks (virtual interfaces) are made available through two virtual switches which are configured EXACTLY identically on both Node1/Node2.
    Now, after finishing the cluster validation steps (which all come back OK), the Create Cluster wizard shows only one network: the subnet of the physical Layer 2 switch, i.e. 131.107.0.0/24.
    I wonder why it won't show the IPs of the other networks (10.0.0.0/8, 10.1.1.0/8 and 172.168.0.0/8).
    Regards,
    Shahzad

  • Nbean in use in Cluster

    Version 9.2.3
    Cluster | 2 Nodes
    Hi all
    I have a mysterious problem when I use a client (UC4) to transfer data between 2 DBs via an application (every 2 min).
    If the app runs on just one cluster node, everything is fine. But when I tell the client to use both members of the cluster, the batch (managed by UC4) sometimes (up to 10 times a day) gets a response from WebLogic -> (nBeanUsed: true).
    That's why the batch doesn't start and waits another 2 minutes.
    We do this because the batch is not allowed to run in parallel.
    To check whether the queue is already running, the developer uses the following:
    boolean beanInUsed = iqueueobserver.isIQueueBeanInUse();
    But I can't find anything in the log files, even with logging configured to Debug.
    Any ideas?
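The check-then-wait cycle described in this post can be sketched generically. This is a minimal Python sketch, not the actual UC4/WebLogic code: `is_in_use` stands in for the `iqueueobserver.isIQueueBeanInUse()` call, and `start_batch` for launching the job, both hypothetical callables:

```python
import time

def run_batch_exclusively(is_in_use, start_batch, retry_delay_s=120):
    """Start the batch only once no other instance reports as running.

    is_in_use:   callable returning True while the bean/batch is busy
    start_batch: callable that launches the batch job
    """
    while is_in_use():
        # Another instance is active somewhere in the cluster; back off
        # and poll again after the retry delay (2 min in the post).
        time.sleep(retry_delay_s)
    start_batch()

# Demo with stub callables: the busy flag clears after one poll.
state = {"busy": True}
def check():
    busy = state["busy"]
    state["busy"] = False
    return busy

run_batch_exclusively(check, lambda: print("batch started"), retry_delay_s=0)
```

One thing this sketch makes visible: if the "in use" flag is kept per-node rather than cluster-wide, each member can report a different answer, which would match the intermittent `nBeanUsed: true` responses seen when both cluster members are used.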

    Screen: 14.1-inch SXGA+ 1400 x 1050 (Do you have a 14.1-inch standard-aspect-ratio laptop? The speaker location on the machine indicates that it is a 15.4-inch T61, and the 9-cell battery size relative to the machine also points to that.)
    Processor: 2.0GHz Intel Core 2 Duo T7300 (4MB L2 Cache,800MHz FSB)
    Hard Drive: 500GB 5400 RPM WD Scorpio Blue (not standard)
    Memory: 2GB x 2, 4GB total (4GB max) <-- 8GB is the actual max capacity.
    Optical Drive: DVD+-R Double layer / DVD+-RW Drive
    External Ports and Slots: Three USB 2.0, one ExpressCard slot, VGA, headphone / line-out, microphone-in, modem, 1Gb Ethernet, Firewire
    Wireless: WiFi (Intel® PRO/Wireless 3945ABG), Bluetooth 2.0 w/ EDR
    Graphics: Dedicated nVidia NVS 140M (256MB) <-- it should be 128MB for an NVS 140M
    Operating System: Windows XP Pro
    9-cell Li-Ion battery (10.8V, 7.8AH)
    Dimensions: (WxDxH): 33.55cm x 23.7cm x 2.76-3.19cm
    Weight: Approx. 2.5 kg.
    http://www.notebookreview.com/default.asp?newsID=4028 <--- This is what a 14.1-inch standard-aspect-ratio T61 looks like.
    http://www.thinkcomputers.org/reviews/lenovo_t61p/16.jpg <--- This is what the 15.4-inch and 14.1-inch widescreen T61 look like.
    http://www.notebookreview.com/default.asp?newsID=3889 <---- This is what a 15.4-inch T61 looks like in detail.
    Regards,
    Jin Li
    May this year, be the year of 'DO'!
    I am a volunteer, and not a paid staff of Lenovo or Microsoft
