Multi graph cluster

Hi all,
I have to plot data coming from 64 different sites. 
What I'd like to do is to create an array (8x8) of graphs and plot each incoming trace in a different graph.
To do so, I've created a 2D array of clusters, where each cluster contains a graph.
By means of a for loop, I convert the 1D data array for each channel into a cluster (Bundle function).
The indexed cluster produced by the for loop is finally sent to the array of clusters containing the data.
I've attached a simple VI I made to test what I was doing. The idea is to plot 4 different ramps with programmable gains.
However, the resulting plots flicker.
Could anybody help me solve this problem, please?
Kind regards
Gian Nicola
Attachments:
GraphOrder.vi 16 KB

You should have been answered by now, so I will answer.
<you may wish to wrap your head in duct tape>
I do not believe your "flickering" is a bug.
The fact that you can drop a cluster containing a graph into an array MAY be a bug. Let me attempt to explain why...
For a graph, we pass an array of DBLs to the UI thread transfer buffer... Whoops! The property "Value" has very little to do with the property "Plot".
What is that again? And how does that affect your example code?
Elements of an array must have identical properties except for "Value" (and, by extension, "Text.Text"). "Plot" is a property of a graph. It is related to the value of the 1D array, but before the data is displayed on the plot area it is subjected to either compression or expansion. (Whaddaya mean, Jeff?)
The "Plot" sends the data array through a processing plant: data points that cannot all be displayed (like a 200,000,000-point array) are decimated with a turning-point algorithm, and others (like a 5-point array) are interpolated to fill in the pixels that are missing!
Yes, a line (a plot, with color and glyphs) is different from an array of DBLs; they are not directly related.
So the flicker you see is LabVIEW refreshing the properties of the graphs in the array of clusters of graph indicators.
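Roughly, that render step looks like this in Python terms (a conceptual sketch only; the function name and the exact algorithm are illustrative, not LabVIEW's actual implementation):

    # Conceptual sketch of what a graph does to a data array before painting.
    def to_pixels(data, width):
        n = len(data)
        if n >= 2 * width:
            # Too many points: decimate, keeping min/max per pixel column
            # so turning points (peaks) are not lost.
            step = n / width
            return [(min(data[int(p * step):int((p + 1) * step)]),
                     max(data[int(p * step):int((p + 1) * step)]))
                    for p in range(width)]
        # Too few points: linearly interpolate to fill the missing pixels.
        out = []
        for p in range(width):
            pos = p * (n - 1) / max(width - 1, 1)
            i, frac = int(pos), pos % 1
            out.append(data[i] + frac * (data[min(i + 1, n - 1)] - data[i]))
        return out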
Is that helping at all?
Jeff

Similar Messages

  • 2008 R2 Multi-Site Cluster Possible without Third Party Software?

    2008 R2 Multi-Site Cluster Possible without Third Party Software? I know it has been asked, but I haven't seen a specific answer. We are using Dell Compellent storage, and I created several read-only replicated volumes at our DR site. I am having issues failing over to the volumes after I use the cluster CLI to mount the read-only volumes.
    Any help would be appreciated greatly!
     

    To have this working reliably you need 1) redundant and fairly fat pipes between your sites (primary and disaster recovery) and 2) your storage (Compellent, in your case) placed in a third location. If both conditions are met, you'll have decent performance and will avoid the split-brain issue that arises when both sites are up but the uplink between them is down. Good links on geo-clustering:
    Geo Cluster for Windows
    http://clusteringformeremortals.com/2009/09/15/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-1/
    Geo Cluster for Windows 2008
    http://blogs.technet.com/b/johnbaker/archive/2008/03/28/windows-server-2008-geo-clustering-multi-site-clustering.aspx
    Geographically Dispersed Clustering
    http://msmvps.com/blogs/jtoner/
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SVG Multi Graph/Chart with line and bar

    I have a request for a "Multi-Graph-Chart" showing both a line chart and a bar chart with their respective data.
    Can you provide the source code behind the SVG chart wizard, enabling us to develop our own individual SVG charts?
    Please kindly advise.
    Bernhard
    N.B. I mailed you this request already, but thought this might be of interest to others as well.

    Marc
    I would support this addition to the functionality. I currently have a client who has a requirement for complex charting in HTML DB, and I am looking at the SVG packages in HTML DB, i.e. WWV_FLOW_SVG, but as you say they are not independent APIs and rely heavily on the application's metadata. This is a shame, as I could quite easily handle a nice PL/SQL API that would let me create custom SVG, instead of generating the SVG manually and embedding that in the page.
    Cheers
    Ben

  • Graph cluster

    Hello, this is really silly, but I can't manage to make a 2-channel graph in the attached VI! Could someone tell me how it's done?
    When there are just 3 numeric constants in the cluster, I can make a graph, but as soon as I add one more (as in the VI), it no longer works! So how is it done?
    Thanks in advance
    LabVIEW 2013
    Attachments:
    graph.vi 10 KB

    Actually, Shane, for once I believe your advice is bad, because if you do this, you will change the order of the elements in the cluster. In this case, it doesn't matter, because the graph is the last element anyway, but in a code with existing wiring, this will be a problem. This can be fixed by reordering the controls in the cluster, but that's a real pain and there is no reason for it anyway, BECAUSE...
    Any element of a cluster (or an array) can be reached through the list of variables (right-click the property node and select "Link to"). A reference can also be created by right-clicking on the graph and selecting Create>>Reference, although I don't know if these features were available in 6.1.
    In any case, all this doesn't matter, because I believe Stijn wants only the first graph to have a different background and the answer to that one is: Sorry, can't be done. An array is a collection of elements with identical properties. You can't change the properties of only one element.
    Try to take over the world!

  • How to create an intensity waveform graph cluster with t_0 and dt ?

    Hi all,
    I would like to know whether it is possible to create an intensity waveform like you can with a 1D waveform (with "Build Waveform"), so that you get a cluster with the waveform array, the t_0, the dt, and the attributes.
    If not, I would like to know the following: I use references to cluster typedefs to update my controls and indicators on the front panel. Now, if I use a property node for the intensity graph to set the offset and multiplier on the x-scale, the x-scale on the graphs in the sub-VI works perfectly, but not on the real front panel, probably since these get updated through a reference. Does anyone have a clue how to fix this?
    Regards, Pieter

    You are only writing the "value" of the type definition via the property node. This does not include properties such as offset and multiplier.
    On a side note, you are using way too much code for most operations.
    For example, the to-I32 can be placed on the cluster, so only one instance is needed.
    Also, property nodes are resizable, so only one instance is needed.
    There are also Rube Goldberg constructs, such as ">0 *AND* TRUE", which is the same as a simple ">0".
    Overall, you are really fragmenting memory by constantly building and then resizing arrays to keep them at a max size of 2880. This can cause performance problems due to the constant need for reallocations. It might be better to use fixed-size arrays and do things "in place".
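    In Python terms, the difference looks roughly like this (a hedged sketch; acquire() is a hypothetical stand-in for the DAQ read, and the LabVIEW fixed-size equivalents are Initialize Array plus Replace Array Subset):

        def acquire(n=10000):              # hypothetical stand-in for the DAQ read
            for k in range(n):
                yield float(k)

        # Anti-pattern: build, then trim. The array is reallocated constantly.
        history = []
        for sample in acquire():
            history.append(sample)
            history = history[-2880:]      # re-slicing keeps reallocating

        # Better: allocate once, overwrite in place (a ring buffer).
        MAX = 2880
        buf = [0.0] * MAX                  # LabVIEW: Initialize Array
        i = 0
        for sample in acquire():
            buf[i % MAX] = sample          # LabVIEW: Replace Array Subset
            i += 1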
    Message Edited by altenbach on 03-19-2009 09:57 AM
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    OneIsEnough.png 8 KB
    CombineProperties.png 3 KB

  • Splitting data from waveform output from niScope Multi Fetch Cluster

    I am trying to split the data of a 1-D array of clusters. I am using a PXI-5421 function generator and a PXI-5122 digitizer. The niScope Multi Fetch Cluster.vi returns the waveform output data as a 1-D array of clusters of 3 elements. This array contains information from channels 0 and 1. I am trying to extract the waveform component information from each of the channels so I can operate on them and re-assemble two waveforms. Can someone point me in the right direction? I can't seem to come up with the right tool from the array or cluster tools. Thanks.
    Jeff
    Solved!
    Go to Solution.

    You just use an Index Array and an Unbundle by Name or Unbundle.
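    In text-language terms, that suggestion maps to this (a Python sketch; the cluster field names are assumptions for illustration, not niScope's actual names):

        # Hypothetical stand-in for the 1-D array of clusters from Multi Fetch.
        fetched = [
            {"t0": 0.0, "dt": 1e-6, "wfm": [0.1, 0.2, 0.3]},   # channel 0
            {"t0": 0.0, "dt": 1e-6, "wfm": [0.4, 0.5, 0.6]},   # channel 1
        ]

        ch0 = fetched[0]        # LabVIEW: Index Array
        wfm0 = ch0["wfm"]       # LabVIEW: Unbundle by Name
        ch1 = fetched[1]
        wfm1 = ch1["wfm"]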
    Message Edited by Dennis Knutson on 04-30-2009 10:41 PM
    Attachments:
    Index and Unbundle.PNG 4 KB

  • Scalable service instance deregistered on multi-zone cluster

    I have a pair of systems clustered with multiple zones configured on each. The zones are not clustered, however the dataservices are run on the zones in pairs. Some services are failover, some are scalable.
    The problem arises with the scalable resources. There are multiple instances of the same application running on different zones (by instances I mean dev on one pair, tst on another pair, etc.). These instances all use the same ports on different IP addresses, where the IPs are configured as shared addresses. If I stop the application on one zone, the ports that it uses will be deregistered on all of the zones, thereby killing instances that I'm not working on. This problem happens even for instances of the application which have not yet been configured into the cluster as data services. This defeats the purpose of having zones if I can't work on them in isolation.
    I could cluster the zones, but I have no need to fail over the whole zone, and I need to have both failover and scalable resources so I'd need double the number of zones if I clustered the zones themselves.
    If anyone has some thoughts I'd appreciate it.
    Edited by: taccooper on Dec 8, 2009 10:14 AM

    Hi,
    you are hitting a restriction with scalable addresses and normal zones (zone nodes); let me elaborate a bit.
    Sun Cluster supports 3 types of local zone models:
    1. The failover zone, where a zone is installed on shared storage and is failed over between the nodes. Note that scalable addresses do not work here.
    2. Zone nodes, where you fail over resource groups between zones. Scalable addresses are supported here, but you can have one port bound to only one address.
    3. Zone clusters, where you have an almost complete cluster running between the zones. The zone clusters are isolated from each other. Here you are completely free in deploying scalable addresses. The zones of a zone cluster are of the special brand "cluster".
    You have configured model 2, but you need model 3 to deploy what you want. The bad news is that you have to delete your zones and reinstall them with the clzonecluster utility. If you do not want this, you must configure different ports for the multiple instances of your application. That is the only way to keep model 2.
    Hope that helps
    Detlef

  • Setting up multi-Mac cluster for Compressor is not working

    Hi
    I have a Mac Pro 8-core Intel Xeon (master) and a MacBook Pro Intel Core 2 Duo (slave).
    I am using FCS 3.
    I tried to set up a cluster with both of them. Everything is fine up to the point when it starts sharing jobs between computers. The MacBook Pro doesn't take any action. I can see in Batch Monitor that the job is segmented for all cores (we have 10 cores total; if I assign all of them in the cluster I get 20 segments in Batch Monitor), but all jobs run on my master.
    I called AppleCare and they told me to reinstall Compressor, but this doesn't help.
    What should I do?

    Thank you for closing your thread with a workable solution. Hardly anyone does that around here. It is a valuable and gracious act.
    Congratulations on figuring it out on your own. I do not believe any of us would have suggested you do a complete reinstallation of both the MacOS and FCS to solve that problem, because you did not include the single most vital factor: you had migrated your FCS system to a new Macintosh using Migration Assistant.
    bogiesan

  • Post macro in LabView to generate multi-graphs

    Hi,
    I would like to ask how to link and use an Excel macro in LabVIEW 7.1 to generate multiple graphs in Excel 2000.
    The most important thing is to be able to generate Excel-quality graphs in Excel without physically going through the ChartWizard procedure.
    Thanks a lot. Any idea would be appreciated.

    Mr. Liang,
    There is an example that ships with LabVIEW that shows how to call a macro in Excel.  It is called Excel Macro Example.vi. There are also quite a few discussion forum threads on calling into Excel.
    The Report Generation Toolkit also provides a lot of functionality for controlling Excel from LabVIEW.
    Doug M
    Applications Engineer
    National Instruments
    For those unfamiliar with NBC's The Office, my icon is NOT a picture of me

  • Windows member won't join multi platform cluster

    I have a set of applications that communicate through a Tangosol Coherence Cache and I am running one of them on a Windows 2003 server and the other on a Sun Solaris box. If I bring up the Solaris member first the Windows member will try to join the cluster for 30 seconds and issue the following warnings:
    2007-06-13 11:51:18.562 Tangosol Coherence 3.2.1/369 <Warning> (thread=Cluster, member=n/a): This Member(Id=0, Timestamp=2007-06-13 11:50:48.125, Address=10.3.4.145:8088, MachineId=3473) has been attempting to join the cluster at address 224.3.0.128:30315 with TTL 4 for 30 seconds without success; this could indicate a mis-configured TTL value, or it may simply be the result of a busy cluster or active failover.
    2007-06-13 11:51:18.562 Tangosol Coherence 3.2.1/369 <Warning> (thread=Cluster, member=n/a): Received a discovery message that indicates the presence of an existing cluster that does not respond to join requests; this is usually caused by a network layer failure:
    After 90 seconds the Windows side times out and shuts down.
    If the Windows member is the first to join the cache then the Solaris side comes up just fine*. This behavior also happens if I start the coherence.jar test application on both environments in the same order.
    Both machines are on the same subnet so TTL shouldn't be a problem (I bumped it up to 4 just to be sure).
    I suspect that this is some configuration issue on the Windows box because in my test environment this is not a problem (but that's only a hunch).
    * The work around of having the Windows come up first won't work because our applications are architected so that the Solaris side has to come up first.

    The OS on the Windows side is running Service Pack 2 which according to our contact at Microsoft has the fix for the IP multicast issue in it.
    I ran the multicast test with the following results:
    On the Windows side I saw packets from both machines:
    2007-06-20 11:49:25.781 Tangosol Coherence 3.2.1/369 <Info> (thread=main, member=n/a): Loaded operational overrides from resource "jar:file:/D:/navis/cohUtil/coherence.jar!/tangosol-coherence-override-dev.xml"
    2007-06-20 11:49:25.796 Tangosol Coherence 3.2.1/369 <Info> (thread=main, member=n/a): Loaded operational overrides from resource "file:/D:/navis/cohUtil/tangosol-coherence-override.xml"
    Starting test on ip=HONWDA04/10.3.4.145, group=/224.3.0.13:9000, ttl=4
    Configuring multicast socket...
    Starting listener...
    Wed Jun 20 11:49:25 PDT 2007: Sent packet 1.
    Wed Jun 20 11:49:25 PDT 2007: Received test packet 1 from self (sent 15ms ago).
    Wed Jun 20 11:49:27 PDT 2007: Received test packet 4 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:27 PDT 2007: Sent packet 2.
    Wed Jun 20 11:49:27 PDT 2007: Received test packet 2 from self
    Wed Jun 20 11:49:29 PDT 2007: Received test packet 5 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:29 PDT 2007: Sent packet 3.
    Wed Jun 20 11:49:29 PDT 2007: Received test packet 3 from self
    Wed Jun 20 11:49:31 PDT 2007: Received test packet 6 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:31 PDT 2007: Sent packet 4.
    Wed Jun 20 11:49:31 PDT 2007: Received test packet 4 from self
    Wed Jun 20 11:49:33 PDT 2007: Received test packet 7 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:33 PDT 2007: Sent packet 5.
    Wed Jun 20 11:49:33 PDT 2007: Received test packet 5 from self
    Wed Jun 20 11:49:35 PDT 2007: Received test packet 8 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:35 PDT 2007: Sent packet 6.
    Wed Jun 20 11:49:35 PDT 2007: Received test packet 6 from self
    Wed Jun 20 11:49:37 PDT 2007: Received test packet 9 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:37 PDT 2007: Sent packet 7.
    Wed Jun 20 11:49:37 PDT 2007: Received test packet 7 from self
    Wed Jun 20 11:49:39 PDT 2007: Received test packet 10 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:39 PDT 2007: Sent packet 8.
    Wed Jun 20 11:49:39 PDT 2007: Received test packet 8 from self
    Wed Jun 20 11:49:41 PDT 2007: Received test packet 11 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:41 PDT 2007: Sent packet 9.
    Wed Jun 20 11:49:41 PDT 2007: Received test packet 9 from self
    Wed Jun 20 11:49:43 PDT 2007: Received test packet 12 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:43 PDT 2007: Sent packet 10.
    Wed Jun 20 11:49:43 PDT 2007: Received test packet 10 from self
    Wed Jun 20 11:49:45 PDT 2007: Received test packet 13 from ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4.
    Wed Jun 20 11:49:45 PDT 2007: Sent packet 11.
    Wed Jun 20 11:49:45 PDT 2007: Received test packet 11 from self
    Terminate batch job (Y/N)?
    The Solaris side only saw its own packets:
    2007-06-20 11:51:43.687 Tangosol Coherence 3.2.1/369 <Info> (thread=main, member=n/a): Loaded operational overrides from resource "jar:file:/home/hstutes/cohUtil/coherence.jar!/tangosol-coherence-override-dev.xml"
    2007-06-20 11:51:43.715 Tangosol Coherence 3.2.1/369 <Info> (thread=main, member=n/a): Loaded operational overrides from resource "file:/home/hstutes/cohUtil/tangosol-coherence-override.xml"
    Starting test on ip=ohquda02/10.3.4.179, group=/224.3.0.13:9000, ttl=4
    Configuring multicast socket...
    Starting listener...
    Wed Jun 20 11:51:43 PDT 2007: Sent packet 1.
    Wed Jun 20 11:51:43 PDT 2007: Received test packet 1 from self (sent 29ms ago).
    Wed Jun 20 11:51:45 PDT 2007: Sent packet 2.
    Wed Jun 20 11:51:45 PDT 2007: Received test packet 2 from self (sent 1ms ago).
    Wed Jun 20 11:51:47 PDT 2007: Sent packet 3.
    Wed Jun 20 11:51:47 PDT 2007: Received test packet 3 from self
    Wed Jun 20 11:51:49 PDT 2007: Sent packet 4.
    Wed Jun 20 11:51:49 PDT 2007: Received test packet 4 from self
    Wed Jun 20 11:51:51 PDT 2007: Sent packet 5.
    Wed Jun 20 11:51:51 PDT 2007: Received test packet 5 from self
    Wed Jun 20 11:51:53 PDT 2007: Sent packet 6.
    Wed Jun 20 11:51:53 PDT 2007: Received test packet 6 from self
    Wed Jun 20 11:51:55 PDT 2007: Sent packet 7.
    Wed Jun 20 11:51:55 PDT 2007: Received test packet 7 from self
    Wed Jun 20 11:51:57 PDT 2007: Sent packet 8.
    Wed Jun 20 11:51:57 PDT 2007: Received test packet 8 from self
    Wed Jun 20 11:51:59 PDT 2007: Sent packet 9.
    Wed Jun 20 11:51:59 PDT 2007: Received test packet 9 from self
    Wed Jun 20 11:52:01 PDT 2007: Sent packet 10.
    Wed Jun 20 11:52:01 PDT 2007: Received test packet 10 from self
    Wed Jun 20 11:52:03 PDT 2007: Sent packet 11.
    Wed Jun 20 11:52:03 PDT 2007: Received test packet 11 from self
    Wed Jun 20 11:52:05 PDT 2007: Sent packet 12.
    Wed Jun 20 11:52:05 PDT 2007: Received test packet 12 from self
    Wed Jun 20 11:52:07 PDT 2007: Sent packet 13.
    Wed Jun 20 11:52:07 PDT 2007: Received test packet 13 from self
    Wed Jun 20 11:52:09 PDT 2007: Sent packet 14.
    Wed Jun 20 11:52:09 PDT 2007: Received test packet 14 from self
    Wed Jun 20 11:52:11 PDT 2007: Sent packet 15.
    Wed Jun 20 11:52:11 PDT 2007: Received test packet 15 from self
    bash-3.00$
    I just noticed that the clocks on the two boxes are about two minutes off. Could that make a difference?
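    For reference, a sanity check equivalent to Coherence's multicast test can be sketched in a few lines of Python (group, port, and TTL taken from the logs above; this is an illustrative stand-in, not the Tangosol tool itself):

        import socket, struct, time

        GROUP, PORT, TTL = "224.3.0.13", 9000, 4   # values from the test above

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                        struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)
        sock.settimeout(0.5)

        for n in range(1, 12):                     # send and listen, like the tool above
            sock.sendto(f"packet {n} from {socket.gethostname()}".encode(),
                        (GROUP, PORT))
            try:
                while True:                        # drain everything heard this tick
                    data, addr = sock.recvfrom(1500)
                    print(f"received {data.decode()!r} from {addr}")
            except socket.timeout:
                pass
            time.sleep(2)

    If each box only ever prints its own packets, as the Solaris side does above, the problem is multicast delivery on the network or OS, not Coherence.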

  • Asking for advice for Jabber deployment - multi CUCM cluster\AD domains

    I would like some design advice for deploying Jabber and CUPS in our company. We have 2 locations, west coast (SiteA) and east coast (SiteB). Each site have their own CUCM 7.15 clusters, Unity clusters, AD domains (trusted, but not in the same forest).
    At SiteA I have setup CUPS (8.6.3.10000-20) and jabber and have it working great.
    I would like to setup CUPS\Jabber for SiteB, but they need to be able to IM\call\etc to SiteA (And vice-versa).
    SiteA and SiteB both have CUCM LDAP sync turned on, and LDAP directory synced with both domains (although SiteA cannot authenticate to CUCM at SiteB, and vice-versa due to the fact you can only LDAP sync authentication with one domain, CUCM user database contain users from SiteA and SiteB).
    We have SIP trunks set up to pass internal calls and line status (BLF) between the clusters, and can communicate via internal extensions just fine.
    The problem I'm running into is that my jabber-config file uses the EDI directory, which can only look at one domain, so I cannot search the other domain. I believe changing to UDS fixes this, but I understand it would require me to upgrade both CUCM clusters to 8.6.2, unless I'm mistaken.
    I’m aware the desktop sharing will not work until CUCM is upgraded to 8.6.1 or 8.6.2.
    I’m wondering if anyone has any advice, or can confirm I’m on the right track. Thanks in advance!

    The thing that's important to understand is how CUP and Jabber build the XMPP URI. The URI has a left- and right-hand side; the left is the username while the right is the XMPP domain. CUP uses the LDAP attribute specified in CUCM's LDAP System page, sAMAccountName by default, for the left-hand-side. The right-hand side is the FQDN of the CUP cluster. Jabber must use the same values as CUP when displaying search results. Take note that nowhere in this process does the entire XMPP URI originate from the directory source.
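    A minimal sketch of that construction (illustrative Python; the function, attribute layout, and domain names are assumptions, not Cisco's actual code):

        def build_xmpp_uri(ldap_entry, cup_cluster_fqdn):
            left = ldap_entry["sAMAccountName"]   # from CUCM's LDAP System page
            right = cup_cluster_fqdn              # FQDN of the local CUP cluster
            return f"{left}@{right}"

        # A SiteB user found through SiteA's directory still gets SiteA's domain:
        print(build_xmpp_uri({"sAMAccountName": "jdoe"}, "cup-sitea.example.com"))
        # -> jdoe@cup-sitea.example.com (wrong right-hand side for a SiteB user)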
    In your case you have two separate CUP clusters in two separate domains. This won't work, because when a user searches for a contact in the directory using Jabber, the client will build the XMPP URI as username@its-own-CUP-cluster-FQDN. Even if you got the other domain's user objects into the search results, the right-hand side of the URI would be wrong, and the presence subscription would never succeed since the other cluster is in another domain. As such, your first task must be to move the CUP clusters into the exact same fully qualified DNS domain. Once this is done you can use Inter-Cluster Peering to build a larger XMPP network in which all users have the same presence domain. If you intend to do Inter-Domain Federation in the future, this must be your public DNS domain, not your internal Active Directory domain. If you use a non-public DNS domain, the TLS handshake will never succeed for inter-domain federation requests.
    Once you have Inter-Cluster Peering in place you can use Active Directory Lightweight Directory Services (the new name for ADAM) to front-end both forests. Both CUCM clusters would need to import the full list of users representing both domains and the sAMAccountNames must be unique across both domains.
    Finally, you can instruct Jabber to use UDS and query its local CUCM cluster, which will be able to return a search result from both domains. Since the CUP clusters are peered in the same domain, the XMPP URI can be built properly, the presence subscription can be routed to the correct cluster, and life will be good.
    By this point hopefully it's clear that EDI won't cut it since it would be limited to only returning search results from the local forest.
    Please remember to rate helpful responses and identify helpful or correct answers.

  • Windows 2008 R2 Multi-Site (geo) Cluster File Server

    We need to come up with a new HA file server (user drive data) solution, complete with DR. It needs to be 2008 R2, cater for about 25 TB of data, and be suitable for 500 users (nothing high-end on I/O). I don't want to rely on DFS for any form of resilience due to its limitations with open files. We have two active-active data centers (a third can be used for a file share quorum).
    We could entertain:
    1) Site 1 - 2 x HP ProLiants with MSA storage, replicating with something like DoubleTake to a third HP ProLiant at site 2 for DR.
    2) Site 1 - 2 x HP ProLiants with local storage and VSA or HP StoreVirtual array (aka LeftHand), using SAN replication to site 2, where we could have a one- or two-node config of the same setup.
    Ideally I would like all 3/4 nodes in these configurations to be part of the same multi-site cluster to ensure resources like file shares are in sync. With two pieces of storage across this single cluster (either DoubleTake or SAN replication to local disks in DR), will this work? How will the cluster/SAN fail over the storage?
    We do have VMware 5.0/5.1 (not 5.5 yet). We don't have Hyper-V yet either. Any thoughts on the above, and possible alternatives, are welcome. For HA failover we'd like an RTO in seconds; DR longer, perhaps 30 mins.
    Thanks in advance for any thoughts and guidance.

    For automated failover between sites, the storage replication needs to have a way to script the failover so you can have a custom resource that performs the failover at the SAN level before the disks come online. 
    DoubleTake has GeoCluster which should accomplish this. I'm not sure about how automated Lefthand's solution is for multi-site clusters.
    VMware has Site Recovery Manager, though this is really an assisted failover and not really an automatic site failover solution. It's automated so that you can failover between sites at the push of a button, but this would need to be a planned failover.
    RTO of seconds might be difficult to accomplish as you need to give the storage replication enough time to reverse direction while giving the MS cluster enough time to bring cluster applications online. 
    When planning your multi-site cluster, I'd recommend going with 2 nodes on each site and then using a file share witness quorum on your 3rd site. If you only had one node on the remote site, the primary site would never be able to fail over to the remote site without manually overriding the quorum, as 1 node isn't enough to gain enough votes for quorum. With 2 nodes on each site and a FSW, each site has the opportunity to gain enough votes to maintain quorum should one of the sites go down.
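    The vote arithmetic behind that recommendation is easy to check (a sketch of node-majority-with-witness quorum as described above):

        # Each node contributes 1 vote; the file share witness adds 1 more.
        def has_quorum(votes_alive, total_votes):
            return votes_alive > total_votes // 2      # strict majority required

        total = 2 + 2 + 1                # 2 nodes per site + FSW = 5 votes
        print(has_quorum(2 + 1, total))  # surviving site + FSW: 3 of 5 -> True
        print(has_quorum(1 + 1, total))  # lone remote node + FSW: 2 of 5 -> False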
    Hope this helps.
    Visit my blog about multi-site clustering

  • Windows Multi Site/Geo Cluster

    Hi,
    I am looking for a multi-site cluster solution for a file server and SQL DB. Could you please help with a case study of a solution approach for building a multi-site cluster using Windows 2012?
    What is recommended for replication?
    Which applications/DBs are compatible with multi-site?
    Are there any implementation documents?
    Please suggest.
    Regards: Mahesh

    General suggestion: if possible, use async replication rather than sync. A sync mirror puts high stress on the WAN, and a lack of reliable channels can leave the cluster either non-operational (all WAN links down, so a cluster node without quorum kills itself) or with a split-brain issue (all WAN links down, so separated cluster nodes continue working as-is, damaging data). For async replication you may look at DFS-R as a file server; see:
    Distributed File System
    http://technet.microsoft.com/en-us/library/cc753479(v=ws.10).aspx
    For SQL Server, look at AlwaysOn in async mode; see:
    AlwaysOn for SQL Server
    http://msdn.microsoft.com/en-us/library/ff877884.aspx
    Availability Modes
    The availability mode is a property of each availability replica. The availability mode determines whether the primary replica waits to commit transactions on a database until a given secondary replica has written the transaction log records to disk (hardened the log). AlwaysOn Availability Groups supports two availability modes: asynchronous-commit mode and synchronous-commit mode.
    Asynchronous-commit mode
    An availability replica that uses this availability mode is known as an asynchronous-commit replica. Under asynchronous-commit mode, the primary replica commits transactions without waiting for acknowledgement that an asynchronous-commit secondary replica has hardened the log. Asynchronous-commit mode minimizes transaction latency on the secondary databases but allows them to lag behind the primary databases, making some data loss possible.
    Of course, it's possible to configure an SMB 3.0 failover file share on geo-separated sites and use Failover Cluster Instances for SQL Server in the same scenario. Just make sure you configure a reliable witness on some third location (you'll have to do it for SQL Server AlwaysOn either way...). For the file server, I'd suggest using some sort of software replication in sync mode instead of geo-distributed physical shared storage. The link below is for SQL Server and SteelEye DataKeeper, but you can use any Windows-native (preferred) software as well. See:
    SQL Server Geo-Cluster with software shared storage
    http://clusteringformeremortals.com/2009/10/07/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-3/
    Good luck! :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Passing multiple graphs to sub vi

    Hello,
    I'm rather new to LabVIEW and could use some advice. My current project utilizes a for loop to create a graph for the I-V measurements of each sample solar cell (each plot is on a separate graph). Multiplexing is done externally via LabVIEW control. I have a sub-VI to show a sample report to the user before they are prompted to save the report.
    What would be the best way to pass the data for 6 (or more) graphs to a sub-VI, and then create the graphs again in the sub-VI? I think an option would be to utilize a shift register to concatenate the arrays from each run. Unfortunately, the array size is determined by the user at run time (the number of test points on the graph) and I'm not sure if I can separate the data properly in the sub-VI.
    Another question: can I create an array of the separate graphs (graph 1 in array element 1, etc.), utilizing the data cluster of each graph as an array element? I ask this because it seems that data separation within the sub-VI would be easier with this method.
    Thanks in advance for the help.
    Solved!
    Go to Solution.

    Without seeing the rest of your code: you can use either an array of references or just an array of waveforms. If your data for the graphs is not in the form of waveforms, you can simply create the waveform data type using the Build Waveform function. See the attached example for the references approach.
    pjr1121 wrote:
    Do you know of an example which shows placing clusters within an array?
    I don't understand what you're asking here. Are you referring to making an array of clusters? On the front panel, on the block diagram as a constant, or on the block diagram in terms of building an array of clusters programmatically? For the front panel, just place an array control and then drag a cluster control inside the array. Then place the elements of the cluster inside the cluster element. For a block diagram constant it's the same approach. For building it programmatically, use Bundle or Bundle by Name, and then Build Array or Replace Array Subset depending on whether you're building a new array or replacing an array element.
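    In text-language terms, the programmatic version looks like this (a Python sketch; dicts stand in for clusters and a list stands in for the array):

        def bundle(name, data):                   # LabVIEW: Bundle by Name
            return {"name": name, "data": data}

        graphs = []
        for ch in range(6):                       # LabVIEW: Build Array in a loop
            graphs.append(bundle(f"cell {ch}", [ch * 0.1] * 4))

        graphs[0] = bundle("cell 0 (rerun)", [9.9] * 4)   # Replace Array Subset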
    Have you done the LabVIEW tutorials? To learn more about LabVIEW it is recommended that you go through the tutorial(s) and look over the material in the NI Developer Zone's Learning Center which provides links to other materials and other tutorials. You can also take the online courses for free.
    Attachments:
    multi graphs.vi 29 KB
    multi graphs - subVI.vi 30 KB

  • Install Guide - SQL Server 2014, Failover Cluster, Windows 2012 R2 Server Core

    I am looking for anyone who has a guide with notes about an installation of a two-node, multi-subnet failover cluster for SQL Server 2014 on Server Core edition.

    Hi KamarasJaranger,
    According to your description, you want to configure a SQL Server 2014 multi-subnet failover cluster on Windows Server 2012 R2. Below are the overall steps for the configuration; for the detailed steps, please download and refer to the PDF file.
    1. Add required Windows features (.NET Framework 3.5 Features, Failover Clustering and Multipath I/O).
    2. Discover target portals.
    3. Connect targets and configure multipathing.
    4. Initialize and format the disks.
    5. Verify the storage replication process.
    6. Run the Failover Cluster Validation Wizard.
    7. Create the Windows Server 2012 R2 multi-subnet cluster.
    8. Tune cluster heartbeat settings.
    9. Install SQL Server 2014 on a multi-subnet failover cluster.
    10. Add a node on a SQL Server 2014 multi-subnet cluster.
    11. Tune the SQL Server 2014 failover clustered instance DNS settings.
    12. Test application connectivity.
    Regards,
    Michelle Li
