Single point of Connection?

Hi Experts,
A major advantage of XI and other middleware technologies is that
XI is a single point of connection.
What exactly does that mean, compared with the others?
Regards,
YRaj.

Hey
It is possible to handle parallel messages in the IE, but I guess you are asking about message processing in general.
All messages are processed in the order they arrive. In the IE you have the option of prioritizing messages, meaning you can give a higher priority to some important messages; this prioritization is not possible at the Adapter Engine level, so the adapter will pick up messages in the order they arrived. Once they enter the IE, messages for which a priority has been set are processed based on that priority; if no priority has been set, they are processed in the order they were received.
Thanks
Aamir

Similar Messages

  • Why can't we have a single point of contact who de...

    Three weeks ago our house was hit by lightning and our broadband connection went on the blitz (a Linnit technical term).
    The telephone stopped working. I did an on-line check and the connection registered a fault. The on-line system logged the fault with the technical team. So far so good...
    I received a call from the tech team advising an engineer was coming out to us, if the fault was with our equipment we would be charged. That's fair. Engineer called. My phones were fine, BUT the BT router was where the fault lay said the engineer. We would need to raise another fault report because he only dealt with telephone AND as he was a subcontractor he would bill for the work... He disconnected the router so the phone would work and left.
    I work virtually, which means that I can work from my home, so I need the internet even more than my family want the telephone. I plugged the internet back in so that I could continue to work and called BT again from my mobile. Speaking to a very polite guy in Delhi I was asked to describe my phone socket, unplug the router from the office, carry it downstairs to the kitchen where the main socket is, plug the router in there, reconnect, try again, all sorts of stuff before finally being advised that it was probably just the 'microfilter' and that as they were very inexpensive it might be faster for me to go out and buy one and plug it in myself rather than have another engineer come out to us.
    So I did just that. In fact, as the microfilters are less than a fiver, I got two... brought them home and plugged one in... didn't work.
    Called Delhi again. Spoke to someone else who was, again, very polite. They tried to get me to unplug and plug things in and out again, but I politely declined this time explaining that now the poor internet connection that we did have was running so slowly I am having to commute in to work over Christmas. He sympathised and told us that he would escalate the issue. That was before Christmas. Since then I received a text on my phone on Christmas Day telling me they couldn't reach me!!! 
    Today I called again to BEG someone to please come out and fix things for us. We aren't technical. We cannot act as pseudo engineers. We pay BT one bill each quarter for a service. Why on earth can't BT provide me with a single point of contact when I have an issue? And if that point of contact could understand me and explain things to me in words and phrases that I understand, that would be perfect!
    Last year I cancelled three mobile contracts that we'd had since the early 90's with O2 because they were so unhelpful.
    BT aren't the cheapest broadband provider but we've stayed with them out of 20+ years loyalty and the understanding that we had a one-stop-shop. Now, it looks as though I'll be shopping around for another domestic broadband provider for 2012.

    Thank you for being so helpful and constructive.
    I tried to look at the ADSL settings, john46, but it's asking me for my HomeHub password... the only password I have is for our wireless network and that one doesn't work. 
    I can't test the phone line right now because I'll have to disconnect the internet and I'm currently working on another computer whilst chatting on this one with you. However I will do that later. I'll also look at the RogerB link you provided. 
    Truth is, we're pensioners who use the internet but we haven't a grain of technical understanding between us. We're old fashioned enough to admit that we just want someone who knows what he's doing to come here and fix it for us. It's already cost us £130 for an engineer to come out from OpenReach to tell us the phone line is OK and it's the router causing the problem. Best case scenario is that another BT engineer who knows about broadband comes out and does it, because the last BT person that I spoke to in Delhi did actually confirm that there is a fault on the BT side. I'm getting so frustrated right now I'll probably call out an independent I find in Yellow Pages, get charged an arm and a leg again, and then cancel with BT in a fit of pique.

  • WAP321 2 SSIDs not both showing Clustered in Single Point Setup

    I have 2 Cisco WAP321s with 2 SSIDs set up with Single Point Setup, using VLAN 1 and VLAN 2, on firmware version 1.0.5.3. VLAN 1 is the management VLAN. One WAP321 is connected to an SG200-8 (version 1.0.8.3) on a trunked port, 1UP,2T. The SG200-8 is connected to an SG300-28 L3 switch (version 1.3.7.18) on a trunk port, 1UP,2T. The other WAP321 is connected to a trunked port, 1UP,2T, on the SG300-28 switch. Both SSIDs seem to work, but the VLAN 2 SSID does not show as clustered under Single Point Setup > Wireless Neighborhood; only VLAN 1 shows as clustered. Do I have a setup issue? Is the Wireless Neighborhood not showing correctly? What do you think is the reason they are not showing as clustered? Both SSIDs work and connect to the internet.
    PS
    If it matters, DHCP is from the SG300-28 in L3 mode, which feeds an RV180 router from a 1UP port on the SG300-28.

    My name is Eric Moyers. I am an Engineer in the Small Business Support Center.
    You do not have a setup issue. Clustering/Single Point Setup is based on the device and not on the SSIDs. When looking at the Wireless Neighborhood within the Clustering section, you will only see the first SSID listed, regardless of how many SSIDs you have configured. 
    Now, as far as the connections are concerned, you said that your friends were having a hard time connecting. When they eventually connected, were they going to the guest network? When you connect with your laptop, are you connecting to the same SSID or a different one?
    Eric Moyers
    .:|:.:|:. CISCO | Cisco Presales Technical Support | Wireless Subject Matter Expert
    Please rate helpful Posts and Let others know when your Question has been answered.

  • Single points of failure?

    So, we are looking into the xServe RAID, and I'd like some insight into making things as bulletproof as possible.
    Right now we plan to have:
    a load balancer and a failover load balancer (running on cheap BSD hardware, since hardware load balancers are so damned expensive) feeding into
    two application servers, which communicate with
    one back-end server, which serves as both a database server and an NFS server for the app servers
    And the volumes that will be NFS-mounted would be on our xServe RAID, which would be connected directly to the back-end server.
    The networking hardware would all be failover through multiple switches and cards and so forth.
    The idea here is to avoid as many single points of failure as possible. Unfortunately at the moment we don't have a DBA who is fluent in clustering, so we can't yet get rid of the back-end server as a single point of failure. (Which is also why I'm mounting the RAID on it and sharing via NFS... if the database goes down, it won't matter that the file service is down too.) However, in the current setup, there's one other failure point: the RAID controllers on the xServe RAID.
    Performance is less important to us on this than reliability is. We can't afford two RAID units at the moment, but we can afford one full of 500 gig drives, and we really only need about 4 TB of storage right now, so I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc. As far as I understand it, this eliminates the RAID controllers as a single point of failure, and as far as I know they are at least supposedly the only single point of failure in the xServe RAID system. (I could also do RAID 10 that way, but due to the way we store files, that wouldn't buy us anything except added complexity.)
    And later on, down the road, when we have someone good enough to figure out how to cluster the database, if I understand correctly, we can spend the money to get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it, thus cutting out the middle-man NFS service. (I am under the impression that this sort of volume-sharing is possible via FC... is that correct?)
    Comments? Suggestions? Corrections to my misapprehensions?
    --Adam Lang

    Camelot wrote:
    A couple of points.
    was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc.
    Really? Assuming you're using fourteen 500GB drives, this will give you seven volumes mounted on the server, each a 500GB mirror split across the two controllers. That's fine from a redundancy standpoint, but it's painful from the standpoint of managing seven direct mount points on the server, as well as seven NFS shares and 14 NFS mount points on the clients. Not to mention file allocation between the volumes, etc.
    If your application is such that it's easy to dictate which volume any particular file should be on and you don't mind managing all those volumes, go ahead, otherwise consider creating two RAID 5 volumes, one on each controller, using RAID 1 to mirror them on the back-end server and exporting a single NFS share to the clients/front-end servers.
    Quite simple, actually. But admittedly, two RAID 5s RAID-1-ed together would be much more efficient, space-wise.
    if I understand correctly, we can spend the money get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it
    Yes, although you'll need another intermediate server as the metadata controller to arbitrate connections from the two machines. It becomes an expensive option, but your performance will increase, as will the ease with which you can expand your storage network (adding more storage as well as more front-end clients).
    But then that means that the metadata controller is a single point of failure...?
    --Adam Lang

  • Linux cluster, no single point of failure

    I'm having difficulty setting up a Business Objects cluster on Linux with no single point of failure. Following the instructions for a custom install, I end up connecting to the CMS on the other server, with no CMS running on the server I'm doing the install on. That is a cluster, but we then only have a CMS running on one server in this scenario, and we can't have a single point of failure. Could someone explain how to set up a 2-server clustered solution on Linux that doesn't have a single point of failure?

    Still not working. I can see my other node listed in the config, but the information for the servers states that the SIA is available; I've checked network/port connectivity between the boxes, and the SIA is running and available on each box.
    In the instructions for installing on a Windows system I read about a step to connect to an existing CMS:
    http://wiki.sdn.sap.com/wiki/download/attachments/154828917/Opiton1_add_cms3.jpg
    http://wiki.sdn.sap.com/wiki/display/BOBJ/Appendix1-Deployment-howtoaddanotherCMS-Multipleenvironments
    With the Linux install.sh script, no matter what I do, I can't find any way to reach that step.

  • Single-point failure of proxy server??

    Three questions regarding using the proxy server for WLS clustering:
               1. What happens if the proxy server fails? Will the entire cluster
               become inaccessible?
               If so, is there an implication of having a "meta-configuration" for
               proxy server failover?
               2. Does the term "in-memory persistence" clustering (over JDBC)
               imply that the session data is "shared" in memory between the
               primary & secondary servers, or is it completely replicated on
               each server?
               3. How bad is the performance when using JDBC session persistence
               and in-memory replication? Has anyone experimented with them?
               Any thoughts and comments are helpful and appreciated.
              Frank Wang
              

    Frank,
              If using File/JDBC persistence, it doesn't matter which server gets the request
              because all servers have access to every session's data.
              When you assign a hostname "to the cluster" and map that hostname to the IP
              address of each server in the cluster, this sets up something called DNS
              round-robining. What this does is round-robin which IP address is returned when
               the cluster hostname is resolved. Unfortunately, DNS is not very good at
               detecting failed machines so it may continue to hand out IP addresses of failed
               machines. (A small C sketch at the end of this thread shows how such a
               multi-address hostname looks to a client resolver.)
              A better way to do this is with a hardware router that is made for this purpose
              (e.g., LocalDirector). The router can detect a failed machine and redirect the
              request to another machine in the cluster. Unlike the in-memory replication
              case, it doesn't matter which server gets each request so no proxy is required.
              Hope this helps,
              Robert
               Frank Wang wrote:
              > Hi, Robert,
              >
              > Thank you for the answers !!
              >
               > It is still not very clear to me how this works when using File/JDBC
               > session persistence.
               > Since there is no "coordinating" proxy server that knows how to do the
               > load balancing among the servers in the cluster (based on the algorithm
               > specified in the proxy server properties file), how does the load balancing
               > (and failover) work when the requests are directed to a "mask" cluster IP
               >
               > (virtual proxy) which in turn broadcasts to all the servers?
              >
              > Frank
              >
              > Robert Patrick wrote:
              >
              > > Hi Frank,
              > >
              > > Frank Wang wrote:
              > >
              > > > Three questions regarding using the proxy server for WLS clustering:
              > > >
              > > > 1. What happen if the proxy server fails? The entire cluster won't be
              > > > accessible?
              > > > If so, is there a implication of having "meta-configuration" for
              > > > proxy server failover?
              > >
              > > If you have a single proxy server, this is correct in that it is a single
              > > point of failure. A common configuration is to use multiple proxy
              > > servers (with something like LocalDirector sitting in front to do routing
              > > and load balancing to the proxy servers) that proxy requests to a cluster
              > > of WLS servers.
              > >
              > > > 2. Does the term "in-memory persistence" clustering (over JDBC) does
              > > > imply
              > > > the session data is "shared" in memory between the primary &
              > > > secondary
              > > > servers, or they are compleleted replicated in each server??
              > >
              > > HttpSession state can be shared across the servers in a cluster in one of
              > > three ways.
              > >
              > > 1.) Using File-based persistence (i.e., serialization) - This requires a
              > > shared file system across all of the servers in the cluster. In this
              > > configuration, all servers are equal and can access the session state.
              > > As you might imagine, this approach is rather expensive since file I/O is
              > > involved.
              > >
              > > 2.) Using JDBC-based persistence - This requires that all servers in the
              > > cluster be configured with the same JDBC connection pool. As with method
              > > 1, all servers are equal and can access the session state. As you might
              > > imagine, this approach is rather expensive since database I/O is
              > > involved.
              > >
              > > 3.) In-memory replication (not really persistence) - This scheme uses a
              > > primary-secondary replication scheme so that each session object is kept
              > > on only two machines in the cluster (which two machines vary depending on
              > > the particular session instance). In this scheme, we need a proxy server
              > > sitting in front of the cluster that can route the requests to the server
              > > with the primary copy of the session for each request (or to the
              > > secondary if the primary has failed). The location information is
              > > encoded in the session id and the proxy knows how to decode this
              > > information and route the requests accordingly (because the proxy is
              > > using code supplied by BEA -- the NSAPI or ISAPI plugins or the
              > > HttpClusterServlet).
              > >
              > > > 3. How bad is it in terms of performance using the JDBC session
              > > > persistence
              > > > and the in-memory replication? Any one has experimented it??
              > >
              > > JDBC session persistence performance is highly dependent on the
              > > underlying DBMS. In my experience in doing benchmarks with WLS,
              > > in-memory replication (of a reasonably small amount of session data) does
              > > not add any measurable overhead. Of course, the key words here are
              > > "reasonably small amount of session data". The more data you stuff into
               > > the HttpSession, the more data needs to be serialized between
               > > servers, and the more performance will be impacted.
              > >
              > > Just my two cents,
              > > Robert
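    To make the DNS round-robining described earlier in this thread concrete: when one hostname is mapped to the IP address of every server in the cluster, a client resolver simply gets the full list of A records back and has no way of knowing which machines are alive. The following is only a minimal C sketch; the hostname cluster.example.com is a placeholder for whatever name is mapped to the cluster, and error handling is kept to the bare minimum.

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        char ip[INET_ADDRSTRLEN];
        int rc;

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_INET;      /* IPv4 A records only */
        hints.ai_socktype = SOCK_STREAM;

        /* "cluster.example.com" stands in for the hostname that is mapped
           to the IP address of each server in the cluster. */
        rc = getaddrinfo("cluster.example.com", "http", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }

        /* The resolver hands back every address; the order typically rotates
           between lookups, but addresses of failed machines are still listed. */
        for (p = res; p != NULL; p = p->ai_next) {
            struct sockaddr_in *sa = (struct sockaddr_in *)p->ai_addr;
            inet_ntop(AF_INET, &sa->sin_addr, ip, sizeof ip);
            printf("%s\n", ip);
        }

        freeaddrinfo(res);
        return 0;
    }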
              

  • Changing Single Point to Axis on Sensor Mapping

    I needed to include the Sensor Mapping Express VI in my program and have gotten it to work well. However, for my purposes, a single point won't work. I am measuring force using strain gages. Each input signal comes from a combination of 4 different strain gages placed strategically along the x-axis and another 4 strain gages connected along the y-axis. Therefore, it isn't a single point that is needed, but rather a line. Is there a way to do this in the program?

    Hello,
    Thank you for contacting National Instruments.
    If you are worried about the timing of your acquisition being off, then you should associate each voltage measurement with a timestamp. This will allow you to know the exact time at which the sample was taken and you will never be off. You can use the Get Date/Time in Seconds.vi in your while loop with your AI code so that you can read a sample and read the time. You can then log the voltage value and the timestamp to your file (a rough sketch of the same idea, written against the NI-DAQmx C API, follows at the end of this message).
    Regards,
    Bill B
    Applications Engineer
    National Instruments
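    The same idea expressed in text-based code rather than LabVIEW, for reference: a loop that takes one on-demand reading and pairs it with a host timestamp. This is only a sketch against the NI-DAQmx C API; the device/channel name "Dev1/ai0", the loop count, and the printf logging are placeholder assumptions, and error checking is omitted.

    #include <stdio.h>
    #include <time.h>
    #include "NIDAQmx.h"

    int main(void)
    {
        TaskHandle task  = 0;
        float64    value = 0.0;
        time_t     stamp;
        int        i;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxStartTask(task);

        for (i = 0; i < 10; i++) {
            /* One software-timed sample, immediately paired with the time
               at which it was read. */
            DAQmxReadAnalogScalarF64(task, 10.0, &value, NULL);
            time(&stamp);
            printf("%ld\t%f\n", (long)stamp, value);
        }

        DAQmxClearTask(task);
        return 0;
    }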

  • MPLS single point to Monitor

    Hi,
    Is there any single point, or a way, to monitor all the connections in an MPLS cloud?
    For example, there are 5 sites connecting to each other over MPLS, and if any site wants to send packets to a remote site it sends them directly; this is not hub and spoke, it is a mesh, as is typical in MPLS. So is there any single point where I can monitor all 5 sites using something like an IDS/IPS or any other monitoring tool?

    You can also deploy MPLS VPN in a hub and spoke topology. This would be the only way to ensure that all traffic goes through the IDS located at the hub site. The same applies if you want to implement a FW or other centralized network services.
    Hope this helps,

  • Primary site server a single point of failure?

    I'm installing ConfigMgr 2012 R2 and employing a redundant design as much as possible. I have 2 servers, call them CM01 and CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, and Software Update Point, as well as installing the SMS Provider on both servers. SQL is on a 3rd box.
    I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (get-wmiobject -namespace root\ccm -class ccm_authority).CurrentManagementPoint. The management point assigned to the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the site, and reading the SMSAdminUI log reveals this error: "Provider machine not found". 
    Is the Primary site server a single point of failure? 
    Why can't I point the console to a secondary SMS provider?
    If this just isn't possible, what is the course of action to restore console access once the Primary Site server is down?
    Many Thanks

    Yes, that is a completely false statement. Using a CAS and multiple primaries will in fact introduce multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
    HA is achieved from a client perspective by adding multiple site systems hosting the client facing roles: MP, DP, SUP, App Catalog.
    Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself also.
    The real question is what service that ConfigMgr provides do you need HA for?
    Jason | http://blog.configmgrftw.com

  • N5K - single point of failure?

    When both N5Ks (running 4.2(1)N1(1b)) are powered down and one of them then fails to power up, all N2Ks connected to the two N5Ks fail to come up. This scenario could happen during power maintenance when both N5Ks are brought down.
    It looks like it is related to the following: "Beginning with Cisco NX-OS Release 5.0(2)N1(1), you can configure the Cisco Nexus 5000 Series switch to restore vPC services when its peer switch fails to come online by using the reload restore command. You must save this setting in the startup configuration. On reload, Cisco NX-OS Release 5.0(2)N1(1) starts a user-configurable timer (the default is 240 seconds). If the peer-link port comes up physically or the peer-keepalive is functional, the timer is stopped."
    Can anyone confirm that?
    Thanks
    Eng Wee

    This design option works.
    However, keep in mind that your design has a single point of failure on the Nexus side; if you need it to be redundant end to end, you need to consider adding a second switch to the topology.
    Hope this helps
    Sent from Cisco Technical Support iPad App

  • Forms 10g 2 ApplicationServers Single Point of Failure

    Hi,
    we are planning a migration from Forms6i to Forms10g and we are thinking about eliminating as much as possible a single point of failure.
    Today we have all those Clients running Forms-Runtime with the FMBs ...
    They all create a connection against the Database which we have secured as much as possible against Loss of Service.
    After the migration we will have all those clients running a browser and calling a URL which points to the Application Server(s) running the Forms runtime processes. If this machine fails, none of the clients can work anymore. Because of that, we are planning for 2 AS to be on the safer side in case of the loss of one server.
    But here is where the question starts:
    When a client starts, it will point to a URL which leads to an IP address.
    The IP address could be that of a hardware load balancer; if so, the LB will forward to Oracle Web Cache on one of the AS. If not, the IP address leads directly to one Web Cache.
    From there it proceeds to the HTTP server on one of the AS and then further to the mod_oc4j instance, which could be duplicated as well.
    All those "instances" (hardware load balancer, Web Cache, HTTP server, mod_oc4j instances) can be doubled or more, but that only makes sense if they run on different hardware, which means different IP addresses. I can imagine using a virtual IP address for connecting to the HLB or the Web Cache, but where is it split to the different real addresses without having one box as a single point of failure?
    I'm looking for a solution to double the application server as easily as possible, without the clients having to decide which server they can work on, and without having a single box in front which would itself be a S.P.O.F.
    I know that there are HLBs out there which can act as a cluster, so that should eliminate the problem, but I would like to know whether that can be done on the AS only.
    Thanks,
    Mark

    Thanks wilfred,
    yes I've read that manual. Probably not every single page ;-)
    I agree that High-Availability is a very broad and complex topic, but my question is (although it was difficult to explain what i mean) only on a small part of it:
    I understand that I can have multiple instances on each level (OC4J, HTTP, Web Cache, LBR), but where, or what, accepts one single URL and directs the requests to the available AS?
    As mentioned in my post before, we may test the Microsoft NLB cluster to distribute the requests to the Web Cache instances on the 2 AS; the 2 Web Caches then pass them on to the 2 HTTP servers and so on.
    The idea is that Windows offers a virtual IP address across those 2 Windows servers, and somehow the requests will be transferred to a running Web Cache.
    Does that work correctly with session binding ...
    We'll see
    thanks,
    Mark

  • NI-DAQmx VisualStudio C++ 6 Single point analog output

    Specs: NI-DAQmx 7, VisualStudio C++ 6.0, PCI-6722, 8-channel AO
    We have a very simple application: set a voltage (actually 6 channels) and keep it until we want it changed again, perform the change very quickly in response to an image capturing algorithm. So I don't need any waveforms or buffering.
    In this forum post http://forums.ni.com/ni/board/message?board.id=231&message.id=3283&query.id=18094 you talk about an AOOnePoint example, but I get an error that the NI-DAQ driver does not support my device.
    I may need to use NI-DAQmx, but how? I would like to use something like AO_VWrite(,,), maybe for 6 channels in one call, but I can't find it in NI-DAQmx. It seems I need to set up buffers and frequencies. I have a working sample, but it seems slow and certainly overkill for this simple application:
    // Link with \DAQmx ANSI C Dev\lib\msvc\NIDAQmx.lib
    #include "NIDAQmx.h"

    #define NUMBER_OF_AO_SAMPLES 2                              // samples per channel, matches data[] below
    #define DAQmxErrChk(call) if( DAQmxFailed(call) ) return;   // simplistic error handling

    double     data[2];
    TaskHandle taskHandleAnalog;
    int32      written;

    void Init()
    {
        DAQmxErrChk (DAQmxCreateTask("",&taskHandleAnalog));
        DAQmxErrChk (DAQmxCreateAOVoltageChan(taskHandleAnalog,"Device and Channel Info","",0,10,DAQmx_Val_Volts,NULL));
        DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleAnalog,"",1000,DAQmx_Val_Rising,DAQmx_Val_ContSamps,NUMBER_OF_AO_SAMPLES));
        DAQmxErrChk (DAQmxWriteAnalogF64(taskHandleAnalog,NUMBER_OF_AO_SAMPLES,0,1.0,DAQmx_Val_GroupByChannel,data,&written,NULL));
        DAQmxErrChk (DAQmxStartTask(taskHandleAnalog));
    }

    void SetVoltage( double voltage )
    {
        data[0] = voltage;
        data[1] = voltage;
        DAQmxStopTask(taskHandleAnalog);
        DAQmxErrChk (DAQmxWriteAnalogF64(taskHandleAnalog,NUMBER_OF_AO_SAMPLES,1,10.0,DAQmx_Val_GroupByChannel,data,&written,NULL));
    }

    Hi,
    It looks like you simply want to output voltages on the analog output channels, but only want one update at a time, with no waveforms or buffering in DAQmx.
    As I'm sure you know, there are really just 3 types of operations: Single Point, Finite, and Continuous. Since you want a single value at a time, it's just a Single Point operation.
    You can find DAQmx examples for single point operations in this path:
    C:\Program Files\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Analog Out\Generate Voltage\Volt Update
    Simply place the DAQmx Write code within a loop and you will be updating one value at a time, as often as needed when "we want it changed again" (a minimal C sketch along these lines follows this message).
    Dennis Morini
    Field Sales Engineer
    National Instruments Denmark
    http://www.ni.com/ask
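    For reference, here is a stripped-down sketch of that pattern against the NI-DAQmx C API. It is not the shipping Volt Update example itself; the channel string "Dev1/ao0:5", the computed voltages, and the loop are placeholders, and error checking is omitted. Because no sample clock is configured, every write is an immediate, software-timed update of all six channels.

    #include "NIDAQmx.h"

    int main(void)
    {
        TaskHandle task = 0;
        float64    volts[6] = {0.0};   /* one value per AO channel */
        int32      written = 0;
        int        i;

        /* No DAQmxCfgSampClkTiming call: the task stays in on-demand
           (software-timed) mode, so each write updates the outputs at once. */
        DAQmxCreateTask("", &task);
        DAQmxCreateAOVoltageChan(task, "Dev1/ao0:5", "",
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxStartTask(task);

        for (i = 0; i < 100; i++) {
            /* Recompute the six voltages here (e.g. from the image
               processing result), then push them out in one call. */
            volts[0] = (float64)i * 0.01;
            DAQmxWriteAnalogF64(task, 1, 0, 10.0, DAQmx_Val_GroupByChannel,
                                volts, &written, NULL);
        }

        DAQmxClearTask(task);
        return 0;
    }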

  • Pl/sql block reading reading table data from single point in time

    I am trying to figure out whether several cursors within a PL/SQL block are executed as of a single point in time, and thus do not see any updates to tables made by other processes or procedures running at the same time.
    The reason I am asking is that I have a block of code performing a data extraction, with some initial sanity checks before the code executes. However, if some other procedure modifies the data in between, then the sanity check is invalid. So I am basically trying to figure out whether there is some read consistency within a PL/SQL block that prevents updates from other processes from being seen.
    Does anyone have an idea?
    BR,
    Cenk

    "Transaction-Level Read Consistency
    Oracle also offers the option of enforcing transaction-level read consistency. When a transaction runs in serializable mode, all data accesses reflect the state of the database as of the time the transaction began. *This means that the data seen by all queries within the same transaction is consistent with respect to a single point in time, except that queries made by a serializable transaction do see changes made by the transaction itself*. Transaction-level read consistency produces repeatable reads and does not expose a query to phantoms."
    http://www.oracle.com/pls/db102/search?remark=quick_search&word=read+consistency&tab_id=&format=ranked

  • Using LabVIEW and an E-Series DAQ Card to perform relatively high speed single point acquisition in response to a changing DIO pattern.

    I am using the DIO lines on my E-Series card to drive an external multiplexer which switches 1 of 8 sets of 3 signals to channels 0, 1, and 2 on my DAQ. I need to acquire the 3 single points of data, do a little processing, then update the mux code before acquiring the next 3 points of data, and so on. I have been trying to do this using hardware-controlled loops but can only achieve a real sampling rate (time between the same set of three signals) of about 200 S/s. I am trying to achieve in excess of 800 S/s. Any ideas?

    HI CP,
    You are doing pretty well if you are getting 200 S/s.
    I believe the only way you can get 800 S/s reliably is to go to LabVIEW Real-Time. Not for the speed, but for the determinism (a rough NI-DAQmx C sketch of the software-timed loop in question follows at the end of this message).
    That's my idea.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
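    To make the pattern concrete, here is what the software-timed read-process-update loop under discussion looks like when written against the NI-DAQmx C API. The device and line names ("Dev1/ai0:2", "Dev1/port0/line0:2"), the iteration count, and the processing step are illustrative assumptions, and error checking is omitted. On a desktop OS the loop period jitters, which is exactly why LabVIEW Real-Time is suggested above for a deterministic 800 S/s.

    #include "NIDAQmx.h"

    int main(void)
    {
        TaskHandle ai = 0, dio = 0;
        float64    samples[3];
        int32      read = 0;
        int        i;

        DAQmxCreateTask("", &ai);
        DAQmxCreateAIVoltageChan(ai, "Dev1/ai0:2", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCreateTask("", &dio);
        DAQmxCreateDOChan(dio, "Dev1/port0/line0:2", "",
                          DAQmx_Val_ChanForAllLines);
        DAQmxStartTask(ai);
        DAQmxStartTask(dio);

        for (i = 0; i < 8000; i++) {
            /* Drive the external multiplexer to the next input (0..7). */
            DAQmxWriteDigitalScalarU32(dio, 0, 10.0, (uInt32)(i % 8), NULL);
            /* Read one software-timed point from each of the three channels. */
            DAQmxReadAnalogF64(ai, 1, 10.0, DAQmx_Val_GroupByChannel,
                               samples, 3, &read, NULL);
            /* ... do a little processing on samples[0..2] here ... */
        }

        DAQmxClearTask(ai);
        DAQmxClearTask(dio);
        return 0;
    }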

  • How can I design Load Balancing for distant Datacenters? without single point of failure

    Dear Experts,
    We are using the following very old and passive method of redundancy for our cloud SaaS, but it's time to do it properly. Can you please advise:
    Current issues:
    1. No load balancing. IP selection is based on primary and secondary IP configurations. If the primary fails to respond, the DNS IP record changes to the secondary IP with TTL = 1 min.
    2. When the primary server fails, it takes around 15 minutes before clients can access the servers again. Way too long!
    The target:
    A. Activate a load balancing mechanism to utilize the stand-by server.
    B. How can the solution be designed to avoid single point of failure? In the previous example, UltraDNS is a single point of failure.
    C. If using GSS is the solution, how can it be designed in both server locations (for active redundancy) using an ordinary DNS server?
    D. How can HSRP, GSS, GSLB, and/or VIP be used? What would be the best solution?
    Servers are running ORACLE DB, MS SQL, and tomcat with 2x SAN of 64TB each.

    Hi Codlick,
    the answer is, you cannot (switch to two web dispatchers).
    If you want to use two web dispatchers, they need something in front of them, like a hardware load balancer. This would actually work, as the WDs know their sessions and which servers those sessions are sticky to. But remember you always need a single point for the incoming address (IP).
    Your problem really is about switchover groups. Both WD need to run in different switchover groups and need to switch to the same third software. I'm not sure if your switchover software can handle this (I'm not even sure if anyone can do this...), as this means the third WD needs to be in two switchover groups at the same time.
    Hope this helps,
    Regards,
    Benny
