Solution for Link Failover for a Hosted Webserver

Hi,
One of my customers has a web-based application hosted on the internet on an IP address provided by the ISP. The challenge is that if the ISP link fails, the webserver is not available. The customer is planning to add one more link from a different ISP. How do I load balance both of these links and keep the webserver accessible over either link if one of them fails?

Assuming that we have a webserver hosted on our internal network as 192.168.1.1 and we map it to the PIX outside interface IP address x.x.x.x as follows:
static (inside,outside) x.x.x.x 192.168.1.1 netmask 255.255.255.255
then any traffic from the outside world hitting x.x.x.x will be translated straight to 192.168.1.1. If we want only web traffic forwarded, we instead use port forwarding:
static (inside,outside) tcp interface 80 192.168.1.1 80
so that only port 80 traffic destined for x.x.x.x is forwarded to 192.168.1.1 and not all traffic. As a manual fallback, you can also move the second ISP's cable to your device and reconfigure it when the primary link fails.
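Note that on a PIX the static translation by itself does not open the port; an inbound access-list is also needed (very old PIX code used a conduit statement instead). A minimal sketch, assuming a pre-8.3 style configuration where the ACL references the translated address x.x.x.x; the ACL name is just an example:
access-list outside_in permit tcp any host x.x.x.x eq www
access-group outside_in in interface outside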

Similar Messages

  • Network Load Balancing and failover for AFP Sharing

    Dear all,
    I have used round-robin DNS to perform network load balancing; that part works, but it does not give me failover.
    I have 4 Xserves and want to do load balancing and failover at the same time.
    I have read the IP failover document and set it up successfully, but does anyone know whether it is possible to do IP failover for more than 2 servers?
    For example, 4 servers serving AFP at the same time, with maybe 1 extra server doing IP failover for those 4 servers.
    As far as I know, IP failover requires FireWire for heartbeat detection, but one Xserve only has 2 FireWire ports. Can I set up IP failover with only an Ethernet port and an IP address? Is it possible to detect a failed server and fail over for any of the servers once the failure has been detected?
    I believe a load balancer may be the best solution, but its cost is too high.
    Thanks in advance!
    Karllee

    Well, you have 2 options here:
    Software load balancing
    A request comes in to foo.com -> the WS7u2 instance hosting foo.com is configured to run as a reverse proxy; this server sends each incoming request to one of the four back-end Web Server 7 instances handling your requests.
    Hardware load balancing (this is the one you need to invest in)
    A request comes to a hardware load balancer that answers for foo.com -> it sends the requests to the four WS7 servers hosting your application.
    You could try out how software load balancing works for you before you invest in hardware load balancing.
    Here are more instructions on configuring WS7 as a reverse proxy (software load balancing):
    - install WS7 on foo.com
    - create a new configuration (choose port 80, disable Java)

  • Has Anyone Been Able to Stand Up Essbase Analytics Link (EAL) for Financial Management in version 11.1.2.2?

    Has anyone had any luck standing up the Essbase Analytics Link (EAL) for Financial Management application in 11.1.2.2? We are to the point where, when we click on "Create Bridge Application", it creates an application and database in EAS and then crashes the Hyperion Essbase Analytics Link Server - Web Application. Prior to this we were getting NetRetry and NetDelay errors; we have increased those settings and are no longer receiving those errors. I'm curious whether 11.1.2.2 is even a functioning version of EAL or if we are out of luck until the next version. Any feedback is appreciated.

    FYI, we were able to get EAL up and running after adding the following entries into the registry on our servers...
    Solution
    This timeout issue can be fixed by adding two TCP/IP registry parameters, but first you must identify which client was communicating with Essbase when the timeout occurred, so that you know which machine to add the parameters to. If all the EAL and EPM components are installed on a single machine, then that machine also hosts the client. If the products are installed in a distributed environment, you determine the client machine based on how the EAL Essbase Server component is defined.
    The APS URL is part of the Analytics Link, Essbase Server definition in EAL.
    If the value (APS URL) is "Embedded", the EAL Application Server is communicating via the JAPI directly with the Essbase Server. In this case the EAL AppServer is the client to the Essbase Server, and the TCP/IP registry parameters need to go on the machine where the EAL Application Server is running.
    If the (APS URL) value is http://serverName:13080/aps/JAPI, then the EAL Application Server is communicating by way of Hyperion Provider Services (APS). In this case EAL proxies requests to and from the Essbase Server through APS, which means that APS is the client to the Essbase Server and the TCP/IP registry parameters need to go on the machine where Hyperion Provider Services is running.
    Once you have identified which machine is acting as the client to Essbase, set the TcpTimedWaitDelay=30 and MaxUserPort=65534 parameters via the Windows registry (a .reg sketch follows the steps below).
    1. Open the Windows Registry.
    2. Navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\TCPIP\Parameters.
    3. Add a new DWORD value named "TcpTimedWaitDelay":
    - right-click it and select Modify
    - select the "Decimal" radio button and type in 30.
    4. Add a new DWORD value named "MaxUserPort":
    - right-click it and select Modify
    - select the "Decimal" radio button and type in 65534.
    5. A reboot of the server is necessary.
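    Equivalently, the two values can be captured in a .reg file and imported with regedit (a sketch of the same settings; DWORD data is written in hexadecimal, so 0x1e = 30 and 0xfffe = 65534):
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\TCPIP\Parameters]
    "TcpTimedWaitDelay"=dword:0000001e
    "MaxUserPort"=dword:0000fffe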

  • How to resolve "getPooledConn: No more connections in the pool for Host"

    I am using the WL 9.1 proxy plugin in a SunONE Web Server 6.1 (Solaris), and I regularly get this error:
    getPooledConn: No more connections in the pool for Host
    I found several postings with this error, but no replies on how to solve it.
    In the proxy log, I see this info:
    ================New Request: [wls-app/page.do] =================
    Tue Nov 13 13:05:30 2007 <18781194955530286> CookieName is deprecated and replaced by WLCookieName
    Tue Nov 13 13:05:30 2007 <18781194955530286> Uri as read from rq (request) data structure /wls-app/page.do
    Tue Nov 13 13:05:30 2007 <18781194955530286> Uri after pathTrim /wls-app/page.do
    Tue Nov 13 13:05:30 2007 <18781194955530286> Uri resolved to /wls-app/page.do?page=messages
    Tue Nov 13 13:05:30 2007 <18781194955530286> resolveRequest return code is [0]
    Tue Nov 13 13:05:30 2007 <18781194955530286> URI=[wls-app/page.do?page=messages]
    Tue Nov 13 13:05:30 2007 <18781194955530286> INFO: SSL is not configured
    Tue Nov 13 13:05:30 2007 <18781194955530286> Found cookie from cookie header: wlsappCookie=H5TccKpNWGqfnvv2wG1znjmJkqNhMyhct0h93HDgfGnc7phpkdxW!-1488879380!864729474
    Tue Nov 13 13:05:30 2007 <18781194955530286> Parsing cookie wlsappCookie=H5TccKpNWGqfnvv2wG1znjmJkqNhMyhct0h93HDgfGnc7phpkdxW!-1488879380!864729474
    Tue Nov 13 13:05:30 2007 <18781194955530286> getpreferredServersFromCookie: [-1488879380!864729474]
    Tue Nov 13 13:05:30 2007 <18781194955530286> primaryJVMID: [-1488879380]
    secondaryJVMID: [864729474]
    Tue Nov 13 13:05:30 2007 <18781194955530286> No of JVMIDs found in cookie: 2
    Tue Nov 13 13:05:30 2007 <18781194955530286> Trying to locate Primary or Secondary using SrvrInfo with JVMID: -1488879380
    Tue Nov 13 13:05:30 2007 <18781194955530286> getPreferredFromCookie: Found Primary 10.0.0.102:8514:0
    Tue Nov 13 13:05:30 2007 <18781194955530286> Trying to locate Primary or Secondary using SrvrInfo with JVMID: 864729474
    Tue Nov 13 13:05:30 2007 <18781194955530286> getPreferredFromCookie: Found Secondary 10.0.0.101:8514:0
    Tue Nov 13 13:05:30 2007 <18781194955530286> getPreferredFromCookie: Found 2 servers
    Tue Nov 13 13:05:30 2007 <18781194955530286> attempt #0 out of a max of 5
    Tue Nov 13 13:05:30 2007 <18781194955530286> trying connect to PRIMARY '10.0.0.102'/8514/0
    Tue Nov 13 13:05:30 2007 <18781194955530286> getPooledConn: No more connections in the pool for Host[10.0.0.102] Port[8514] SecurePort[0]
    Tue Nov 13 13:05:30 2007 <18781194955530286> INFO: New NON-SSL URL
    Tue Nov 13 13:05:30 2007 <18781194955530286> Connect returns -1, and error no set to 150, msg 'Operation now in progress'
    Tue Nov 13 13:05:30 2007 <18781194955530286> EINPROGRESS in connect() - selecting
    Tue Nov 13 13:05:30 2007 <18781194955530286> Local Port of the socket is 64242
    Tue Nov 13 13:05:30 2007 <18781194955530286> Remote Host 10.0.0.102 Remote Port 8514
    Tue Nov 13 13:05:30 2007 <18781194955530286> created a new connection to preferred server '10.0.0.102/8514' for '/wls-app/page.do?page=messages', Local port: 64242
    Tue Nov 13 13:05:30 2007 <18781194955530286> WLS info : 10.0.0.102:8514 recycled? 0
    Tue Nov 13 13:05:30 2007 <18781194955530286> Adding header for WLS 'WL-Proxy-Client-Cert: ###
    ---removed client cert info---
    Tue Nov 13 13:10:30 2007 <18781194955530286> *******Exception type [READ_TIMEOUT] (no read after 300 seconds) raised at line 205 of Reader.cpp
    Tue Nov 13 13:10:30 2007 <18781194955530286> caught exception in readStatus: READ_TIMEOUT [os error=0,  line 205 of Reader.cpp]: no read after 300 seconds at line 822
    Tue Nov 13 13:10:30 2007 <18781194955530286> PROTOCOL_ERROR: Backend Server not responding - isRecycled:0
    Tue Nov 13 13:10:30 2007 <18781194955530286> *******Exception type [PROTOCOL_ERROR] (Backend Server not responding) raised at line 842 of URL.cpp
    Tue Nov 13 13:10:30 2007 <18781194955530286> got PROTOCOL_ERROR exception in sendRequest phase at line 1364; Msg: PROTOCOL_ERROR [line 842 of URL.cpp]: Backend Server not responding
    Tue Nov 13 13:10:30 2007 <18781194955530286> request [wls-app/page.do?page=messages] did NOT process successfully..................
    Does anyone know how to resolve this issue?
    Thanks,
    Cappaert Luc

    We are seeing a similar connection pool error captured in the WL proxy log during load testing. Is there an answer to the question of how to increase this pool size?
    Fri Jan 16 14:59:02 2009 <535212321359422334> Trying a pooled connection for '191.228.175.226/7003/0'
    Fri Jan 16 14:59:02 2009 <535212321359422334> getPooledConn: No more connections in the pool for Host[191.228.175.226] Port[7003] SecurePort[0]
    Fri Jan 16 14:59:02 2009 <535212321359422334> general list: trying connect to '191.228.175.226'/7003/0 at line 1319 for '/SIT-cccpol/PTGadget/SetCookies.jsp'
    Fri Jan 16 14:59:02 2009 <535212321359422334> INFO: New NON-SSL URL
    Fri Jan 16 14:59:02 2009 <535212321359422334> Connect returns -1, and error no set to 10035, msg 'Unknown error'
    Fri Jan 16 14:59:02 2009 <535212321359422334> EINPROGRESS in connect() - selecting
    Fri Jan 16 14:59:02 2009 <535212321359422334> Local Port of the socket is 2097
    Fri Jan 16 14:59:02 2009 <535212321359422334> Remote Host 191.228.175.226 Remote Port 7003

  • I created a website with iWeb but use GoDaddy for hosting it rather than MobileMe. The images on my Gallery page do not show at all on the external domain, but they DO show when viewed on MobileMe. Has anyone encountered this problem before? Many thanks!

    Hello all!
    I created a website with iWeb but use GoDaddy for hosting it rather than MobileMe. The images on my Gallery page do not show at all on the external domain, but they DO show when viewed on MobileMe. Has anyone encountered this problem before? Many thanks!

    Just create a new page (or use the existing photo page) on your external site and use HTML to add an iframe sized to the page, linking it to the MobileMe gallery page. It works just fine for me when showing my gallery from a Yahoo site.
    Like this:
    <iframe allowtransparency="true" frameborder="0" scrolling="yes" style="width:100%;height:100%;border:none" src="http://gallery.me.com/your_account_name"></iframe>

  • Release notes for iPlanet-WebServer-Enterprise/4.1SP7

    Hi,
    Where can I get the release notes, known bugs and fixes for iPlanet-WebServer-Enterprise/4.1SP7?
    Please reply back.
    Thanks
    Shiva

    Hi,
    You can find the release notes for iPlanet Web Server 4.1 SP7 at the following link:
    http://docs.sun.com/?p=/doc/816-5676-10
    Srini

  • Are Wildcards possible for host name in "no proxy for" setting?

    Our LAN requires a proxy for internet access. I need to bypass the proxy for certain host names, as the proxy blocks many ports.
    Things work if I enter the entire host name. But there are many, so I wish to use wildcards in the host name.
    That does not seem to work.

    There is no solution for host names with wildcards as far as I know; that is why I showed how to do this with the IP address.
    Does it work if you specify only the parent domain when you want to include all hosts with the same suffix?
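    For example (an assumption about the browser's "No proxy for" syntax, not something stated in this thread), an entry such as
    .example.com, 192.168.1.0/24
    should bypass the proxy for every host under example.com and for that whole subnet, without needing a * wildcard.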

  • Dreamweaver upload fails after changing hosting webserver

    Hello,
    I have run into this problem several times over a long period, and deleting the "Configuration" folder does not change anything. Every version of Dreamweaver shows the same problem.
    I do a lot of website development with Dreamweaver CC on Windows 7. I recently changed my hosting webserver. All the server configuration is the same as on the previous one.
    Unfortunately, when I want to upload my files to my website's server using Dreamweaver's FTP functionality, I now get a message saying the upload can't start because of an internal data error.
    I can browse the remote server in Dreamweaver, so the connection works fine. And I can see fresh files that I have uploaded with another FTP client.
    Maybe some other cache location somewhere else is in conflict.
    I have always had this problem with Dreamweaver when changing a website's host.
    Any solution? Thanks.

    Have you tried toggling the Passive and Optimization settings on and off?
    What does your new hosting provider say?

  • Re: Failover for SO's with context

    Right, delivery of events is not guaranteed by Forte, even though it is reasonable to rely on it in the case of two Forte servers on a LAN. I would not go towards a solution for securing event delivery through an acknowledgement mechanism (ack event or shared object notifier), because of the increased complexity and performance overhead.
    On the other hand, a second, simple level of safety can be provided by enabling your mirror/backup SO to be refreshed at will, by letting it get a snapshot of the current transient data to be mirrored, so you can:
    - Start your partitions in any order (the mirror partition will first take a snapshot of the transient data, then register for mirror events)
    - Start and stop the mirror partition at will, without disrupting the application
    Then, if you do not trust event delivery, you can reinitialize your mirror periodically (say every 12 hours) to minimize the risk of losing transient data events.
    Again, this solution is suited to low volumes of transient data.
    I guess what Chad means by journaling is writing to a log file any event (in a broad sense) happening on the data from its initial value. Then, if you need to restore state, you replay the events from the initial value. This is a common solution in the banking area, where you need to back up not only values but also the events on the values. I do not know how this can be applied as a generic mechanism with Forte, but it may be a good way to explore, although probably more complex to implement with Forte than the Backup SO / Events pattern.
    Hope this helps,
    Vincent Figari
    On Fri, 13 Feb 1998 10:39:03 -0600 Chad Stansbury <[email protected]> writes:
    Actually, since events (let alone distributed events) are not 'guaranteed delivery' in Forte, I would hesitate to use events as a mechanism for mirroring your data - unless, of course, you really don't require an industrial-strength failover strategy. This would also apply to asynchronous messaging, unless you are careful to register for exception events (which, again, aren't guaranteed delivery) and have a mechanism to handle said asynchronous exception events. I also know that Forte will retry certain tasks when the service object they are sent to fails completely (like a NIL object exception), but I don't know enough about the internal workings of Forte to know under which conditions this will occur.
    I think that the most common method for truly industrial-strength, guaranteed-delivery mechanisms is journaling... which I know very little about, but it is something that you should be able to look up and study if that's what you require.
    Again, if you don't care about the (admittedly small) chance of an asynchronous call failing, then the suggestions that Vincent has already made are good ones.
    From: [email protected]
    To: [email protected]
    Cc: [email protected]
    Sent: 2/13/98 9:13:17 AM
    Subject: Re: Failover for SO's with context
    Steven,
    The pattern choice between an external resource and a backup SO depends on the type of transient data you want to back up. The external resource is probably better suited to high volumes of data. We have implemented the 'Backup SO' pattern because our transient data volumes are rather low (which I guess must be the most common case for global, transient data).
    Whichever choice you make:
    - Be sure to enforce encapsulation for updating the transient data, in order to guarantee that any modification to your transient data is duplicated on the backup SO or the external resource
    - Regarding performance, the CPU cost is fairly low for your 'regular' application if you take care to:
    * use asynchronous tasks to update the external resource, or
    * use events to notify the backup SO
    Now it is true that you will have a network overhead when using events, as your backup SO should be isolated in a remote partition on a remote server. That is one good argument for selecting the Backup SO pattern for low volumes of transient data.
    If you choose the 'Backup SO' pattern, you will also have to be careful not to send any distributed reference to your Backup SO, but only clones.
    Anyway, the Backup SO pattern works fairly well for low volumes of data, but it requires a lot of testing and a good understanding of events and communication across partitions.
    Hope this helps,
    Vincent Figari
    On Fri, 13 Feb 1998 09:24:57 +0100 Steven Arijs <[email protected]> writes:
    We're going to implement a failover scenario for our application. Unfortunately, we also have to replicate the state of our failed service objects.
    I've browsed the Forte site and found a TechNote concerning this (TechNote 11074). In this TechNote they talk about a service object that is responsible for updating all backup service objects when needed.
    It seems to me that when I implement it that way, I will be creating a lot of overhead, i.e. I will be doing a lot of stuff several times. What will be the effect on my performance?
    The way with the least performance loss would be to use an external resource that is updated. But what if this external resource also fails?
    Is there anyone who has already implemented a failover scenario for service objects with state?
    Any help would be appreciated.
    Steven Arijs
    ([email protected])

  • Database link failover on RAC

    Dear Friends.
    Could you please provide me with information about implementing database link failover in RAC (Oracle 10g RAC on Linux)?
    I have created DB links across two RAC environments. Each RAC setup contains 2 nodes.
    The DB link goes from the 1st node of the source RAC system to the 1st node of the target RAC system.
    If the 1st node of the target RAC system is down, I need to set it up in such a way that the link fails over to node 2 of the target system.
    I have tried all possible options of TAF, but I did not succeed. Has anybody implemented this type of setup?
    How do I set up tnsnames.ora on the source DB to get this type of failover?
    Thanks in Advance.
    Best Regards
    Kanumuri Raju

    Oracle was kind enough to provide some configuration details in their docco. You may want to review this link:
    http://download-east.oracle.com/docs/cd/B19306_01/network.102/b14212/advcfg.htm#sthref1275
    The configuration needs to be performed in the TNSNAMES.ORA associated with the database initiating the link. If you want bi-directional TAF, you would need to update the TNSNAMES.ORA for 'both' databases.
    I suggest you don't get your hopes up too high about the capability of TAF across DBLinks. I'm pretty sure you will not be able to get SELECT-based TAF. And I'm not absolutely sure which session rules will be used to determine the failover time.
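    For reference, a minimal tnsnames.ora sketch for the entry used by the source database when it opens the link (host names, port, service name and credentials below are placeholders; SESSION failover is used since, as noted above, SELECT-based TAF is unlikely to work across a database link):
    TARGET_RAC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (FAILOVER = ON)
          (LOAD_BALANCE = OFF)
          (ADDRESS = (PROTOCOL = TCP)(HOST = target-node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = target-node2-vip)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = targetdb)
          (FAILOVER_MODE = (TYPE = SESSION)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
        )
      )
    The database link on the source side would then reference this alias, e.g. CREATE DATABASE LINK target_link CONNECT TO scott IDENTIFIED BY tiger USING 'TARGET_RAC';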

  • How to plan Failover for the following Scenarios in Flex-connect mode.

    The following queries are with respect to AP high availability (not SSO failover or controller HA), meaning that if one controller fails, the AP will fail over to the secondary controller, which is in a different geographic location. The AP will be in FlexConnect mode with local switching and local auth. In this scenario, my queries are the following:
    1: If I have an SSID that has an interface group linked to it, can I fail it over to the other controller, where there may be a single WLAN linked to it?
    2: Do we need the subnet masks to be the same at both ends?
    3: If I have an SSID with open authentication, can I configure the remote network SSID with no authentication?
    4: Can anyone link me to a document that explains a configuration case study of FlexConnect mode failover scenarios?
    All the help given would be really appreciated.
    Thanks.

    Hi Scott,
    Sorry for replying late, and thanks for your reply and suggestion.
    It did help me a lot, but now I am in a bind.
    The thing is, my client has the following existing scenario:
    He has 6 separate locations with a standalone 5508 WLC at each location.
    He is now planning to configure AP failover for every location.
    We are using the FlexConnect design as he has not procured an HA-SSO license, and the WLCs are not in the same location.
    The FlexConnect design is with local switching and local auth.
    There are 2 SSIDs which are causing me issues:
    1: SSID A is linked to an interface group which has multiple VLANs.
    2: SSID B shares its WLAN interface with another SSID (the WLAN is split between 2 different SSIDs).
    We need local switching for these, and they also need to have local auth.
    So if I remove the interface group for SSID A and use a bigger subnet, what would be the best possible mask to use, considering that ARP and DHCP broadcasts shouldn't choke up the network (existing subnets are /21 and /22)? Or is there any workaround to minimise the network activity?
    And for SSID B, what configuration would I need to do on the secondary controller, or is it just that the SSID needs to be present on the controller and the mask need not be the same?
    Sorry for troubling you, and thanks in advance.
    Niiketan Sutar.

  • SQL server 2012 with SP2 for hosting VMM 2012 R2 DB server

    Hi,
    I would like to implement System Center 2012 R2 Operations Manager, VMM and Configuration Manager.
    I have proposed to my customer two MS SQL Server 2012 boxes, one for the SCOM/VMM DB server and another one for SCCM (the SCCM DB and the SCVMM DB cannot be on the same computer).
    But I read on the Microsoft web site that only SQL Server 2012 SP1 is supported for the VMM DB server, while SQL Server 2012 SP2 is supported for the SCOM DB server.
    So my question is: can I deploy one SQL Server 2012 SP2 box to host the VMM 2012 R2 DB server? If not, when will SQL Server 2012 SP2 be supported for the VMM 2012 R2 DB server?
    Regards.
    BrahimH.
    BrahimH

    Hi,
    As per the link, this seems to me to be a known issue. I cannot say with 100% certainty, because I have seen/faced this issue with SP1 and you mentioned SP2. I would always suggest installing RTM only; there is an option to unselect SP2 during installation.
    Thank you for reporting this; I guess Microsoft will take it as feedback.
    Can you share the setup log files, please, just for analysis?
    Please mark this reply as an answer if it solved your issue, or vote it as helpful if it helped, so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Java failover for DB2 ZOS

    The customers who participate in the SAP/IBM Customer Technical Exchange entered a development request last year asking for Java Engine failover for DB2/zOS, similar to the functionality that has been in place for several years for the ABAP engine. (Development request # 0020079747 0000172038 2006)
    With the next Technical Exchange meeting coming up in September, I thought this forum might be a good way to have other customers respond to this thread and help prioritize the issue with SAP and IBM.
    Deere is very interested in getting a solution for this issue soon, or at least getting an estimate of when one may be available when we meet again in September.
    Regards,
    Carol Wirth
    SAP Basis Team
    John Deere

    Sony Europe is also very anxious to have this issue resolved. It is not acceptable to have no means of controlled DB2 failover for JAVA threads - canceling the DB2 instance is not a good option.
    Gill Hanlon
    Technical Consultant
    Sony Europe

  • Linksys Easy Link Advisor for WRT310N Wireless Router after upgrading to Windows 7 (64bit)

    I can't reinstall Linksys Easy Link Advisor for the WRT310N Wireless Router after upgrading to Windows 7 (64-bit). When I went to the Linksys site looking for new drivers, it asked me for the hardware version number. This model doesn't show one. Any thoughts?
    Thanks!

    Sabertooth is right. If you are able to go online from your Router then you don't need to install LELA software on your Computer.

  • Host agent check errors for host

    Dear All,
    We have recently installed a PI 7.4 system. While configuring it in SolMan 7.1, I am facing the issue "Host agent check errors for host" in
    Managed System Configuration -> Assign Diagnostics Agents.
    Kindly help me out.
    Regards
    Jay

    Hi Jay,
    Please confirm the ownership and permissions of the directory /usr/sap/hostctrl. For reference, please follow the SCN link "Problem with connecting SAP Host Agent".
    After that, retry the phase.
    Regards,
    Gaurav
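    A quick way to inspect the directory from the OS (just a sketch; the exact expected owner and permissions depend on your installation, so compare against the SCN note above or against a working host, where the host agent files are typically owned by sapadm/root with group sapsys):
    # list ownership and permissions of the host agent directory and executables
    ls -ld /usr/sap/hostctrl /usr/sap/hostctrl/exe
    ls -l /usr/sap/hostctrl/exe/saphostexec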
