Edge server high availability and next hop to FE servers

Hi All,
 As per the article https://technet.microsoft.com/en-us/library/gg412847.aspx: "Each Edge Server is a multihomed computer
with external and internal facing interfaces. The adapter Domain Name System (DNS) settings depend on whether there are DNS servers in the perimeter network. If
no DNS servers exist in the perimeter, the Edge Server(s) use external DNS servers to resolve Internet name lookups, and each Edge Server uses a HOSTS file to resolve the next hop server names to IP addresses."
If I have 3 FE servers with DNS load balancing, pool.contoso.com would be associated with 192.168.0.1,
192.168.0.2, and 192.168.0.3.
How do I create the HOSTS record for the Front End pool on the Edge Server? I mean, which of the 3 IPs should I use, since a HOSTS file record maps a host name to only one IP?
If I create pool.contoso.com 192.168.0.1 and that server is unavailable, then the whole purpose of Edge and FE HA is defeated!

The same doc says: "Edit the HOST file on each Edge Server to contain a record for the next hop server or virtual IP (VIP) (the record will be the Director, Standard Edition server, or a Front End pool that was configured as the Edge Server next hop
address in Topology Builder). If you are using DNS load balancing, include a line for each member of the next hop pool."
But you're right, it was my understanding that only the first line of the hosts file was used.  I'd have to presume the application is somewhat intelligent about this or it's incorrect guidance.  I've never tested it.
You could use internal DNS instead, which resolves the issue, but if your DMZ were ever penetrated, someone could potentially use DNS to help map out your network. Otherwise, you'd want to add a DNS server in your DMZ, or use a hardware load balancer (HLB) just for this, which I wouldn't
be in love with.
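Taking the quoted guidance literally, each Edge Server's HOSTS file carries one line per pool member. A sketch using the addresses from the question (whether the Edge services actually fail over across duplicate HOSTS entries is exactly the open point above, so test it in your environment):

# %SystemRoot%\System32\drivers\etc\hosts on each Edge Server
192.168.0.1    pool.contoso.com
192.168.0.2    pool.contoso.com
192.168.0.3    pool.contoso.com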
Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question please click "Mark As Answer".
SWC Unified Communications
This forum post is based upon my personal experience and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Similar Messages

  • Exchange Server 2013: Deploying High Availability and Site Resilience

    Dear All,
    I'm planning to deploy High Availability and Site Resilience.
    I have two data centers, and I have one Exchange server (multi-role) in each site.
    I want to deploy a Database Availability Group.
    Is it possible? Any ideas?
    In addition, all clients connect to their email in their own site. Does this have any effect on Outlook users?
    KH
    [email protected]

    Hi MAS,
    Currently, I have only one mailbox server and one database for each site.
    + Site1: I have DB1, and all users in Site1 access their own site (MBX1), subnet 192.168.1.0/24
    + Site2: I have DB2, and all users in Site2 access their own site (MBX2), subnet 192.168.2.0/24
    But incoming and outgoing external email goes through Site1.
    In planning,
    I want to implement a DAG to provide HA at the database level as below:
    Is it possible to do that? Does it have any effect on current Outlook users?
    BR,
    KH
    [email protected]
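    For reference, a minimal Exchange Management Shell sketch of the kind of two-member DAG being described (MBX1, MBX2, DB1 and DB2 follow the names above; the witness server FS1 and directory are hypothetical):

    # Create the DAG with a file share witness (hypothetical witness server FS1)
    New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1
    # Add both multi-role servers, one per site
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
    Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
    # Add a copy of each site's database to the opposite site's server
    Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2 -ActivationPreference 2
    Add-MailboxDatabaseCopy -Identity DB2 -MailboxServer MBX1 -ActivationPreference 2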

  • SQL Server 2005 High Availability and Disaster Recovery options

    Hi, we are working on a High Availability & Disaster Recovery planning solution for an application database that is on SQL Server 2005. What different options do we have to implement this for SQL Server 2005, and after we have everything set up, how
    do we test that failover is working?
    Thanks in advance.
    Ione

    DR: Disaster recovery is the best option for the business to minimize data loss and downtime. SQL Server has a number of native options, but everything depends on your recovery time objective (RTO) and recovery point objective (RPO).
    1. Data center disaster: geo-clustering
    2. Server (host)/drive (except shared drive) disaster: clustering
    3. Database/drive disaster: database mirroring, log shipping, replication
    Log shipping
    Log shipping automates taking full database and transaction log backups on a production server and then automatically restoring them onto a secondary (standby) server.
    Log shipping works with either the FULL or BULK_LOGGED recovery model.
    You can also configure log shipping within a single SQL Server instance.
    The standby database can be either restoring or read-only (standby).
    A manual failover is required to bring the database online.
    Some data can be lost (for example, up to a 15-minute log backup interval).
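    A minimal T-SQL sketch of what the log shipping jobs automate (the database name and share path are hypothetical; the actual feature wires this up through SQL Agent jobs):

    -- Log shipping requires the FULL or BULK_LOGGED recovery model
    ALTER DATABASE AppDB SET RECOVERY FULL;
    -- On the primary: back up the transaction log to a share the standby can read
    BACKUP LOG AppDB TO DISK = '\\standby\logship\AppDB_0001.trn';
    -- On the standby: restore WITH NORECOVERY (or WITH STANDBY for read-only access)
    RESTORE LOG AppDB FROM DISK = '\\standby\logship\AppDB_0001.trn' WITH NORECOVERY;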
    Peer-to-Peer Transactional Replication
    Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any server that hosts one of the databases is unavailable, you can modify the application
    to route traffic to the remaining servers. The remaining servers contain identical copies of the data.
    Clustering
    Clustering is a combination of two or more servers; it automatically allows one physical server to take over the tasks of another physical server that has failed. It is not a full disaster recovery solution, because if the shared storage is unavailable we cannot
    bring the database online.
    Clustering is the best option when you need minimal downtime (around 5 minutes) and minimal data loss in case of a server or data center (geo) failure.
    Clustering needs extra hardware/servers and is more expensive.
    Database mirroring
    Database mirroring was introduced in SQL Server 2005. It maintains an exact copy of a database on a different server, has an automatic failover option, and mainly helps increase database availability.
    Database mirroring only works with the FULL recovery model.
    It needs two instances.
    The mirror database is always in a restoring state.
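    A minimal sketch of the partner setup in T-SQL (the server names, and the assumption that mirroring endpoints already exist on each instance, are mine):

    -- On the mirror instance (database restored WITH NORECOVERY beforehand)
    ALTER DATABASE AppDB SET PARTNER = 'TCP://principal.contoso.local:5022';
    -- On the principal instance
    ALTER DATABASE AppDB SET PARTNER = 'TCP://mirror.contoso.local:5022';
    -- Optional witness instance to enable automatic failover
    ALTER DATABASE AppDB SET WITNESS = 'TCP://witness.contoso.local:5022';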
    http://msdn.microsoft.com/en-us/library/ms151196%28v=sql.90%29.aspx
    http://blogs.technet.com/b/wbaer/archive/2008/04/19/high-availability-and-disaster-recovery-with-microsoft-sql-server-2005-database-mirroring-and-microsoft-sql-server-2005-log-shipping-for-microsoft-sharepoint-products-and-technologies.aspx
    http://www.slideshare.net/rajib_kundu/disaster-recovery-in-sql-server
    HADR Considerations
    You need to understand the business motivations and regulatory requirements that are driving the customer's HA/DR requirements, and how the customer categorizes the workload from an HA/DR perspective. There is likely to be an alignment between the needs
    and the categorization.
    Check for both the recovery time objective (RTO) and the recovery point objective (RPO) for different workload categories, for both a failure within a data center (local high availability) and a total data center failure (disaster recovery). While RPO and
    RTO vary for different workloads because of business, cost, or technological considerations, customers may prefer a single technical solution for ease in operations. However, a single technical solution may require trade-offs that need to be discussed with
    customers so that their expectations are set appropriately.
    Check and understand whether there is an organizational preference for a particular HA/DR technology. Customers may have a preference because of previous experiences, established operational procedures, or simply the desire for uniformity across databases from
    different vendors. Understand the motives behind a preference: a customer's preference for HA/DR may not be because of the functions and features of the HA/DR technology. For example, a customer may decide to adopt a third-party solution for DR to maintain
    a single operational procedure. For this reason, using HA/DR technology provided by a SAN vendor (such as EMC SRDF) is a popular approach.
    To design and adopt an HA/DR solution it is also important to understand the implications of applying maintenance to both hardware and software (including Windows security patching). Database mirroring is often adopted to minimize the service disruption
    to achieve this objective.
    HADR Options :
    Failover clustering for HA and database mirroring for DR.
    Synchronous database mirroring for HA/DR and log shipping for additional DR.
    Geo-cluster for HA/DR and log shipping for additional DR.
    Failover clustering for HA and storage area network (SAN)-based replication for DR.
    Peer-to-peer replication for HA and DR (and reporting).
    Backup & Restore (DR)
    Keep your server database backups in a network location (DR).
    Always keep your SQL Server 2005 up to date; if you are no longer getting official support from Microsoft, you will have to take care of any critical issues yourself.
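    A small sketch of backing up to a network location for DR (the database name and share are hypothetical):

    -- Full backup to a network share, with checksum validation
    BACKUP DATABASE AppDB
    TO DISK = '\\drserver\backups\AppDB_full.bak'
    WITH CHECKSUM, INIT;
    -- Confirm the backup file is readable/restorable
    RESTORE VERIFYONLY FROM DISK = '\\drserver\backups\AppDB_full.bak';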
    Raju Rasagounder Sr MSSQL DBA

  • How to configure high availability and disaster recovery? And user authenticate

    We are in the process of rolling out our online help, which was created using RoboHelp. In our initial rollout we will provide access to the files via our Client Portal, which requires authentication. We are also planning for our next version, where we intend to implement RoboHelp Server functionality.
    Our IT team is looking at options for how to configure for High Availability and Disaster Recovery. It seems that RoboHelp doesn't have any built-in functionality in this area. In addition, we require that our users authenticate. The options for the server version seem to be more internally focused, and we would need to solve authentication using a third party.
    Would anyone be willing to share their approach in these areas?  Would you be willing to participate in a conference call with our IT Professionals?

    Hello again
    I see my good friend Peter replied to your LInkedIn post where you cross-posted the same question. For those here that have no clue what Peter stated, here it is:
    What are you seeking to recover? Your projects? Your outputs? This sounds like a question more appropriate to Disaster Recovery consultants and far wider reaching than RoboHelp. To me it seems like a question your IT people should be asking direct to such consultants who would expect a fee for their advice.
    I would agree with Peter's reply.
    I'll also go further and ask what exactly is being done in this realm for the application? Help files are generally there to support an application on the server. So whatever you are doing for the application should also be usable for the WebHelp, FlashHelp or web-based AIR Help files, no?
    Cheers... Rick
    Helpful and Handy Links
    RoboHelp Wish Form/Bug Reporting Form
    Begin learning RoboHelp HTML 7, 8 or 9 within the day!
    Adobe Certified RoboHelp HTML Training
    SorcerStone Blog
    RoboHelp eBooks

  • The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error

    We are getting the below error in the crawl logs:
    "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error."
    This is happening in FAST Search.
    Below are some of the logs related to this search crawl.
    Could anyone please help with this?
    web application 'http://xvy/' doesn't use search application 'FAST Query SSA', skipping it.
    ABC\sp_search' on web application 'http://xvy/'. 2d7dba01-3d2e-4903-b59f-9a8601627bcd
    07/30/2014 01:30:46.65  OWSTIMER.EXE (0x28DC)                    0x1BC0 SharePoint Server Search       Administration               
     dl2m Verbose  Search application 'Search Service Application 1': Skipping web application '48ed7882-9f70-424e-bf72-e3c9f5340b97' because its outbound url 'http://ebc:30347' was automatically added once.
    Ensure full read access to the indexing account 'ABC\sp_search' on web application 'http://nvcp/'. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dl2m Verbose  Search application 'FAST Query SSA': Skipping web application '57718ea1-8cb5-4adc-abd2-9e55415e5791' because its outbound url 'http://nvcp' was automatically added once. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dl2n Verbose  Search application 'FAST Query SSA': Adding start address 'http://nvcp' for web application '57718ea1-8cb5-4adc-abd2-9e55415e5791' to list of valid start addresses. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dmb6 Verbose  Ensure full read access to the indexing account 'ABc\sp_search' on web application 'http://nvcp'ext/'. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dl2m Verbose  Search application 'FAST Query SSA': Skipping web application '64d562a1-535e-4917-8979-88840e2a67fe' because its outbound url 'http://nvcp'ext' was automatically added once. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dl2n Verbose  Search application 'FAST Query SSA': Adding start address 'http://nvcpext' for web application '64d562a1-535e-4917-8979-88840e2a67fe' to list of valid start addresses. 85041609-d618-4132-ac8e-195a910d99a0
    07/30/2014 01:31:46.53  OWSTIMER.EXE (0x28DC)                    0x05F4 SharePoint Server Search       Administration               
     dmb6 Verbose  Ensure full read access to the indexing account 'ABC\sp_search' on web application 'http://nvcpnew/'. 85041609-d618-4132-ac8e-195a910d99a0
    executing SQL query {? = call dbo.proc_MSS_PropagationIndexerGetReadyQueryComponents}  [propdatabase.cxx:70]  d:\office\source\search\native\ytrip\tripoli\propagation\propdatabase.cxx 
    07/30/2014 01:32:04.31  mssearch.exe (0x0588)                    0x1DE4 SharePoint Server Search       Propagation Manager          
     e3o3 Verbose  executing SQL query {? = call dbo.proc_MSS_PropagationIndexerGetReadyQueryComponents}  [propdatabase.cxx:70]  d:\office\source\search\native\ytrip\tripoli\propagation\propdatabase.cxx 
    07/30/2014 01:32:04.68  mssdmn.exe (0x15CC)                      0x1060 SharePoint Server Search       HTTP Protocol
    Handler          du4i                     0x29E4 SharePoint Server Search       HTTP
    Protocol Handler          du4i Verbose  CHttpAccessorHelper::InitRequestInternal - opening request for '/robots.txt'.   [httpacchelper.cxx:353]  d:\office\source\search\native\gather\protocols\http\httpacchelper.cxx 
    07/30/2014 01:32:04.70  mssdmn.exe (0x15CC)                      0x29E4 SharePoint Server Search       HTTP Protocol
    Handler          du54 High     CHttpAccessorHelper::InitRequestInternal - unexpected status (503) on request for 'http://ppecpnew/robots.txt' Authentication 0.  [httpacchelper.cxx:703] 
    d:\office\source\search\native\gather\protocols\http\httpacchelper.cxx 
    07/30/2014 01:32:04.70  mssearch.exe (0x0588)                    0x130C SharePoint Server Search       Gatherer                     
     cd11 Warning  The start address http://nvcp'/sites/quipme cannot be crawled.  Context: Application 'FAST_Content_SSA', Catalog 'Portal_Content'  Details: 
    The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error.   (0x80041200) 
    07/30/2014 01:32:04.70  mssdmn.exe (0x15CC)                      0x104C SharePoint Server Search       HTTP Protocol
    Handler          du4i Verbose  CHttpAccessorHelper::InitRequestInternal - opening request for '/robots.txt'.   [httpacchelper.cxx:353]  d:\office\source\search\native\gather\protocols\http\httpacchelper.cxx 
    07/30/2014 01:32:04.70  mssdmn.exe (0x15CC)                      0x104C SharePoint Server Search       HTTP Protocol
    Handler          du54 High  
    07/30/2014 01:32:04.70  mssearch.exe (0x0588)                    0x2948 SharePoint Server Search       Gatherer                     
     cd11 Warning  The start address
    http://nvcp'/sites/MDPPubng cannot be crawled.  Context: Application 'FAST_Content_SSA', Catalog 'Portal_Content'  Details:  The crawler could not communicate with the server. Check that the server is
    available and that the firewall access is configured correctly. If the repository was temporarily unavailable, an incremental crawl will fix this error.   (0x80041200) 
     CHttpProbeHelper::ProbeServer: InitRequest failed for 'http://ppecpnew/_vti_bin/sitedata.asmx'. Return error to caller, hr=80041200  [stscommon.cxx:490]  d:\office\source\search\native\gather\protocols\common\stscommon.cxx 
    07/30/2014 01:32:26.06  mssdmn.exe (0x15CC)                      0x193C SharePoint Server Search       PHSts                        
     dvg0 High     STS3::COWSServer::InitializeClaimsCookie: Probing url 'http://pncvr' failed. Return error to caller, hr=80041200  [sts3util.cxx:1332]  d:\office\source\search\native\gather\protocols\sts3\sts3util.cxx 
    07/30/2014 01:32:26.06  mssdmn.exe (0x15CC)                      0x193C SharePoint Server Search       PHSts                        
     en0e High     CSTS3Accessor::InitURLType: Return error to caller, hr=80041200                 [sts3acc.cxx:2214]  d:\office\source\search\native\gather\protocols\sts3\sts3acc.cxx 
    07/30/2014 01:32:26.06  mssdmn.exe (0x15CC)                      0x193C SharePoint Server Search       PHSts                        
     dv3p High     CSTS3Accessor::GetServer fails, Url sts4://pnvpr/siteurl=sites/product/siteid={7ebfb072-08a8-4df7-8f74-e06730325d9a}/weburl=/webid={bd7ae724-1256-4b26-9633-416447d6bc5c}, hr=80041200  [sts3acc.cxx:185] 
    d:\office\source\search\native\gather\protocols\sts3\sts3acc.cxx 
    07/30/2014 01:32:26.06  mssdmn.exe (0x15CC)                      0x193C SharePoint Server Search       PHSts                        
     dvb1 High     CSTS3Accessor::Init fails, Url sts4:/mngbv/siteurl=sites/product/siteid={7ebfb072-08a8-4df7-8f74-e06730325d9a}/weburl=/webid={bd7ae724-1256-4b26-9633-416447d6bc5c}, hr=80041200  [sts3handler.cxx:312] 
    d:\office\source\search\native\gather\protocols\sts3\sts3handler.cxx 
    07/30/2014 01:32:26.06  mssdmn.exe (0x15CC)                      0x16FC SharePoint Server Search       HTTP Protocol
    Handler          du2z Verbose  CHttpProbeHelper::ProbeServer: Probing server with url 'http://pnvpr/_vti_bin/sitedata.asmx'.  [stscommon.cxx:476]  d:\office\source\search\native\gather\protocols\common\stscommon.cxx 
    07/30/2014 01:32:26.08  mssdmn.exe (0x15CC)                      0x193C SharePoint Server Search       PHSts                        
     dvb2 High     CSTS3Handler::CreateAccessorExD: Return error to caller, hr=80041200            [sts3handler.cxx:330]  d:\office\source\search\native\gather\protocols\sts3\sts3handler.cxx 
    07/30/2014 01:32:26.08  mssdmn.exe (0x15CC)                      0x16FC SharePoint Server Search       HTTP Protocol
    Handler          du4i Verbose  CHttpAccessorHelper::InitRequestInternal - opening request for '/_vti_bin/sitedata.asmx'.  [httpacchelper.cxx:353]  d:\office\source\search\native\gather\protocols\http\httpacchelper.cxx 
    earch application 'FAST Query SSA': Adding start address 'http://mnvfgext' for web application '64d562a1-535e-4917-8979-88840e2a67fe' to list of valid start addresses. a6b7948a-dc16-419d-b58a-0ee798a0bb9c
    07/30/2014 01:32:46.53  OWSTIMER.EXE (0x28DC)                    0x1444 SharePoint Server Search       Administration               
     dmb6 Verbose  Ensure full read access to the indexing account 'ABC\sp_search' on web application 'http://nvpr/'. a6b7948a-dc16-419d-b58a-0ee798a0bb9c
    07/30/2014 01:32:46.53  OWSTIMER.EXE (0x28DC)                    0x1444 SharePoint Server Search       Administration               
     dl2m Verbose  Search application 'FAST Query SSA': Skipping web application 'cea7b67b-fd5f-4c9a-a300-64a7d7ca3093' because its outbound url 'http://pnvpr' was automatically added once. a6b7948a-dc16-419d-b58a-0ee798a0bb9c
    07/30/2014 01:32:46.53  OWSTIMER.EXE (0x28DC)                    0x1444 SharePoint Server Search       Administration               
     dl2n Verbose  Search application 'FAST Query SSA': Adding start address 'http://pnvpr' for web application 'cea7b67b-fd5f-4c9a-a300-64a7d7ca3093' to list of valid start addresses. a6b7948a-dc16-419d-b58a-0ee798a0bb9c
    07/30/2014 01:32:46.53  OWSTIMER.EXE (0x28DC)                    0x1444 SharePoint Server Search       Administration               
     dl2k Verbose  web application 'http://abcrsp/' doesn't use search application 'FAST Query SSA', skipping it. a6b7948a-dc16-419d-b58a-0ee798a0bb9c
    07/30/2014 01:32:46.53  OWSTIMER.EXE (0x28DC)                    0x1444 SharePoint Server Search       Administration               
     dl2k Verbose  web application 'http://excb/' doesn't use search application 'FAST Query SSA', ski
    Anil Loka

    Hi,
    According to your post, my understanding is that you get an error when the crawler communicates with the server.
    This happens when the crawler is not able to connect to the server. Make sure the server name is correct. A couple of steps to troubleshoot it:
    1. You should be able to ping the server from the server hosting the crawl component. Make sure there is an entry for the server in the hosts file under C:\Windows\System32\drivers\etc.
       Ping <servername>
    2. You should be able to connect to the server using the telnet command:
       Telnet <servername> <port number>
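    For example, assuming a hypothetical web front end named nvcp serving HTTP on port 80, the checks from the crawl server would look like:

    ping nvcp
    telnet nvcp 80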
    More information:
    Troubleshooting of FAST Search Configuration
    If the issue still exists, you can delete the old search application and recreate from the beginning.
    You can also reset the index and do a full crawl after.
    Here is a similar thread for your reference:
    http://social.technet.microsoft.com/Forums/en-US/f3c61b53-304a-4c2a-a370-d0e573219d1d/an-unrecognized-http-response-was-received-when-attempting-to-crawl-this-item?forum=sharepointadminprevious
    Best Regards,
    Linda Li
    TechNet Community Support

  • HTTP Server High Availability

    Hello All.
    I have a question regarding OC4J and HTTP server High Availability.
    I want to do something like the Figure 3-1 of the Oracle Application Server High Availability Guide 10.1.2. See this link
    http://download-east.oracle.com/docs/cd/B14099_11/core.1012/b14003/midtierdesc.htm#CIHCEDFC
    What I have now is the following:
    Three hosts
    Two of them run OAS 10.1.2, on which we have already configured the cluster and deployed our applications (we used this tutorial: http://www.oracle.com/technology/obe/obe_as_1012/j2ee/deploy/j2eecluster/farmcluster.htm)
    Let's say these nodes are:
    - host1
    - host2
    The other one is a standalone Oracle WebCache (it will act as the load balancer). We will call this
    - hostwc3
    We have already configured WebCache as the load balancer and it is working just fine. We also configured session replication successfully and it works great with our applications.
    What we do not have clear is the following:
    When a client visits http://hostwc3/application/, the load balancer routes him to, say, http://host1/application/, and the browser's URL no longer shows the virtual server (the WebCache server) but instead shows the actual Apache address (host1) that is serving him. If we kill the ENTIRE host1 (Apache, OC4J, etc.), clients WILL perceive the outage, and if they press F5 they will try to access an Apache that is no longer up and running. The expected behavior is that the browser NEVER shows the actual Apache URL, so that when an Apache goes down the client does not get disconnected (as already happens with an OC4J failure) and always works against the "virtual web server".
    I came up with some ideas, but I would like your advice:
    - In WebCache, do not load balance to the Apaches, but route to OC4J directly (is this possible?)
    - Configure an HTTP Server cluster; this means we would need a "virtual name" for the two Apaches. Is this possible? How?
    - Use Apache's rewrite mode. Is this a good idea?
    - Any other idea how to fix the Apache "single point of failure"?
    According to Figure 3-1 (link above) we can have the HTTP Server in a cluster, but I have no idea how to manage or configure it.
    Thanks in advance any help!

    You cannot point Outlook Anywhere to your DAG cluster IP address. It must be pointed to the actual IP address of either server.
    For no extra cost, DNS round robin is the best you will get, but it does have some drawbacks, as it may hand out the IP address of a server you have taken down for maintenance or one that has an issue.
    You could look to implement a load balancer but again if you are doing this for high availability then you want more than one load balancer in the cluster - otherwise you've just moved your single point of failure.
    Having your existing NAT and just remembering to update it to point to the other server during maintenance may suit your needs for now.
    If you can go into more detail about what the high availability your business is looking to achieve and the budget we can suggest the best method to meet those needs for the price point.
    Have a great day
    Oliver
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • – Enable high availability and redundancy for Cisco WAAS

    How is this possible:
    – enabling high availability and redundancy for Cisco WAAS appliances in data centers?
    Thank you.

    Hi,
    You can serially cluster two WAE devices with the Cisco WAE Inline Network Adapter installed to provide higher availability in the data center if a device fails. If the current optimizing device fails, the inline group shuts down, or the device becomes overloaded, the second WAE device in the cluster provides the optimization services. Deploying WAE devices in a serial inline cluster for scaling or load balancing is not supported.
    More deatils here: Clustering Inline WAEs
    Hope this helps.
    Regards.
    PS: Please mark this as Answered, if this answers your question.

  • Difference btw: Distributed, High-Availability and Additional SAP System Instances

    Hello,
    In the installation doc for NW 7.41, there are three other installation scenarios besides the single instance; can you please highlight the differences?
    Cheers,
    F

    Hello Fouad,
    Distributed landscape:
    In a distributed landscape, every instance can run on a separate host:
    - Central services instance for ABAP (ASCS)
    - Enqueue replication server instance (ERS instance) for the ASCS instance
    - Database instance
    - Primary application server instance (PAS)
    High availability:
    In high availability, all of the above instances can run on separate hosts. Here you run all instances in a switchover cluster infrastructure. That means that if one cluster node fails for any instance, it switches over to another node and operations can continue without any disturbance.
    Additional SAP instance:
    An additional SAP application server instance can be installed to scale the performance of an SAP system. It can be installed on one of the above instance hosts or on a different host.
    The term "additional application server instance" was introduced to replace the term "dialog instance" from SAP NetWeaver 7.1 and higher.
    For more detailed information, you can refer to the installation guide for the SAP release you are planning to install and for your DB/OS combination.
    Hope this helps.
    Regards,
    Archana

  • JCo Server High Availability

    How can I make the JCo server implement "high availability" functionality? The SAP server that makes calls to the JCo server is HA-aware, so if there is a failover, the SAP server switches over to the other instance, but the JCo server keeps sending the message "Server unavailable". Is there a solution for this problem?
    Thanks.

    A single appliance does not necessarily mean a single point of failure; an appliance with hardware redundancy can handle failures and provide high availability, if configured well.
    Does Symantec BrightMail Appliance provide such redundancy configuration?
    You will have to ask their support or in a Symantec Forum.
    Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • Exchange server 2013 CAS server high availability

    Hi
    I have Exchange Server 2010 SP3 (2 MBX, 2 Hub/CAS) servers.
    I am planning to migrate to Exchange Server 2013 (2 CAS servers and 2 MBX servers).
    I don't want all traffic going through a single server, so I am keeping the roles separate.
    In Exchange 2010 I achieved Hub/CAS high availability through NLB.
    How do I achieve this in Exchange 2013?
    Please share your suggestions, with documentation if possible.

    Here ya go:
    http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
    Load balancing
    and
    http://technet.microsoft.com/en-us/office/dn756394
    Even though it says 2010, it applies to 2013 vendors as well.
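    One no-cost option (my assumption here, not something the links above prescribe) is plain DNS round robin for the client access namespace, since Exchange 2013 client access no longer requires session affinity; hypothetical records:

    mail.contoso.com.    300    IN    A    10.0.1.21    ; CAS1
    mail.contoso.com.    300    IN    A    10.0.1.22    ; CAS2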
    Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • Mac OS X Server High Availability

    I'm getting two Xserves and a Vdeck for storage. The servers will have the basic functions of file and network services. I would like to implement a high availability solution; in the Microsoft world, the name is "failover cluster".
    The files and settings will be on the storage. I need the second server to perform the functions automatically if the first fails.
    Can anyone help me?

    There is an IP Failover feature on XServes. Check this article: http://docs.info.apple.com/article.html?path=ServerAdmin/10.5/en/c3fs29.html
    It is for Leopard (10.5) Server, though I think that many things did not change.
    Kostas

  • SAP POS - Xpress Server High availability

    Dear All,
    I have a requirement for installing "high availability" for the SAP POS XPS server (store server), Transnet server, and Transnet client.
    I have gone through the possibilities of having a backup server for the XPS store server, but the backup server supported by SAP has some major restrictions, listed below:
    1. Store closing
    2. Terminal closing
    3. EOD (end of day)
    4. Exchange with receipt
    5. Layaway sale
    Please let me know whether there is a possibility of achieving high availability for a store server that performs the above-listed transactions too.
    I would appreciate it if anybody could reply ASAP.
    Best regards
    Syed

    Hello Syed,
    Having high availability for an Xpress server is not a standard SAP implementation and is quite "overkill". The backup server is designed to allow the store to function and serve its customers until a server can be rebuilt or repaired.
    BTW:
    - Layaways can be done in offline mode and will post to the server when it becomes available
    - Returns can be done on the backup server if the customer implemented Returns Authorization

  • High Availability and mac mini

    Hi All,
    Currently running Mac OS X Server on a Mac mini, I'm looking for some ways to make that Mac mini "server" a high availability system for my office.
    The first thing I will change is to install the operating system on an external FireWire two-disk RAID 1 drive: http://www.lacie.com/us/products/product.htm?pid=10967
    Then I'm thinking about how to make sure the rest of the Mac mini will be running all the time as well.
    That's why I'm wondering if it's possible to double everything: 2 minis with 1 RAID drive each.
    Each complete set would be linked to create something of a failover system: if one set fails, the other one can take over with exactly the same data.
    A bit like what it's been done here :
    http://homepage.mac.com/pauljlucas/personal/macmini/cluster.html but with OS X Server instead of Linux.

    Would this be useful?
    http://daugerresearch.com/pooch/top.shtml
    http://daugerresearch.com/images/pooch1.jpg
    What is Pooch?
    Pooch is software providing the easiest way to assemble and operate a high-performance parallel computer. We encourage you to follow the Mac Cluster Recipe and see for yourself.

  • Sql Server High availability failover trigger

    Hello,
    We are implementing SQL Server 2012 Availability Groups (AGs). Our secondary databases are not accessible, in order to save licenses.
    We have a lot of issues concerning monitoring, backup and SSIS. They all come down to the fact that these tools want basic information from the secondary, which is not accessible. We are implementing SSIS, which is supported on AGs, but the SSISDB is encrypted.
    Backup problem:
    The secondary instance does not know anything about the backups made on the primary instance. After a failover, differential backups fail.
    SSIS problem:
    There is a blog (http://blogs.msdn.com/b/mattm/archive/2012/09/19/ssis-with-alwayson.aspx) that suggests creating a job that checks whether the status has changed from secondary to primary; if so, you can decrypt and re-encrypt. This job has to be executed every minute, which is way too much effort for an event that happens only once in a while. There are a few other problems with this solution: the phrase "use ssisdb" has to be included in a job step, and that job step fails because the secondary is not accessible.
    Monitoring problem:
    We use Microsoft tooling for monitoring: SCOM. SCOM does not recognize a non-readable secondary and tries to log in continuously.
    There are a few solutions that I can think of:
    - a SQL Server built-in failover trigger
    - a special status for the secondary database
    Failover trigger:
    We would like a built-in failover trigger, instead of a time-based job, that starts a few standard maintenance actions only at the time of (or directly after) a failover. Because right now our HA cluster is not really highly available until:
    - SSISDB works and is accessible after failover
    - backup information is synchronised
    - SCOM monitoring skips the secondary database (SCOM produces loads of login failures)
    Does anyone have any suggestions on how to fix this?

    No built-in trigger can achieve your requirement.
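    For what it's worth, the polling workaround from the blog mentioned above boils down to a scheduled T-SQL check along these lines (a rough sketch; the follow-up steps are hypothetical placeholders, not a built-in mechanism):

    -- Run from a SQL Agent job on each replica: act only when this replica has become primary
    IF EXISTS (
        SELECT 1
        FROM sys.dm_hadr_availability_replica_states
        WHERE is_local = 1
          AND role_desc = 'PRIMARY'
    )
    BEGIN
        PRINT 'Now primary: run SSISDB re-encryption and backup-info sync steps here.';
    END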

  • High Availability and load balancing

    Hi,
    I have a Catalyst 6513 with redundant Sup720s and MSFCs. All the servers are connected to this switch and there is no VLAN configuration. Here is the hardware configuration of the box:
    Mod Slot Ports Module-Type Model Sub Stat
    1 1 48 10/100/1000BaseT Ethernet WS-X6148-GE-TX no ok
    2 2 48 10/100BaseTX Ethernet WS-X6148-RJ-45 no ok
    3 3 48 10/100BaseTX Ethernet WS-X6148-RJ-45 no ok
    4 4 48 10/100BaseTX Ethernet WS-X6148-RJ-45 no ok
    5 5 48 10/100BaseTX Ethernet WS-X6148-RJ-45 no ok
    6 6 48 10/100BaseTX Ethernet WS-X6148-RJ-45 no ok
    7 7 2 1000BaseX Supervisor WS-SUP720-BASE yes ok
    15 7 1 Multilayer Switch Feature WS-SUP720 no ok
    8 8 2 1000BaseX Supervisor WS-SUP720-BASE yes
    I want to introduce a new 6513 chassis with the same kind of configuration. Please help me configure these boxes to provide high availability as well as load balancing for the server farm. Do I need to do anything on the servers in terms of hardware/software requirements to achieve this objective?
    Thanks & regards
    Shalabh

    The config is the same for both switches:
    (this will enable a port-channel bundle; you can specify up to 16 ports, and I would recommend 10)
    (config-if#)interface range gigabitethernet 1/1-2
    (config-if#)Description PORT-CHANNEL Interface
    (config-if#)switchport
    (config-if#)channel-group 1 mode on
    (config-if#)switchport trunk encapsulation dot1q
    (config-if#)switchport mode trunk
    (config-if#)speed 1000
    (config-if#)no shutdown
    (Enable SRM SSO)
    router(config)#redundancy
    router(config-red)#mode sso
    router(config-red)#end
    router#show redundancy states
    you should see my state = active
    peer state = standby hot
    Where do your users come into the 6500s? If they are sitting on the 6500s, I would recommend putting them in a separate VLAN.
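    For chassis-level gateway redundancy between the two 6513s (not covered in the reply above, so treat this as an assumption), a first-hop redundancy protocol such as HSRP lets the servers keep a single default gateway IP regardless of which chassis is alive; a minimal sketch with a hypothetical server VLAN and addressing:

    ! Chassis 1 (preferred gateway for the server VLAN)
    interface Vlan10
     ip address 192.168.10.2 255.255.255.0
     standby 10 ip 192.168.10.1
     standby 10 priority 110
     standby 10 preempt
    ! Chassis 2 (standby gateway)
    interface Vlan10
     ip address 192.168.10.3 255.255.255.0
     standby 10 ip 192.168.10.1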
