OAM 11gR2 High Availability

Hi,
We are trying to install Admin Servers in active-passive mode in a single domain to manage a single set of managed servers (for the OAM component). The scenario is that I have an Admin Server + Managed Server on both OAMHOST1 and OAMHOST2. I can start the Admin Server on OAMHOST1 and then manage both managed servers on OAMHOST1 and OAMHOST2. Now, in case my Admin Server on OAMHOST1 goes down, the requirement is that the secondary Admin Server on OAMHOST2 should be started and used to manage the managed servers on OAMHOST1 and OAMHOST2. So I have stopped the Admin Server on OAMHOST1 and then tried to start the Admin Server on OAMHOST2, but it is not starting up. It simply throws an error that port 7001 on OAMHOST1 is in use, but I can clearly see that there is no service running on port 7001 on OAMHOST1, and I also don't understand why it is trying to connect to OAMHOST1:7001 at all. As per the Oracle documentation this is feasible (it is mentioned in their HA guide), but they haven't documented any steps to configure or test it.
Please advise if anyone has an idea how to implement this. Since the OAM administration console is deployed on the Admin Server, HA is crucial in case the Admin Server becomes unavailable.

In the OAM Server Settings you will find a "Load Balancing" section. You have to give the host name a value that can be reached by your users through a load balancer (e.g. a non-protected OHS with mod_wl_ohs configured to route to the login page on both OAM servers).
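As a minimal sketch of that idea (the host names, the /oam location, and the 14100 port below are assumptions for illustration, not values from this thread), the mod_wl_ohs piece on a non-protected OHS could look roughly like this:

LoadModule weblogic_module "${ORACLE_HOME}/ohs/modules/mod_wl_ohs.so"

# Route OAM requests to both OAM managed servers so either one
# can serve the login page if the other is down.
<Location /oam>
    SetHandler weblogic-handler
    WebLogicCluster oamhost1.example.com:14100,oamhost2.example.com:14100
</Location>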
HTH,
--olaf

Similar Messages

  • 11g OAM High Availability + SSL

    Has anyone SUCCEEDED in setting up OVD+OID+OAM 11g in a High Availability environment to include SSL throughout on LINUX with 10g WebGate?

In the OAM Server Settings you will find a "Load Balancing" section. You have to give the host name a value that can be reached by your users through a load balancer (e.g. a non-protected OHS with mod_wl_ohs configured to route to the login page on both OAM servers).
    HTH,
--olaf

  • High Availability for EBS Database 11gR2

    Hello Gurus,
    We have the below environment:
    Oracle Applications - 12.1.3
    Oracle Database - 11.2.0.3
    OS - Oracle Enterprise Linux 5.7
I would like to know how to achieve high availability for the database. We don't want to go with RAC at this stage. What are the other alternatives to achieve high availability on the database side?
    Thanks in advance!
    -Khan

I have come across the note:
How To Install An Oracle Database In An Active/Passive Configuration Without RAC? [ID 734361.1]
Can you please confirm whether this can be used to configure the database for HA?
Also, please confirm whether I need to configure the two servers with any cluster software at the Linux level.

  • Oracle EBS R12.1.3 High Availability with RAC 11g on VMWARE

    Dear All,
Our customer's requirement is high availability and good utilization of hardware for an EBS R12.1.3 implementation. As per our strategy we want to use Oracle VM (I think it doesn't require any licensing), RAC 11gR2, and apps tier load balancing. We have only two servers to achieve all this. On one node there will be two VMs, one for the db tier and another for the apps tier, and the same will be on the second server. So this way we will have four virtual machines on two physical nodes. For all of these we will use OEL Linux as the operating system.
Please share your experiences and advise whether the above deployment model is correct or needs further enhancements.
    Regards.

Our customer's requirement is high availability and good utilization of hardware for an EBS R12.1.3 implementation. As per our strategy we want to use Oracle VM (I think it doesn't require any licensing).
Please contact your Oracle Sales representative for license questions.
RAC 11gR2 and apps tier load balancing. We have only two servers to achieve all this. On one node there will be two VMs, one for the db tier and another for the apps tier, and the same will be on the second server. So this way we will have four virtual machines on two physical nodes. For all of these we will use OEL Linux as the operating system.
Please share your experiences, whether the above deployment model is correct or needs further enhancements.
Please see these docs:
    Using Oracle VM Templates for Oracle E-Business Suite [ID 975734.1]
    Using Oracle VM with Oracle E-Business Suite Release 11i or Release 12 [ID 465915.1]
    Oracle E-Business Suite High Availability on Oracle VM [ID 986690.1]
    Certified Software on Oracle VM [ID 464754.1]
    Oracle VM General Frequently Asked Questions [ID 464756.1]
    Thanks,
    Hussein

Making connections against a SID highly available

    Hi Forum,
Is there any way to make applications that use host/port/SID to connect to the database highly available? The background of my question is that we have a legacy application that does not support connecting via service name to our RAC database. The configuration dialog has no option to use a service name; you can only enter host/port/SID.
Now, is there any way to trick this application so that the SID/host can be failed over? The host could be handled via a virtual IP provided by the Grid Infrastructure, but what about the SID? Is there some kind of "virtual SID" out there?
    Thanks in advance
    Thomas

    Hi,
If you are lucky, you may be able to trick the application by giving a whole (DESCRIPTION........) tnsnames entry as the host and not using the "port" and "sid" entries in that mask.
As a normal SQL*Net/JDBC client will understand
@(DESCRIPTION = (ADDRESS = ...))
just as well as
@host:port:sid
this could work. If it does, then you have no problem, because you can specify the service there with all the failover options.
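For illustration, a full connect descriptor of that kind (host names, port, and service name below are placeholders) pasted into the application's "host" field might look like:

(DESCRIPTION =
  (ADDRESS_LIST =
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
  )
  (CONNECT_DATA =
    (SERVICE_NAME = myservice)
    (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
  )
)

Whether this works depends entirely on whether the application passes the "host" value through to the client libraries unmodified.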
If this does not work, maybe 11gR2 server pools can help you. They will not provide load balancing, though, just HA.
With 11gR2 server pools (policy-managed databases), the database SID is not fixed to a server, but will fail over to other nodes.
So let's assume you have 3 servers, and your policy-managed database is running on nodes 1 and 2.
Furthermore, assume that the policy for the database is a minimum of 2 servers and a maximum of 2 servers.
In this constellation the instance on node 1 will be DBNAME_1 and the one on node 2 DBNAME_2.
If the first server fails, the DBNAME_1 instance will automatically be started on the free node (server 3), and your clients will still be able to connect to the SID.
The tricky part then is to make sure that the additional VIP and listener you created fail over to server 3 as well, and not to server 2.
But with the correct settings in the clusterware for the newly created VIP that should be addressable (look at the Clusterware Administration Guide, Chapter 5).
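As a rough sketch of the server-pool part (the pool and database names are placeholders, and the exact srvctl options should be checked against the 11.2 documentation), the setup could look something like:

# Create a server pool with a minimum and maximum of 2 servers
srvctl add srvpool -g mypool -l 2 -u 2
# Place the database in that pool so its instances are policy managed
srvctl modify database -d mydb -g mypool
# Check which nodes currently host the instances
srvctl status database -d mydb

The additional VIP and listener for the legacy host/port/SID clients still have to be registered with the clusterware separately, as described above.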
    Hope that helps
    Sebastian

  • EBS AccessGate in High Availability

    Hi,
We are integrating EBS 11i with OAM 10g using EBS AccessGate.
My question is whether we can configure EBS AccessGate for high availability. If yes, can you please provide the details?
    Thanks

Hi,
Were you able to configure EBS AccessGate in high availability?

High Availability

    Hello Everyone,
    We are implementing the entire Identity and Access Management Suite in High Availability.
    We have 2 boxes for OID & OVD AND 2 for OAM.
We have an Apache web server on which we have a WebGate, which would be configured as a reverse proxy.
My question here is, for HA with regard to the Apache web server:
Should I have only one Apache web server in the HA setup, or should I have two of them for those 2 OAM boxes?
    I am fairly new to this technology, so please bear with me.
    Thanks.

    Hey,
Take a look at this picture (Figure 5-1, Oracle Access Manager in Active-Active Topology) in the OAM High Availability documentation; it should answer your main question.
This link: http://download.oracle.com/docs/cd/B28196_01/core.1014/b28186/coreid_topo.htm.
If you need more, I have some topology examples and we can discuss your particular situation. Example: Zone 1 (2 servers with WebCache [reverse proxy]); Zone 2 (2 servers with OAM); Zone 3 (2 servers with OID).
    regards,

  • OAM 11gR2 and OVD

    Hi,
    It appears OVD did not make it into the Oracle Fusion Middleware Identity Management 11gR2 release. The latest version available is still the one included in the Oracle Fusion Middleware Identity Management 11gR1 release. Is that correct?
    If so, I have a deployment of Oracle Access Manager 11gR2, which I'd like to integrate with OVD. Does this situation mean that I must deploy another entire WebLogic domain for the Oracle Fusion Middleware Identity Management 11gR1 release? Or is it possible to somehow install the 11gR1 version of OVD into the 11gR2 instance I've already got?
    - Jim

Yes, the latest version of OVD available is 11.1.1.6 (11gR1). You may use this version with OAM 11gR2.
OVD 11.1.1.6 uses WebLogic 10.3.6, and OAM 11gR2 also uses the same WebLogic version. Please let me know if you are on some other version of WLS.
As a best practice, try to keep OAM and OVD in separate WLS domains.

  • How to protect an application running on IIS with OAM 11gR2

    Hello Gurus,
I have a question regarding protecting an application running on IIS with OAM 11gR2. We have an OHS server running, and all requests from users come to the WebGate on this OHS server so they can log in using the SSO login page. This is all on Solaris. I am protecting other applications, such as PeopleSoft modules, with this OHS instance and OAM server. There is another application that I need to protect which itself runs on IIS on a Windows machine. I need guidance on the following:
1.) Do I need to install a Windows version of the WebGate to protect this IIS based application?
2.) Or can I still protect it and proxy requests for this application through the current OHS instance? How can I do this?
3.) Or do I need to proxy requests directly from IIS to the OAM WebLogic server?
Please advise at the earliest as this is an urgent issue.
    Thanks !!

From your description it is not clear how exactly the architecture looks.
We have an OHS server running, and all requests from users come to the WebGate on this OHS server so they can log in using the SSO login page.
Is this OHS a centralized login farm? (Case 1)
Or is this OHS server (with WebGate) acting as a virtual web server hosting multiple web sites, so that a request to any site passes through this OHS/WebGate? (Case 2)
1.) Do I need to install a Windows version of the WebGate to protect this IIS based application?
If case 1, then you need to install a 10g WebGate on top of the IIS server to protect this application.
If case 2, then you can just proxy requests from OHS to the IIS server. As every request passes through OHS, the user will be authenticated before the request hits IIS.
Look at the product documentation for virtual web sites: http://docs.oracle.com/cd/E27559_01/admin.1112/e27239/shared.htm#autoId12
It has the steps to protect virtual web sites.
Also, you need to make sure no one can hit the IIS web sites directly.
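As an illustration of case 2 (the host name and URI below are placeholders, not taken from this thread), the proxy piece on the OHS/WebGate tier can be plain mod_proxy configuration in httpd.conf, with the WebGate protecting the URI before the request is forwarded:

# Reverse proxy the protected URI on to the IIS server
ProxyPass        /iisapp  http://iis-host.example.com/iisapp
ProxyPassReverse /iisapp  http://iis-host.example.com/iisapp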
    Hope this helps

  • How to protect an application running on Apache Tomcat app server with OAM 11gR2

    Gurus,
We have an Apache Tomcat based application named "ABCD" here at the client site that we want OAM 11gR2 PS1 to integrate with for SSO purposes. I have successfully configured OHS to reverse proxy requests to the Apache Tomcat server whenever somebody tries to access the application URL, but I still get the application login page after I have successfully authenticated on the OAM SSO login page. The Tomcat based application authenticates users against a "UserDatabase" realm.
I know that for a WebLogic application there is an OAM Identity Asserter provider which populates the user principal of the Java environment with the authenticated OAM user. But there is no such OAM identity provider for Tomcat.
So my question is: is there a provider (or Tomcat equivalent) which will entrust authentication to a header, so that the Java user principal could be populated from the OAM_REMOTE_USER header? Is the WebLogic equivalent of authentication providers present in Tomcat as well? Are those called valves?
Please advise at the earliest.
    Thanks !!

    Aakash,
I did follow the 4 steps that you mentioned. Out of the 4, I already had the WebGate in place on the OHS server, and I was already passing the remote_user HTTP header as an action in the OAM policy.
As part of Step #2 (install the mod_jk plugin on the OHS server) that you mentioned:
1.) I downloaded the Tomcat connector source - tomcat-connectors-1.2.37-src.
2.) I ran ./configure, make, and make install on my OHS server, which runs on RHEL 6. This created the mod_jk.so file, and I copied it into the required folder.
3.) I then added the httpd.conf entries and created the workers.properties file as described in the connector docs (see the sketch after this list).
4.) Restarted OHS.
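For reference, a minimal sketch of what those entries might contain (the module path, log paths, worker name, and Tomcat host below are assumptions for illustration):

# httpd.conf (or an included mod_jk.conf) on the OHS server
LoadModule jk_module "${ORACLE_HOME}/ohs/modules/mod_jk.so"
JkWorkersFile "/path/to/workers.properties"
JkLogFile     "/path/to/mod_jk.log"
JkLogLevel    info
# Forward the application context to the Tomcat worker over AJP
JkMount /abcd/* tomcatworker

# workers.properties
worker.list=tomcatworker
worker.tomcatworker.type=ajp13
worker.tomcatworker.host=tomcat-host.example.com
worker.tomcatworker.port=8009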
As part of Step #3 (configure Tomcat's AJP connector) that you mentioned, I went through all the links pasted below but didn't find exactly what needs to be in place to configure Tomcat's AJP connector. I do see in the server.xml of the Tomcat app server that the AJP 1.3 protocol is supported:
    http://tomcat.apache.org/tomcat-4.0-doc/config/ajp.html
    http://tomcat.apache.org/tomcat-3.3-doc/mod_jk-howto.html#s8
    http://tomcat.apache.org/tomcat-7.0-doc/config/ajp.html
    http://www.mulesoft.com/understanding-tomcat-connectors
    <!-- A "Connector" represents an endpoint by which requests are received
             and responses are returned. Documentation at :
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP  Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080
        -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
    <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
Do we need to disable the HTTP protocol in Tomcat and keep only the AJP connector enabled? If yes, how do we do that?
When I connect to the application from the OHS server like below, I am using the HTTP protocol, right? How should I use the AJP protocol to connect to the Tomcat application?
    http://ohs-host:ohs-port/abcd
    Thanks !!!!!

  • Advice Requested - High Availability WITHOUT Failover Clustering

We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course (we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50% or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know about...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as its back end (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching (Updated): iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance. Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Shared virtual hard disk - New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel - Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
iSCSI - The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure, you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except that it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more about Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

SharePoint Foundation 2013 - Can we use Foundation for an intranet portal with high availability (medium farm)?

Today I got a requirement where we have to use SharePoint Foundation 2013 (the free version) to build an intranet portal (basic announcements, calendar, department sites, and document management with only check-in/check-out and versioning).
Please help me regarding the licensing and size limitations. (I know the feature comparison of Standard/Enterprise.) I just want to know about the installation process and licensing.
6 servers - 2 app / 2 web / 2 DB cluster (so in total 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses).

    Thanks Trevor,
Does the load balancing service also come with the free license? In that case I can use SharePoint Foundation 2013 for building a simple intranet & DMS (with limited functionality), and for workflow and content management we would have to write code.
    Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and would offer high availability for traffic bound to the SharePoint servers. WNLB can only associate with up to 4 servers.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • 11.1.2 High Availability for HSS LCM

    Hello All,
Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available behind a load balancer?
My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server 2.
    grouping.size=100
    grouping.size_unknown_artifact_count=10000
    grouping.group_by_type=Y
    report.enabled=Y
    report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
    fileSystem.friendlyNames=false
    msr.queue.size=200
    msr.queue.waittime=60
    group.count=10000
    double-encoding=true
    export.group.count = 30
    import.group.count = 10000
    filesystem.artifact.path=\\server1\import_export
When I perform an export of just the native users to the file system, the export fails and I find errors in the log stating the following.
    [2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
    [2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
    [2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
I went and looked for the path it says it is searching for, "/server1/import_export/msr/PKG_34.xml", and found that this file does exist; it was in fact created by the export process, so I know it is able to find the correct location, but it then says it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors, just with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
    Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Bug Attributes
Type: B - Defect
Fixed in Product Version: 11.1.2.0.000PSE
Severity: 3 - Minimal Loss of Service
Product Version: 11.1.2.0.00
Status: 90 - Closed, Verified by Filer
Platform: 233 - Microsoft Windows x64 (64-bit)
Created: 25-Jan-2011
Platform Version: 2003 R2
Updated: 24-Feb-2011
Base Bug: 11696634
Database Version: 2005
Affects Platforms: Generic
Product Source: Oracle
Product: 4482 - Hyperion Lifecycle Management
Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration changes: set up a shared disk, then modified migration.properties with filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode.

It is not possible to implement a kind of HA between two different appliances, a 3315 and a 3395.
A node in HA can have all 3 personas.
Suppose Node A has Admin (primary), Monitoring (primary) and PSN, and Node B has Admin (secondary), Monitoring (secondary) and PSN.
If Node A is unavailable, you will have to promote the Admin role to primary manually.
Although the best way is to have:
Node A: Admin (primary), Monitoring (secondary) and PSN
Node B: Admin (secondary), Monitoring (primary) and PSN
Rate if helpful and mark as correct if it is correct, for the experts.
Regards,

  • Oracle Berkeley DB Java Edition High Availability (White Paper)

    Hi all,
    I've just read Oracle Berkeley DB Java Edition High Availability White Paper
    http://www.oracle.com/technetwork/database/berkeleydb/berkeleydb-je-ha-whitepaper-132079.pdf
    In section "Time Consistency Policy" (Page 18) it is written:
    "Setting a lag period that is too small, given the load and available hardware resources, could result in
    frequent timeout exceptions and reduce a replica's availability for read operations. It could also increase
    the latency associated with read requests, as the replica makes the read transaction wait so that it can
    catch up in the replication stream."
Can you tell me why those read operations will not be handled by the master?
Why will we have frequent timeouts?
Why should the read transaction wait instead of being redirected to the master?
Why should it reduce the replica's availability for read operations?
    Thanks

Please post this question on the Berkeley DB Java Edition (BDB JE) forum. This is the Berkeley DB Core (BDB) forum.
    Thanks,
    Andrei

Office Web Apps farm - how to make it highly available

    Hi there,
I have deployed OWA for Lync 2013 with two servers in one farm. When I was testing, e.g. the case when the other server is down, I found the following error:
    Log Name:      Microsoft Office Web Apps
    Source:        Office Web Apps
    Date:          17/03/2014 22:59:54
    Event ID:      8111
    Level:         Error
    Computer:      OWASrv02.domain.com
    Description:
    A Word or PowerPoint front end failed to communicate with backend machine
    http://OWASrv01:809/pptc/Viewing.svc
Does that mean that OWA cannot be set up as highly available when using standalone OWA servers only? The above error appeared while the server OWASrv01 was rebooting.
    Petri

    Hi Petri,
For your scenario, you have achieved high performance for your Office Web Apps farm, but not high availability.
To build a highly available Office Web Apps farm, you can refer to this blog:
    http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx 
    Thanks,
    Eric
    Forum Support
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
    Eric Tao
    TechNet Community Support
