OSSO in High Availability Mode

Hi All,
Can anyone please tell me how to configure OSSO 10.1.4.3 for high availability and failover?
Thanks in Advance.
Siva Pokuri.

Read the docs http://download.oracle.com/docs/cd/B28196_01/idmanage.1014/b15988/advconfg.htm#i1011679

Similar Messages

  • 2xC350 in High Availability Mode (Cluster Mode)

    Hello all,
    first of all, I'm a newbie with IronPort, so sorry for my basic questions, but I can't find anything in the manuals.
    I want to configure the two boxes in High Availability Mode (Cluster Mode), but I don't understand the IronPort cluster architecture.
    1) In machine mode I can configure IP addresses -> OK
    2) In cluster mode I can configure listeners and bind them to an IP address -> OK
    But how does the HA work?
    A) Should I configure the same IP on both boxes and use one MX record? And if one box is down, the other takes over?
    B) Or should I configure different IPs and configure two MX records?
    And if one box is down, the second MX will be used.
    Thanks in advance
    Michael

    IronPort clustering is for policy distribution only - not for SMTP load management.
    A) Should I configure the same IP on both boxes and use one MX record? And if one box is down, the other takes over?
    You could - using NAT on the firewall - but few large businesses take this approach today.
    Many/most large businesses use a hardware load balancer like an F5, Foundry ServerIron, etc. The appliances themselves would be set up on separate IP addresses. Depending on the implementation requirements, the internal IP address could be a public IP or a private IP.
    B) Or should I configure different IPs and configure two MX records?
    And if one box is down, the second MX will be used.
    If you set up two boxes, even with different MX preferences, mail will be delivered to both MX records. There are broken SMTP implementations that get the priority backwards, and many spammers will intentionally try to exploit less-restrictive accept rules on secondary MX receivers and will send to them first.
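    To make that concrete, a minimal sketch of the two-MX setup (hypothetical domain and hostnames; the preference values 10 and 20 are arbitrary):
    example.com.    IN  MX 10 esa1.example.com.
    example.com.    IN  MX 20 esa2.example.com.
    Even while the preference-10 host is healthy, expect both legitimate retries and deliberate spam to arrive at the preference-20 host as well, so both appliances need identical accept/relay policies.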

  • Can IPS 4345 work in High Availability mode?

    Hi all,
    Can IPS 4345 work in High Availability mode?
    Or can it have a bypass unit? Kindly help. Is there any alternative to this model in Sourcefire?
    Regds,
    Ram

    Hi,
    The Cisco 4300 Series IPS supports high-availability mode.
    You can run it in active/active mode and also in active/standby mode.
    Regards,
    Rahul Chhabra
    Network Engineer
    Spooster IT Services

  • FIM installation in High Availability Mode

    Experts,
    I am planning to install FIM in high availability mode.
    FIM Portal on four servers
    FIM Service on four servers and
    FIM Portal on four servers.
    Is there any document that can guide me through this?
    Thanks,
    Mann

    See these:
    Preinstallation and Topology Configuration
    FIM 2010 high availability
    I also recommend this FIM book by David & Brad
    FIM R2 Best Practices Volume 1: Introduction, Architecture And Installation Of Forefront Identity Manager 2010 R2

  • Configuring two 11g OID servers in High Availability mode.

    I have an OID1 server where I have installed OID 11g and WLS, using SSL port 3131 and non-SSL port 3060. The LDAP setup is working, as the SQL*Net connections use the LDAP adapter to resolve requests.
    I have an OID2 server where I have installed OID 11g using the same ports.
    Now I want to set up a cluster for these two so that the load balancer will automatically route requests to either of the two servers, and if one is unavailable the other will serve the request. I am following the "Configuring High Availability for Identity Management Components" document, but it is not very clear what steps need to be followed.
    Any suggestion will be appreciated.
    I am also having a problem using ldapbind or any of the OID commands, as they give "unable to locate message file: ldap<language>.msb" despite the fact that I am setting all the env vars such as ORACLE_HOME, ORACLE_INSTANCE, ORA_NLS33 and so on.

    You don't need to set up a cluster for the load balancer. The load balancer configuration can point to both servers and, depending on the LBR configuration, act in failover or load-balanced mode. All you need to take care of is that the two OID servers are using the same schema.
    When installing the first OID server you get an option to install in cluster mode, and when installing the second server you can use the option to expand the cluster created by the first installation. But that should not stop you from configuring OID in highly available mode using a load balancer as explained above.
    "unable to locate message file: ldap<language>.msb" occurs if you have not set the ORACLE_HOME variable. Check that it is set to <MiddlewareHome>/Oracle_IDM1 if you have used the defaults.
    Hope this helps,
    Sagar

  • Identity management 11g in High Availability Mode.

    Hi All,
    Can anyone please give me some pointers on how to configure Identity Management 11g in High Availability mode? If possible, please provide some document links for reference.
    Currently I am looking into the Oracle document below.
    http://download.oracle.com/docs/cd/E15523_01/core.1111/e12035/directorytier_im.htm#BACIEEBD
    This document only covers configuring the high availability case when we have the Oracle Database in RAC mode. Please correct me if I am wrong.
    But I just wanted to know how we can configure high availability mode without the Oracle Database being in RAC mode.
    Do we need to configure the database in high availability mode as well?
    Thanks in Advance.
    Siva Pokuri.

    The resources below should be of some help to you:
    http://www.oracle.com/technology/products/ias/hi_av/F5v9LBR.pdf
    http://www.oracle.com/technology/products/ias/hi_av/904_Distributed_IM.pdf
    http://www.oracle.com/technology/products/ias/hi_av/904_rack_mounted_im.pdf
    http://www.oracle.com/technology/products/ias/hi_av/904_cfc_im.pdf
    http://www.oracle.com/technology/products/ias/hi_av/OracleASInfraHAArchs.pdf

  • 11g OID Configuration in High Availability Mode on OEL5.6 64 Bit

    Hi, could you please provide me with a good document for installing and configuring 11g OID (Oracle Internet Directory) in High Availability mode on OEL 5.6 64-bit?
    Regards
    Mohammed Riyaz Ahmed

    Hi,
    You get OID 11g as part of OFM 11g. Refer here for docs on high availability:
    http://docs.oracle.com/cd/E21764_01/install.1111/e12002/overview.htm#CJAJEDFC
    For other OID docs:
    http://www.oracle.com/technetwork/documentation/oid-089101.html
    I hope this helps.
    regards,
    GP

  • Cisco ISE in High Availability mode

    Hello
    Need some help. I have a hardware Cisco ISE 3315 appliance and want to go for high availability now. My questions are:
    1. Is Cisco ISE available on Hyper-V?
    2. Is it possible to configure one hardware appliance and one virtual appliance (VMware / Hyper-V, if available) in high availability mode?
    Thank you very much.

    While ISE may run in Hyper-V, it will definitely not be supported, so I recommend staying away from doing that. The only supported virtual environment is VMware. If you only have Hyper-V then you will have to get another appliance. Do keep in mind that the 3315s are EOL/EOS; the replacement model for those is the 3415.
    As already stated above by Charles and Karsten, you can mix virtual and physical appliances. So if you do end up going with a supported virtual solution, make sure that the resources for the ISE nodes are dedicated/reserved, and keep in mind that thin provisioning is also NOT supported.
    Hope this helps!
    Thank you for rating helpful posts! 

  • How to configure cisco 3650-24ts-s switch in high availability mode

    Hi, I bought two 3650-24TS-S switches with accessories. I have created 10 VLANs and given internal access on one switch. Now I need to configure the other switch in standby or HA mode, so that if anything goes wrong with the first switch the second one takes over automatically. Please help me with a step-by-step guide for doing this. Thanks.

    Depending on your license you may be able to set up HSRP between them. Since they aren't stacked switches, I would also do a port channel.
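    As a rough sketch of HSRP for one VLAN (hypothetical addresses; repeat per VLAN, and swap the priorities on some VLANs if you want to share load between the two switches):
    ! Switch 1 - intended active gateway for VLAN 10
    interface Vlan10
     ip address 10.10.10.2 255.255.255.0
     standby 10 ip 10.10.10.1
     standby 10 priority 110
     standby 10 preempt
    !
    ! Switch 2 - standby gateway for VLAN 10
    interface Vlan10
     ip address 10.10.10.3 255.255.255.0
     standby 10 ip 10.10.10.1
     standby 10 preempt
    Clients keep 10.10.10.1 as their default gateway; if switch 1 fails, switch 2 takes over the virtual IP automatically.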

  • NAC High Availability: Users getting disconnected during failover

    Hi,
    We have a pair of CAS appliances in in-band virtual-gateway mode, configured for high availability.
    We are still running some tests, but we have noticed that the clients lose connectivity during the failover.
    * The service IP is always active (it never stops responding to pings).
    * The standby CAS becomes active immediately after we shut down the primary; we see it on the CAM.
    * The client, however, loses connectivity with the internal network for almost two minutes.
    I'm guessing this isn't normal, but I would like to know what the expected behaviour is.
    Thanks and regards,

    We configured another pair today and we are noticing the same behaviour, however it seems random... sometimes the user barely loses the connection, other times it takes 2-5 minutes to come back.
    We are only using eth2 for the failover link, since we only have one serial port.
    When we test, we make sure both servers are up and then we reboot the primary. The secondary becomes active immediately. When both are up again we repeat the process.
    Any other ideas? Something we should check?
    Thanks!

  • Running SAP XI in High Availability

    Hello everyone!
    I am looking for best practices for running SAP Exchange Infrastructure (3.0, well, it is now XI 7.0 with the new NW2004s - SP9) in High Availability mode.
    The customer I work with does not yet know which platform its production environment will be on (Unix/Oracle vs. Microsoft/SQL Server 2005). I know that some vendors have built-in HA capabilities, either software or hardware based (Microsoft MSCS, IBM HACMP, etc.).
    Should we go for an "SAP XI all-in-one" installation or a distributed one? Should we have one central SLD or one per environment? What is the role of the SLD and Solution Manager? Do they need to be interconnected?
    I have worked with other EAI middleware (BizTalk Server and webMethods), and normally the Integration Engine sits on its own server while the various adapters are on their respective servers (i.e. one server for HTTP, one for FTP, one for EJB, etc.), since the sizing (RAM, Java heap memory, etc.) differs for each adapter (FTP: lots of small files; EJB: not so many requests, but they can eat up a lot of CPU; etc.). Is this a best practice that I can also apply to SAP XI?
    Thanks in advance for your help!
    A+

    Hi Michel,
    Well, we seem to work along the same track...
    I am working with 2 customers concurrently - one, like you, is using PI 7.0. In their case we implemented a Windows solution using W2K3 Enterprise Edition 64-bit. Avoid 32-bit, as it will no longer be supported by SAP next year. We are using MSCS (which is part of the Enterprise Edition by default).
    The 2nd is a Solaris customer (very large), for which we are using a cross-data-center solution handled by Veritas Cluster Manager, with automatic failover between systems and data centers if required.
    Some answers to your questions:
    1. All-in-one - yes.
    2. SLD - one for DEV/QA, and a separate one for Prod - this is recommended by SAP too - I can mail you a guide if needed.
    3. With Solution Manager 4.0, it now supports the J2EE side and hence can understand SLDs. The concept is to set up an HA Solution Manager and deploy the SLD for Prod on the same piece of kit.
    4. SAP does not recommend splitting off the Adapter Engine except for specific cases, the main reason being if you have specific systems behind firewalls that you need to communicate with.
    Under no circumstances should you have separate AEs per adapter - this would be an extremely expensive solution, hardware- and support-wise.
    Regards
    Brian

  • Many VLFs in high availability database

    Hi,
    I have a database in high availability mode with a log file of 16 GB. Running DBCC SQLPERF (LOGSPACE) reveals that only 0.03% of the file is used, so I'd like to shrink the file. I performed full and transaction log backups and tried to shrink, but nothing happens.
    I executed DBCC OPENTRAN, but no transaction is open on the DB. Executing SELECT name, log_reuse_wait_desc FROM sys.databases; returns "NOTHING". But if I run DBCC LOGINFO I see 320 virtual log files, with about 200 marked with STATUS 2 (not reusable).
    Looking at the AlwaysOn availability dashboard shows that replication is fine.
    Does somebody know why these VLFs are marked as such?
    Thanks

    "with about 200 marked with STATUS 2 (not reusable)"
    No need to worry about this; you can read the link below:
    http://blog.moserit.com/virtual-log-file-monitoring-with-dbcc-loginfo-in-alwayson
    Though we mark the log records as available for cleanup, the actual process of cleaning up is deferred. Since the newly available space is tracked but the VLFs themselves are not yet marked as inactive, it is not reported as such by DBCC LOGINFO directly. Other commands such as DBCC SQLPERF('LOGSPACE') accurately report the free space, since they include the VLFs marked for deferred cleanup when accounting for space.
    There is, unfortunately, no equivalent of DBCC LOGINFO currently that can track VLFs marked for deferred cleanup.
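    For reference, a minimal sketch of the checks discussed in this thread (hypothetical database name; run it against the affected database and adjust the target size before shrinking):
    USE MyAGDatabase;                     -- hypothetical name
    DBCC SQLPERF (LOGSPACE);              -- log size and percent used
    DBCC OPENTRAN;                        -- any open transaction holding the log?
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'MyAGDatabase';          -- should report NOTHING before a shrink can succeed
    DBCC LOGINFO;                         -- per-VLF status (2 = still considered in use)
    -- after a log backup, attempt the shrink (the log file usually has file id 2)
    DBCC SHRINKFILE (2, 1024);            -- target size in MB; adjust as appropriate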

  • ISE in High Availability (HA) mode.. Factors to look upon

    We are setting up a lab where we have installed 2 ISE nodes on VMs. We are deploying them in HA mode. While deploying them we are facing an error after registering ISE-2 with the primary ISE-1. Even after periodically refreshing the 'Sync' tab we are getting an 'out of sync' error.
    We have checked the certificate, which is bound correctly, as we could register ISE-2 under the primary ISE-1.
    Time: the time on all the devices is synched properly and they are in the UTC timezone.
    What are the factors that play a role in HA for ISE? What should we look at while resolving the error?
    ---Securview Support

    Hello,
    I went through your query and found some prerequisites which should help:
    •Ensure that you have a second ISE node configured with the Administration persona before you can promote it to become your primary Administration ISE node.
    •Before you configure the Administration ISE nodes for high availability, we recommend that you obtain a backup of the Cisco ISE configuration from the standalone node that you are going to register as a secondary Administration ISE node.
    •Every ISE administrator account is assigned one or more administrative roles. To perform the operations described in the following procedure, you must have one of the following roles assigned: Super Admin or System Admin. See Cisco ISE Admin Group Roles and Responsibilities for more information on the various administrative roles and the privileges associated with each of them.

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
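    If you do go the SMB-share route suggested above, a minimal PowerShell sketch (hypothetical server, paths, share name and groups; not a tuned or secured configuration) would be:
    # on the box holding the local RAID disks (hypothetical names/paths)
    New-Item -Path D:\VMStore -ItemType Directory
    New-SmbShare -Name VMStore -Path D:\VMStore -FullAccess 'DOMAIN\HyperVHosts','DOMAIN\HyperVAdmins'
    # from a Hyper-V host, place VM disks on the share, e.g.:
    # New-VHD -Path \\fileserver\VMStore\app01.vhdx -SizeBytes 100GB -Dynamic
    In practice you would also grant the Hyper-V hosts' computer accounts access and set matching NTFS permissions on the folder.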
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual Fibre Channel, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following list shows the storage types that you can use to provide shared storage for a guest cluster.
    Shared virtual hard disk - New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel - Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI - The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure, you can use third-party software to replicate the 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, say DataCore (more aimed at Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • 11.1.2 High Availability for HSS LCM

    Hello All,
    Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available through a load balancer?
    My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server2.
    grouping.size=100
    grouping.size_unknown_artifact_count=10000
    grouping.group_by_type=Y
    report.enabled=Y
    report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
    fileSystem.friendlyNames=false
    msr.queue.size=200
    msr.queue.waittime=60
    group.count=10000
    double-encoding=true
    export.group.count = 30
    import.group.count = 10000
    filesystem.artifact.path=\\server1\import_export
    When I perform an export of just the native users to the file system, the export fails and I find the following errors in the log.
    [2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
    [2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
    [2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
    I go and look for the path that it is searching for, "/server1/import_export/msr/PKG_34.xml", and find that this file does exist; it was in fact created by the export process, so I know that it is able to find the correct location, but it then says that it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors, just with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
    Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Bug Attributes
    Type: B - Defect
    Fixed in Product Version: 11.1.2.0.000 PSE
    Severity: 3 - Minimal Loss of Service
    Product Version: 11.1.2.0.00
    Status: 90 - Closed, Verified by Filer
    Platform: 233 - Microsoft Windows x64 (64-bit)
    Platform Version: 2003 R2
    Created: 25-Jan-2011
    Updated: 24-Feb-2011
    Base Bug: 11696634
    Database Version: 2005
    Affects Platforms: Generic
    Product Source: Oracle
    Related Products: Line -, Family -, Area -, Product 4482 - Hyperion Lifecycle Management
    Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
    Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration change: set up a shared disk, then modify migration.properties so that filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode.
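    In other words, the workaround from the bug record amounts to this migration.properties change (sketch, using the mapped-drive letter mentioned in the bug; the drive has to be mapped identically on every HSS node):
    # map the shared import_export folder to the same drive letter on every HSS node first
    filesystem.artifact.path=V:\import_export
    and "Hyperion Foundation Services - Managed Server" has to run in console mode rather than as a Windows service for the export to succeed.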

    It is not possible to implement HA between two different appliances such as a 3315 and a 3395.
    A node in HA can have all 3 personas.
    Suppose Node A has Admin (Primary), Monitoring (Primary) and PSN,
    and Node B has Admin (Secondary), Monitoring (Secondary) and PSN.
    If Node A is unavailable, you will have to manually promote the Admin role to Primary.
    However, the best way is to have:
    Node A: Admin (Primary), Monitoring (Secondary) and PSN
    Node B: Admin (Secondary), Monitoring (Primary) and PSN
    Rate if helpful and mark as correct if it is correct.
    Regards,
