High-Availability & Migration

Hi all,
I am currently working on adapting our software to a high-availability
architecture using Tuxedo and I have run into questions to which I
cannot find satisfactory answers in the documentation. Can someone help?
To ensure high availability, we use two equivalent machines, a
master and a backup, each having sufficient capacity to handle
the full load of the system. The set of services handling the
operations of the software can therefore run on either or both
machines. However, a specific message-processing operation
requires the participation of three different servers which, for a
given message, must all be on the same machine (they must use the
same network connection on which acks are sent).
The initial idea was to run all servers on the master
machine with nothing on the backup. Only in the case of failure
would the servers be transferred to the backup machine, using
group or machine migration. Unfortunately, there is some
information I cannot seem to find in the documentation...
1) So far, everything I have read on migration indicates that it can
only be performed manually. Is there any way to have Tuxedo do
it automatically? Under which conditions?
2) What happens if one server in the group crashes? From what I
understand, it can be restarted up to MAXGEN-1 times within the
GRACE period, but what then? What if it crashes because of a lack of
resources (memory, disk space)? Will it be migrated
automatically to the backup machine? And will the other servers
in the group follow?
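
For reference, a minimal sketch of the UBBCONFIG parameters involved (the machine, group and server names are placeholders; OPTIONS=MIGRATE in *RESOURCES and the second LMID on the group are what make migration possible at all):

*RESOURCES
IPCKEY          123456
MASTER          SITE1,SITE2
MODEL           MP
OPTIONS         LAN,MIGRATE

*MACHINES
# other required parameters (TUXDIR, TUXCONFIG, APPDIR, ...) omitted
hostA           LMID=SITE1
hostB           LMID=SITE2

*GROUPS
# the second LMID names the alternate machine the group can be migrated to
MSGGRP          LMID=SITE1,SITE2 GRPNO=10

*SERVERS
# restarted at most MAXGEN-1 times within GRACE seconds, then left dead
msgserver       SRVGRP=MSGGRP SRVID=1 RESTART=Y MAXGEN=5 GRACE=3600
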
3) If, instead of using migration, I decide to have all services
running on both machines, how do I prevent a message from
crossing from one machine to the other as it is forwarded to the
different servers in the chain?
Anyone had similar problems? Thanks for the answers.

Marcin Kasperski wrote:
To ensure high-availability, we use two equivalent machines, a
master and a backup, each having sufficient capacity to handle
all the load of the system. The bunch of services handling the
operations of the software can therefore run on either or both
machines. However, a specific message processing operation
requires the participation of 3 different servers which, for a
given message, must all be on the same machine (they must use the
same network connection on which acks are sent).

If I were implementing this, I would just advertise services which embed
machine information in their name. Then:
machine information in their name. Then:
- the client would call - say - 'mainsvc' - which would be running on
both machines
- mainsvc would determine the machine it runs on and forward the call to
- say 'mainsvc-machine1'
- the rest of the calls would be directed to 'svc2-machine1' and
'svc3-machine1' (the reply from the first call would contain the info
what to call next)
This way, the first call would find a working machine (the only one
available if the other has crashed, or one of the two chosen round-robin
when both work), and the rest of the calls would be directed to the
machine you need.
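
A minimal sketch of such a dispatcher service using the ATMI C API (the
NODE_ID environment variable and the SVC2_<node> naming convention are
assumptions of this example, not anything Tuxedo mandates):

#include <stdio.h>
#include <stdlib.h>
#include <atmi.h>
#include <userlog.h>

/* MAINSVC is advertised on both machines. It picks the machine-local
 * variant of the next service in the chain and forwards the request
 * there, so the rest of the chain stays on the same node. */
void MAINSVC(TPSVCINFO *rqst)
{
    char next_svc[32];
    char *node = getenv("NODE_ID");   /* e.g. "M1" or "M2", set per machine */

    if (node == NULL) {
        userlog("MAINSVC: NODE_ID not set, cannot pick a local service");
        tpreturn(TPFAIL, 0, rqst->data, 0L, 0);
        return;
    }

    /* hypothetical naming convention: SVC2_M1 on machine 1, SVC2_M2 on machine 2 */
    snprintf(next_svc, sizeof(next_svc), "SVC2_%s", node);

    /* tpforward() does not return; the reply to the client comes from
     * the forwarded-to service via tpreturn() */
    tpforward(next_svc, rqst->data, rqst->len, 0);
}

The same pattern repeats for the later hops, so once the first call lands
on a machine the whole chain stays there.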
Instead of advertising machine-specific service names, you can also take
a look at data dependent routing.

Brian Douglass replied:
Sorry Marcin, but you have just bound your system to specific machines and
will limit scaling. That's a bad thing. A simpler solution is to use Data
Dependent Routing (DDR). This is probably the single most powerful concept
in Tuxedo, and for some reason people rarely exploit it.
By simply having some kind of machine identifier in the request buffer,
either native (part of the natural application data) or artificial (a field
added just for DDR routing), you have something like a Zip Code (for US
folk) or Postal Code (for the international crowd). Tuxedo analyzes this
data and, based on rules in the UBBCONFIG, routes the message to the correct
system. So if the routing field says MACH1, it goes to SVC1 on machine1.
If the routing field says MACH2, it goes to SVC1 on machine2, and so on. I
could have 10 machines, all 10 symmetrical in capability, and I could
route to each accordingly.
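
A minimal sketch of what that looks like in the UBBCONFIG, assuming an FML
request buffer with a hypothetical SITE_ID routing field and two server
groups (GRP_MACH1, GRP_MACH2) pinned to the two machines:

*SERVICES
SVC1            ROUTING="SITE_ROUTE"
SVC2            ROUTING="SITE_ROUTE"
SVC3            ROUTING="SITE_ROUTE"

*ROUTING
# requests whose SITE_ID field is 'MACH1' go to GRP_MACH1, 'MACH2' to GRP_MACH2;
# anything else may go to any group
SITE_ROUTE      FIELD=SITE_ID BUFTYPE="FML"
                RANGES="'MACH1':GRP_MACH1,'MACH2':GRP_MACH2,*:*"

The client never needs to know which machine it is talking to; it just fills
in the routing field and Tuxedo does the rest.
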
Part of what makes Tuxedo so neat to work with is that you can create a
Logical design, develop it, and then deploy it in any number of different
Physical designs. And if you do everything right, the Logical design
remains intact. For example, there is a drug store chain here in the US
that has over 4000 Tuxedo domains, all working together!! Then they have a
central site that mirrors all 4000 individual databases. Get your
prescription filled at one store, and then go to another, and that second
store can access your records from the central site. DDR ensures that all
messages get routed to the correct location. (I hear there is a brokerage
firm with 10,000 domains!)
Check out DDR, it will make your day.
Brian Douglass
Transaction Processing Solutions, Inc.
8555 W. Sahara
Suite 112
Las Vegas, NV 89117
Voice: 702-254-5485
Fax: 702-254-9449
e-mail: [email protected]

Similar Messages

  • Exchange 2010 to Exchange 2013 Migration and Architect a resilient and high availability exchange setup

    Hi,
    I currently have a single Exchange 2010 server that has all the roles, supporting about 500 users. I plan to upgrade to 2013 and move to a four-server HA Exchange setup (a CAS array with 2 servers as CAS servers and one DAG with 2 mailbox servers). My
    goal is to plan out the transition in steps with no downtime. Email is most critical for my company.
    Exchange 2010 is running SP3 on Windows Server, with a separate server as archive. In the new setup, rather than having a separate server for archiving, I am just going to put that on a separate partition.
    Here is what I have planned so far.
    1. Build out four Servers. 2 CAS and 2 Mailbox Servers. Mailbox Servers have 4 partitions each. One for OS. Second for DB. Third for Logs and Fourth for Archives.
    2. Prepare AD for exchange 2013.
    3. Install Exchange roles: CAS on two servers and mailbox on 2 servers. Add a DAG. Someone suggested to me to use an odd number, so 3 or 5. Is that a requirement?
    4. I am using a third-party load balancer for the CAS array instead of NLB, so I will be setting that up.
    5. Do the post-install work to ready the new CAS. While doing this, can I use the same parameters as assigned on Exchange 2010, e.g. the webmail URL for Outlook Anywhere, the OAB, etc.?
    6. Once this is done, I plan to move a few mailboxes as a test to the new mailbox servers/DAG.
    7. Test Outlook setups on the new servers; inbound and outbound email tests.
    Once this is done, I can migrate over and point all my MX records to the new servers.
    Please let me know your thoughts and what I am missing. I would like to solidify a flowchart of all the steps I need to do before I start the migration.
    thank you for your help in advance

    Hi,
    okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server,
    and so on. It's much simpler, better and less expensive.
    A CAS array is only an Active Directory object, nothing more. The load balancer controls on which CAS the user's session will terminate. You can read more at
    http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also, no session affinity is required.
    First, build the complete Exchange 2013 architecture. High availability for your data is a DAG, and for your CAS you use a load balancer.
    On Channel 9 there is a lot of material from MEC:
    http://channel9.msdn.com/search?term=exchange+2013
    Migration:
    http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
    Additional information:
    http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
    Hope this helps :-)

  • Migrating Dimension Server from local installation to High Availability

    Hi Team,
    When Hyperion was installed at the client site (11.1.2.3), there was no requirement for EPMA to be highly available.
    Now it has become a requirement to make it highly available in the event that one of the virtual machines is unreachable.
    In the event that a server becomes unavailable, it is acceptable for a manual failover procedure to be performed.
    What steps can be taken to prepare for such an event?
    Possible outcome:
    Configure another dimension server on the second Virtual machine and leave it manually switched off.
    Modify the IIS path to point to a clustered location accessible by both machines.
    In the event of a failover, manually bring up the dimension server on the other machine.
    This is yet to be implemented successfully.
    Current Scenario:
    1 x Windows Machine hosting:
      Foundation Services
      Calculation Manager
      Performance Architect
      Planning
      Reporting and Analysis
      Repository location on a shared mount (Clustered filesystem)
      FDQM
      Essbase
      Analytic Provider Services
      Essbase Integration Services (not used)
      Essbase Studio (not used)
    1 x Linux Machine
      Essbase Server
      Arborpath on a shared mount
    Aim: Add 2 new Virtual Machines ( 1 x Windows App Server + 1 x Linux Essbase Server) for high availability.
    Progress:
      Windows Machine components clustered:
      Foundation Services
      Calculation Manager
      Planning
      Reporting and Analysis
      Essbase
      Analytic Provider Services
      Linux Essbase Machines:
      Configured Active/Passive using OPMN
    Challenge faced:
    The failover scenario can be a manual list of steps to restore functionality.
    Please provide guidance on introducing failover steps for the dimension server.

    Remote Desktop/Preferences/Task Server
    Add the FQDN or IP address of your server in this pane on your book (basically you're pointing your book at the software installed on the other machine). Set up your server with the installs you want it to make, and administer it via your book (or any machine with another unlimited ARD). If you want report generation, you would schedule the clients to send the info to the server. A task server can only do 2 things, install packages or change client settings, that's it.

  • 11.1.2 High Availability for HSS LCM

    Hello All,
    Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available behind a load balancer?
    My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server2.
    grouping.size=100
    grouping.size_unknown_artifact_count=10000
    grouping.group_by_type=Y
    report.enabled=Y
    report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
    fileSystem.friendlyNames=false
    msr.queue.size=200
    msr.queue.waittime=60
    group.count=10000
    double-encoding=true
    export.group.count = 30
    import.group.count = 10000
    filesystem.artifact.path=\\server1\import_export
    When I perform an export of just the native users to the file system, the export fails and I find errors in the log stating the following.
    [2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
    [2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
    [2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
    I go and look for the path that it is searching for, "/server1/import_export/msr/PKG_34.xml", and find that this file does exist; it was in fact created by the export process, so I know that it is able to find the correct location, but it then says that it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors, just with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
    Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Bug Attributes
    Type: B - Defect
    Fixed in Product Version: 11.1.2.0.000PSE
    Severity: 3 - Minimal Loss of Service
    Product Version: 11.1.2.0.00
    Status: 90 - Closed, Verified by Filer
    Platform: 233 - Microsoft Windows x64 (64-bit)
    Created: 25-Jan-2011
    Platform Version: 2003 R2
    Updated: 24-Feb-2011
    Base Bug: 11696634
    Database Version: 2005
    Affects Platforms: Generic
    Product Source: Oracle
    Related Products: Line -, Family -, Area -, Product 4482 - Hyperion Lifecycle Management
    Hdr: 11683769 - LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE. Hyperion Foundation Services is set up for high availability. Lifecycle management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration changes: set up a shared disk, then modify migration.properties with filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and we run Hyperion Foundation Services in console mode.

    It is not possible to implement HA between two different appliances (3315 and 3395).
    A node in HA can have the 3 personas.
    Suppose Node A (Admin (Primary), Monitoring (Primary) and PSN)
    and Node B (Admin (Secondary), Monitoring (Secondary) and PSN).
    If Node A is unavailable, you will have to manually promote the Admin role on Node B to Primary.
    However, the better layout is to have
    Node A: Admin (Primary), Monitoring (Secondary) and PSN
    Node B: Admin (Secondary), Monitoring (Primary) and PSN.
    Rate if helpful and mark as correct if it is correct.
    Regards,

  • OIM 11g High Availability Deployment

    Hi Experts,
    I'm deploying OIM 11g in a high-availability schema, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM & SOA in a WebLogic domain on 'OIMHOST1'. Trying to propagate the configuration from 'OIMHOST1' to 'OIMHOST2', I packed (using pack.sh) the domain on 'OIMHOST1' and unpacked (using unpack.sh) it on 'OIMHOST2', then I updated the NodeManager by executing setNMProps.sh and finally started the NodeManager. In order to test that everything is fine, and following the documentation, I'm trying to perform the following steps, but I have not succeeded.
    I MUST SAY THAT I'M RUNNING ON A SINGLE STANDARD EDITION DB INSTANCE AND NOT RAC AS MENTIONED IN THE ORACLE DOCS. PLEASE CLARIFY IF RAC IS REQUIRED; FOR NOW I'M IN A DEVELOPMENT ENVIRONMENT, SO I THINK RAC IS NOT REQUIRED.
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
    Here it is not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid: the first time I tried to execute the startWebLogic.sh script, it asked for a username/password. I updated boot.properties (vi boot.properties) and manually set the username and password in clear text; this time the startWebLogic.sh script passed this stage, but it fails with:
    <Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
    <Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
    MDS-01329: unable to load element "persistence-config"
    MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
    MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
    I have verified that this "mds" directory does not exist on OIMHOST2, as reported by the IOException, but it exists on OIMHOST1. From here it is not possible for me to follow Oracle's documentation. I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console. I have tested 2 ways:
    1.- All managed servers on OIMHOST1 are shut down; in this case the managed servers on OIMHOST2 work as expected.
    2.- All managed servers on OIMHOST1 are RUNNING; in this case I first started the SOA2 managed server, and after that the OIM2 managed server. When it finishes the boot process, the following message appears in the server's output:
    <Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
    Start the WLS_SOA2 managed server using the WebLogic Administration Console.
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
    8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
    Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
    The URL for the Oracle Identity Manager Console is:
    http://oimvhn2.mycompany.com:14000/oim
    Log in using the xelsysadm password.
    Your help is highly appreciated
    Regards
    Juan

    Hi Vaasu,
    I have succeeded in deploying OIM in HA; right now my customer and I are working on the installation of the webtier. Now I have a better understanding of HA concepts and the way WebLogic works (really nice, but a little tricky).
    All the magic about HA is properly configuring the network interfaces on each Linux box (in our case), so first of all you need to create 2 new floating IPs on each Linux box (google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs, as sketched below.
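    For illustration, on a RHEL-style box a cloned interface script for such a floating IP might look like the sketch below; the device name and address are made up and must match your own addressing plan:
    # /etc/sysconfig/network-scripts/ifcfg-eth0:1  (hypothetical floating IP)
    DEVICE=eth0:1
    BOOTPROTO=static
    IPADDR=10.0.0.51
    NETMASK=255.255.255.0
    ONBOOT=yes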
    Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    create DB schemas with RCU
    install weblogic
    install SOA
    patch SOA
    install IAM
    ---if you are working on a virtual machine, it is a good idea to take a snapshot here---
    Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2 Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
    For the oim_server1 entry, change the entry to the following values:
    Name: WLS_OIM1
    Listen Address: the IP that is configured on eth0:1 of Linux box1
    Listen Port: 14000
    For the soa_server1 entry, change the entry to the following values:
    Name: WLS_SOA1
    Listen Address: the IP configured on eth0:2 of Linux box1
    Listen Port: 8001
    For the second OIM Server, click Add and supply the following information:
    Name: WLS_OIM2
    Listen Address: the IP configured on eth0:1 of Linux box2
    Listen Port: 14000
    For the second SOA Server, click Add and supply the following information:
    Name: WLS_SOA2
    Listen Address: the IP configured on eth0:2 of Linux box2
    Listen Port: 8001
    Click Next.
    On Step 16 ensure you are using the UNIX tab to configure the machines, also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, the same for machine2
    please confirm you have performed 8.9.3.3.2 Update Node Manager on OIMHOST1
    if everything is ok you must be able to start the AdminServer as described in the guide.
    configure OIM: 8.9.3.4.2 Running the Oracle Identity Management Configuration Wizard; in my case I don't need LDAPsync, so I have skipped this section. If you configure OIM properly, then you must perform 8.9.3.5 Post-Configuration Steps for the Managed Servers
    restart the AdminServer, then from the WebLogic console start OIM and SOA; if the node manager is properly configured, SOA and OIM should run properly. Update the deployment mode and coherence as described in the guide and verify that OIM runs perfectly on Linux box1.
    Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes)
    Update and start NodeManager as described in the guide
    VERY IMPORTANT OBSERVATION
    the guide say:
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    JUAN OBSERVATION:
    IT IS NOT POSSIBLE TO START OR STOP THE ADMINSERVER ON HOST2, SINCE THE ADMIN SERVER WAS CONFIGURED TO LISTEN ON THE IP ADDRESS OF THE eth0 INTERFACE ON HOST1, SO IT IS NOT POSSIBLE TO RUN IT ON HOST2. I THINK AN ADDITIONAL PROCEDURE SHOULD BE FOLLOWED TO CONFIGURE THE ADMINSERVER IN HA IN AN ACTIVE-PASSIVE MODE.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 & -----NOT APPLICABLE
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. -----NOT APPLICABLE
    Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----START SOA2 FROM THE CONSOLE RUNNING ON HOST1, IT DOESN'T MATTER
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ------ START OIM2 FROM THE CONSOLE RUNNING ON HOST1
    HERE YOU MUST BE ABLE TO LOGIN TO OIM2 SERVER AS DESCRIBED IN THE GUIDE, YOU DON'T NEED TO EXECUTE config.sh SCRIPT THIS SHOULD WORK AS DESCRIBED.
    Server migration should work straightforwardly if you have configured the floating IPs as described. I have not configured the persistence yet since my customer does not have the skills to share storage.
    I hope this helps, and feel free to comment or complement.
    By the way, do you know how to set up a valid SSL certificate on Windows 2003 Server? I need it to test an Exchange 2007 system I'm trying to integrate.
    Regards
    Juan

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster
    of Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is to have my Hyper-V hosts become highly available in
    the event of a system failure. If host1 dies, the VMs should move to host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks
    were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think that my Hyper-V
    hosts are highly available yet.
    Any feedback and suggestions or recommendation is highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure with this scenario, so I would not consider the whole setup a production configuration... Also, the setup is both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster)
    and expensive (a third server + an extra Windows license). I would think twice about what you do and either deploy built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server and
    Database Mirroring, for example; by the way, what workload do you run?), or use some third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA one, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will carry a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica somehow give "high availability" to the VMs or to the Hyper-V
    hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it is worth a try.

  • SAP XI 3.0 High Availability Whitepaper

    Hi,
    is any of you aware of a whitepaper for SAP XI 3.0 High Availability?
    We are currently in the process of migrating our current SAP XI 3.0 system, which runs on Windows 2003, to an HA solution.
    But I am still unsure about all the possibilities and the pros and cons of such a scenario.
    If you have any documents/papers or similar regarding HA solutions, or experience in this area, please let me know.
    Thanks in advance!
    kind regards
    Nesimi

    hi Nesimi,
    try posting this message in the SAP XI forum:
    Process Integration (PI) & SOA Middleware

  • UOO sequencing along with WLS high availability cluster and fault tolerance

    Hi WebLogic gurus.
    My customer is currently using the following Oracle products to integrate Siebel Order Mgmt to Oracle BRM:
    * WebLogic Server 10.3.1
    * Oracle OSB 11g
    They use the path service feature of a WebLogic clustered environment.
    They have configured EAI to use the UOO (Unit of Order) WebLogic 10.3.1 feature to preserve the natural order of subsequent modifications to the same entity.
    They are going to apply UOO to a distributed queue for high availability.
    They have the following questions:
    1) When, during the processing of messages having the same UOO, the endpoint becomes unavailable and another node is available to migrate to, there is a chance that UOO messages still exist on the failed endpoint.
    2) During the migration of the initial endpoint, are these messages persisted?
    By persisted we mean that when other messages arrive with the same UOO at the migrated endpoint, the migrated resource also contains the messages that existed before the migration.
    3) During the migration of endpoints, does the client receive error messages or not?
    I've found an entry on the WLS cluster documentation regarding fault tolerance of such solution.
    Special Considerations For Targeting a Path Service
    When the path service for a cluster is targeted to a migratable target, as a best practice, the path
    service and its custom store should be the only users of that migratable target.
    When a path service is targeted to a migratable target it provides enhanced storage of message
    unit-of-order (UOO) information for JMS distributed destinations, since the UOO information
    will be based on the entire migratable target instead of being based only on the server instance
    hosting the distributed destinations member.
    Do you have any feedback to that?
    My customer is worried about losing UOO sequencing during migration of endpoints!
    best regards & thanks,
    Marco

    First, if using a distributed queue the Forward Delay attribute controls the number of seconds WebLogic JMS will wait before trying to forward the messages. By default, the value is set to −1, which means that forwarding is disabled. Setting a Forward Delay is incompatible with strictly ordered message processing, including the Unit-of-Order feature.
    When using unit-of-order with distributed destinations, you should always send the messages to the distributed destination rather than to one of its members. If you are not careful, sending messages directly to a member destination may result in messages for the same unit-of-order going to more than one member destination and cause you to lose your message ordering.
    When unit-of-order messages are processed, they will be processed in strict order. While the current unit-of-order message is being processed by a message consumer, the next message in the unit-of-order will not be delivered unless it is to the same transaction or session. If no message associated with a particular unit-of-order is processing, then a message associated with that unit-of-order may go to any session that’s consuming from the message’s destination. This guarantees that all messages will be processed one at a time and in order, and any rollback or recover will not prevent ordered processing of the messages.
    The path service uses a persistent store to save the state of which member destination a particular unit-of-order is currently using. When a Path Service receives the first message for a particular unit-of-order bound for a distributed destination, it uses the normal JMS load balancing heuristics to select which member destination will handle the unit and writes that information into its persistent store. The Path Service ensures that a new UOO, or an old UOO that has no messages currently on any destination, can be enqueued anywhere in the cluster. Adding and removing member destinations will not disrupt any existing unit-of-order because the routing decision is made dynamically and those decisions are persistent.
    If the Path Service is unavailable, any requests to create new units-of-order will throw the JMSOrderException until the Path Service is available. Information about existing units-of-order are cached in the connection factory and destination servers so the Path Service availability typically will not prevent existing unit-of-order messages from being sent or processed.
    Hope this helps.

  • SAP NetWeaver 7.30 High-Availability installation without MSCS

    Hi all,
    Do we have any option to install an SAP NetWeaver 7.30 High-Availability System on a third-party cluster solution other than MS Cluster Service?
    I tried to install the first cluster node without an MSCS configuration; however, sapinst.exe checks whether we have already set up MSCS on this node, and then the installer aborts.
    I'd like to know a way to disable this check (or a way to make the installer believe the MSCS cluster configuration was done).
    If that is not possible, I have to install MSCS first, then SAP NetWeaver, and then uninstall MSCS in favour of our cluster software;
    that makes no sense to me.
    Best regards,

    Hi Peter,
    Thanks for your quick response.
    It is disappointing to me that the HA installation is tightly coupled with MSCS.
    Do we have a way to migrate from another installation scenario to High Availability?
    For example, once we install the ASCS instance as a distributed system, could we then migrate
    it for High-Availability usage by means of profile updates and so on?
    I'm not sure how the High-Availability installation differs from the other installation
    scenarios except for the installation path and the hostname used for connections.
    Best regards,

  • Problems With the High Availability

    Hi all,
    First of all, I'm sorry for my "bad" English, but I will try to explain my problem as clearly as I can.
    I'm using VDI 3.1.1, in a "High Availability with bundled MySQL" configuration with a primary server and 2 secondary servers.
    I'm trying to test the HA of the VDI core and the failover of VirtualBox. The HA works, but I think with a timeout (about 2 minutes) that I want to reduce.
    I found some trouble with the VirtualBox failover.
    Here is more explanation:
    These tests were done in VRDP mode.
    The first test after the installation and configuration: the failover works fine after the crash of server A; the desktops migrate and restart on the other server (server B).
    The second test: I reboot server A, and I want to test again by shutting down server B (crash). The desktops migrate to server A and are in the "running" state, but SRS can't redirect the display (it loops on the authentication screen)... The VM is running and I can access it with Remote Desktop Connection.
    When I looked at the logs, I found that SRS keeps looking for the VM on the server that crashed (server B). I looked at the vda-client command for the user concerned and it shows me that the host is the server that crashed (server B) with the old port. I opened the MySQL vda database and looked at the data in the t_desktop table: the ip-or-dns and port fields were not updated when the VM migrated. I updated the two fields with an update statement with the correct host (of server A) and the new port where the VM had migrated, and the display came up in 1 second.
    So, the VM migrates well, but the data in the MySQL database is not updated.
    For the WRDP mode, everything works fine, because SRS uses the IP of the VM to redirect the display, not the host and port as in VRDP mode.
    I tested this with different configurations:
    * HA with bundled MySQL: a primary server and two secondary servers. The two secondary servers contain the VDI core and the VirtualBoxes.
    * HA with bundled MySQL: the 3 VDI cores are virtualized on one server, and two servers with only the VirtualBoxes.
    * One server with the VDI core, and two servers with only the VirtualBoxes, to test only the failover of VirtualBox.
    All of these tests gave the same results.
    My questions:
    1 - After a crash of a server and after a reboot, is there anything to do? (I checked all the vda services - database, webservices, ... - and everything is OK after the reboot.)
    2 - Is there a bug in the HA with the VRDP mode?
    3 - Has anybody had the same problems? If yes, what is the solution?
    4 - How can I reduce the timeout for the HA of the VDI core? Is there any procedure for that?
    Thank you all for your help, and again sorry for my bad English, I know I have to improve it :)

    I am using VDI version 3.2. The virtualization layer is VirtualBox.
    The setup is 1 x VDI server, 2 x VirtualBox hosts, a remote MySQL database and a shared storage backend, i.e. a NAS.
    The VirtualBox session / VM does not fail over, period!
    When I pull the plug on the 1st VirtualBox host, all the desktop sessions on the down server remain on that server
    and do not fail over to the other one.
    Seriously, has anyone managed to get the HA working? Anyone?
    see below for the cli output.
    Started 1 desktop
    root@vdihost-a:~[598]# vda provider-list-hosts virtualbox
    NAME             STATUS  ENABLED         CPU_USAGE      MEMORY_USAGE  DESKTOPS
    vdi1              OK  Enabled       0% (20 MHz)      10% (1.8 GB)         0
    vdi2               OK  Enabled      1% (310 MHz)      16% (2.6 GB)         1
    Started 2nd desktop
    root@vdihost-a:~[599]# vda provider-list-hosts virtualbox
    NAME             STATUS  ENABLED         CPU_USAGE      MEMORY_USAGE  DESKTOPS
    vdi1             OK  Enabled      1% (319 MHz)      17% (2.9 GB)         1
    vdi2               OK  Enabled      1% (327 MHz)      16% (2.6 GB)         1
    After a power cut to vdi2, the second desktop did not fail over to vdi1; the desktop count on vdi1 is still 1, instead of 2
    root@vdihost-a:~[600]# vda provider-list-hosts virtualbox
    NAME              STATUS  ENABLED         CPU_USAGE      MEMORY_USAGE  DESKTOPS
    vdi1               OK  Enabled      0% (140 MHz)      17% (2.9 GB)         1
    vdi2     Unresponsive  Enabled        0% (0 MHz)         0% (0 KB)         1
    I really want to know if anyone has managed to get this working at all.

  • Exchange server 2013 CAS server high availability

    Hi
    I have Exchange Server 2010 SP3 (2 MBX, 2 HUB/CAS) servers.
    Planning to migrate to Exchange Server 2013 (2 CAS servers and 2 MBX servers).
    I don't want to put all traffic on a single server, so I am keeping the roles separate.
    In Exchange 2010 I achieved HUB/CAS high availability through NLB.
    In Exchange 2013, how do I achieve this?
    Please share your suggestions, with documentation if possible.

    Here ya go:
    http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
    Load balancing
    and
    http://technet.microsoft.com/en-us/office/dn756394
    Even though it says 2010, it applies to 2013 vendors as well.
    Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • Tuxedo and High Availability

    Can you provide some information on how Tuxedo can be configured in a high-availability
    environment? Specifically, running Tuxedo 7.1 on AIX 4.3 with HACMP/ES. I am planning
    on running with a 'cascading N+1' configuration and have concerns over the ability
    of the standby node to take over from a failed node successfully, due to configuration dependencies
    on the machine name. Is there a white paper detailing the use of Tuxedo in a high-availability
    environment?

    Found the answers and thought I would share them.
    1. Can load balancing be achieved in an MP setup, or is this only a high-availability configuration?
    Both - MP supports load balancing and high availability.
    2. In an MP setup, can a workstation client continue to work even after the master node gets migrated? If so, can we have both (or all) nodes and their WSLs listed in WSNADDR for this to happen?
    Correct.
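    For illustration, a workstation client can list the WSL addresses of several nodes in WSNADDR (the addresses below are placeholders); the client tries them in the order given until one of them answers:
    WSNADDR=//nodeA:7200,//nodeB:7200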

  • WL9/bpel10.1.3.4 MLR10  high availability setup missing internal queues.

    Hi,
    Noticed that WebLogic HA (High Availability) cluster setups have a bug in the setup of the JMS SOA module.
    Currently 2 mandatory (distributed) queues and the corresponding factories are missing, e.g. BPELInvokerQueue and BPELWorkerQueue.
    These queues basically guarantee no message loss during, for example, http/SOAP communications, like in our case BPEL <-> OSB.
    On non-HA setups, and probably older versions of BPEL, the porting to WL9.2 goes more smoothly as they do contain these queues.
    Does anyone know whether adding them manually resolves the problem? I am raising a service request on OTN to confirm this.
    Thanks and Regards
    Jirka


  • VMs High availability in cloud

    If a cloud is created with a SAN storage pool, Hyper-V hosts (not clustered) and network fabrics, will the VMs migrate to another Hyper-V host in case one of the Hyper-V hosts fails? By cloud characteristics it should.
    Please help clarify.

    A cloud in VMM is an abstraction of the underlying resources; it is not a technical feature on its own but depends on the underlying configuration in the fabric.
    A cloud can contain several host groups, and each host group can contain a mix of both stand-alone hosts and clusters. In order to have VMs on a cluster, they must be marked as highly available. This can be set on the template that
    the VM deployment will be based on, or you can force the cloud to deploy them to a cluster.
    No matter what you are trying to achieve, high availability for Hyper-V requires the virtualization hosts to be members of a Windows Failover Cluster.
    And regarding placement: when you deploy to a cloud in VMM, you don't specify a host. Intelligent placement (in VMM) will determine the best-suited host for the configured virtual machine. If you deploy to a host
    group, then you are able to pick the desired host - although intelligent placement will give the recommended host in the wizard.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • High Available virtual machines-SAN storage availability

    Hi,
    Considering that we have the following scenario:
    1) Highly available virtual machines
    2) Storage presented through a virtual SAN switch connection.
    The question that I have for you is the following:
    How will the SAN storage be available to the virtual machines in:
    a) Live-migration scenarios?
    b) Physical server failure?
    Thank you.

    Hi,
    a) From TechNet:
    "You must have access to any virtual SANs that are being used by the virtual machine. In addition, the virtual SAN connectivity must have the same number of ports on the SAN to expose the LUNs."
    http://technet.microsoft.com/en-us/library/dn551169.aspx#BKMK_Step1
    b) A classical failover will occur and all cluster resources will be moved to another cluster node, depending on your configured failover cluster rules
    regards Marc Grote aka Jens Baier - www.it-training-grote.de - www.forefront-tmg.de - www.galileocomputing.de/3570
