High availability architecture
Hi all,
we are planning to migrate to OBPM 11g.
Can you kindly help me clarify the following?
In OBPM 10g, we had engines and a directory schema, so if we have multiple applications we can easily separate them with separate engines.
How can we achieve this in OBPM 11g?
How do I have separate clusters for each application but common workspace? Is this possible?
How do I load balance?
Thanks,
Alex
Hi Manoj,
First of all, I hope you have this guide:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/206692e2-39e5-2b10-c49d-dce7751605d6?quicklink=index&overridelayout=true
"How To Cluster MDM 7.1 using MSCS".
And to answer your questions:
1.) To my understanding, with Microsoft Cluster we can address only hardware (physical machine) failure or operating system failure, but what will happen if the host is running fine and the MDS installed on the host fails? Do we have a mechanism to avoid downtime other than restarting the MDS?
Yes, you are right, but whenever a failover occurs in an MDM cluster environment, the MDM Server instance on the 2nd machine comes up automatically. The only issue is that the repositories get unloaded due to the failover, and that is why it looks like all work has stopped. To avoid manual intervention, there is a parameter in MDS.INI to automatically load the repositories whenever the MDM server starts after a failover. You can go through the guide, but it is not a foolproof method yet.
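As a rough sketch only, the auto-load setting lives in MDS.INI; the section and key names below are assumptions that vary by MDM version, so confirm the exact syntax against the clustering guide above:

```ini
; mds.ini - illustrative sketch only; exact section/key names differ
; between MDM versions, see the "How To Cluster MDM 7.1" guide.
[MDM Server]
; Hypothetical entry telling MDS which repository to load automatically
; whenever the server starts, e.g. after a cluster failover.
Autostart=<Repository>:<DBServer>:<DBType>:<User>:<EncryptedPassword>
```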
2.) Is the SAP MDM architecture/installation similar to other SAP system architectures? For example, will SAP MDM have a Central Instance, a Central Services installation (Message Server and Enqueue Server), a Database Instance, a Dialog Instance and additional application server instances? Or does it have only MDS, MDSS, MDIS and a database?
It has only MDS, MDSS, MDIS and a database in an active-passive cluster environment.
3.) SAP MDM forms part of the NetWeaver stack along with other applications like PI, BI, etc. Does that mean we have to install NetWeaver before we install SAP MDM, or can it be installed independently? I mean, how is SAP MDM linked to NetWeaver in the landscape?
Independent. Linking to other NetWeaver modules is a separate project/assignment in itself for the techies.
4.) What is the difference between a Switchover Cluster and a Software Cluster (redundant application server), since both of them need two or more host machines?
No idea. Most probably a Basis guy can help you more.
BR,
Alok Sharma
Similar Messages
-
High Availability architecture design know-how in 9.3.1
Guys,
Is there any toolkit or know-how on how to design a high availability architecture (clustering) for System 9.3.1, and particularly HFM?
I believe it could be quite different from version 11.
Any input most appreciated
Regards,
Rafal
Hi,
Here is how to create users in Shared Services :- http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_cas_help/crtuser.htm
The rest of the information for Shared Services is in :- http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_cas_help/frameset.htm?apc.html
Cheers
John
http://john-goodwin.blogspot.com/ -
High Availability of BPEL System
We are having a High Availability architecture configuration for the BPEL System in our production environment.
The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
We have 5 BPEL processes which are getting invoked within each other. For eg:
BPELProcess1 --> BPELProcess2 --> BPELProcess3, BPELProcess4 &
BPELProcess4 --> BPELProcess5
Now, when all the above BPEL processes are deployed on both nodes of the BPEL server, how do we handle the endpoint URLs of these BPEL servers?
Should we hardcode the endpoint URL in the invoking BPEL process, or should we replace the IP address of the two nodes of the BPEL server with the IP address of the load balancer?
If we replace the IP address of the BPEL server with the IP address of the load balancer, it will require us to modify, redeploy and retest all the BPEL processes again.
Please advise
Thanks
The BPEL servers are configured in an active-active topology and RAC is used in the database tier of the architecture.
The BPEL servers are not clustered. A load balancer is used in front of the two nodes of the BPEL servers. -
I would like to understand whether shared storage is required for all application servers that will run WebLogic + Oracle Identity Manager. I've been using the following Oracle guides: http://docs.oracle.com/cd/E40329_01/doc.1112/e28391/iam.htm#BABEJGID and http://www.oracle.com/technetwork/database/availability/maa-deployment-blueprint-1735105.pdf, and from my interpretation both talk about configuring all the application servers with access to shared storage. From an architecture standpoint, does this mean all the application servers need access to an external disk? If shared storage is required, what are the steps to implement it?
Thanks,
user12107187
You can do it and it will work. But Fusion Middleware products have an EDG (Enterprise Deployment Guide) to provide guidelines to help you implement high availability.
Having shared storage will help you recover from a storage failure; otherwise this might be a point of failure.
Enterprise Deployment Overview
"An Oracle Fusion Middleware enterprise deployment:
Considers various business service level agreements (SLA) to make high-availability best practices as widely applicable as possible
Leverages database grid servers and storage grid with low-cost storage to provide highly resilient, lower cost infrastructure
Uses results from extensive performance impact studies for different configurations to ensure that the high-availability architecture is optimally configured to perform and scale to business needs
Enables control over the length of time to recover from an outage and the amount of acceptable data loss from a natural disaster
Evolves with each Oracle version and is completely independent of hardware and operating system "
Best Regards
Luz -
Hi all,
I am currently working on adapting our software to a high-availability architecture using Tuxedo and I have run into
questions to which I cannot find satisfying answers in the
documentation. Can someone help?
To ensure high-availability, we use two equivalent machines, a
master and a backup, each having sufficient capacity to handle
all the load of the system. The bunch of services handling the
operations of the software can therefore run on either or both
machines. However, a specific message processing operation
requires the participation of 3 different servers which, for a
given message, must all be from the same machine (they must use the
same network connection on which ACKs are sent).
The initial idea was to make all servers run on the master
machine with nothing on the backup. Only in the case of failure
would the servers be transferred to the backup machine, using
group or machine migration. Unfortunately, there is some
information I don’t seem to find in the doc ...
1) So far, everything I read on migration indicates that it can
only be performed manually. Is there any way to have Tuxedo do
it automatically? Under which conditions?
2) What happens if one server in the group crashes? From what I
understand, it can be restarted up to MAXGEN time within the
GRACE period, but what then? What if it crashes due to a lack of
resources (memory, hard drive), will it be migrated
automatically to the backup machine? And will the other servers
in the group follow?
3) If, instead of using migration, I decide to have all services
running on both machines; how do I prevent a message from
transiting from one machine to another as it is forwarded to the
different servers in the chain?
Anyone had similar problems? Thanks for the answers.
Marcin Kasperski wrote:
To ensure high-availability, we use two equivalent machines, a
master and a backup, each having sufficient capacity to handle
all the load of the system. The bunch of services handling the
operations of the software can therefore run on either or both
machines. However, a specific message processing operation
requires the participation of 3 different servers which, for a
given message, must all be from the same machine (they must use the
same network connection on which ACKs are sent).
If I were implementing this, I would just advertise services which embed
machine information in their name. Then:
- the client would call - say - 'mainsvc' - which would be running on
both machines
- mainsvc would determine the machine it runs on and forward the call to
- say 'mainsvc-machine1'
- the rest of the calls would be directed to 'svc2-machine1' and
'svc3-machine1' (the reply from the first call would contain the info
what to call next)
This way, the first call would find a working machine (using the only
one available if the second machine crashed, or one of the two via the
round-robin method when both work), and the rest would be directed to the
machine you need.
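The dispatch idea above can be sketched in a few lines (a sketch only: names like `mainsvc` and the machine list are illustrative placeholders, not Tuxedo API calls). The first call round-robins across live machines, and its reply pins every later call in the chain to the machine it picked:

```python
import itertools

# Machines that would advertise the generic 'mainsvc' entry service.
MACHINES = ["machine1", "machine2"]
_rr = itertools.cycle(MACHINES)

def call_mainsvc(alive):
    """First call in the chain: round-robin across machines, skipping dead ones."""
    for _ in range(len(MACHINES)):
        m = next(_rr)
        if m in alive:
            # The reply tells the client which machine-qualified service
            # names to use for the rest of the chain.
            return {"machine": m, "next": [f"svc2-{m}", f"svc3-{m}"]}
    raise RuntimeError("no machine available")

# Both machines up: the first call lands on one of them...
first = call_mainsvc(alive={"machine1", "machine2"})
# ...and all follow-up services are pinned to that same machine.
assert all(s.endswith(first["machine"]) for s in first["next"])

# machine1 down: the round-robin only ever lands on machine2.
pinned = call_mainsvc(alive={"machine2"})
assert pinned["machine"] == "machine2"
```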
Instead of advertising machine-specific service names, you can also take
a look at data dependent routing.
Sorry Marcin, but you have just bound your system to specific machines and
will limit scaling. That's a bad thing. A simpler solution is to use Data
Dependent Routing (DDR). This is probably the single most powerful concept
in Tuxedo, and for some reason people rarely exploit it.
By simply having some kind of machine identifier in the request buffer
either native (part of the natural application data) or artificial (a field
added just for doing DDR routing) you have something like a Zip Code (for US
folk) or Postal Code (for the international crowd). Tuxedo analyzes this
data and based on rules in the UBBCONFIG, routes the message to the correct
system. So if the routing field says MACH1, it goes to SVC1 on machine1.
If the routing field says MACH2, it goes to SVC1 on machine3, etc. etc. I
could have 10 machines, and all 10 would be symmetrical in capability, and I
could route to each accordingly.
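A sketch of what those DDR rules might look like in the UBBCONFIG (the field, group, and criteria names here are made-up illustrations; check the UBBCONFIG(5) reference for the exact syntax in your release):

```
*ROUTING
SITEROUTE  FIELD=SITE_ID
           BUFTYPE="FML"
           RANGES="'MACH1':GROUP1,'MACH2':GROUP2,*:*"

*SERVICES
SVC1       ROUTING=SITEROUTE
```

With this in place, a request whose SITE_ID field says MACH1 is routed to SVC1 in GROUP1, MACH2 goes to GROUP2, and anything else falls through the wildcard range.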
Part of what makes Tuxedo so neat to work with, is that you can create a
Logical design, develop it, and then deploy it in any number of different
Physical designs. And if you do everything right, the Logical design
remains intact. For example, there is a drug store chain here in the US
that has over 4000 Tuxedo domains, all working together!! Then they have a
central site that mirrors all 4000 individual databases. Get your
prescription filled at one store, and then go to another, and that second
store can access your records from the central site. DDR ensures that all
messages get routed to the correct location. (I hear there is a brokerage
firm with 10,000 domains!!!)
Check out DDR, it will make your day.
Brian Douglass
Transaction Processing Solutions, Inc.
8555 W. Sahara
Suite 112
Las Vegas, NV 89117
Voice: 702-254-5485
Fax: 702-254-9449
e-mail: [email protected] -
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you have really decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as its back end (non-clustered), so read "write-back flash cache
for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.

Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.

Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.

iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
Product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more playing for Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Hi,
I currently have a single Exchange 2010 Server that has all the roles supporting about 500 users. I plan to upgrade to 2013 and move to a four server HA Exchange setup (a CAS array with 2 Server as CAS servers and one DAG with 2 mailbox Servers). My
goal is to plan out the transition in steps with no downtime. Email is most critical with my company.
Exchange 2010 is running SP3 on a Windows Server 2010 and a Separate Server as archive. In the new setup, rather than having a separate server for archiving, I am just going to put that on a separate partition.
Here is what I have planned so far.
1. Build out four Servers. 2 CAS and 2 Mailbox Servers. Mailbox Servers have 4 partitions each. One for OS. Second for DB. Third for Logs and Fourth for Archives.
2. Prepare AD for exchange 2013.
3. Install Exchange roles. CAS on two servers and mailbox on 2 servers. Add a DAG. Someone had suggested to me to use an odd number so 3 or 5. Is that a requirement?
4. I am using a third party load balancer for CAS array instead of NLB so I will be setting up that.
5. Do post install to ready up the new CAS. While doing this, can i use the same parameters as assigned on exchange 2010 like can i use the webmail URL for outlook anywhere, OAB etc.
6. Once this is done. I plan to move a few mailboxes as test to the new mailbox servers or DAG.
7. Testing outlook setups on new servers. inbound and outbound email tests.
once this is done, I can migrate over and point all my MX records to the new servers.
Please let me know your thoughts and what am I missing. I like to solidify a flowchart of all steps that I need to do before I start the migration.
Thank you for your help in advance.
Hi,
okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better and less expensive.
CAS-Array is only an Active Directory object, nothing more. The load balancer controls the sessions on which CAS the user will terminate. You can read more at
http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also there is no session affinity required.
First, build the complete Exchange 2013 architecture. High availability for your data is a DAG and for your CAS you use a load balancer.
On Channel 9 there is a lot of material from MEC:
http://channel9.msdn.com/search?term=exchange+2013
Migration:
http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
Additional information:
http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
Hope this helps :-) -
Windows 2012 RDS - Session Host servers High Availability
Hello Windows/Terminal server Champs,
I am in the middle of implementing an RDS environment for one of my customers. Hope you can help me out.
My customer has asked for HA for the RDS session hosts where applications are published, and I have prepared the plan below from a server point of view:
2 session host servers, 1 web access server, 1 license/connection broker & 1 gateway (DMZ).
In the first phase, we plan to target internal users who connect to the session host HA pair; these 2 servers will have the applications installed and internal users will use RDP to access them.
In the second phase we will deal with external parties connecting from an external network, where we plan to integrate with NetIQ => gateway => web access/session host.
I have successfully installed and configured 2 session hosts, 1 license/broker, 1 web access server & 1 gateway. But my main concern is to make the session hosts highly available, as they host the applications and most of the internal users are going to use them. To configure this I am following http://technet.microsoft.com/en-us/library/cc753891.aspx
However, most of the architecture has changed in RDS 2012. Can you please help me set up session host HA?
Note: we can have only 1 connection broker/licensing server, 1 web access server & 1 gateway server; we cannot add more servers due to the cost factor.
Thanks in advance.
Yes, absolutely no problem in just using one connection broker in your environment as long as your customer understands the SPOF.
The session hosts, however, aren't really what you would class as HA, but to set them up so you have redundancy you would use either Windows NLB, an external NLB device or Windows DNS round robin. My preferred option when using the connection broker is DNS round robin, where you give each server in the farm the same farm-name DNS entry; the connection broker then decides which server to allocate the session to.
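As an illustrative sketch (the zone records and addresses below are made up), DNS round robin for a session host farm is simply the same farm name mapped to each host's A record, with the DNS server rotating the order of answers between queries:

```
; farm name "rdsfarm" resolves to both session hosts
rdsfarm   IN  A   192.168.10.11
rdsfarm   IN  A   192.168.10.12
```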
You must ensure your session host servers are identical in terms of software though - same software installed in the same paths on all the session host servers.
If you use the 2012 deployment wizard through the Server Manager roles, the majority of the config is done for you.
Regards,
Denis Cooper
MCITP EA - MCT
Help keep the forums tidy, if this has helped please mark it as an answer
My Blog
LinkedIn: -
Multi Site SQL High Availability group
Hi,
We have our primary Data Centre where we have 2 MS SQL 2012 Enterprise Servers and 2 SharePoint 2013 Enterprise servers.
We have SQL High Availability group implemented on our primary Site (Site A)
Site A has subnet of 192.168.0.0.
We recently added a new DR site (Site B). At Site B we have MS SQL 2012 Enterprise Servers. Site B has a subnet of 172.1.1.0.
Both sites are connected via a VPN tunnel. The MS SQL 2012 Enterprise Server at Site B has been added to the SQL High Availability group of Site A.
SQL High Availability group have 2 IPs 192.168.0.32 and 172.1.1.32. SQL Listener have 2 IPs 192.168.0.33 and 172.1.1.33
We want to make sure that if Site A goes completely down, then Site B works as the active site. But when Site A is down, we are unable to ping the High Availability group and Site B is unable to work as the active site. SQL and SharePoint services are completely down.
Site A has AD(Primary Domain Controller) and Site B has ADC(Additional Domain Controller).
Site A has witness server.
We are using Server 2012 Data Centre
Please suggest.
Farooq
SharePoint is not the same as any other application. The DR site has to be a completely different farm from your production. This means that the SharePoint_AdminContent and config databases on both farms are different and should not be included in the Availability
Group (they do not support asynchronous Availability Group.) Only content databases and other SharePoint databases supported in an Availability Group should be included as per this
TechNet article. Have a look at this
blog post for your networking configuration.
The reason your Windows cluster service goes down in the DR data center when your primary data center goes down is that the cluster has no majority of votes. You have 4 available votes in your cluster: 3 in the primary data center, in the form of
the 2 Availability Group replicas and the witness server, and 1 in the DR data center. If the Windows cluster does not have a majority of votes, it automatically shuts down the cluster. This is by design. That is why, when this situation happens, you need to force
the cluster to start without a quorum as described in
this article (Windows Server 2012 R2 introduced the concept of dynamic witness and dynamic quorum to address this concern.) By default, the IP address of the Availability Group listener name will be registered on all of the DNS servers if the cluster is
running Windows Server 2012 and higher (this is the RegisterAllProvidersIP property that Samir mentioned.) However, you need to flush the DNS cache on the SharePoint web and application servers and point them to the correct IP address on the DR server after
failover. This is because the default TTL value of the Availability Group listener name is 20 minutes.
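The vote arithmetic described above can be checked with a few lines (the node names are placeholders for this scenario): the cluster service stays up only while the surviving nodes hold a strict majority of the configured votes.

```python
# Votes as configured in this scenario: 2 AG replicas + witness in the
# primary site, 1 replica in the DR site.
votes = {"primary-replica1": 1, "primary-replica2": 1,
         "witness": 1, "dr-replica": 1}

def has_quorum(surviving):
    """True when surviving nodes hold a strict majority of all votes."""
    total = sum(votes.values())
    alive = sum(v for n, v in votes.items() if n in surviving)
    return alive > total // 2

# DR site down: primary keeps 3 of 4 votes, so the cluster keeps running.
assert has_quorum({"primary-replica1", "primary-replica2", "witness"})

# Primary site down: DR holds only 1 of 4 votes, so the cluster service
# shuts down and must be force-started without a quorum.
assert not has_quorum({"dr-replica"})
```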
It is important to define your DR strategies and objectives when designing the architecture to make sure that the solution will meet your goals. This will also help you automate some of the processes in the DR data center when failover or a disaster occurs.
Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
Blog |
Twitter | LinkedIn
SQL Server High Availability and Disaster Recovery Deep Dive Course -
HIGH AVAILABILITY for ORACLE E-Business Suite
Hi,
I am new to Oracle Apps administration. I would like to know one thing: I know how to maintain high availability in the database using RAC, Data Guard and other concepts, but I would like to know how to maintain high availability in Oracle E-Business Suite. Are there any clustering concepts in Apps? Because at the database level, using RAC, we can configure a number of instances; is it possible here to maintain two or more web or forms servers simultaneously using clustering concepts? Are there any related papers? Please guide me.
Thanks.
srinivas pvs
Maximum Availability Architecture (MAA) and solutions for Oracle Apps 11i can be found in the following links:
Note: 403347.1 - Maximum Availability Architecture and Oracle E-Business Suite Release 11i
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=403347.1
Note: 207159.1 - Oracle E-Business Suite Release 11i Technology Stack Documentation Roadmap
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=207159.1 -
2xC350 in High Availability Mode (Cluster Mode)
Hello all,
first of all, I'm a newbie with IronPort, so sorry for my basic questions, but I can't find anything in the manuals.
I want to configure the two boxes in High Availability Mode (Cluster Mode) but I don't understand the IronPort cluster architecture.
1) In machine mode I can configure IP addresses -> OK
2) In cluster mode I can configure listeners and bind them to an IP address -> OK
But how does the HA work?
A) Should i configure on both boxes the same IP to use one MX Record? And if one box is down the other takes over?
B) Or should i configure different IPs and configure two MX Records?
And if one box is down the second MX will be used.
Thanks in advance
Michael
The IronPort clustering is for policy distribution only, not for SMTP load management.
A) Should i configure on both boxes the same IP to use one MX Record? And if one box is down the other takes over?
Could do, using NATing on the firewall, but few large businesses take this approach today.
Many/most large businesses use a hardware load balancer like an F5, Foundry ServerIron, etc. The appliances themselves would be set up on separate IP addresses. Depending on the implementation requirements, the internal IP address could be a public IP or a private IP.
B) Or should i configure different IPs and configure two MX Records?
And if one box is down the second MX will be used.
If you set up two boxes, even with different MX preferences, mail will be delivered to both MX records. There are broken SMTP implementations that get the priority backwards, and many spammers will intentionally attempt to exploit less-restrictive accept rules on secondary MX receivers and will send to them first. -
Installing SAP ERP on DB2 with high availability
Hey Gurus,
Currently we're installing SAP ERP 6 EHP 6 based on DB2 for Unix and Windows; this is done using the high availability option on AIX 7.1 and IBM's PowerHA.
We have the following architecture:
Node A:
1 - A host for ASCS, ERS and the CI.
2 - A host for the DB instance.
The same for Node B, replacing the CI with a DI.
Please advise on how to proceed with installation.
Also, I need a bit of clarification on how to share (or export) the file systems between the hosts of the same node so that the CI and DB can be connected, since as far as I know the DB installation will ask for the profile directory during the installation, while the CI will ask to see the DB file systems as well as the DB instance.
Thanks in advance
Hi Ahmed,
For your query
Also I need a bit of clarification regarding how to share (or export) the file systems between the hosts of the same node so that the CI can and DB can be connected, since as far as I know the DB installation will ask for the profile directory through the installation, while the CI will ask to see the db file systems as well as the db instance.
Please refer to the installation guide sections related to file system planning.
There you can see the file system directory structure as well as information on which file systems should be shared.
Attached are some screenshots for reference.
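As a sketch only (the paths, SID placeholder and host name below are assumptions; the installation guide's file system planning section is authoritative), exporting the shared SAP directories from the host that owns them to the other host via NFS could look like:

```
# /etc/exports on the host owning the shared file systems
# <SID> is the system ID of your installation; nodeb-host is hypothetical
/sapmnt/<SID>        nodeb-host(rw,no_root_squash)
/usr/sap/trans       nodeb-host(rw,no_root_squash)
```

This way the CI host can see the profile directory during the DB installation, and vice versa; in a PowerHA setup the guide may instead recommend shared volume groups that fail over with the resource group.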
Hope this helps.
Regards,
Deepak Kori -
Question on replication/high availability designs
We're currently trying to work out a design for a high-availability system using Oracle 9i release 2. Having gone through some of the Oracle whitepapers, it appears that the ideal architecture involves setting up 2 RAC sites using Dataguard to synchronize the data. However, due to time and financial constraints, we are only allowed to have 2 servers for hosting the databases, which are geographically separate from each other in prevention of natural disasters. Our app servers will use JDBC pools to connect to the databases.
Our goal is to have both databases be the mirror image of each other at any given time, and the database must be working 24/7. We do have a primary and a secondary distinction between the two, so if the primary fails, we would like the secondary database to take over the tasks as needed.
The ability to query existing data is mission critical. The ability to write/update the database is less important, however we do need the secondary to be able to process data input/updates when primary is down for a prolonged period of time, and have the ability to synchronize back with the primary site when it is back up again.
My question now is which replication technology we should try to implement. I've looked into both Oracle Advanced Replication and Data Guard; each seems to have its own advantages and drawbacks:
Replication - we can easily switch between the two databases using a multimaster implementation; however, data recovery/synchronization may be difficult after a failure, and we could possibly lose data (depending on the implementation). A few posts in this forum have suggested that replication should not really be considered an option for high availability - why is that?
Data Guard - zero data loss on failover/switchover; however, manual intervention is required to initiate a failover/switchover. Once the primary site fails over to the standby, the standby becomes the primary until a DBA manually switches the roles back. In Oracle 10g Release 2, automatic failover seems to be achieved through an extra observer piece (Fast-Start Failover), but there does not seem to be any way to do this in Oracle 9i Release 2.
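For what it's worth, the manual switchover in 9i appears to go roughly like this (paraphrased from the documentation; we have not tested it ourselves):

```sql
-- On the current primary: demote it to a physical standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;

-- On the old standby: promote it to primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SHUTDOWN IMMEDIATE;
STARTUP;
```

So every role change is a DBA-driven sequence of steps, which is the manual-intervention drawback I mentioned.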
Being new to the implementation of high-availability systems, I am somewhat at a loss at this point. Both implementations seem to be possible candidates, but each requires sacrifices as well. Would anyone shed some light on this, point out my misconceptions about Advanced Replication and Data Guard, and/or suggest a better architecture/technology to use? Any input is greatly appreciated; thanks in advance.
Sincerely,
Peter Tung
Hi,
It sounds as if you're talking about the DB_TXN_NOSYNC flag, rather than DB_NOSYNC.
You mention that in general, you lose uncommitted transactions on system failure. I think what you mean is that you may lose some committed transactions on system failure. This is correct.
It is also correct that if you use replication you can arrange to have clients have a copy of all committed transactions, so that if the master fails (and enough clients do not fail, of course) then the clients still have the transaction data, even when using DB_TXN_NOSYNC.
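As a sketch of the configuration being described (this assumes the Berkeley DB replication manager C API, omits all error handling, and is not a complete program), the relevant knobs look roughly like:

```c
#include <db.h>

/* Configure an environment for DB_TXN_NOSYNC plus replication, so
 * durability comes from client copies rather than a log flush at commit.
 * Error-return checking is omitted for brevity. */
void configure_repmgr_env(DB_ENV *env)
{
    /* Don't flush the log at commit; rely on replication for durability */
    env->set_flags(env, DB_TXN_NOSYNC, 1);

    /* Wait for acks from a quorum of clients before commit returns */
    env->repmgr_set_ack_policy(env, DB_REPMGR_ACKS_QUORUM);

    /* Tell the replication manager the expected group size */
    env->rep_set_nsites(env, 2);

    /* With exactly two sites, 2SITE_STRICT governs election behavior */
    env->rep_set_config(env, DB_REPMGR_CONF_2SITE_STRICT, 1);
}
```

The ack policy is the key trade-off dial here: stricter policies narrow the window in which a master failure can lose a committed transaction, at the cost of commit latency.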
This is a very common usage scenario for Berkeley DB replication/HA, used to achieve high throughput. You will want to pay attention to the configured ack policy, group size setting, setting of the 2SITE_STRICT option (if group size == 2). -
Choosing VM as high availability for BizTalk's SQL Server databases
Hi,
I'm looking to choose the architecture for our BizTalk 2013 solution.
The application server will be built on virtual machines, but I still have questions about the SQL Server.
Is SQL Server on a VM supported for BizTalk 2013?
And in the case of a physical failure, where the VM is moved to another server, can the VM be considered a high-availability solution?
Thanks for your reply.
When the SQL VM fails over, what attributes of the server will change? If everything, including the server name, the mapped SAN locations (for databases) and the SQL Server version, remains the same, then it will behave like a temporary network outage between the front end and the SQL Server.
If, however, the failed-over VM has a different set of mapped locations for the databases, then you would need to set up BizTalk Log Shipping between SQL VM 1 and SQL VM 2. In this scenario, recovery takes time and cannot be automatic.
A word of caution, though. If you have SLAs with the customer pertaining to transactions/messages per second, you might want to evaluate dedicated SQL Server boxes. SQL Server licensing is per core, and in a virtual environment all the cores of the host have to be licensed, since virtualization does not let you bind the VM to a specific set of cores. The same applies to your BizTalk servers. This might well work out costlier than a dedicated server environment.
Regards. -
FIM installation in High Availability Mode
Experts,
I am planning to install FIM in high availability mode.
FIM Portal on four servers
FIM Service on four servers and
FIM Portal on four servers.
Is there any document that can guide me through this?
Thanks,
Mann
See these:
Preinstallation and Topology Configuration
FIM 2010 high availability
I also recommend this FIM book by David & Brad
FIM R2 Best Practices Volume 1: Introduction, Architecture And Installation Of Forefront Identity Manager 2010 R2