Highly Available Cluster-wide IP
We have the following situation:
1. We have a resource group hosting Samba.
2. We have another resource group hosting a Java server.
We need to make both these resource groups dependent on a common logical hostname.
Do we create a separate resource group with a logical hostname resource? In that case, even if the adapter hosting the logical hostname goes down, the resource group is not switched over, since there is no probe functionality for LogicalHostname.
How do we go about doing this?
Hi,
from your question I conclude that both services always have to run on the same node, as they have to run where the "common" logical IP is running. How about putting both services into a single resource group? This seems to be the easiest solution.
In most failure scenarios of logical IP addresses, a failover should be initiated. I must admit that I have never tested a RG which consisted only of a logical IP address.
Regards
Hartmut
Similar Messages
-
UOO sequencing along with WLS high availability cluster and fault tolerance
Hi WebLogic gurus.
My customer is currently using the following Oracle products to integrate Siebel Order Mgmt to Oracle BRM:
* WebLogic Server 10.3.1
* Oracle OSB 11g
They use path service feature of a WebLogic clustered environment.
They have configured EAI to use the UOO (Unit of Order) WebLogic 10.3.1 feature to preserve the natural order of subsequent modifications to the same entity.
They are going to apply UOO to a distributed queue for high availability.
They have the following questions:
1) When, during the processing of messages having the same UOO, the endpoint becomes unavailable and another node is available to migrate to, there is a chance that UOO messages still exist on the failed endpoint.
2) During the migration of the initial endpoint, are these messages persisted?
By persisted we mean: when other messages arrive with the same UOO at the migrated endpoint, does the migrated resource also contain the messages that existed before the migration?
3) During the migration of endpoints is the client receiving error messages or not?
I've found an entry in the WLS cluster documentation regarding the fault tolerance of such a solution.
Special Considerations For Targeting a Path Service
When the path service for a cluster is targeted to a migratable target, as a best practice, the path
service and its custom store should be the only users of that migratable target.
When a path service is targeted to a migratable target, it provides enhanced storage of message unit-of-order (UOO) information for JMS distributed destinations, since the UOO information will be based on the entire migratable target instead of being based only on the server instance hosting the distributed destination's member.
Do you have any feedback to that?
My customer is worried about losing UOO sequencing during migration of endpoints!
best regards & thanks,
Marco
First, if using a distributed queue, the Forward Delay attribute controls the number of seconds WebLogic JMS will wait before trying to forward the messages. By default, the value is set to −1, which means that forwarding is disabled. Setting a Forward Delay is incompatible with strictly ordered message processing, including the Unit-of-Order feature.
When using unit-of-order with distributed destinations, you should always send the messages to the distributed destination rather than to one of its members. If you are not careful, sending messages directly to a member destination may result in messages for the same unit-of-order going to more than one member destination and cause you to lose your message ordering.
When unit-of-order messages are processed, they will be processed in strict order. While the current unit-of-order message is being processed by a message consumer, the next message in the unit-of-order will not be delivered unless it is to the same transaction or session. If no message associated with a particular unit-of-order is processing, then a message associated with that unit-of-order may go to any session that’s consuming from the message’s destination. This guarantees that all messages will be processed one at a time and in order, and any rollback or recover will not prevent ordered processing of the messages.
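The delivery rule described above - at most one in-flight message per unit-of-order, with different units processed independently - can be sketched outside WebLogic as a toy dispatcher. The class and method names here are invented for the illustration; this is not WebLogic API:

```python
from collections import defaultdict, deque

class UnitOfOrderDispatcher:
    """Toy model of per-unit-of-order delivery: within one unit, the next
    message is not handed out until the current one is acknowledged;
    messages belonging to different units may be processed concurrently."""

    def __init__(self):
        self.pending = defaultdict(deque)   # uoo -> queued messages
        self.in_flight = set()              # units with an unacknowledged message

    def enqueue(self, uoo, message):
        self.pending[uoo].append(message)

    def next_deliverable(self):
        """Return (uoo, message) pairs that may be delivered right now."""
        return [(uoo, q[0]) for uoo, q in self.pending.items()
                if q and uoo not in self.in_flight]

    def deliver(self, uoo):
        self.in_flight.add(uoo)
        return self.pending[uoo].popleft()

    def acknowledge(self, uoo):
        self.in_flight.discard(uoo)

d = UnitOfOrderDispatcher()
d.enqueue("order-1", "m1")
d.enqueue("order-1", "m2")
d.enqueue("order-2", "x1")

# Both units have a deliverable head message.
print(sorted(u for u, _ in d.next_deliverable()))  # ['order-1', 'order-2']

d.deliver("order-1")
# While m1 is unacknowledged, order-1 offers nothing more; order-2 is unaffected.
print(sorted(u for u, _ in d.next_deliverable()))  # ['order-2']

d.acknowledge("order-1")
print(sorted(u for u, _ in d.next_deliverable()))  # ['order-1', 'order-2']
```

This is just the ordering invariant; the real server additionally ties delivery to the consuming session or transaction, as the text notes.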
The path service uses a persistent store to save the state of which member destination a particular unit-of-order is currently using. When a Path Service receives the first message for a particular unit-of-order bound for a distributed destination, it uses the normal JMS load balancing heuristics to select which member destination will handle the unit and writes that information into its persistent store. The Path Service ensures that a new UOO, or an old UOO that has no messages currently on any destination, can be enqueued anywhere in the cluster. Adding and removing member destinations will not disrupt any existing unit-of-order because the routing decision is made dynamically and those decisions are persistent.
If the Path Service is unavailable, any request to create a new unit-of-order will throw a JMSOrderException until the Path Service is available again. Information about existing units-of-order is cached in the connection factory and destination servers, so Path Service availability typically will not prevent existing unit-of-order messages from being sent or processed.
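The routing behavior just described - the first message of a unit picks a member via load balancing, the decision is persisted, and a drained unit may later be routed anywhere - can be modeled with a small sketch. The names are invented, and `itertools.cycle` merely stands in for the real JMS load-balancing heuristics:

```python
import itertools

class PathService:
    """Toy model of the path service routing described above."""

    def __init__(self, members):
        self._balancer = itertools.cycle(members)  # stand-in for JMS load balancing
        self._routes = {}  # uoo -> member destination (the "persistent store")

    def send(self, uoo):
        """Return the member destination for this unit-of-order,
        choosing and persisting one on the first message."""
        if uoo not in self._routes:
            self._routes[uoo] = next(self._balancer)
        return self._routes[uoo]

    def unit_drained(self, uoo):
        """Called once no messages for this unit remain on any
        destination; the unit may then be routed anywhere again."""
        self._routes.pop(uoo, None)

ps = PathService(["member-1", "member-2"])
first = ps.send("order-A")          # load-balanced choice, now persisted
assert ps.send("order-A") == first  # every later message follows it
ps.send("order-B")                  # other units balance independently
ps.unit_drained("order-A")          # a drained unit may be re-balanced
```

Because the routing table, not the member list, carries the decisions, adding or removing members leaves existing units undisturbed, which mirrors the documentation's claim.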
Hope this helps. -
Two SAP Instances in High Availability cluster - SGeSAP - HPUX
Dear All
We want to install 2 SAP instances on a single host, with DB and CI separate. DB and CI will be on a High Availability cluster using SGeSAP for HP-UX. The database is Oracle 10g.
Does SAP support multiple instances on the same hosts (DB and CI) for the HA option?
Kindly inform
Regards
Lakshmi
Yes, it is possible to run two SAP systems on the same cluster using SGeSAP. Normally, if there is only one system on the cluster, the DB is configured on one node and the CI on the second node, with failover in case of a node failure. In your case, if a node fails, one node will be running two DB and two CI instances. You have to size the hardware accordingly. Just FYI, SGeSAP is licensed per SAP system.
-
PI 7.1 High Availability Cluster issue
Dear All,
in our PI 7.1 system we have high availability. We have 2 app servers, APP1 and APP2, and 2 servers, CI1 and CI2, for the central instance. We are using a virtual host as CI everywhere, so CI can be CI1 or CI2 at any instant.
Two days back, during the night, something strange happened for 10 minutes. Before and after this time period everything was okay. During this time period, I saw messages erroring in PI in the SXMB_MONI transaction with an HTTP 503 Service Not Available message and an HTML error saying "ICM started but no server with HTTP connected". After this time period, these error messages were restarted automatically by the RSXMB_RESTART_MESSAGE report, which is scheduled in the background.
I want to analyze what happened during this time period.
From the above HTML error, I concluded that the J2EE engine was down.
From the ICM trace I found that a server node with some number had been dropped from the load-balancing list and its HTTP service was stopped; later this server node was added back to the load-balancing list, its HTTP service was started, and then messages started getting processed.
I think this server node is linked to the CI1 instance.
So my questions:
1. Why was CI1 dropped from the load-balancing list? What happened at that point in time, and how can I analyze it?
2. Even if CI1 was not working, why did the system not switch to CI2 immediately?
3. How did CI1 automatically get added back to the load-balancing list?
Please help me to analyze this situation.
Hi misecmisc
please go through these links; I hope they will help you.
https://wiki.sdn.sap.com/wiki/display/JSTSG/%28JSTSG%29%28TG%29Q2065
https://www.sdn.sap.com/irj/scn/advancedsearch?query=http503Servicenotavailable+
Regards
Bandla -
Unable to connect copied vhd files to VM Hyper-v high availability cluster.
Hi
I have my VMs' VHD files stored on my SAN, and I have copied one of the LUNs containing a VHD file, which I have then exported to the shared storage of the Hyper-V cluster, just like any other VHD. The problem comes when trying to connect this copied VHD to the VM, which still uses the original VHD. The system refuses, reporting that it cannot add the same disk twice.
These are two different VHD files.
I have: renamed the new file, changed the disk ID, changed the volume ID, and, using VBox (because it was the only way I could find to do it), changed the UUID of the VHD file. But still no joy.
I can connect the volume to any other VM without any issues, but I need to be able to connect it to the same VM.
I'm not sure how Hyper-V is comparing the two VHD files, but I need to know so I can connect the copied VHD without having to copy all its contents to a new VHD, which is impractical.
Anyone got any ideas or a procedure??
windows 2008 R2 cluster with 3par SAN as storage.
Peter
Hi Peter,
Please try the following:
1. Create a folder on the LUN, then copy the problematic VHD into it and attach the copied VHD to the original VM.
2. Copy the problematic VHD file to another volume, then attach it to the original VM again.
Best Regards
Elton Ji
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
We have a current HA cluster at center1 which is mirrored to another HA cluster in center2. We have several instances already installed and working which are using one NIC for data and replication. We want to prevent mirror failovers by
configuring a NIC on a replication network which has no DNS server. What are the steps to configure the current SQL instances to use this dedicated NIC for mirror replication?
Hi dskim111,
You can refer the following step by step article to create the dedicated mirroring NIC,
Step by Step Guide to Setup a Dedicated SQL Database Mirroring(DBM on dedicated Nic card)
http://blogs.msdn.com/b/sqlserverfaq/archive/2010/03/31/step-by-step-guide-to-setup-a-dedicated-sql-database-mirroring-dbm-on-dedicated-nic-card.aspx?Redirected=true
I’m glad to be of help to you!
-
Move SAP ECC standalone system to High Availability cluster environment.
Hi,
I have a requirement to move an SAP ECC standalone system to a cluster environment.
OS - MS server 2003
DB - Oracle 10
What steps need to be followed to perform this activity?
What downtime is required?
Please Help.
Regards
Amit
Hi Amit,
1. You have to perform the SAP homogeneous system copy method from source to target.
Kindly refer to the system copy guide:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00f3087e-2620-2b10-d58a-c50b66c1578e?QuickLink=index&…
Refer to SAP Note 112266 (FAQ).
2. Before that, check in the PAM whether your present version supports Windows 2008 / Oracle 11g:
refer to the SAP link http://service.sap.com/PAM
3. Overall downtime may be around 20 hours.
BR
SS -
Install ERP ECC 6.0 EHP4 with MSCS High Availability??
Dear Expert:
I need to install ERP ECC 6.0 EHP4 with an MSCS High Availability cluster.
Apart from the installation guide, is there any step-by-step document or guide that could be referenced?
Please give me the link or any info.
And... I'm not quite sure what to do in the guide section below:
7.2.2 Distribution of SAP System Components to Disks for MSCS
When planning the MSCS installation, keep in mind that the cluster hardware has two different sets of disks:
- Local disks that are connected directly to the MSCS nodes
- Shared disks that can be accessed by all MSCS nodes via a shared interconnect
NOTE
Shared disk is a synonym for the MSCS resource of resource type Physical Disk.
You need to install the SAP system components in both the following ways:
- Separately on all MSCS nodes, to use the local storage on each node
- On the shared storage used in common by all MSCS nodes
thank you very much for your help
regards
jack lee
You can find installation documents at [http://service.sap.com/instguidesnw70]
-
Hi,
I noticed that WebLogic HA (High Availability) cluster setups have a bug in setting up the JMS SOA module.
Currently 2 mandatory (distributed) queues and their corresponding factories are missing, e.g. BPELInvokerQueue and BPELWorkerQueue.
These queues basically guarantee no message loss during, for example, HTTP/SOAP communications, like in our case BPEL <-> OSB.
On non-HA setups, and probably older versions of BPEL, the porting to WL 9.2 goes more smoothly, as they do contain these queues.
Does anyone know whether adding them manually resolves the problem? I am putting a service request on OTN to support this.
Thanks and Regards
Jirka
Hi,
Okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better, and less expensive.
A CAS array is only an Active Directory object, nothing more. The load balancer controls on which CAS the user's session will terminate. You can read more at
http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also, there is no session affinity required.
First, build the complete Exchange 2013 architecture. High availability for your data is a DAG and for your CAS you use a load balancer.
On Channel 9 there is a lot of material from MEC:
http://channel9.msdn.com/search?term=exchange+2013
Migration:
http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
Additional information:
http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
Hope this helps :-) -
Cluster install, high availability
Hi All,
Does anyone have a guide, or can anyone detail the steps (differences from a single-server install) to install BPC 7.0 Microsoft version in a cluster environment (High Availability)?
Cheers
Hello Jitendra,
SQL should have its own cluster group. So should DTC and the shared storage.
The name of the cluster that Sorin is referring to is the virtual name associated with the clustered resource, so in this case the DNS name of the SQL cluster group. The idea is that BPC does not care that it is running on a cluster; all it wants is a SQL server. The clustering aspect is transparent to BPC and is left to MS Clustering. The same applies if you have a specific group for SSAS: you give that group's DNS name when installing BPC.
The WEB/App servers components should not be installed on the cluster, but rather on NLB nodes.
Reporting services can be installed either on the cluster or on the web/app server, but keep in mind that most companies don't like having a web server on database servers, so we usually install SSRS on the web/app servers. The SSRS database repository would be located on the SQL cluster in either case.
In the case of NLB being used for the web/app servers, the file share should be located on a shared resource, it cannot be local on the web/app servers. It can be located on the cluster, or any dependable network share that uses the same AD security as the servers.
Note that for NLB we usually recommend using a hardware solution as it is faster than using the software solution that is part of Windows, and does not take resources away from the web/app server.
I hope this helps
Best regards,
Bruno Ranchy -
2xC350 in High Availability Mode (Cluster Mode)
Hello all,
first of all, I'm a newbie with IronPort, so sorry for my basic questions, but I can't find anything in the manuals.
I want to configure the two boxes in High Availability Mode (Cluster Mode), but I don't understand the IronPort cluster architecture.
1) In machine mode I can configure IP addresses -> OK
2) In cluster mode I can configure listeners and bind them to an IP address -> OK
But how does the HA work?
A) Should i configure on both boxes the same IP to use one MX Record? And if one box is down the other takes over?
B) Or should i configure different IPs and configure two MX Records?
And if one box is down the second MX will be used.
Thanks in advance
Michael
The IronPort clustering is for policy distribution only, not for SMTP load management.
A) Should i configure on both boxes the same IP to use one MX Record? And if one box is down the other takes over?
You could, using NAT on the firewall, but few large businesses take this approach today.
Many/most large businesses use a hardware load balancer like an F5, Foundry ServerIron, etc. The appliances themselves would be set up on separate IP addresses. Depending on the implementation requirements, the internal IP address could be a public IP or a private IP.
B) Or should i configure different IPs and configure two MX Records?
And if one box is down the second MX will be used.
If you set up two boxes, even with different MX preferences, mail will be delivered to both MX records. There are broken SMTP implementations that get the priority backwards, and many spammers will intentionally attempt to exploit less-restrictive accept rules on secondary MX receivers and will send to them first. -
High available address with zone cluster
Hi,
I ran through the Sun documentation and didn't fully understand how I should ensure the high availability of an IP address "inside" a Solaris zone cluster.
I installed a new zone cluster with the following configuration:
zonename: proxy
zonepath: /proxy_zone
autoboot: true
brand: cluster
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
enable_priv_net: true
net:
address: 193.219.80.85
physical: auto
sysid:
root_password: ********************
name_service: NONE
nfs4_domain: dynamic
security_policy: NONE
system_locale: C
terminal: ansi
node:
physical-host: cluster1
hostname: proxy_cl1
net:
address: 193.219.80.92/27
physical: vnet0
defrouter: 193.219.80.65
node:
physical-host: cluster2
hostname: proxy_cl2
net:
address: 193.219.80.94/27
physical: vnet0
defrouter: 193.219.80.65
clzc:proxy>
clzc:proxy>
After installation, I tried to configure a new resource group with a LogicalHostname resource in it inside the zone cluster:
/usr/cluster/bin/clresourcegroup create -n proxy_cl1,proxy_cl2 sharedip
and got the following error:
clresourcegroup: (C145848) proxy_cl1: Invalid node
Is there any other way to make an IP address inside the "proxy" zone cluster highly available?
Thanks.
I have rapid spanning tree enabled on both switches.
The problem is that I have to disable spanning tree on the link connecting the two switches together. If not, the inter-switch link will be blocked the moment I fail over the network bond, because it probably thinks there is a redundant path. Is there some other way to prevent the inter-switch link from blocking?
If not, how can I disable spanning tree on the aggregated link? So far I have only managed to do this on a normal link, but cannot do it on an aggregated link. -
High availability without RAC DB on Windows cluster and multiple DB instances
Is it possible to achieve high availability with 2 Oracle DBs on Windows 2003 cluster servers and a shared SAN server, without installing and configuring RAC and ASM?
Can we use Veritas, Symantec, or any other tool to get it done?
What are the options available to achieve this?
Appreciate a response.
Thanks
Noor
Please no double postings; this will not get you answers faster...
For answer see here:
HA Oracle DB Clustering requirement without RAC DB on shared SAN Server? -
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
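The capacity trade-off being weighed here can be written out as a back-of-the-envelope calculation. The figures are the assumptions from the post (two equal hosts, full failover reserve when clustered), not measurements:

```python
# Two identical hosts; capacity normalized to 100 units per host.
hosts = 2
per_host = 100
total = hosts * per_host

# Clustered with full failover reserve: each host must be able to absorb
# the other's entire load, so only half the combined capacity is usable
# at all times.
clustered_usable = total * 0.5

# Non-clustered with SQL-level HA: all capacity is usable under normal
# conditions, and a host failure degrades service to one host's worth.
normal_usable = total
degraded_usable = per_host

assert clustered_usable == 100   # 50% all the time
assert normal_usable == 200      # 100% most of the time
assert degraded_usable == 100    # 50% only during a failure
```

In other words, both designs deliver the same capacity during a failure; they differ only in what is available the rest of the time, which is exactly the point being argued.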
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down, the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12 TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB does cache I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references, see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Storage Type
Description
Shared virtual hard disk
New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel
Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
Virtual Fibre Channel Overview.
iSCSI
The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more aimed at Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Today I had a requirement where we have to use SharePoint Foundation 2013 (free version) to build an intranet portal (basic announcements, calendar, department sites, document management - only check-in/check-out and versioning).
Please help me regarding the license and size limitations. (I know the feature comparison of Standard / Enterprise.) I just want to know about the installation process and license.
6 servers - 2 app / 2 web / 2 DB cluster (so in total 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses)
Thanks Trevor,
Does the load balancing service also come with the free license? So, in that case, I can use SharePoint Foundation 2013 for building a simple intranet & DMS (with limited functionality), and for workflow and content management we have to write code.
Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and would offer high availability for traffic bound to the SharePoint servers. WNLB can only associate with up to 4 servers.
Trevor Seward
Follow or contact me at...
  
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.