WL9 / BPEL 10.1.3.4 MLR10: high availability setup missing internal queues

Hi,
We noticed that WebLogic HA (High Availability) cluster setups have a bug in the JMS SOA module configuration.
Currently two mandatory distributed queues and their corresponding connection factories are missing, namely BPELInvokerQueue and BPELWorkerQueue.
These queues essentially guarantee no message loss during, for example, HTTP/SOAP communication, as in our case between BPEL and OSB.
On non-HA setups, and probably on older versions of BPEL, porting to WL9.2 goes more smoothly, as those installations do contain these queues.
Does anyone know whether creating them manually resolves the problem? I am raising a service request on OTN to get this supported.
Thanks and Regards
Jirka
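If you decide to add the queues by hand while the SR is open, a WLST (Jython) administration-script sketch along these lines could create them. Everything here is an assumption for illustration: the admin URL and credentials, the JMS module name and MBean path, and the JNDI names. In a cluster you would also need to target the queues to a subdeployment on the distributed destinations' JMS servers; compare against a working non-HA domain before running anything.

```python
# WLST sketch: create the missing uniform distributed queues and factories.
# Run with: java weblogic.WLST create_bpel_queues.py
# Admin URL, credentials, module name, and JNDI names are assumptions.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()

# Path to the JMS module's JMSResource bean (module name assumed)
jmsModule = getMBean('/JMSSystemResources/BPELJMSModule/JMSResource/BPELJMSModule')

for qname in ['BPELInvokerQueue', 'BPELWorkerQueue']:
    q = jmsModule.createUniformDistributedQueue(qname)
    q.setJNDIName('jms/bpel/' + qname)                    # JNDI name assumed
    cf = jmsModule.createConnectionFactory(qname + 'Factory')
    cf.setJNDIName('jms/bpel/' + qname + 'Factory')       # JNDI name assumed

save()
activate()
disconnect()
```

This only runs inside WLST against a live AdminServer; it is a starting point, not a verified fix.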

Hi,
Okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtualized! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better, and less expensive.
A CAS array is only an Active Directory object, nothing more. The load balancer controls on which CAS a user's session will terminate. You can read more at
http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also, no session affinity is required.
First, build the complete Exchange 2013 architecture. High availability for your data is a DAG and for your CAS you use a load balancer.
On Channel 9 there is a lot of material from MEC:
http://channel9.msdn.com/search?term=exchange+2013
Migration:
http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
Additional information:
http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
Hope this helps :-)

Similar Messages

  • Sender agreements on high availability setup

    Hi
We have XI and ERP exchanging some messages. Our ERP system has been switched to a high availability setup.
For one scenario in XI, ERP is the sender and there is ONE sender agreement that can use ONE communication channel (RFC).
In that communication channel, "RFC Server Parameter" is configured for one of the app servers. "RFC Metadata Repository Parameters" are set to point to the load balancer.
What happens is that this scenario can be started only from the app server that is configured in the communication channel.
How can we enable both ERP app servers to be senders for this scenario?
I am missing something obvious...
    Thanks,
    Heiko

This seems strange to me too. But if I set the RFC Message Server and the Metadata Server to use the same system, the connection does not work.
    It seems that there are two possible ways:
- Change the communication channel so that it is not app-server specific. I tried this but cannot figure out how it should be configured.
- Add multiple sender agreements. XI does not allow multiple sender agreements... it requires a different receiver... but there is no receiver at all.
    Any ideas?
    Heiko

  • High Availability setup 11g

    How do you setup for DR failover in OEM 11g?
    I have dataguard standby to other server, and web logic installed.
How do I install/set up the Grid installation but not activate it? The OMS is stand-alone at the primary site only.
Yes, I have tried to look at the HA documentation, but does anyone have a step-by-step approach?

How about this: http://docs.oracle.com/cd/E23943_01/doc.1111/e15722/create_domain.htm#BAJBDDJB
biinternal.mycompany.net/analytics is your BI service.
If this helps, please mark it.

  • Jabber High Availability Setup

    Hi,
I'm working on a deployment of IM&P Server v9.1 with a publisher and a subscriber in a cluster. I've configured HA for the cluster and it is working fine (when the pub goes down, no one gets disconnected and all the chat services keep working), but I have a question about the configuration in the Jabber clients for Windows, iOS, Android, etc.
Take the Jabber desktop client for Windows: when you configure the IP address of the server, what should you enter there? In my case I've configured the IP address of the IM&P pub, and my users work fine with all the services, BUT if the pub goes down no one can log in using Jabber (this makes sense, since the pub address is the one the clients look for in that configuration) unless I change that field to the IP address of the sub.
Is there a configuration that makes this happen automatically?
    Thanks in advance for your comments.

Thanks a lot for your comment!
Just one question: besides this DNS SRV configuration, what do I need to configure on the IM&Presence server to have this working?
How can I make Jabber know when to use one record or the other?
    Thanks in advance for your help.
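For reference, client-side failover like this is usually driven by DNS SRV records with different priorities, so the client tries the pub first and falls back to the sub. A hypothetical zone-file fragment is shown below; the record name, hostnames, and port are assumptions and should be verified against the SRV requirements for your Jabber version:

```
; Lower priority value = tried first (pub before sub).
_cuplogin._tcp.example.com.   SRV 1 0 8443 imp-pub.example.com.
_cuplogin._tcp.example.com.   SRV 2 0 8443 imp-sub.example.com.
```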

  • High Availability setup Single Host multiple OMS Servers in Cluster?

    Hi All,
Due to an OMS server JRockit JVM crash
(refer to: EM Grid OMS JRockit JVM crashing with Illegal memory access 54),
we are planning to set up one more OMS server on the same host. These two OMS servers will be in a cluster.
What are the implications? Are there any problems with setting up an OMS cluster on a single host? (The client does not plan to install on multiple hosts.)
I couldn't find any document which mentions that we can cluster the OMS this way.
    Thanks

For HA configuration of the OMS, please see http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/toc.htm
    For installing an additional OMS please check http://download.oracle.com/docs/cd/E11857_01/install.111/e15838/toc.htm
    Regards
    Rob
    http://oemgc.wordpress.com

  • High Availability Setup

    Hello,
We are trying to set up an HA environment where, if the master in pool1 crashes, the master in pool2 can take over. We are using OEM to manage our environment. Please let us know if you have been able to achieve this in your environment.
    Setup:
    Pool1 has VMServer1 (Master, Utility and Resource) & VMServer2
         VMServer1 hosts GuestVM1
    Pool2 has VMServer3 (Master, Utility and Resource)
    Scenario:
    If VMServer1 crashes, GuestVM1 should migrate to VMServer2 or VMServer3 as per the preferred list defined during configuration.
    Thanks a bunch
    Dani

vdani wrote:
We are trying to setup HA environment where master in pool1 crashes , master in pool2 can take over. We are using OEM to manage our environment. Please let us know if you were able to achieve this in your environment.

What you describe is not possible. Each pool is an independent grouping of servers. You would have to put all three servers into a single pool for a guest to successfully restart on any of them. Also, in Oracle VM 2.2, the pool master role is automatically assumed by another server if the original pool master goes offline.
    However, OEM is not yet capable of managing Oracle VM 2.2-based pools. You would need to switch to using Oracle VM Manager, or wait until a patch for OEM is released to achieve this level of HA.

  • JSPM in high availability

    Dear Friends,
I have installed PI 7.1 in high availability using a Microsoft Windows MNS-based cluster, initially at patch level 04 of NetWeaver 710.
I have installed SCS, ASCS, Enqueue Replication Services, etc. in cluster mode. I have installed the primary application server on Node 2 and an additional application server on Node 1.
Now I have installed SP07 using JSPM from my primary application server, i.e. Node 1. It went through successfully and my Support Package level was upgraded to SP07. But now when I open the Java stack on Node 1, it shows only the home page at the URL http://10.6.4.178:50000
After that, if I open any link, e.g. UME or NWA, it does not open and throws the following error:
    Service cannot be reached
    What has happened?
    URL http://10.6.4.178:50000/wsnavigator call was terminated because the corresponding service is not available.
    Note
    The termination occurred in system PIP with error code 404 and for the reason Not found.
    The selected virtual host was 0 .
    What can I do?
    Please select a valid URL.
    If it is a valid URL, check whether service /wsnavigator is active in transaction SICF.
    If you do not yet have a user ID, contact your system administrator.
    ErrorCode:ICF-NF-http-c:000-u:SAPSYS-l:E-i:sappicl1_PIP_00-v:0-s:404-r:Notfound
    HTTP 404 - Not found
    Your SAP Internet Communication Framework Team
Java starts properly on both nodes without any error, and everything works fine on Node 2, i.e. 10.6.4.179.
Please advise! Do we need to do some special step for installing patches through JSPM in a high availability setup?
    Jitendra Tayal

    Hi Jitendra,
While doing a patch upgrade with JSPM in an HA scenario, you need to switch off the automatic switchover option, because once the components are deployed, the J2EE engine gets restarted. If automatic failover is active, the service gets switched over to the other node.
In this case, shut down all the instances and stop the cluster.
Restart the systems. Start the cluster on the original servers.
    Restart JSPM.
    It should work fine then.
    Let me know if it is helpful.
    Best Regards
    Raghu

  • BIA Load Check delivers wrong result for high availability

    Hi all,
has anyone noticed that the BIA load check (function CHECK_BIALOAD_CHECK of class CL_RSDDTREX_SERVICE) handles a high-availability setup in the wrong way?
    We have 2 times x blades, with each blade being mirrored in a high-availability setup (dedicated backup mode).
    That means our usable space is: x * memory_size of blade
    However, the load check calculates the load by doing:
    load_factor = size_of_all_indexes / SUM(memory_per_blade)
=> If we have 36 GB/blade and 2 blades + 2 backup blades, total usable memory should be 2 * 36 = 72 GB, and NOT 4 * 36 = 144 GB!
    So the CALCULATED load_factor is only half of the ACTUAL (real) load factor => the alerting for BIA overload is never triggered!
    Do we need to write our own load check function then?
    Thanx,
    Hendrik
    Edited by: HMaeder on Jan 20, 2010 5:44 PM
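The arithmetic in the post can be sketched as follows (the index size here is a made-up number for illustration):

```python
# Dedicated-backup HA: 2 active blades + 2 backup blades, 36 GB each.
MEM_PER_BLADE_GB = 36
ACTIVE_BLADES = 2
BACKUP_BLADES = 2
index_size_gb = 60.0  # hypothetical total size of all BIA indexes

# What the load check apparently does: sums memory over ALL blades,
# including the backups, which never serve load.
reported_capacity = MEM_PER_BLADE_GB * (ACTIVE_BLADES + BACKUP_BLADES)
reported_load_factor = index_size_gb / reported_capacity   # uses 144 GB

# What it should do in dedicated-backup mode: active blades only.
actual_capacity = MEM_PER_BLADE_GB * ACTIVE_BLADES
actual_load_factor = index_size_gb / actual_capacity       # uses 72 GB

print(reported_capacity, actual_capacity)          # 144 72
print(actual_load_factor / reported_load_factor)   # 2.0: the real load is double
```

So any alert threshold compared against the reported load factor fires far too late, which is exactly the overload-alerting gap described above.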

    Hello Hendrik
    I believe your finding is correct. Backup blades should not be taken into consideration for total memory. Please open a customer message so SAP development can analyze the issue and provide a solution (just cut&paste your posting).
    Regards,
    Marc
    SAP NetWeaver RIG

  • High Available SRW2024

I'm trying to create a high-availability setup using two SRW2024 switches and a Linux bonded network adapter in mode 1. Each network adapter in the bond is connected to one switch; in case of failure of an adapter or switch, the bond fails over to the other switch using the same MAC address.
I have an aggregated link set up between the two switches, and spanning tree is enabled on both switches. If I do a failover test and the bond fails over to the other switch, the port state of the aggregated link changes to blocking, which blocks all traffic between the two switches.
With a single (non-aggregated) link I'm able to prevent the blocking by disabling spanning tree on the inter-switch port; in that case the switch needs about 10 seconds to learn the new port of the MAC address. However, I'm not able to make this work on an aggregated link. The UI does have the option to disable spanning tree on the aggregated port, but after saving, spanning tree is still enabled.
    Questions:
1. What is the recommended setup concerning inter-switch aggregated links and spanning tree configuration for the high availability configuration described above?
2. Why is it not possible to disable spanning tree on aggregated links? Is this by design, or is there some problem with the UI?
    Any help appreciated.
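For context, the mode 1 bond described above is the Linux active-backup bonding mode. A minimal configuration sketch is shown below; the interface names, addresses, and Debian-style syntax are assumptions, not taken from the post:

```
# /etc/network/interfaces fragment (illustrative)
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup   # mode 1: one active NIC, failover on link loss
    bond-miimon 100           # link-monitoring interval in milliseconds
    bond-primary eth0
```

With the bonding driver's fail_over_mac option at its default, the bond keeps the same MAC address across failovers, which is why the switches must relearn the MAC's port after a failover, as described above.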

    I have rapid spanning tree enabled on both switches.
The problem is that I have to disable spanning tree on the link connecting the two switches together. If I don't, the inter-switch link is blocked the moment I fail over the network bond, because the switch probably thinks there is a redundant path. Is there some other way to prevent the inter-switch link from blocking?
    If not, how can I disable spanning tree on the aggregated link? So far I only managed to do this on a normal link, but cannot do it on an aggregated link.

  • Generating license for ISE high availability primary/secondary nodes

    We have two ISE servers that will act as primary/secondary in a high availability setup.
    The ISE 1.0.4 installation guide, page 93, mentions that "If you have two Cisco ISE nodes configured for high availability, then you must include both the primary and secondary Administration ISE node hardware and IDs in the license file."
    However, after entering the PAK in the licensing page, the only required fields are:
    - Primary Product ID
    - Primary Version ID
    - Primary Serial No
In this case, how can I include both the primary and secondary hardware and IDs?
    Thanks in advance.

I am referring you to the Cisco ISE Nodes for High Availability configuration guide; please check:
    http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_dis_deploy.html#wp1128454

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the server side). So if you really decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following tables lists the storage types that you can use to provide shared storage for a guest cluster.
    Storage Type
    Description
    Shared virtual hard disk
    New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
    would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel
    Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
    Virtual Fibre Channel Overview.
    iSCSI
The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other companies doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SharePoint Foundation 2013 - Can we use the foundation for intranet portal with high availability ( medium farm)

Today I got a requirement where we have to use SharePoint Foundation 2013 (the free version) to build an intranet portal (basic announcements, calendar, department sites, and document management with only check-in/check-out and versioning).
Please help me regarding the licensing and size limitations. (I know the feature comparison of Standard/Enterprise; I just want to know about the installation process and licensing.)
6 servers - 2 app / 2 web / 2 DB cluster (so in total 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses)

    Thanks Trevor,
Is the load balancing service also included in the free license? In that case I can use SharePoint Foundation 2013 for building a simple intranet & DMS (with limited functionality), and for workflow and content management we would have to write code.
    Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and would offer high availability for traffic bound to the SharePoint servers. WNLB can only associate with up to 4 servers.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • 11.1.2 High Availability for HSS LCM

    Hello All,
Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available behind a load balancer?
My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server 2.
    grouping.size=100
    grouping.size_unknown_artifact_count=10000
    grouping.group_by_type=Y
    report.enabled=Y
    report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
    fileSystem.friendlyNames=false
    msr.queue.size=200
    msr.queue.waittime=60
    group.count=10000
    double-encoding=true
    export.group.count = 30
    import.group.count = 10000
    filesystem.artifact.path=\\server1\import_export
I perform an export of just the native users to the file system; the export fails, and I find the following errors in the log.
    [2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
    [2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
    [2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
I go and look for the path it is searching for, "/server1/import_export/msr/PKG_34.xml", and find that this file does exist; it was in fact created by the export process, so I know that it is able to find the correct location, but it then says that it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
    Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Bug Attributes
    Type B - Defect Fixed in Product Version 11.1.2.0.000PSE
    Severity 3 - Minimal Loss of Service Product Version 11.1.2.0.00
    Status 90 - Closed, Verified by Filer Platform 233 - Microsoft Windows x64 (64-bit)
    Created 25-Jan-2011 Platform Version 2003 R2
    Updated 24-Feb-2011 Base Bug 11696634
    Database Version 2005
    Affects Platforms Generic
    Product Source Oracle
    Related Products
    Line - Family -
    Area - Product 4482 - Hyperion Lifecycle Management
Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration changes: set up a shared disk, then modify migration.properties with filesystem.artifact.path=\\<servername>\LCM\import_export
We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode.

It is not possible to implement HA between two different appliances (3315 and 3395).
A node in HA can have all 3 personas.
Suppose Node A has Admin (primary), Monitoring (primary), and PSN,
and Node B has Admin (secondary), Monitoring (secondary), and PSN.
If Node A is unavailable, you will have to promote the Admin role to primary manually.
The better approach is to have:
Node A: Admin (primary), Monitoring (secondary), and PSN
Node B: Admin (secondary), Monitoring (primary), and PSN.
Rate if helpful, and mark as correct if it is correct, for the experts.
Regards,

  • Oracle Berkeley DB Java Edition High Availability (White Paper)

    Hi all,
    I've just read Oracle Berkeley DB Java Edition High Availability White Paper
    http://www.oracle.com/technetwork/database/berkeleydb/berkeleydb-je-ha-whitepaper-132079.pdf
    In section "Time Consistency Policy" (Page 18) it is written:
    "Setting a lag period that is too small, given the load and available hardware resources, could result in
    frequent timeout exceptions and reduce a replica's availability for read operations. It could also increase
    the latency associated with read requests, as the replica makes the read transaction wait so that it can
    catch up in the replication stream."
Can you tell me why those read operations are not served by the master?
Why would we have frequent timeouts?
Why should a read transaction wait instead of being redirected to the master?
Why should it reduce the replica's availability for read operations?
    Thanks
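A toy model may help frame these questions. In an HA replication group, reads on a replica are served locally rather than forwarded to the master (that is the point of read scaling); a time consistency policy instead makes the local read wait until the replica has caught up to within the permitted lag, and fails with a timeout if it cannot. The sketch below is purely illustrative and is not the JE implementation; all names and the catch-up model are invented:

```python
# Toy model of a time-based consistency policy on a replica (illustrative only).
def read_wait_seconds(replica_lag_s, permitted_lag_s, timeout_s, catchup_rate=1.0):
    """Return the seconds a read transaction must wait on the replica, or
    raise TimeoutError if the replica cannot catch up within timeout_s.
    catchup_rate models how many seconds of lag the replica clears per second."""
    excess = replica_lag_s - permitted_lag_s
    if excess <= 0:
        return 0.0              # fresh enough: the read proceeds immediately
    wait = excess / catchup_rate
    if wait > timeout_s:
        raise TimeoutError('replica too far behind the master')
    return wait

print(read_wait_seconds(0.5, 1.0, 5.0))   # 0.0: within the permitted lag
print(read_wait_seconds(3.0, 1.0, 5.0))   # 2.0: read stalls while catching up
```

In this model, a small permitted lag under heavy replication load means more reads stall (higher latency) and more exceed the timeout (the replica rejects them), which matches the white paper's warning.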

    Please post this question on the Berkeley DB Java Edition (BDB JE) forum Berkeley DB Java Edition. This is the Berkeley DB Core (BDB) forum.
    Thanks,
    Andrei

  • Office Web Apps farm - how to make it high available

    Hi there,
I have deployed OWA for Lync 2013 with two servers in one farm. When I was testing, e.g. the case when one server is down, I found the following error:
    Log Name:      Microsoft Office Web Apps
    Source:        Office Web Apps
    Date:          17/03/2014 22:59:54
    Event ID:      8111
    Level:         Error
    Computer:      OWASrv02.domain.com
    Description:
    A Word or PowerPoint front end failed to communicate with backend machine
    http://OWASrv01:809/pptc/Viewing.svc
Does that mean that OWA cannot be set up as highly available when using standalone OWA servers only? The above error appeared while server OWASrv01 was rebooting.
    Petri

    Hi Petri,
For your scenario, you have achieved high performance for your Office Web Apps farm, but not high availability.
To build a highly available Office Web Apps farm, you can refer to this blog:
    http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx 
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers
    if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
    Eric Tao
    TechNet Community Support
