High Availability Of Service Replicated Across Domains

Hi,
We have two Tuxedo applications. One generates messages and calls a service in a remote domain to hand them to another Tuxedo application (a FIX engine), which sends them to the external world. There are two remote domains on separate nodes, named SSGWBest and SSGWBoxt, and both publish the same FIX-engine service (OutFixEn). We did this for a high availability scenario: if one machine is unavailable or crashes, messages can still be sent to the external world.
We are using Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 095 on AIX 6.1 on a Power 7 machine. The following snippet of the domain configuration shows how the service is published in the local domain. Both the SSGWBest and SSGWBoxt sites publish the service OutFixEn in their local and remote sections (each pointing to the other).
*DM_LOCAL_DOMAINS
DEFAULT: SECURITY = NONE
Dom1 GWGRP = LGWGRP
TYPE = TDOMAIN
DOMAINID = "PATDom1"
DMTLOGDEV = "/appl/aer/a01/data/tcs_bancs//DMLOGDEVICE"
DMTLOGNAME = "DMLOGDEVICE"
*DM_REMOTE_DOMAINS
Dom2 TYPE = TDOMAIN
DOMAINID = "PATDom2"
SSGWBest TYPE = TDOMAIN
DOMAINID = "SSGWBest"
SSGWBoxt TYPE = TDOMAIN
DOMAINID = "SSGWBoxt"
*DM_TDOMAIN
# Local network addresses
Dom1 NWADDR = "//uaix3017.unix.bank.nl:50708"
# Remote network addresses
Dom2 NWADDR = "//uaix3028.unix.bank.nl:50708"
#SSG Machine1 Network Address
SSGWBest NWADDR = "//uaix3021.unix.bank.nl:50708"
#SSG Machine2 Network Address
SSGWBoxt NWADDR = "//uaix3034.unix.bank.nl:50708"
*DM_LOCAL_SERVICES
sh_COETGETMESSG
sh_COETPICXML
sh_COETFLATFILE
sh_COBTRPAIRMSG
InpFixEnOC1
InpFixEnOC2
InpFixEn1
InpFixEn2
InpFixBrs
InpFixIon
InpFixRtrs
InpMmtpEnDrv
InpMmtpEnCash
*DM_REMOTE_SERVICES
sh_COETGETMESSG
RACCESSPOINT=Dom2
sh_COETPICXML
RACCESSPOINT=Dom2
sh_COETFLATFILE
RACCESSPOINT=Dom2
sh_COBTRPAIRMSG
RACCESSPOINT=Dom2
OutFixEn
RACCESSPOINT=SSGWBest
OutFixEn
RACCESSPOINT=SSGWBoxt
OutFixBrs
RACCESSPOINT=SSGWBest
OutFixIon
RACCESSPOINT=SSGWBest
OutFixRtrs
RACCESSPOINT=SSGWBoxt
OutMmtpEnDrv
RACCESSPOINT=SSGWBest
OutFixEnOC
RACCESSPOINT=SSGWBoxt
*DM_ROUTING
We tried to test this scenario: we started calling the service OutFixEn from the local domain, and during the run we shut down the Tuxedo application server on the SSGWBoxt site so that OutFixEn was not available there (to create a service-unavailability scenario). Our understanding was that all service calls would then land only on the SSGWBest site, since the domain gateway would suspend the unavailable site, but that did not happen: the first few service calls failed with TPETIME (my assumption was that they would fail with TPENOENT), and only then were the calls that had been landing on SSGWBoxt routed to the SSGWBest site.
Based on this test scenario, I have the following questions.
1/ How can we route services to the available domain with minimal service failures (i.e., only the first one or two calls fail, after which the application routes requests to the available domain)?
2/ Is there a better way to organize these services so that better load balancing and high availability can both be ensured?
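The behavior asked for in question 1 can be sketched generically. This is a hypothetical Python illustration of suspend-and-failover logic, not Tuxedo ATMI code; `ServiceUnavailable`, `call_fn`, and the endpoint handling are made-up stand-ins:

```python
class ServiceUnavailable(Exception):
    """Stand-in for a failed remote call (e.g. a TPETIME/TPENOENT error)."""

def call_with_failover(payload, endpoints, call_fn, suspended=None):
    """Try endpoints in order, skipping any already marked as suspended.

    The first call to a dead endpoint pays the failure cost once; the
    endpoint is then remembered in `suspended`, so subsequent calls go
    straight to a surviving endpoint.
    """
    suspended = suspended if suspended is not None else set()
    last_err = None
    for endpoint in endpoints:
        if endpoint in suspended:
            continue  # previously failed; don't retry it on the hot path
        try:
            return call_fn(endpoint, payload)
        except ServiceUnavailable as err:
            suspended.add(endpoint)  # suspend the failed endpoint
            last_err = err
    raise last_err or ServiceUnavailable("no endpoints available")
```

With a shared `suspended` set, only the first request after an outage fails; every later request is routed directly to the surviving endpoint (a background retry could clear the set periodically).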
Regards,
Ajeet Tewari

Hi,
It is possible to configure failover and load balancing as you suggest, but that won't solve the problem described. The issue is that the local domain gateway doesn't know the availability of the services in a remote domain. It decides whether to advertise imported services locally based only on the connection establishment policy (ON_STARTUP or ON_DEMAND), not on the actual state of the remote service. If the connection policy is ON_DEMAND, the service is always advertised locally, and when a request arrives for that service, the domain gateway establishes a connection to the remote domain if one isn't already present. If the connection policy is ON_STARTUP, the domain gateway won't start advertising the imported services locally until the connection is established. However, once the connection is established, it assumes the imported service is available at the remote domain.
You have a couple of options here. One is to make sure the service is highly available on the remote domain, so that whenever a connection to that domain exists, the service is available. The other alternative is to switch to an MP single-domain configuration, in which the availability of a service across machines is known.
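For the connection-loss case (a whole machine or its gateway going down), the connection policy and retry parameters in the DMCONFIG file are the relevant knobs. A sketch, assuming Tuxedo 10.3 syntax (CONNECTION_POLICY, RETRY_INTERVAL, and MAXRETRY are standard DMCONFIG keywords; the values shown are illustrative):

```
*DM_LOCAL_DOMAINS
Dom1    GWGRP = LGWGRP
        TYPE = TDOMAIN
        DOMAINID = "PATDom1"
        CONNECTION_POLICY = ON_STARTUP
        RETRY_INTERVAL = 60
        MAXRETRY = MAXLONG
```

With ON_STARTUP, once the connection to a remote access point is lost, the services imported from it are suspended and requests load-balance to the surviving access point; RETRY_INTERVAL and MAXRETRY control how the gateway attempts to reconnect. Note this does not cover the test described above, where the remote gateway stayed up but the server behind it was down, and requests already in flight when a connection drops will still fail.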
Regards,
Todd Little
Oracle Tuxedo Chief Architect

Similar Messages

  • High Availability, User Profiles, and UPRE

    I'm setting up High Availability using Metalogix Replicator to mirror the Content DBs. There's a Primary (current production SP2010 environment) and a Secondary environment. Both consist of a WFE, SQL, and APP server.
    User Profiles are set up on both. I created the Secondary environments by copying Profile, Social, and Synch DBs from the Primary SQL server, and creating the UP using the same DB names.
    Since then, we have created new Audiences and new User Profile settings.
    So the two choices are:
    1. Manually update both environments.
    2. Use UPRE to keep them synchronized. I've looked over
    https://technet.microsoft.com/en-us/library/cc663011(v=office.14).aspx and it seems like a good solution.
    I'm not sure whether to use synchronous or asynchronous. I was also wondering if anyone has experience with setting this up, and if there's any chance that I'll mess up my current Primary User Profile service/synch by using UPRE.
    Has anyone been down this road? Is this what I'm looking for?
    Thanks,
    Scott

    > I'm setting up High Availability using Metalogix Replicator to mirror the Content DBs
    Wouldn't recommend it. It makes schema changes to the databases and is completely unsupported by Microsoft.
    UPRE would be appropriate.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • SQL Server Analysis Services (SSAS) 2012 High Availability Solution in Azure VM

I have been testing an AlwaysOn high availability failover solution with SQL Server Enterprise on an Azure VM, and this works pretty well as a failover for SQL Server databases, but I also need a high availability solution for SQL Server Analysis Services, and so far I haven't found a way to do this. I can load balance it between two machines, but this doesn't work as a failover, and because of the restriction of not being able to use shared storage in a failover cluster in Azure VMs, I can't set it up as a cluster, which is required for AlwaysOn in Analysis Services.
    Anyone else found a solution to use an AlwaysOn High Availability for SQL Analysis Services in Azure VM?  As my databases are read-only, I would be satisfied with even just a solution that would sync the OLAP databases and switch
    the data connection to the same server as the SQL databases.
    Thanks!
    Bill

    Bill,
So, what you need is a model like SQL Server failover cluster instances (before SQL Server 2012).
In SQL Server 2012, AlwaysOn replaces SQL Server failover clustering, and it has been separated into
AlwaysOn Failover Cluster Instances (SQL Server) and
AlwaysOn Availability Groups (SQL Server).
    Since your requirement is not in database level, I think the best option is to use AlwaysOn Failover Cluster Instances (SQL Server).
    As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level—a
    failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across Windows Server Failover Clustering (WSFC) nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL
    Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
    It is similar to SQL Server failover cluster in SQL 2008 R2 and before.
    Please refer to these references:
    Failover Clustering in Analysis Services
    Installing a SQL Server 2008 R2 Failover Cluster
    Iric Wen
    TechNet Community Support

  • Is the Azure Files data highly available/replicated under the covers?

I am assuming that with Azure Files, data is replicated multiple times under the covers for high availability. Is that correct? Some applications, typically failover applications, don't assume highly available storage; the applications themselves build in logic to replicate/sync data. An example is Elasticsearch. In these cases, the system ends up making too many copies. Is the Azure Files semantic the same as blob storage in this regard?

    Hi,
    Would request you to refer to the article below to understand Azure File Service (in preview now):
    Introducing MS Azure File Service
    http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
    Below article helps us understand the same better with a "how-to" perspective:
    The Azure File Service
    http://clemmblog.azurewebsites.net/azure-file-service/
    Lastly, would like to keep you informed on the Features Not Supported By The Azure File Service
    http://msdn.microsoft.com/en-us/library/azure/dn744326.aspx
    Thank you,
    Arvind

  • High Availability for Existing Single Exchange Server holding Multiple Domains

    Hi,
I'm trying to implement a backup/redundancy/high-availability solution for an existing Exchange Server 2013.
Currently, Exchange Server 2013 deployed on Windows Server 2012 is hosting 4 domains (1 internal domain & 3 public/published domains). The Exchange Server is hosted internally and published to send and receive mail through an SMTP relay server (third party, residing on the internet). Client Access and Mailbox services are on the same server.
Please advise on the best scenario to implement backup/redundancy/high availability for the current server.
    Thanks in advance.
    Regards
    Roopesh S

    Hi,
    The following links will help.
    High Availability and Site Resilience
    http://technet.microsoft.com/en-us/library/dd638137(v=exchg.150).aspx
    Deploying High Availability and Site Resilience
    http://technet.microsoft.com/en-us/library/dd638129(v=exchg.150).aspx
    Planning for High Availability and Site Resilience
    http://technet.microsoft.com/en-us/library/dd638104(v=exchg.150).aspx
    Backup, Restore, and Disaster Recovery
    http://technet.microsoft.com/en-us/library/dd876874(v=exchg.150).aspx
    ecsword

  • The File Replication Service has detected that the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" is in JRNL_WRAP_ERROR.

    Hi!
I recently took over management of a Windows 2003 domain that had only one domain controller. I was building a second DC for redundancy and discovered that the SYSVOL share on the original DC is in "JRNL_WRAP_ERROR", after the SYSVOL and NETLOGON shares would not create on the new DC. This error goes back as far as the log does, so I don't know how long it has been in this state.
    The message in the event log states to enable "Enable Journal Wrap Automatic Restore" but I found a KB article that says to use the BurFlags key instead. http://support.microsoft.com/kb/290762
    Should I run an authoritative restore since I don't have another domain controller with a good SYSVOL?
    The File Replication Service has detected that the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" is in JRNL_WRAP_ERROR.
     Replica set name is    : "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"
     Replica root path is   : "c:\windows\sysvol\domain"
     Replica root volume is : "\\.\C:"
     A Replica set hits JRNL_WRAP_ERROR when the record that it is trying to read from the NTFS USN journal is not found.  This can occur because of one of the following reasons.
     [1] Volume "\\.\C:" has been formatted.
     [2] The NTFS USN journal on volume "\\.\C:" has been deleted.
     [3] The NTFS USN journal on volume "\\.\C:" has been truncated. Chkdsk can truncate the journal if it finds corrupt entries at the end of the journal.
     [4] File Replication Service was not running on this computer for a long time.
     [5] File Replication Service could not keep up with the rate of Disk IO activity on "\\.\C:".
     Setting the "Enable Journal Wrap Automatic Restore" registry parameter to 1 will cause the following recovery steps to be taken to automatically recover from this error state.
     [1] At the first poll, which will occur in 5 minutes, this computer will be deleted from the replica set. If you do not want to wait 5 minutes, then run "net stop ntfrs" followed by "net start ntfrs" to restart the File Replication
    Service.
     [2] At the poll following the deletion this computer will be re-added to the replica set. The re-addition will trigger a full tree sync for the replica set.
    WARNING: During the recovery process data in the replica tree may be unavailable. You should reset the registry parameter described above to 0 to prevent automatic recovery from making the data unexpectedly unavailable if this error condition occurs again.
    To change this registry parameter, run regedit.
    Click on Start, Run and type regedit.
    Expand HKEY_LOCAL_MACHINE.
    Click down the key path:
       "System\CurrentControlSet\Services\NtFrs\Parameters"
    Double click on the value name
       "Enable Journal Wrap Automatic Restore"
    and update the value.
    If the value name is not present you may add it with the New->DWORD Value function under the Edit Menu item. Type the value name exactly as shown above.

    > The message in the event log states to enable "Enable Journal Wrap
    > Automatic Restore" but I found a KB article that says to use the
    > BurFlags key instead.
    http://support.microsoft.com/kb/290762
    >
    > Should I run an authoritative restore since I don't have another domain
    > controller with a good SYSVOL?
The automatic restore process AFAIK will initiate a D2 restore, and if there's no other DC, SYSVOL might be gone.
I would really prefer to have control - this means I would do a D4.
Absolutely I would :)
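For reference, the manual D4 (authoritative) restore from KB 290762 boils down to setting the BurFlags value while the service is stopped; a rough sketch from an elevated prompt (key path and value name as given in the KB):

```
net stop ntfrs
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4
net start ntfrs
```

0xD2 is the nonauthoritative equivalent used on additional DCs.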
    Martin
Want to read a GOOD book about GPOs for once?
    NO THEY ARE NOT EVIL, if you know what you are doing:
    Good or bad GPOs?
    And if IT bothers me - coke bottle design refreshment :))

  • Event ID - 13568 The File Replication Service has detected that the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" is in JRNL_WRAP_ERROR.

    We had a major storm over the weekend which caused an unexpected shutdown.
I am having an issue with one of my domain controllers: Event ID 13568.
The domain controller, which is running Windows Server 2012, was added successfully just a couple of days ago.
    I do not have a full backup of the server yet.
    It only has a GC role on it.
    What are the things I should look out for before I attempt to Enable Journal Wrap Automatic Restore and set it to 1?
    Would it be safer to just demote the server and start from scratch?
    Thank you all for reading!
    Mladen
    The File Replication Service has detected that the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" is in JRNL_WRAP_ERROR.
     Replica set name is    : "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"
     Replica root path is   : "c:\windows\sysvol\domain"
     Replica root volume is : "\\.\C:"
     A Replica set hits JRNL_WRAP_ERROR when the record that it is trying to read from the NTFS USN journal is not found.  This can occur because of one of the following reasons.
     [1] Volume "\\.\C:" has been formatted.
     [2] The NTFS USN journal on volume "\\.\C:" has been deleted.
     [3] The NTFS USN journal on volume "\\.\C:" has been truncated. Chkdsk can truncate the journal if it finds corrupt entries at the end of the journal.
     [4] File Replication Service was not running on this computer for a long time.
     [5] File Replication Service could not keep up with the rate of Disk IO activity on "\\.\C:".
     Setting the "Enable Journal Wrap Automatic Restore" registry parameter to 1 will cause the following recovery steps to be taken to automatically recover from this
    error state.
     [1] At the first poll, which will occur in 5 minutes, this computer will be deleted from the replica set. If you do not want to wait 5 minutes, then run "net stop ntfrs"
    followed by "net start ntfrs" to restart the File Replication Service.
     [2] At the poll following the deletion this computer will be re-added to the replica set. The re-addition will trigger a full tree sync for the replica set.
    WARNING: During the recovery process data in the replica tree may be unavailable. You should reset the registry parameter described above to 0 to prevent automatic recovery from
    making the data unexpectedly unavailable if this error condition occurs again.
    To change this registry parameter, run regedit.
    Click on Start, Run and type regedit.
    Expand HKEY_LOCAL_MACHINE.
    Click down the key path:
       "System\CurrentControlSet\Services\NtFrs\Parameters"
    Double click on the value name
       "Enable Journal Wrap Automatic Restore"
    and update the value.
    If the value name is not present you may add it with the New->DWORD Value function under the Edit Menu item. Type the value name exactly as shown above.

    I set Enable Journal Wrap Automatic Restore to 1 and it was
    successful.
    I will monitor it to make sure it does not occur again.
Thanks, everyone, for your replies.
    Mladen

  • Let me know how to run high availability services in system 9 essbase.

    Hi
    Let me know how to run high availability services in system 9 essbase.
    thanks in ADV
    Message was edited by:
    user624564

    Hi,
I am now using Essbase System 9.2,
& URL: http://localhost:11080/eds/EssbaseEnterprise
Please guide me on downloading the High Availability servers.
    thanks in Adv....

  • SQL Server 2005 Analysis Services across domains

    Hi,
    With SQL Server 2000, the Enterprise Edition was required to access
    Analysis Services across domains.
    Is this also the case in SQL Server 2005, that the Enterprise Edition
    is needed?
    Thanks, S

    Silver,
    Do you still need help with this?
    Thank you!
    Ed Price, Power BI & SQL Server Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • After start of Grid High Availability Service (HAS) can't umount filesystem

I have a weird problem when I start and stop HAS. The file system gets into a state where it will not unmount, so the file system where the grid software runs can't be unmounted on boot. Is there something I am doing wrong, or not doing right?
I have a fresh install of the latest 64-bit UEK2 kernel (2.6.39-400.109.1.el6uek.x86) and Oracle 11.2.0.3 Grid Infrastructure for a standalone server. I installed patch p12983005 (112036_Linux-x86-64) to get ACFS support for the kernel. When I start HAS and ASM, the file system no longer unmounts. Here are the steps. The example uses a local disk, but I have the same problem with a LUN from the SAN. I thought it might have something to do with the SAN and multipath, but the SAN is not to blame, since the same thing happens with a local disk. I don't know why the local disk seems to be mapped by multipath in fstab.
    1. Mount fs
    [root@SERVER ~]# mount -t ext4 /dev/mapper/vg_SERVER-lv_app /u01
    2. Start HAS
    [grid@SERVER ~]$ crsctl start has
    CRS-4123: Oracle High Availability Services has been started.
    3. Stop HAS
    [grid@SERVER ~]$ crsctl stop has
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'SERVER'
    CRS-2673: Attempting to stop 'ora.cssd' on 'SERVER'
    CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'SERVER'
    CRS-2677: Stop of 'ora.cssd' on 'SERVER' succeeded
    CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'SERVER' succeeded
    CRS-2673: Attempting to stop 'ora.evmd' on 'SERVER'
    CRS-2677: Stop of 'ora.evmd' on 'SERVER' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'SERVER' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    4. Try umount
    [root@SERVER ~]# umount /u01
    umount: /u01: device is busy.
            (In some cases useful info about processes that use
             the device is found by lsof(8) or fuser(1))
    [root@SERVER ~]# fuser /u01
    [root@SERVER ~]# lsof |grep /u01
Neither fuser nor lsof returns anything.
This, and the fact that ACFS cannot be mounted on boot, gets me wondering whether I should use ASM and Oracle Restart at all at this point. The initial reason is that in the future this is likely to become part of a RAC.


  • SCOM 2012 R2 Reporting Services High Availability Configuration Setting

Can you tell me how I can give a SCOM 2012 R2 report server a high-availability configuration? Kindly share the steps or screenshots.

The SCOM reporting feature depends on SQL Server Reporting Services, so you may refer to SQL Reporting Services high availability.
For details, please refer to:
    http://technet.microsoft.com/en-us/library/bb522745.aspx
    http://www.mssqltips.com/sqlservertip/2335/scale-out-sql-server-2008-r2-reporting-services-farm-using-nlb-part-1/
    http://www.mssqltips.com/sqlservertip/2336/scale-out-ssrs-r2-farm-using-windows-network-load-balancing-part-2/
    http://technet.microsoft.com/en-us/library/hh882437.aspx
    Roger

  • Topology.svc - Endpoints - Web Services High Availability

    Hi,
I was recently performing some simple DRP tests before going to production, and I faced some issues I had never encountered before.
(I followed
http://blogs.msdn.com/b/besidethepoint/archive/2011/02/19/how-i-learned-to-stop-worrying-and-love-the-sharepoint-topology-service.aspx for useful commands related to endpoints.)
    My farm:
    SP 2013 - CU August 2013
    2 WFE (WFE1, WFE2)
    2 App (App1, App2) : Most services started on both servers (UPSS on APP1, UPS on both) - Central Admin on Both.
    SQL Cluster
    At normal state the command : (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    returns > ServiceEndpointUri :
    https://app1:32844/Topology/topology.svc
    (if i'm not wrong, this topology.svc can run on only one server at a time)
I stopped WFE1: no problem, the NLB (appliance) did its job.
Then I stopped App1 and started to have some issues (most endpoints not balanced to App2).
I ran the job "Application Addresses Refresh Job",
or launched the PS command: Start-SPTimerJob job-spconnectedserviceapplication-addressesrefresh
and waited 20 sec.
A few endpoints are now on APP2 (MMS, Search); it seemed to work, and I reached my web page.
I asked a mate to try, and he got the "sorry we encountered an error..." > Can't load user profile.
I refreshed my browser and got the same error.
Reviewing the ULS logs, I can see that some svc requests (mostly about user, ProfileDBCacheService) are still going to APP1!
    A failure was reported when trying to invoke a service application: EndpointFailure Process Name: w3wp Process ID: 6784 AppDomain Name: /LM/W3SVC/93617642/ROOT-1-130445830237578923 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:e8315f8e5d7d4b1b90876e3b0043a4ae#authority=urn:uuid:164efb17f28c4d2d9702ce3e86f0c0e8&authority=https://app1:32844/Topology/topology.svc
    Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint:
    http://app1:32843/e8315f8e5d7d4b1b90876e3b0043a4ae/ProfileService.svc
    The command (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    Still return me that my topology.svc is on
    https://app1:32844/Topology/topology.svc
    But App1 is down !!
If my understanding is correct: normally, the internal round-robin load balancer (the Application Discovery and Load Balancer Service, started on all servers, not configurable) should manage this.
The Application Addresses Refresh Job runs every 15 min. and refreshes the available endpoints using topology.svc.
But the topology.svc being called is always on APP1, which is down!
So far, I haven't found why SharePoint is not detecting that APP1 is down and is not automatically recreating the topology service on another available server.
    If you have any idea...your help is welcome :)
    Regards,
    O.P

    Hi  ,
To achieve Web Services high availability, you need to make sure service applications have more than one server servicing them, to increase back-end resiliency when a SharePoint server drops off the network.
    You can refer to the blog:
    http://blogs.msdn.com/b/sambetts/archive/2013/12/05/increasing-service-application-redundancy-high-availability-sharepoint.aspx
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Eric Tao
    TechNet Community Support

  • "There are currently no logon servers available to service the logon request." when trying to access a shared folder in domain environment.

    Hi,
I already have a Windows Server 2003 machine working as a Primary Domain Controller (PDC), and I have now set up another Windows Server 2012 machine to work as an Additional Domain Controller (ADC).
    - PDC is doing (Active directory domain services + DNS + DHCP)
    - ADC is doing (Active directory domain services + DNS)
For testing purposes, I shut down the PDC and left the ADC up and running. Now, whenever I try to access a shared folder on any server in the domain, I get this message: "\\x.x.x.x is not accessible. You might not have permission to use this network resource.
    Contact the administrator of this server to find out if you have access permissions.
    There are currently no logon servers available to service the logon request."
Actually, the replication of AD objects and DNS records between the PDC and ADC completes successfully. I can also resolve names using the ADC's DNS, and I can ping the servers hosting the files I want to access.
    Appreciate your help.

    The PDCe is required for a lot of services, to include DFS.
    Turn the PDC back on and try again.  If it's still not working, ensure the file server has the correct DNS entries on the NIC.
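For reference, standard tools to check which DC a client actually locates (yourdomain.local is a placeholder for the real domain name):

```
nltest /dsgetdc:yourdomain.local
nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local
```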
    I think I see what you were trying to test.  The issue is that not all DCs are equal.  Yes, they hold the same information, but some DCs do extra work.  That is why FSMOs exist.
    Let me know how it goes.
    - Chris Ream -
    **Remember, if you find a post that is helpful, or is the answer, please mark it appropriately.**

  • Access Services 2013 high available on SQL 2012?

    Hi,
Does anyone know how we can make the randomly created databases from Access Services 2013 highly available on our SQL 2012 cluster?
For the SharePoint content databases, we use the 'AlwaysOn' functionality of SQL 2012. This functionality is not supported for Access Services 2013.
    Thanks in advance,
    Johan

    Hi Edwin,
    Thanks again for your answer.
    Failover clustering is a perfect solution for high availability.
    A last additional question:
We are now using 'AlwaysOn groups'. The two instances are located at different physical locations. With that configuration, we have both a high-availability solution and a disaster-recovery solution.
If we go for a failover cluster, we have a high-availability solution, but we no longer have a disaster-recovery solution, because everything is located at the same physical location.
We can configure SQL mirroring, but then we have the same behavior as the 'AlwaysOn groups': it is not possible to delete DBs from SharePoint, and a newly created Access Services 2013 DB is not automatically mirrored.
    Is there any solution/configuration to accomplish this?
    Thanks,
    Johan

  • Oracle high availability service is showing as down in em 12c

    Hi,
    I thought I could use some help from here before going to oracle.
Can you please share your ideas about what might have gone wrong?
EM 12c is showing the Oracle High Availability Service as down, even though it is up and running:
    ps -ef | grep -i ohas
    /oracle/asm/grid/home/bin/ohasd.bin reboot
    /bin/sh /etc/init.d/init.ohasd run
    crsctl check has
    CRS-4638: Oracle High Availability Services is online

