Failover Cluster, Hyper-V, Virtualised PDC and Time Sync

Hi,
I wonder if anyone can clear something up for me.
We have two hosts running Failover Clustering and Hyper-V. On these hosts we run two virtual DCs (one of which holds the PDC emulator role) as well as a number of member servers. In Hyper-V Integration Services we have enabled the time synchronisation option for each guest, including the DCs. Each host is joined to the domain so that Failover Clustering works. We also have two physical domain controllers.
When we run w32tm /query /source we can see that the VMs, including the virtual DCs, are getting their time from the hosts (VM IC Time Synchronization Provider), and when we run w32tm /query /source on the hosts we find that they are getting their time from each other (host1 has host2 and host2 has host1).
We are experiencing some time drift, and I wonder whether this is because the PDC emulator is getting its time from the host instead of having its w32time Type parameter set to NTP rather than NT5DS.
What's the best practice for a situation like this?
Thanks in advance!
Stephen

Yes, I was actually just digging the article out to post back, but it looks like you found it already.
I'm not aware of the snapshot issues you mention; it's not something I've experienced. But if I were snapshotting a DC, I would make sure it was running on 2012 Hyper-V and that the DC was also 2012, with the PDC being 2012 too. Otherwise snapshotting a DC could cause issues.
Regards,
Denis Cooper
MCITP EA - MCT
Help keep the forums tidy, if this has helped please mark it as an answer
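For reference, the usual best practice is to untick "Time synchronization" in the Integration Services settings for the virtualised DCs (or at least for the PDC emulator) and to point the PDC emulator at an external NTP source, letting every other domain member follow the normal domain hierarchy. A hedged sketch of the w32tm side (the peer list here is an assumption; substitute your own):

```powershell
# On the virtualised PDC emulator, after unticking "Time synchronization"
# in the VM's Integration Services settings:
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
Restart-Service w32time
w32tm /query /source   # should now report an NTP peer, not the VM IC provider
```

The hosts themselves can then be left to sync from the domain hierarchy so that they no longer chase each other.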

Similar Messages

  • Windows Server 2012 Failover Cluster (Hyper-V) Event Id 1196

    Hi All,
    I just installed Failover Clustering for Hyper-V on Windows Server 2012 with 2 nodes, and I got the following error, event ID 1196.
    I re-created/deleted the cluster host A record in DNS and nothing happened.
    Any suggestions?
    There is a similar topic, but it couldn't help:
    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/2ad0afaf-8d86-4f16-b748-49bf9ac447a3/
    Regards

    Hi,
    You may refer to this article to troubleshoot this issue:
    Event ID 1196 — Network Name Resource Availability
    http://technet.microsoft.com/en-us/library/dd354022(v=WS.10).aspx
    Check the following:
    Check that on the DNS server, the record for the Network Name resource still exists. If the record was accidentally deleted, or was scavenged by the DNS server, create it again, or arrange to have a network administrator create it.
    Ensure that a valid, accessible DNS server has been specified for the indicated network adapter or adapters in the cluster.
    Check the system event log for Netlogon or DNS events that occurred near the time of the failover cluster event. Troubleshooting these events might solve the problem that prevented the clustered Network Name resource from registering the DNS name.
    For more information please refer to following MS articles:
    DNS Registration with the Network Name Resource
    http://blogs.msdn.com/b/clustering/archive/2009/07/17/9836756.aspx
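    If the DNS record itself checks out, you can also ask the cluster to re-register the network name from PowerShell; a hedged sketch (the resource name "Cluster Name" is the default, but verify yours):

    ```powershell
    # Re-register the cluster network name's DNS records (run on a cluster node).
    Import-Module FailoverClusters
    Update-ClusterNetworkNameResource -Name "Cluster Name"
    ```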
    Lawrence
    TechNet Community Support

  • Failover Cluster Hyper-V Storage Choice

    I am trying to deploy a 2-node Hyper-V failover cluster in a closed environment. My current setup is 2 servers as hypervisors and 1 server as AD DC + storage server. All 3 are running Windows Server 2012 R2.
    Since everything is running on Ethernet, my choice of storage is between iSCSI and SMB 3.0.
    I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster. However, I only have budget for 1 storage server. Is it a good idea to choose SMB over iSCSI in this scenario (where there is only 1 storage server for the Hyper-V cluster)?
    What do I need to pay attention to in this setup, apart from some unavoidable single points of failure?
    In the SMB 3.0 file server cluster scenario that I mentioned above, they had to use SAS drives for the file server cluster (CSV). I am guessing that in my scenario SATA drives should be fine, right?

    "I suspect that the StarWind solution achieves FT by running shadow copies of VMs on the partner hypervisor"
    No, it does not run shadow VMs on the partner hypervisor. StarWind is a product in a family known as 'software defined storage'. There are a number of solutions on the market. They all provide a similar service in that they allow the use of local storage, also known as Direct Attached Storage (DAS), instead of external shared storage for clustering. Each of these products provides some method to mirror or 'RAID' the storage among the nodes of the software-defined storage cluster.
    So, yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node. Products like StarWind, DataCore, and others are nice entry points to HA without the expense of purchasing an external storage shelf/array of some sort, because DAS is used instead.
    1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require actual hardware to run on. Nexenta, say, claims to do SDS, but they need separate physical servers running Solaris and their (Nexenta) storage app. Microsoft, which we all love so much because they give us the infrastructure we use to make our living, also has Clustered Storage Spaces, which MSFT calls "Software Defined Storage", but it needs physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware solutions. Purer ones don't need any dedicated hardware, but they still share actual server hardware with the hypervisor (HP VSA, VMware Virtual SAN; BTW, the latter requires flash to operate, so it's again not a pure software thing).
    2) Yes, there are a number of solutions, but the devil is in the details. Technically, the whole virtualization world is sliding away from the ancient model of VM-based storage virtualization stacks towards stacks that are part of the hypervisor (VMware Virtual Storage Appliance being replaced by VMware Virtual SAN is an excellent example). So, talking about Hyper-V, there are not many companies who have implemented VM-less solutions. Besides the ones you've named, there's also SteelEye, and that's probably all (Double-Take cannot replicate running VMs effectively, so it cannot be counted). Running the storage virtualization stack as part of Hyper-V has many benefits compared to VM-based approaches:
    - Performance. Obviously, kernel-space DMA engines (StarWind) and a polling driver model (DataCore) are faster in terms of latency and IOPS than VM-based I/O routed over VMBus and emulated storage and network hardware.
    - Simplicity. With native apps it's click and install. With VMs it's a UNIX management burden (BTW, who will update the forked Solaris that the VSA runs on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) and an eternal "chicken and egg" issue: the cluster starts and needs access to shared storage to spawn VMs, but that storage lives inside a VSA VM that itself needs to be spawned. So first you start the storage VMs, then let them sync (a few terabytes, maybe a couple of hours to check access bitmaps for the volumes), and only after that can you start your other production VMs. Very nice!
    - Scenario limitations. You want to implement a CSV for Scale-Out File Servers? You cannot use HP VSA or StorMagic, because the SoFS and Hyper-V roles cannot mix on the same hardware. To surf the SMB 3.0 tide you need native apps or physical hardware behind it.
    That's why the current virtualization leader, VMware, has clearly pointed out where these types of things need to run: side by side with the hypervisor kernel.
    3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there's no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company we provide Tier 2 infrastructure for runs cell-phone stats on EMC, $1M for a few terabytes; the reason is that EMC keep FOUR units like that marked as "spares" and are required to replace a failed one in less than 15 minutes), there's no point in deploying a hardware SAN/NAS for shared storage. SAN/NAS is a sustaining innovation and Virtual SAN is a disruptive one; the disruptive technology comes to replace the sustaining one for 80-90% of business cases, leaving the sustaining one to live on in niche deployments. Clayton Christensen's "The Innovator's Dilemma". Classic. More here:
    Disruptive Innovation
    http://en.wikipedia.org/wiki/Disruptive_innovation
    So I would not consider Software Defined Storage a poor man's HA, or something usable for test & development only. The thing has been ready for prime time for a long while. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell to SMB & ROBO deployments last year?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Problem with date and time syncing itouch to macbook

    I have a problem with the date and time in iCal when I sync my iPod touch 4 with my MacBook Air. Whatever I enter into my calendar on the iPod touch appears differently in iCal (with a 6-hour difference), and the alarm I set in the iPod touch calendar doesn't ring at the right time as set, even though it shows the time that I wanted.
    Both my MacBook and iPod touch are set to the same time zone, and I have checked this many, many times; the time is the same on both machines. But I just don't understand why there is a time difference when synced, and why my alarm doesn't sound at the time it was supposedly set. It rings hours later, showing a different time than it was set for, but with the right time that I had set in parentheses.
    One time my computer ran out of battery and went into sleep mode, and when I opened it again, the date and time settings were all off, which surprised me. Did this create the problem I now have with date and time sync, or was it something else?
    Please help!

    Having the same problem, and I'm running Parallels (which I haven't used for some time), not Boot Camp. I tick the box labelled "Set time automatically" and lock the "Click the lock to make changes" padlock. When I reboot, the lock is unlocked and "Set time automatically" is unticked. Any ideas? Cheers.

  • Date and Time Sync at boot for Active Directory/Open Directory Authenticati

    All the Macs in my school district are set to automatically sync their time with a network time server. They do not do this unless the System Preferences pane is opened. This poses a problem, as all our users must authenticate against an Active Directory and an Open Directory server. If the time is out of sync they cannot log in, and therefore cannot fix the time. I must then log in with a local admin account, set the time, and then the network account can log in. I have tried using direct IP addresses for the NTP server; that doesn't work either. I have adjusted the tolerance of the AD server to accept a large discrepancy in time (did not work). I set the users to be mobile accounts (local home folders); that did not work either. The only fix will be to ensure that the time syncs at boot, before login. Is there a way to force the computer to sync at boot with a given NTP server, prior to the login window appearing?

    I have come up with my own solution for this issue. It is a two-part solution. We found that the computers experience time drift, and that once they get out of sync by 5 minutes they can no longer log in. One would think that the setting in the Date & Time system preference to automatically synchronize the time would take care of this. That, however, is not the case: that check mark does not affect the ntp service at all. It merely eliminates the need to click a button when entering the system preference.
    How did we discover this? Well, that is part of the solution. We used Webmin (http://www.webmin.com) to look at the ntp configuration. No matter what changes we made in the Date & Time preferences, nothing changed in the system ntp settings.
    So, on to the solution: install Webmin, and configure the ntp service manually to sync at your desired interval (I did hourly). This stops time drift. Next, create a startup item and associated plist to force a time sync at boot (be sure to loop it, as different machines initialize their network cards at different speeds). I have made ours available for download (http://www.manheimcentral.org/~getzt/netTime.zip). I hope this helps others. We have found that this works fairly well.

  • Date and Time sync 8 hours ahead!!

    Hey people...
    I'm not a complete vegetable when it comes to the Mac, but my iMac's problem has me mashed...
    I reformatted my hard drive recently so that I could put Boot Camp on (soiling my Mac with Windows software!!) and then restored everything from my Time Machine backup, but ever since I did that, the time (and date) on my computer is 8 hours ahead of the real time, even though it's supposed to be syncing automatically with apple.asia...
    I know I can manually set the time in "Date and Time" in preferences, but I prefer to keep it synced.
    I thought it was a problem with my firewall, so I changed a few settings, but it keeps on doing it.
    Can anyone help? It's rather p@ss@ing me off.

    Thanks for trying to help, mate. I actually solved the problem late last night after noticing something. It seems the Windows clock in Boot Camp was set 8 hours ahead, which meant that every time I fixed the time on here, rebooting into Windows put it 8 hours further ahead again.
    But it ALSO meant that when I tried to sync the time to apple.asia, the server (for some strange and weird reason) returned that time as being correct (the one that was 8 hours ahead). I still don't understand it, but at least (I think, and hope) it's fixed. :S
    Thanks again for taking the time to try and help out, Daniel...

  • Gentoo Linux and Microsoft Failover Clusters / Hyper-V

    Hello,
    Hoping there are a few people on the boards familiar with running Gentoo Linux guests under Microsoft Failover Cluster / Hyper-V hosts.
    I have four Gentoo Linux guest VMs (running kernel 3.12.21-r1) running under the Microsoft Failover Cluster system with Hyper-V as the host. All of the Hyper-V drivers are built into the kernel (including the utilities and balloon drivers), and generally they run without issue.
    For several months now, however, I have been having strange issues with them. Essentially, they stop responding to network requests after random intervals. These intervals aren't a few minutes or hours apart; it's more like days or even weeks before one of them stops responding on the network side.
    The funny thing is that the VMs themselves still respond on the console side. However, if I issue a reboot command on the externally non-responsive VM, the system will eventually get to a stage where all of the services are stopped and then hang right after the "mounting remaining system ro" line (or something like that).
    The Failover Cluster Manager then reports that the system is "Stopping", but the system never reboots.
    I have to completely restart the HOST system so that either (a) the VM in question transfers to another host and starts responding again, or (b) when the HOST comes back up I can work with the VM again.
    This *ONLY* happens on the Gentoo Linux guest VMs and not on my Windows VMs.
    Wondering if anyone has hints on this.
    Thank you for your time.
    Regards, Christopher K.

    Hi ckoeber,
    Hyper-V supports most mainstream Linux distributions, but not all of them, since there are so many. As of now, Gentoo Linux is not on the supported list. You can refer to the following article for more detail:
    Linux Virtual Machines on Hyper-V
    http://technet.microsoft.com/en-US/library/dn531030
    Hope this helps.

  • Hyper-V Failover Cluster Live migration over Virtual Adapter

    Hello,
    Currently we have a test environment with 3 Hyper-V hosts, each of which has 2 NIC teams: a LAN team and a Live/Mngt team.
    In Hyper-V we created a virtual switch (which is the Live/Mngt NIC team).
    We want to separate the Mngt and Live traffic with VLANs. To do this we created 2 virtual adapters, Mngt and Live, and assigned IP addresses and 2 VLANs (Mngt 10, Live 20).
    Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch that both are on, meaning live migration simply uses the vSwitch instead of the virtual adapter.
    Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to bind live migration to a virtual adapter?
    Greetings Selmer

    It can be done in PowerShell, but it's not intuitive.
    In Failover Cluster Manager, right-click Networks and open Live Migration Settings.
    Checked networks are allowed to carry live migration traffic; networks higher in the list are preferred.
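    For the PowerShell route, the "Virtual Machine" cluster resource type exposes the live migration network list as private properties; a hedged sketch that restricts live migration to a cluster network named "Live" (the network name is an assumption, match your own):

    ```powershell
    # Exclude every cluster network except "Live" from live migration.
    Import-Module FailoverClusters
    $live = Get-ClusterNetwork -Name "Live"
    $exclude = (Get-ClusterNetwork |
        Where-Object { $_.Id -ne $live.Id } |
        ForEach-Object { $_.Id }) -join ";"
    Get-ClusterResourceType -Name "Virtual Machine" |
        Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude
    ```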
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • Failover Cluster Manager 2012 Showing Wrong Disk Resource - Fix by Powershell

    On Server 2012 Failover Cluster Manager, we have one Hyper-V virtual machine that is showing the wrong storage resource.  That is, it is showing a CSV that is in no way associated with the VM.  The VM has only one .vhd, which exists on Volume 16. 
    The snapshot file location and smart paging file are also on Volume 16.  This much is confirmed by using the Failover Cluster Manager to look at the VM settings.  If you start into the "Move Virtual Machine Storage" dialog, you can see
    the .vhd, snapshots, second level paging, and current configuration all exist on Volume 16.  Sounds good.
    However, if you look at the resources tab for the virtual machine, Volume 16 is not listed under storage.  Instead, it says Volume 17, which is a disk associated with a different virtual machine.  That virtual machine also (correctly) shows Volume
    17 as a resource.
    So, if everything is on Volume 16, why does the Failover Cluster Manager show Volume 17, and not 16, as the Storage Resource?  Perhaps this was caused by an earlier move with the wrong tool (Hyper-V manager), but I don't remember doing this.
    In Server 2003, there was a "refresh virtual machine configuration" option to fix this, but it doesn't appear in Failover Cluster Manager in Server 2012.
    Instead, the only way I've found to fix the problem is in PowerShell.
      Update-ClusterVirtualMachineConfiguration "put configuration name here in quotes"
    You would think that this would be an important enough operation to include GUI support for it, possibly in the "More Actions" right-click action on the configuration file.
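    If you have several VMs to fix, the same cmdlet can be looped over every VM configuration resource; a hedged sketch (assumes the default "Virtual Machine Configuration" resource type name and, as in the post above, passes the configuration resource name):

    ```powershell
    # Refresh the cluster's stored configuration for every clustered VM.
    Import-Module FailoverClusters
    Get-ClusterResource |
        Where-Object { $_.ResourceType.Name -eq "Virtual Machine Configuration" } |
        ForEach-Object { Update-ClusterVirtualMachineConfiguration -Name $_.Name }
    ```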

    Hi,
    Thanks for sharing your experience!
    You experience and solution can help other community members facing similar problems.
    Please copy your post and create a new reply, then we can mark the new reply as answer.
    Thanks for your contribution to Windows Server Forum!
    Have a nice day!
    Lawrence
    TechNet Community Support

  • Failover cluster for BizTalk 2010

    Hi friends,
    I need to understand the step-by-step configuration of failover for BizTalk 2010:
    the use of Failover Cluster Manager from the Windows OS, and how it works with BizTalk.
    Please share the right URL with me.
    Thanks all.

    A similar question was asked recently in our forum:
    http://social.msdn.microsoft.com/Forums/en-US/18239da7-74f2-45a7-b984-15f1b3f27535/biztalk-clustering?forum=biztalkgeneral#4074a082-8459-420f-8e99-8bab19c8fba2
    This white paper from Microsoft shall help you with step-by-step guidance on failover clustering for BizTalk 2010:
    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=2290
    Also refer to this blog, where the author guides you through the required steps in a series of posts:
    Part 1: BizTalk High Availability Server Environment – Preparations
    Part 2: BizTalk High Availability Server Environment–Domain Controller Installation
    Part 3: BizTalk High Availability Server Environment – SQL & BizTalk Active Directory Accounts
    Part 4: BizTalk High Availability Server Environment – Prepping our SQL & BizTalk Failover Clusters
    Part 5: BizTalk High Availability Server Environment – SQL Server 2008r2 Failover Cluster
    Part 6: BizTalk High Availability Server Environment–BizTalk 2010 Failover Cluster Creation
    Part 7 – BizTalk High Availability Server Environment –BizTalk 2010 Installation, Configuration
    and Clustering
    Part 8 – Adding Network Load Balancing to our High Availability Environment
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • Failover Cluster Validation Report Error with IBM USB Remote NDIS Network device

    We are setting up a Microsoft Windows Server 2008 R2 failover cluster on IBM x3850 X5 servers and get errors in the failover cluster validation report because the IBM USB Remote NDIS Network Device is using APIPA addresses, and both servers are using the same APIPA address.
    How should I configure the server and OS for the failover cluster to be MS approved?
    IBM doesn't recommend disabling the network device, but it is a possible solution!?

    What I did was use ipconfig /all to see the settings it was using, and then set the IP settings on the NDIS driver to match, except that I incremented the last octet by 1 for each node so that they do not have the same IP address. I ran the cluster validation again and it came up clean, and I have not experienced any issues yet. It does give warnings about it being an Automatic Private IP Address that should not be used, which is OK because we are not going to use it anyway.
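    The manual step above can also be scripted; a hedged sketch, with the adapter name and address as assumptions (increment the last octet on the second node):

    ```powershell
    # Pin the Remote NDIS adapter to a fixed address in the APIPA range so the
    # two nodes no longer share the same auto-configured address.
    netsh interface ip set address name="Local Area Connection 3" static 169.254.95.120 255.255.0.0
    ```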
    Rich Baumet

  • Using orchestrator to create failover cluster. out of options

    Hi all,
    I am trying to build a failover cluster using Orchestrator 2012 R2, and then services on top of the cluster.
    However, I am completely stuck at the first stage: getting the cluster online.
    It is very basic: 2 nodes running 2008 R2, shared storage for the quorum, a virtual name and an IP.
    I am trying to start with as little as possible. I added the Failover Clustering feature to the Orchestrator server to avoid the double-hop problem. I found that I can create the cluster with a command like this:
    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NonInteractive -Command {Import-Module 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\FailoverClusters\FailoverClusters.psd1'; New-Cluster -Name clustername -Node servername1,servername2 -StaticAddress 1.1.1.1}
    I call powershell.exe explicitly to avoid Orchestrator's 32-bit PowerShell problem.
    I can successfully create the cluster using this same command if I run it manually inside my PowerShell prompt; it usually takes 20 seconds to complete and shows the progress within the prompt.
    But when I put it in the Run .Net Script activity (PowerShell) in my Orchestrator runbook and run it, it shows success without giving any warning/error, but unfortunately I found that there are errors in the event log, as below:
    Could not retrieve the network topology for the nodes.
    Security must be initialized before any interfaces are marshalled or unmarshalled. It cannot be changed once initialized
    I'd like to add that if I replace "New-Cluster" with "Test-Cluster", it runs normally and performs all the tests, with a report as output.
    I tried turning off UAC (admin approval mode for all administrators).
    I've also created the disabled cluster AD computer object with enough permissions before running the command (once again, I can create the cluster manually with the same command).
    Also, I've set the Orchestrator runbook service to run under my user account, so the credentials used for manual execution and by Orchestrator should be the same (the account is a member of the local Administrators group on both nodes).
    P.S. I found that the error "Security must be initialized before any interfaces are marshalled or unmarshalled" is not specific to Orchestrator + PowerShell; it also occurs with other products.
    Please help, thanks

    It turned out I needed to use the PowerShell OIP from
    link
    then set up CredSSP authentication,
    and then use the following script to call PowerShell 3.0:
    $ABC = powershell {Import-Module 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\FailoverClusters\FailoverClusters.psd1'; New-Cluster -Name clustername -Node servername1,servername2 -StaticAddress 1.1.1.1}
    Finally, I can run the activity without issue.
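    For reference, the CredSSP setup mentioned above usually amounts to something like the following; a sketch, with the node names as assumptions:

    ```powershell
    # On the Orchestrator/runbook server: allow delegating fresh credentials to the nodes.
    Enable-WSManCredSSP -Role Client -DelegateComputer "servername1","servername2" -Force
    # On each cluster node: accept credentials delegated over CredSSP.
    Enable-WSManCredSSP -Role Server -Force
    ```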

  • GPS Access to high precision time and time stamping WASAPI microphone captures

    I am interested in using multiple Windows Phones to capture microphone WAV that is time-stamped with high accuracy (sub-millisecond), hopefully from the GPS system. The GPS hardware obviously has the capability down to sub-microsecond levels, but I can't find an API for it.

    What I would like to do is get geo-positional data, which has a defined statistical uncertainty but might be relatively well correlated, plus as accurate a time stamp as possible. Latency isn't an issue, but consistency is. GPS, of course, easily produces time information to sub-microsecond precision, though I don't know a way to access it in WP. 0.1 ms consistency would be all I really need, but it's important that each phone in a cluster be able to capture and time-stamp a sound (assume all phones are equidistant from the source) to within 0.1 ms. I am thinking of a product that could be used, for one obvious example, at weddings: capture the proceedings and allow after-the-fact enjoyment by replaying them and shifting the focus to the bride/groom/minister as they talk, using beam-forming DSP on the data. There are other ways, but it occurs to me that the ubiquity of smartphones would really make this easy; just have the guests download an app. It would be part of a wedding documentation package along with videography and stills.

  • Set Date and Time Automatically...doesn't

    We've noticed that the setting to automatically synchronize the date and time with Apple's time server doesn't work if the System Preferences pane is 'locked'.
    In other words, if the user that's currently logged in isn't an admin user, the machine doesn't set the date and time automatically.
    As soon as you unlock the preference pane by clicking the padlock at the bottom, the date and time sync as they should.
    Anyone else noticed this?

    We do want the clock correct. The issue here is that the internal clocks drift; as I understand it, this is one of the main reasons for using a network time server.
    With the Date & Time preferences set to 'Set date and time automatically', but with the padlock in the bottom corner locked, it's as though being locked prevents the automatic date and time settings from updating.
    But the preference is checked on, and so the clock should be 'resetting' itself as necessary.
    To give an example: say the machine has been off for a while, and unplugged. This will cause the machine not to know the date and time. When the machine starts up, the preference should kick in and the machine should update the date and time automatically.
    But it doesn't. As long as that padlock is locked, even though the preference is checked on, the machine will not alter the date and time, until the padlock is unlocked.

  • OVS date and time

    Hi all,
    I am facing a problem with date and time sync on Oracle VM Server.
    The problem is that after a guest powers off, the guest time reverts to an old date (16-Jul-2009). The OVM server time is synced with an NTP server and is current, but the guest time does not match the OVS time; even after I did an NTP update in the guest after a reboot, I see the same old time.
    It seems the guest syncs with some old time, but where does it find this old time?
    If anyone knows, please help.
    ./thanks

    > What 6-hour offset? Where did you see that?
    Sorry, I got confused with another post I was replying to somewhere else. Please ignore the 6-hour remark.
    Follow-up question: if you start this guest on a different server, does it get the right time, i.e. does it only have the wrong time on one particular server? Or is it one particular guest that always has the wrong time when it starts, regardless of which server it starts on?
    OK, everything is clear, thanks for the help. I will check it all.
