DR servers - mailflow question

So I have a main site with 8 Exchange servers and now a DR site with 4 Exchange servers, all in the same DAG. There are two copies of each DB at the main site and a third passive copy at the DR site. The DR site DBs are set for manual activation only. All Exchange servers have all roles installed; we did not separate out the CAS role. I'm noticing that mail is routing through my DR servers at times. Exchange just sees these other four servers as four more CAS servers and uses them as such. What is the best practice here so that no mail routes through these DR servers? Should I just disable all the receive connectors on my DR servers? Or is there another or better way to handle this?

Exchange 2013 does this by design as part of its high-availability mail flow. I would recommend you go with it.
How messages are routed from the Transport service depends on the location of the message recipients relative to the Mailbox server where categorization occurred. The message could be routed to one of the following locations:
To the Mailbox Transport service on the same Mailbox server.
To the Mailbox Transport service on a different Mailbox server that's part of the same DAG.
To the Transport service on a Mailbox server in a different DAG, Active Directory site, or Active Directory forest.
For delivery to the Internet through a Send connector on the same Mailbox server, through the Transport service on a different Mailbox server, through the Front End Transport service on a Client Access server, or through the Transport service on an Edge Transport server in the perimeter network.
http://technet.microsoft.com/en-us/library/aa996349(v=exchg.150).aspx
http://technet.microsoft.com/en-us/library/aa998825(v=exchg.150).aspx
DJ Grijalva | MCITP: EMA 2007/2010 SPA 2010 | www.persistentcerebro.com

Similar Messages

  • Understanding Outgoing mail servers - simple question

    My brother set up a pop account for me with the Italian ISP Aruba.
    I have a broadband connection in the UK with BT.
    When I set up the outgoing SMTP in Mail should it be an Aruba server or a BT one?

    Thank you. Switching to the BT server seems to be working for now.
    Through years of using many different machines this problem raises its ugly head every so often and the frustrating thing is always that a healthy email setup can seem to corrupt without any changes being made and is then put right with trial and error.
    Anyway, thanks to you both for helping me understand it a little better.

  • User-friendly way to issue chown commands on remote servers

    I'd like my technically unversed users to have, on demand, the benefit of chown commands executed on remote servers giving them ownership of certain files. I'd like this to be doable without administrators' involvement and without any of the users needing physical access to the servers.
    By "benefit of chown commands" I mean the results a competent user would get entering the command if he or she were actually doing so. By "technically unversed" I mean specifically that said users aren't and won't ever be trained to ssh into subject servers and issue chown commands themselves directly.
    I should mention that the "Get Info" interface does not in this case avail users of a way to take ownership of particular files because ACEs apply to the files in question. That ACEs apply changes what is presented: instead of any editable fields under Ownership and Permissions, all users see in the "Get Info" interface is a list of whatever ACEs apply.
    Please note that users do, by virtue of ACEs, have "change ownership" permissions for the files in question. Also, authentication to the servers in question under subject users' own logins is possible as necessary.
    What I'd like to start with is getting some idea how complicated this could be for me to do myself as a beginning AppleScripter. I'll describe what I guess would be involved and hope for someone to shed light.
    I'm guessing that something the user does at his or her own machine, involving a file he or she has selected, would constitute an Apple Event which a process on the client would send to a process on the server. Then I expect the server process would issue the chown command locally, based on
    1) which file was selected when the Apple Event took place, and
    2) subject user's identity.
    Finally, I expect some feedback would be sent back to the client process so the user can be told the result.
    Is this a fair sketch of how this should work? What is a beginner with limited time likely to accomplish attempting this?
    (Find context for this posting here: http://discussions.apple.com/thread.jspa?threadID=831517&tstart=0)
    PowerMac   Mac OS X (10.4.8)  

    First, if I understand you correctly, I'd be using Curl and, say, Perl rather than Applescript to get this done. In other words, what you wrote in Applescript is about all I'd need in that language--yes?
    That's correct, give or take any errors in the script. (For obvious reasons I didn't test it.)
    Then, please note that I want to chown, not chmod. Is this an issue?
    Nope. (Beyond what you pointed out below.)
    I am looking at Perl documentation and read that "on most systems, you are not allowed to change the ownership of the file unless you're the superuser..." (http://perldoc.perl.org/functions/chown.html). However, isn't apache running as root?
    I never thought about that. Wow, this is complicated! Are you really sure you can't make do with chmod instead?
    Anyway, the answer is yes and no. The main Apache process usually runs as root, but executes CGI scripts (and other requests) as another user to avoid inherent insecurity. So unless you do something terribly, terribly insecure, you will not be able to chown from Perl. (And, although I am often lax about security, enabling root access for CGIs strikes even me as dangerous, which means it's a very bad idea.)
    Really what you want is for the CGI, which does not run as root, to hand off to another process which does. I'm not a Unix guru, and would never claim to be, but I think the two following methods might work:
    1. Set up a cron job running as root which looks in a directory once every minute/hour/whatever. The file name should be the user to change the owner to, and it should contain a delimited (in some form; return is possibly safe) list of files. Have the cron job walk through the list of files and use chown, then clobber the contents of the file. (Note that a CGI can use "chmod", which can make sure that the files it creates in the directory are readable by the cron job.) (Also note that you'll want to use flock to avoid race conditions between the cron job and the CGI!) This method would not be instantaneous, since the cron job only runs periodically.
    2. Set up a script that runs as root and takes a line of text in the format:
    user:path/to/file
    and executes chown using that information. Make this process run at startup as root. Have it open a named pipe, with permissions such that the CGI script can write to it, and watch for input from that pipe. (A rough sketch of this approach follows the notes below.)
    Some general notes:
    A. Whatever you do, make sure that the binary/script/whatever running as root can't be written to by anyone who doesn't have root permissions.
    B. Make sure to check that the user and file actually exist before doing anything with them. (And make sure to do it in the root process, since you have no guarantee that someone won't figure out what's going on and come up with some clever injection scheme to make your root process break security.) (And don't do it by passing a command to the shell; use Perl's chown or some equivalent, so that you'll be somewhat less vulnerable.)
    C. For that matter, don't forget to check and make sure that the path you're about to chown is within the share point, and that the user you're going to chown to makes sense in context, so that nobody can (for example) take over someone else's user directory, or get write permission to /sbin, or something evil like that. (In fact, it might be for the best if you limited the chown operations to files only, just to be sure.)
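    To make the named-pipe idea concrete, here is a minimal sketch in Python (used purely for illustration; the thread itself is talking about Perl and shell). The pipe location and the share-point root are made-up examples, and the validation follows notes B and C above:
    #!/usr/bin/env python3
    # Hypothetical sketch of method 2: a root-owned daemon that reads
    # "user:path/to/file" requests from a named pipe and chowns the file.
    # The pipe path and share-point root below are assumptions, not real paths.
    import os
    import pwd
    import stat

    PIPE = "/var/run/chown-requests"   # FIFO the CGI writes to (assumed path)
    SHARE_ROOT = "/srv/share"          # only files under here may be chowned (assumed)

    if not os.path.exists(PIPE):
        os.mkfifo(PIPE, 0o622)         # writable by the CGI user, readable only by root

    while True:
        # Opening a FIFO for reading blocks until a writer shows up.
        with open(PIPE) as fifo:
            for line in fifo:
                try:
                    user, path = line.rstrip("\n").split(":", 1)
                    uid = pwd.getpwnam(user).pw_uid            # note B: the user must exist
                    real = os.path.realpath(path)
                    if not real.startswith(SHARE_ROOT + os.sep):
                        continue                               # note C: stay inside the share point
                    if not stat.S_ISREG(os.lstat(real).st_mode):
                        continue                               # note C: files only, to be safe
                    os.chown(real, uid, -1)                    # note B: no shell command involved
                except (ValueError, KeyError, OSError):
                    continue                                   # ignore malformed or failing requests
    The cron variant in method 1 is the same idea, except the loop walks a spool directory on a schedule instead of blocking on a pipe.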
    Also, I get the part about how a constraint involving the "do shell script" method argues against using pure Applescript in this case. But just for my information, is Applescript otherwise sufficiently capable? If it weren't such a matter of getting everything on one line, could Applescript send commands between hosts, convert local paths to paths on servers, issue change ownership commands, and handle authentication? Do methods adequate to those purposes exist in Applescript? Or would using multiple scripting languages be entailed anyway? I'm guessing the latter.
    Yes and no. Helpful answer, right?
    First and foremost: AppleScript was originally created as a language to control programs, which would have an extensible grammar through the installation of files called "Scripting Additions". It has since been puffed up via AppleScript Studio to an application-building language in its own right, but the language itself does not have support for a lot of things which, nevertheless, the language can do by controlling another program or by extension.
    AppleScript can send messages between hosts. If the remote host is a Mac, and has "Remote Apple Events" turned on in the "Sharing" control panel, then you can send commands to programs on the remote machine almost exactly as though they were local. (The only differences are in how you specify the application and how you let AppleScript know what the remote application "understands".) This support is built into the language.
    If the remote host is not a Mac, you must control a program which can "translate". When it comes to terminal programs, for security reasons Apple did not include any interactive systems which could be controlled. (Although they did include "expect", I see, which would theoretically allow you to work around this...)
    Since converting a path is really just text processing, yes, AppleScript can do that. I didn't try to build that in because I am under the impression that you know some other language/shell scripting tool better than AppleScript, so it makes better sense for you to put as much of the work into the parts you know, in order to make debugging easier. One method of doing it in AppleScript:
    set x to [a POSIX path found somehow for a file on a connected server]
    if (the offset of "/Volumes/" in x) is 1 then
        -- "the offset of" uses 1-based offsets, not 0 as in most languages
        set x to text 10 through -1 of x
        -- This removes "/Volumes/" from the beginning of x
        set x to text ((the offset of "/" in x) + 1) through -1 of x
        -- That removes up through the next slash, which is the volume name
        set x to "/Path/To/The/Share/Point/On/The/Server/" & x
    else
        error "The path isn't in /Volumes/, so either the server is mounted in a nonstandard way or the path isn't on a remote host at all." number 9000
    end if
    (The other method of which I am aware is to change AppleScript's text item delimiter to "/", convert the path to a list, test whether the first item is "Volumes", then put together items 3 and up into a string again. I have always had a semi-irrational prejudice against using this method because Apple's documentation circa about 1996, from which I learned AppleScript, made it sound like this might be dangerous, but it works.)
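    If you do end up handling the path translation in the CGI or in some other script instead, the same logic is only a few lines. The sketch below uses Python purely as an illustration, and the server-side share-point path is the same made-up example as above:
    # Rough equivalent of the AppleScript above: map a locally mounted
    # "/Volumes/<volume>/..." path to the corresponding path on the server.
    # SHARE_POINT is a made-up example; substitute the real share-point path.
    SHARE_POINT = "/Path/To/The/Share/Point/On/The/Server"

    def to_server_path(local_path):
        prefix = "/Volumes/"
        if not local_path.startswith(prefix):
            raise ValueError("not under /Volumes/, so it doesn't look like a mounted share")
        # Drop "/Volumes/" and the volume name, keep the rest of the path.
        parts = local_path[len(prefix):].split("/", 1)
        if len(parts) < 2 or not parts[1]:
            raise ValueError("path names the volume itself, not a file on it")
        return SHARE_POINT + "/" + parts[1]

    # Example: to_server_path("/Volumes/Projects/plans/site.pdf")
    # -> "/Path/To/The/Share/Point/On/The/Server/plans/site.pdf"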
    The Finder (which can be scripted) can apparently change ownership and permissions -- a fact which I did not know until just now; I must have missed it last time I looked for it -- and of course "do shell script" can be used to call "chmod" and "chown". The problem with both of these methods, vis-a-vis your particular difficulty, is that your files are not local. You could turn on Remote Apple Events and have the Finder do it, but that's really a security hole. And a potentially maddening one to figure out if anyone starts exploiting it.
    I'd stick with a CGI and the cron/named pipe scheme. No matter what you do you're going to have a little extra security risk, just because chown requires root permissions, but minimizing that risk is probably a good thing.

  • Quarterly CPU question from a rookie

    Question qualifier: I am not a trained or experienced DBA, I have however been thrown under the Oracle bus to set up a Patch and Vulnerability Management program for the Oracle servers.
    Question: Are the quarterly CPUs cumulative? In other words, does each quarterly CPU include the patches from the previous quarterly CPU?
    If not, I would like to clarify their installation order.
    The CPUs MUST BE installed in the released sequence to make sure that all vulnerabilities are covered.
    a. This is correct. The patches have to be installed in the released order.
    b. This is the official position, but in reality it is not that strict.
    c. It doesn't matter as long as all the patches are applied.
    d. What is a CPU?
    I thank everyone in advance for your help.
    Jonathan

    All these questions are answered in Note 360470.1 (FAQ)
    Basically, CPU patches are security patches, like the ones released by other vendors, sent at predefined intervals and containing a list of fixes for bugs or issues found. They are usually not cumulative (the answer depends on which Oracle product you are referring to).
    As for the official position: treat it as strongly recommended.
    CPU = Critical Patch Update

  • Is Oracle Configuration Manager Available for Hyperion Planning Servers?

    The "My Oracle Support" site (https://support.oracle.com) describes the values of installing the Oracle Configuration Manager on servers.
    Question: is this available for the Hyperion products or just Oracle databases? If available for Hyperion products, does it add value to tech support?
    Any comments would be appreciated.

    John
    This is partially correct. If you, user12136418, are looking for a way to monitor health automatically and get access to WebEx-session technical support from one console, OCM is a great tool. The fact that it is neither a thick client nor a Windows service makes it ideal for having agents monitor network traffic via SNMP while still having all Hyperion health monitored, without having to run a tool like SCOM. I have been waiting for this tool for a year. Now if only Hyperion would work with IE8. But we all can not have egg in our beer.
    Thank you
    Michael Worthylake
    Systems Analyst
    DJO, LLC
    1430 Decision St.
    Vista, CA 92081-8553 U.S.A.
    Direct: 760-734-5631
    Cell: 760-445-0746
    www.djortho.com
    [email protected]

  • Joining existing OES servers running samba to DSfW domain

    I feel dense for asking, but I have not seen an answer to this question anywhere. I added DSfW to my environment, and I had existing OES 11 (SP1) servers that had Samba running and functioning well for what I needed. Now, before I upgrade to Service Pack 2, I would like to get some ducks in a row.
    I want the servers that were running Samba prior to the introduction of DSfW to be members of the domain.
    What are the steps to do this?
    Daniel Wells AIA, VCP
    Senior Associate | IT Coordinator
    MHTN Architects, Inc.
    Direct: 801.326.3215 | www.mhtn.com
    vision made real

    We are a file and print shop with users demanding all sorts of ways to access their files. The three file servers in question are set up with NSS/NCP access to the file systems, with SMB (as part of a workgroup, since the DSfW domain was added later) and AFP overlaid. SMB was originally used for web access to the file systems through our SSL VPN.
    I would like to join the Macs in the office to the DSfW domain, have them authenticate to the domain and have access to the files, hopefully without having to re-enter passwords for each server, and thus be able to discard AFP on the servers.
    If I need to redo the SMB configuration to get it to work with DSfW, then that is alright. I'm sure I can reconfigure the VPN to access any new SMB configuration.
    Daniel Wells AIA, VCP
    Senior Associate | IT Coordinator
    MHTN Architects, Inc.
    Direct: 801.326.3215 | www.mhtn.com
    vision made real
    >>> ab<[email protected]> 9/12/2014 10:11 PM >>>
    It may help to understand exactly what these servers are doing now, vs. what you want them to do. Should they be DCs when done? Are you just wanting to share files to workstations using SMB? Are there pieces/configurations of Samba right now on those other servers that you would like to preserve?
    Good luck.
    If you find this post helpful and are logged into the web interface,
    show your appreciation and click on the star below...

  • Exchange 2010 RPC Encryption question

    Quick question. We are running Exchange 2010 SP3 RU5 (latest and greatest). All of our CAS servers have RPC encryption disabled. If clients are enabled for encryption, will this encrypt traffic to the CAS servers? The reason I ask is that we're seeing issues with some of our Cisco WAN optimizers. I just want a better understanding: if only the client side (Outlook 2007/2010) is enabled, does the traffic get encrypted?

    Hi,
    I agree with the opinion above: if all CAS servers have RPC encryption disabled, client traffic to the servers will not be encrypted, regardless of whether encryption is enabled on the client.
    For more details, refer to the following articles:
    RPC Encryption Required
    Outlook connection issues with Exchange 2010 mailboxes because of the RPC encryption requirement
    In addition, Microsoft strongly recommends you leave the encryption requirement enabled on your server.
    Hope this helps!
    Thanks.
    Niko Cheng
    TechNet Community Support

  • Can No Longer Write files to an NFS-mounted drive on one Server. Error -36

    Hi Folks:
    I have searched the Net (including these forums now) for several hours and found nothing definite.
    The story is this: I have a Fedora 14 server that, up to this point, has been acting as a file server. Sometime last week (around the 28th, we think) it stopped allowing people to copy files from the Mac OS X 10.6.8 workstations (which is what we have, largely) to the shared drives. It fails with "The Finder can't complete the operation because some data in "" can't be read or written. (Error Code -36)." People can mount the drives in question, see them, read from them, and copy from them, but not copy anything to them or write anything (Save As from Word) to the drives.
    Now, at about the time this started, we moved a MySQL database from this particular server somewhere else. As part of that operation, the server was rebooted. The server where we moved the DB to was not. It also has an NFS share that continues to work.
    I have verified, service by service, that nothing is running that shouldn't be. One of my thoughts was that something came up that shouldn't have when the reboot happened. Everything seems fine in that regard.
    My looking through the Net has yielded several possible answers, none of which has worked. One has been that this is a sign of bad media. I have checked the drive array in question three separate times. It has passed all those times. I started another share on a separate spindle on the server and I still cannot write to this new share on a totally different drive.
    The drives in question are formatted for EXT3 (hey, I've giving out as many details as I can here!  :-)  ).
    I have also repaired permissions on the SOURCE drives. No luck there.
    My export file looks like this:
    /raid/data/BigBang  *(rw,insecure,sync,no_subtree_check,nohide,all_squash)
    /raid/data          *(rw,insecure,sync,no_subtree_check,nohide,no_root_squash)
    /raid               *(rw,insecure,sync,no_subtree_check,nohide,no_root_squash)
    /home/rkinne/test   *(rw,insecure,sync,no_subtree_check,nohide,no_root_squash)
    No thoughts on security right now. I'm trying to get this darn thing to work, especially on /raid/data/BigBang. That share WAS "no_root_squash" but I changed it to "all_squash" as I've been flailing around today. There was no change in behavior. The exports file on the other server that is working is virtually identical.
    Two Mac workstations work right now. One is running 10.4. The other is running 10.7.3. The rest - the problematic ones - are running 10.6.8. One is running 10.5 and is also not working.
    I have also flushed the DNS caches on the servers in question, but it's too early to see if that will do something yet.
    This issue has flummoxed folks for years now. I understand that. Oddly, it JUST came out of the blue for us. Any thoughts would be appreciated.
    Doc Kinne
    American Assoc. of Variable Star Observers

    TimVasilovic wrote:
    I understand the process you are describing. In the past I have been able to embed metadata in a Raw file, move it to a server, pick that file up on another computer, and see the metadata without need of the .xmp sidecar. Is the ability to embed no longer supported by Photoshop? Since this issue began we have taken to doing all our metadata editing in PhotoMechanic, which embeds without creating a sidecar. If Photoshop is pushing people to create sidecar .xmp files only for writing metadata to Raw files, I will probably move fully to PhotoMechanic, because using sidecars has proven tricky in the past with how our files get moved around.
    If you are using Adobe Camera Raw on a camera raw file, you either have an XMP sidecar file created or the data is stored in a database. Which happens is your choice, set in Edit > Camera Raw Preferences. If you use ACR to edit a JPEG, it does not create a distinct XMP file, but the data is not directly written to the image either. One can still delete the edits in Bridge with Edit > Develop Settings.
    If you use a DNG, the metadata is written to the image. Not sure what process Photo Mechanic uses.
    It appears to be a permissions problem: other than the fact that CS6 Bridge is now 64-bit and has a new cache method, there are no changes in how it handles metadata.

  • "Unable to log in to the user account"

    I'm having a problem I'm hoping one of you may have come up against and solved. We have two Mac OS X 10.4 servers - one a login server, the other contains the student Homes folders. Now when a student who has an account from last year logs in, a message says "You are unable to log in to the user account "username" at this time. Logging in to the account failed because an error occurred. The home folder for the user account is located on an AFP or SMB server. Contact your system administrator for help."
    It seems there is a problem with the handoff from the login server to the data server. I can connect to the data server through Connect to Server while logged on as admin, so the server is accessible online. I double-checked the Sharing info of the share points and they are set correctly. Also, when I run Server Monitor, the stats summary for both servers says "waiting for response."
    Any ideas? Thanks!

    Mike
    Server Monitor is an application that monitors XServe hardware, providing feedback for the administrator. It has nothing to do with the server operating system. If you're trying to use it on anything other than an XServe, all you'll see is "Waiting for Response" all day long.
    If your hardware is an XServe, then you need to use either localhost or the server's loopback address (127.0.0.1) in the name field, followed by the default admin account's password.
    +"The home folder for the user account is located on an AFP or+
    +SMB server. Contact your system administrator for help"+
    This error is usually down to (but not always) a DNS/DHCP issue or some other obscure network-related issue affecting DNS. What do the logs say, server and client side, when the log-in fails?
    It's possible the affected user no longer exists as a principal? Does the same thing happen to this user regardless of which client computer is used? You could search the schema using dscl from a client to see if the affected user is listed in the LDAP database. Alternatively you could issue:
    sudo kadmin.local -q list_principals
    on the server itself. If the affected user is not listed but exists in WGM, then review the password type. It's possible it has been set to Crypt? You could delete the user, re-create the account, re-locate the home folder, and try again. It's also possible the student's home folder has developed a problem? Does the 'jiggle' and error occur immediately or after a slight delay? Do you have a strict Password Policy in place? Sometimes problems can develop with the Password Policy (it does get logged) that affect single accounts only.
    You could try to create a completely new account and home for the affected user. Transfer the data from the old home, propagate default permissions, and go for a log-in again. Does it work now?
    I'm assuming the two servers in question are in a Master/Replica relationship?
    Tony

  • Oracle9iAS R2 - Virtual Hosts with Portal and SSO with OIDDAS application

    Hi!
    I have installed a machine with the name minsk.discover.local. The machine has Infrastructure and Portal installed. The installation was successful and it works fine. But I have published Portal to the web with the name intranet.discover.com.br. Oracle describes:
    1 - Create the virtual hosts in SSO and PORTAL - OK
    2 - Run ptlasst to create the SSO Partner Applications - OK
    After these steps Portal and SSO work fine, but when I click the portlet to create a user to access the OIDDAS application, Portal redirects to the SSO login page at the address mct.com.br, the internal name, and that name does not respond on the internet.
    I need help!
    Marcio Mesti

    I just spoke to the Oracle App Server admins; the two servers in question are clustered.
    So my question changes slightly to:
    What is the best way to install and configure a WebGate for clustered Oracle App Servers with multiple virtual hosts that are residing behind a load balancer (Traffic Manager)?
    Thanks,
    Andy

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had live replication set up between it and another box on the network which was also running Server 2012. After installing Server 2012 R2, when a live migration is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The section under Processor for "migrate to a physical computer using a different processor" is already checked, and this same VM was successfully being live-replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiped out the installation of Server 2012, and installed Server 2012 R2; this was not an upgrade installation.

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation, you can use the same procedure for the VMs in question: back them up and restore them on the new server.
    Kind regards, Leonardo.

  • How do I delete the information from the old firefox sync mechanism?

    I have updated to the latest versions of Firefox and the latest/new version of Firefox Sync on both my devices by following the instructions from this support site.
    QUESTION: Did disconnecting my devices from the old sync mechanism delete any and all of my information (e.g. bookmarks, passwords, etc.) from Mozilla's servers?
    QUESTION: If disconnecting my devices from the old sync mechanism did NOT delete all of this information from the Mozilla's servers, how do I now delete all of that information from the old sync mechanism? Also, if it did not and a step was required to do so before disconnecting my devices, that should be added to the instructions about moving to the new sync.
    This seems to have been asked in a couple of different threads, but the answers I read all seem to describe how to either upgrade to the new sync mechanism or delete an account from the new sync mechanism. One provided a link to a site to delete an account, but it's unclear if that will delete the account from the old sync mechanism or from the new sync mechanism since there's no indication as to which is which in Mozilla's account services.
    Thanks for your help!

    If you used the same email address on the new Sync as you were using with the older Sync, there is no "old" data to delete. Basically - the same account which was updated for the new version of Sync.

  • How can I have a default servlet and an index.html?

    Hi,
    I'm writing a small webapp to test/understand the 2.2 Servlet Spec. I am deploying this as a WAR to Orion, Tomcat and Silverstream.
    The app's name is: "myapp"
    My application has an index.html, which is listed as the sole welcome-file in the welcome-file-list element in the app's web.xml.
    The interesting thing is that, after adding a default Servlet (<url-pattern>/</url-pattern>), I can no longer access the app's index.html either implicitly or explicitly:
    1. Implicit:
    - http://localhost/myapp
    - http://localhost/myapp/
    2. Explicit:
    - http://localhost/myapp/index.html
    - http://localhost/myapp/index.html/
    All of these invoke the Default Servlet in all 3 app servers.
    Question: How can I have both a default Servlet and an index page?
    Thanks in advance.
    Miles

    You can define it in the web.xml file.
    Look at the DTD, element "welcome-file-list".
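    For what it's worth, the two pieces being discussed look roughly like this in a Servlet 2.2 web.xml (a hedged sketch; the servlet name and class here are placeholders, not taken from the original post):
    <web-app>
        <servlet>
            <servlet-name>defaultServlet</servlet-name>
            <servlet-class>com.example.DefaultServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>defaultServlet</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>
        <welcome-file-list>
            <welcome-file>index.html</welcome-file>
        </welcome-file-list>
    </web-app>
    As the original poster observed, a servlet mapped to the "/" pattern acts as the container's default servlet, so it receives requests that nothing else matches, which is presumably why it ends up answering for index.html as well.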

  • VMM 2012 R2: error 410 / 0x80070001 when adding a new Hyper-V host. VMM Agent won't install automatically (status 1603)

    I have several Windows Server 2012 R2 hosts. The OS is freshly installed, with no roles or 3rd-party software besides the Hyper-V role. I also have a freshly installed VMM 2012 R2 server. They are all in the same AD domain. I want to manage those Hyper-V hosts with this VMM instance, but I am having trouble connecting those hosts to VMM.
    Whenever I try to add a Hyper-V host in the VMM console, the process is stopped almost immediately after it starts. The failing step is "1.2 Install Virtual Machine Manager Agent". The final error message is as follows:
    Error (410)
    Agent installation failed on XXX.
    Incorrect function (0x80070001)
    Recommended Action
    Try the operation again. If the problem persists, install the agent locally and then add the managed computer.
    If I try to install the agent manually, copied from the VMM host (C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\agents\amd64\3.2.7634.0 as of now), by just clicking on its MSI file in Windows Explorer (and agreeing to the Windows prompt to run it with elevated privileges), the installation fails as well with "Installation success or error status: 1603." in the server Application log.
    The only way I found to install the agent is to run the installation from a command prompt with elevated privileges. In this case the installation finishes successfully, and I can add the host to VMM by marking the "Reassociate the host" checkbox in the VMM wizard.
    Why is it so? Isn't the agent supposed to install without manual intervention? As I recall, I had no such problem with previous versions of Windows Server and VMM.
    Evgeniy Lotosh
    MCSE: Server Infrastructure, MCSE: Messaging

    I brought up the share because it is off by default with Hyper-V Server as well as a Core installation, regardless of the firewall policy involved.
    Maybe, but I don't work with pure Hyper-V and Core servers. I have a full-fledged Windows Server 2012 R2 Datacenter.
    You're correct that the File Server role is not enabled by default; this is my mistake. I open the SMB-IN port manually (with the help of a group policy), and Windows considers it to be an equivalent of the enabled File Server role. Nonetheless, the ADMIN$ share is always accessible in my environment (and without it, for example, I wouldn't be able to remotely install things like the SCOM agent, which installs without a glitch). Just in case, I manually installed the File Server role, and it didn't help.
    What you say about multihomed servers is interesting. The servers in question do have multiple network interfaces, but at the time I tried to install the VMM agent only one of them (a designated host management / network access interface) had a real IP address. Two other interfaces were not configured (I apply a virtual switch configuration after the VMM agent is installed). Just in case, I disabled them when I tried to install the VMM agent remotely on the last host, and the installation still failed immediately after start. Anyway, after the VMM agent is installed manually the host can be re-associated with VMM normally, so it's doubtful that I have a network issue.
    And, of course, there is no 3rd-party software on the hosts (no antiviruses/firewalls in particular) except the EMC Unisphere Host Agent (a piece of software necessary for connecting a host to an EMC storage system) and networking drivers. Actually, this is a pure Microsoft environment.
    So I still believe that it has something to do with the Windows Installer's inability to configure the Windows Firewall policy after being invoked from Windows Explorer with a double-click on an MSI file. But I don't know how to strictly confirm it.
    Evgeniy Lotosh
    MCSE: Server Infrastructure, MCSE: Messaging

  • Migration 7.1.5 to 9.1.2 LAB

    Hi All,
    I have a scenario in which I'm currently running CUCM 7.1.5, and I want to upgrade it to CUCM 9.1.2.
    I have a cluster of 5 CUCM servers.
    In my lab I have the possibility to have 3 VMs started simultaneously.
    Is it possible to upgrade my cluster in 2 stages:
    PUB + 2 SUB (SUB 1 and 2)
    Then remove the 2 SUBs (SUB 1 and 2) and recreate PUB + 2 SUB (SUB 3 and 4).
    My problem, as you will have understood, is that I cannot reproduce more than 3 servers in the lab simultaneously.
    Question:
    Do I need to have all the SUBs up in my lab at the same time during the upgrade?
    thanks
    Philippe

    Assuming you're attempting the bridge upgrade, yes, ALL of the servers from the live environment need to be in the lab.
    Otherwise, remove servers from the live environment prior to the DRS restore so you only have 3 servers.
    Upgrade only those 3 servers, and then reinstall the other 2 servers once on 9.1(2) and reconfigure everything as initially.
    HTH
    java
    if this helps, please rate
    www.cisco.com/go/pdihelpdesk
