Is the primary site server a single point of failure?

I'm installing ConfigMgr 2012 R2 and aiming for as redundant a design as possible. I have two servers, call them CM01 and CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, Software Update Point. I have also installed the SMS Provider on both servers. SQL is on a 3rd box.
I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (Get-WmiObject -Namespace root\ccm -Class CCM_Authority).CurrentManagementPoint. The management point assigned to the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the site, and the SmsAdminUI.log reveals this error: "Provider machine not found".
Is the Primary site server a single point of failure? 
Why can't I point the console to a secondary SMS provider?
If this just isn't possible, what is the course of action to restore console access once the Primary Site server is down?
Many Thanks

Yes, the site server is a single point of failure, and the common suggestion that a CAS with multiple primary sites fixes this is completely false. Using a CAS and multiple primaries in fact introduces multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
HA is achieved from a client perspective by adding multiple site systems hosting the client-facing roles: MP, DP, SUP, Application Catalog.
Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself.
The real question is: which service that ConfigMgr provides do you need HA for?
Jason | http://blog.configmgrftw.com
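For reference, a minimal sketch for checking which machines are registered as SMS Providers for the site; it assumes the root\sms namespace is reachable on a surviving site system (CM02 below is a placeholder) and that your account has SMS Admin rights:

    # List the SMS Provider machines registered for the site; run this against
    # a server that still hosts the SMS Provider.
    Get-WmiObject -Namespace "root\sms" -Class SMS_ProviderLocation -ComputerName "CM02" |
        Select-Object Machine, SiteCode, NamespacePath

Each entry in that list hosts its own copy of the provider, so in principle the console's site connection dialog can be pointed at any machine from the list that is still online.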

Similar Messages

  • Package content on Primary Site Server (no Distribution Point Role)

    Hey,
    I have the situation that there are 3 Servers with following roles installed:
     1.File Server
     2.SCCM 2012 R2 CU3 Primary Site Server / Database Server (was a distribution point prior to the update to CU3)
     3.Distribution Point
    Our packages are all stored as "source" on the File Server.
    Now some SCCM Applications / packages are created and distributed to the distribution point.
    Unfortunately, on the Primary Site Server the folder "SCCMContentLib" grows just as it does on the distribution point (so this means that all packages are distributed to the primary site server, too).
    My question now is: why? :)
    Is this really needed (I see no reason for it) or is it a problem in the infrastructure (if yes - how can I resolve it)?
    Thank you for help
    Kind Regards

    Not a problem with your infrastructure. What you're describing is the default behavior for ConfigMgr in terms of storing content on the primary.
    -Nick O.
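    If you want to see how much disk that default behavior is using on the primary, a quick sketch (the drive letter is an assumption; SCCMContentLib sits on whichever drive the site selected):

        # Rough size of the primary's content library (adjust the drive letter).
        $sum = (Get-ChildItem 'D:\SCCMContentLib' -Recurse -File -ErrorAction SilentlyContinue |
                Measure-Object -Property Length -Sum).Sum
        '{0:N1} GB' -f ($sum / 1GB)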

  • Administrative Server - Single Point of Failure?

    From my understanding, all managed servers in a cluster get their configuration by contacting the administrative server in the cluster. So I assume that in the following scenario the administrative server could be a single point of failure.
    Scenario:
    1. The machine on which the administrative server was running got a hardware defect.
    2. Due to some bad coding, one of the managed servers on another machine crashed.
    3. A small script tries to restart the previously failed server from step 2.
    I assume that step 3 is not possible, because there is no backup administrative server in the whole cluster, so the script will fail when trying to start the crashed managed server again.
    Did I understand this right? Do you have some suggestions on how to avoid this situation? What does BEA recommend to their enterprise customers?
    Best regards
    Thomas

    Hi Thomas,
    There is no reason why you couldn't keep a backup administration server available that is NOT running. That way, if the primary administration server went down, you could launch the secondary server with the same administration information, and the managed servers could retrieve the required information from the backup administration server.
    Regards,
    -Rob
    Robert Castaneda [email protected]
    CustomWare http://www.customware.com
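    A minimal sketch of the cold-standby approach Rob describes, assuming the domain directory (config.xml and friends) is replicated to the standby host and the usual WebLogic environment script has been sourced; the paths and server name are placeholders:

        # On the standby machine, start the admin server from the replicated domain directory.
        cd /opt/bea/domains/mydomain
        java -Dweblogic.Name=myAdminServer weblogic.Server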
              "Thomas E. Wieger" <[email protected]> wrote in message
              news:[email protected]...
              > From my understanding, all managed servers in a cluster get their
              > configuration by contacting the administrative server in the cluster.
              > So i assume in the following scenario, the administrative server
              > could be a single point of failure.
              >
              > Scenrario:
              > 1. The machine, on which the administrative server was running got a
              > hardware defect.
              > 2. due to some bad coding one of the managed servers on another machine
              > crashed.
              > 3. a small script tries to restart the previously failed server from step
              2.
              >
              > i assume, that step 3. is not possible, because there is no backup
              > administrative server
              > in the whole cluster. so the script will fail, wen trying to start the
              > crashed managed server
              > again.
              >
              > did i understand this right? do you have some suggestions, how to avoid
              this
              > situation?
              > what does BEA recommend to their enterprise customers?
              >
              > best regards
              >
              > Thomas
              >
              >
              >
              

  • MBAM 2.5 integration broke my Primary Site Server - Management Point HTTP status code 404, Not Found

    I uninstalled MBAM 2.0 and installed MBAM 2.5. During the 2.5 installation there was one section asking for a "Web service application pool domain account", where I entered a domain admin account. I don't know if that has any effect on this problem.
    The situation now is that my site is out of order because the Management Point cannot connect to IIS, or something like that.
    In status messages I see "MP Control Manager detected management point is not responding to HTTP requests. The HTTP status code and text is 404, Not Found. Message ID 5436".
    In mpcontrol.log I see: Call to HttpSendRequestSync failed for port 80 with status code 404, text: Not Found.
    I have tried:
    Checked bindings; HTTP uses port 80
    Uninstalled and reinstalled the MP component successfully
    Restarted the primary site server several times
    In IIS, Default Web Site -> SMS_MP -> Basic Settings I ran "Test Settings". With pass-through authentication it cannot access the D:\SMS_CCM path; when I changed to the domain admin account it succeeded. I have no idea whether this has something to do with it.

    I have experienced all the above problems too, and they can be a pain to fix. I guess the moral of the story is (as Andy says): leave the ConfigMgr server alone. If you want an MBAM server, then build an MBAM server.
    Gerry Hampson | Blog:
    www.gerryhampsoncm.blogspot.ie | LinkedIn:
    Gerry Hampson | Twitter:
    @gerryhampson
    Hmm, I guess ConfigMgr integration will not be the problem if your MP is using HTTP (80) and MBAM will use HTTPS (443). If you then set the SPN to HTTPS, it will use 443 as-is, right? But other custom ports might be the problem?
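    For what it's worth, you can reproduce the probe that MP Control Manager performs; these are the well-known MP health-check URLs (the FQDN below is a placeholder for your MP):

        # A healthy MP answers HTTP 200 with a small XML body; a 404 surfaces
        # as an error here, matching the mpcontrol.log entry above.
        Invoke-WebRequest -Uri 'http://mp01.contoso.com/sms_mp/.sms_aut?mplist' -UseBasicParsing
        Invoke-WebRequest -Uri 'http://mp01.contoso.com/sms_mp/.sms_aut?mpcert' -UseBasicParsing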

  • Database instance for SCCM 2012 and WSUS on a single primary site server

    I am going to install SCCM 2012 and its SQL database on a single physical server. This is going to be a single primary site server. The default SQL instance will be dedicated to SCCM 2012 with no other named instances to be added on the SQL server down
    the road.
    During the WSUS server role installation, there is the Database Options page asking for using (1) Windows Internal Database, (2) existing db server on this computer, or (3) an external db server.
    Since SCCM 2012 doesn't share db instance with others, how should I handle the WSUS db that's going to be hosted on the same SCCM/SQL physical server? Do I really need to create a separate SQL instance just for the WSUS db?
    Thanks and regards. 

    Even though you can share one, it is best practice to have the SCCM 2012 and WSUS databases on separate instances.
    http://technet.microsoft.com/en-us/library/hh692394
    When the Configuration Manager and WSUS databases use the same SQL Server and share the same instance of SQL Server, you cannot easily determine the resource usage between the two applications. When you use a different SQL Server instance
    for Configuration Manager and WSUS, it is easier to troubleshoot and diagnose resource usage issues that might occur for each application.
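    A quick sketch to verify where things landed; the .\WSUS instance name and the default SUSDB database name are assumptions:

        # List the SQL instances installed on this box...
        (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server').InstalledInstances
        # ...then confirm the WSUS database lives in its own instance.
        sqlcmd -S '.\WSUS' -Q "SELECT name FROM sys.databases WHERE name = 'SUSDB'"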

  • Can we assign 2 IPs to a SCCM 2012 primary site server and use one IP for communicating with its 2 DPs and the second for communicating with its upper-hierarchy CAS, which is in a different domain?

    Hi,
    Can we assign 2 IPs to a SCCM 2012 primary site server and use one IP for communicating with its 2 DPs and the second for communicating with its upper-hierarchy CAS?
    Scenario: We are building 1 SCCM 2012 primary site and 2 DPs in one domain. In the future this will attach to a CAS server which is in a different domain. Can we assign 2 IPs to the primary site server, one IP used to communicate with its 2 DPs and the second IP for communicating with the CAS server in the other domain?
    Details:
    1) Server: Windows 2012 R2 Std, VM environment. 2) SCCM: SCCM 2012 R2. 3) SQL: SQL 2012 Std
    Thanks
    Rajesh Vasudevan

    First, it's not possible. You cannot attach a primary site to an existing CAS.
    Primary sites in 2012 are *not* the same as primary sites in 2007, and a CAS in 2012 is completely different from a central primary site in 2007.
    CASes cannot manage clients. Also, primary sites are *not* used for delegation in 2012. As Torsten points out, multiple primary sites are used for scale-out (in terms of client count) only. Placing primary sites for different organizational units provides no functional differences but does add complexity, latency, and additional failure points.
    Thus, as the others have pointed out, your premise for doing this is completely incorrect. What are your actual business goals?
    As for the IP addressing, that depends upon your networking infrastructure. There is no way to configure ConfigMgr to use different interfaces for different types of traffic. You could potentially manipulate the routing tables in Windows, but that's asking for trouble IMO.
    Jason | http://blog.configmgrftw.com | @jasonsandys
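    For completeness, the routing-table manipulation Jason mentions (and warns against) would look something like this; the subnet, gateway, and interface index are all placeholders:

        # Persistently route traffic for the CAS subnet out a specific interface.
        route -p add 10.20.0.0 mask 255.255.0.0 192.168.1.1 if 12
        route print 10.20.*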

  • How can I design load balancing for distant datacenters without a single point of failure?

    Dear Experts,
    We are using the following very old and passive method of redundancy for our cloud SaaS, but it's time to make it proper. Can you please advise?
    Current issues:
    1. No load balancing. IP selection is based on primary and secondary IP configurations. If the primary fails to respond, the DNS record changes to the secondary IP with TTL = 1 min.
    2. When the primary server fails, it takes around 15 minutes for clients to access the servers. Way too long!
    The target:
    A. Activate a load-balancing mechanism to utilize the stand-by server.
    B. How can the solution be designed to avoid a single point of failure? In the previous example, UltraDNS is a single point of failure.
    C. If using GSS is the solution, how can it be designed in both server locations (for active redundancy) using an ordinary DNS server?
    D. How can HSRP, GSS, GSLB, and/or VIP be used? What would be the best solution?
    Servers are running Oracle DB, MS SQL, and Tomcat with 2x SANs of 64TB each.


  • Is a cluster proxy a single-point-of-failure?

    Our group is planning on configuring a two-machine cluster to host servlets/JSPs and a single back-end app server to host all EJBs and a database.
    IIS is going to be configured on each of the two cluster machines with a cluster plugin. IIS is being used to optimize performance of static HTTP requests. All servlet/JSP requests would be forwarded to the WebLogic cluster. Resonate's Central Dispatch is also going to be installed on the two cluster machines. Central Dispatch is being used to provide HTTP request load balancing and to provide failover in case one of the IIS servers fails (because the IIS process fails or the cluster machine it's on fails).
    Will this configuration work? I'm most concerned about the failover of the IIS cluster proxy. If one of the proxies is managing a sticky session (X), what happens when the machine the proxy is on dies and we fail over to the other proxy? Is that proxy going to have any awareness of session X? Probably not. The new proxy is probably going to believe this request is new and forward the request to a machine which may not host the existing primary session. I believe this is an error?
    Is a cluster proxy a single point of failure? Is there any way to avoid this? Does the same problem exist if you use WebLogic's HTTP server (as the cluster proxy)?
    Thank you.
    Marko.

    We found our entity bean bottlenecks using JProbe Profiler. It's great for watching the application and seeing what methods it spends its time in. We found an exceedingly high number of calls to ejbLoad were taking a lot of time, probably due to the fact that our EBs don't all have bulk-access methods.
    We also had to do some low-level method tracing to watch WebLogic thrash EB locks; basically it locks the EB instance every time it is accessed in a transaction. Our DBA says that Oracle is seeing a LOT of lock/unlock activity also. Since much of our EB data is just configuration information, we don't want to incur the overhead of Java object locks, excess queries, and Oracle row locks just to read some config values. Deadlocks were also a major issue because many txns would access the same config data.
    Our data is also very normalized, and also very recursive, so using EBs makes it tricky to do joins and recursive SQL queries. It's possible that we could get good EB performance using bulk-access methods and multi-table EBs that use custom recursive SQL queries, but we'd still have the lock-thrashing overhead. Your app may differ; you may not run into these problems, and EBs may be fine for you.
    If you have a cluster proxy you don't need to use sticky sessions with your load balancer. We use sticky sessions at the load-balancer level because we don't have a cluster proxy. For our purposes we decided that the minimal overhead of hardware IP-sticky session load balancing was more tolerable than the overhead of a dog-slow cluster proxy on WebLogic. If you do use the proxy then your load balancer can do round-robin or any other algorithm amongst all the proxies.
    Marko Milicevic <[email protected]> wrote in message news:[email protected]...
    > Sorry Grant. I meant to reply to the newsgroup. I am putting this reply back on the group.
    > Thanks for your observations. I will keep them all in mind.
    > Is there any easy way for me to tell if I am getting acceptable performance with our configuration? For example, how do I know if my use of entity beans is slow? Will I have to do 2 implementations? One implementation using entity beans and another implementation that replaces all entity use with session beans, then compare the performance?
    > One last question about the cluster proxy. You mentioned that you are using LocalDirector with sticky sessions. We too are planning on using sticky sessions with Central Dispatch. But since the cluster proxy is stateless, does it matter if sticky sessions are used by the load balancer? No matter which cluster proxy the request is directed to (by load balancing), the cluster proxy will in turn redirect the request to the correct machine (with the primary session). Is this correct? If I do not have to incur the cost of sticky sessions (with the load balancer) I would rather avoid it.
    > Thanks again Grant.
    > Marko.
              >
              > -----Original Message-----
              > From: Grant Kushida [mailto:[email protected]]
              > Sent: Monday, May 01, 2000 5:16 PM
              > To: Marko Milicevic
              > Subject: RE: Is a cluster proxy a single-point-of-failure?
              >
              >
              > We haven't had too many app server VM crashes, although our web server
              > typically needs to be restarted every day or so due to socket strangeness
              or
              > flat out process hanging. Running 2 app server processes on the same box
              > would help with the VM stuff, but remember to get 2 NICs, because all
              > servers on a cluster need to run on the same port with different IP addrs.
              >
              > We use only stateless session beans and entity beans - we have had a
              number
              > of performance problems with entity beans though so we will be migrating
              > away from them shortly, at least for our configuration-oriented tables.
              > Since each entity (unique row in the database) can only be accessed by one
              > transaction at a time, we ran into many deadlocks. There was also a lot of
              > lock thrashing because of this transaction locking. And of course the
              > performance hit of the naive database synching (read/write for each method
              > call). We're using bean-managed persistence in 4.5.1, so no read-only
              beans
              > for us yet.
              >
              > It's not the servlets that are slower, it's the response time due to the
              > funneling of requests through the ClusterProxy servlet running on a
              WebLogic
              > proxy server. You don't have that configuration so you don't really need
              to
              > worry. Although i have heard about performance issues with the cluster
              proxy
              > on IIS/netscape, we found performance to be just fine with the Netscape
              > proxy.
              >
              > We're currently using no session persistence. I have a philosophical issue
              > with going to vendor-specific servlet extensions that tie us to WebLogic.
              We
              > do the session-sticky load balancing with a Cisco localdirector, meanwhile
              > we are investigating alternative servlet engines (Apache/JRun being the
              > frontrunner). We might set up Apache as our proxy server running the
              > Apache-WL proxy plugin once we migrate up to 5.1, though.
              >
              > > -----Original Message-----
              > > From: Marko Milicevic [mailto:[email protected]]
              > > Sent: Monday, May 01, 2000 1:08 PM
              > > To: Grant Kushida
              > > Subject: Re: Is a cluster proxy a single-point-of-failure?
              > >
              > >
              > > Thanks for the info Grant.
              > >
              > > That is good news. I was worried that the proxy maintained
              > > state, but since
              > > it is all in the cookie, then I guess we are ok.
              > >
              > > As for the app server, you are right. It is a single point
              > > of failure, but
              > > the machine is a beast (HP/9000 N-class) with hardware
              > > redundancy up the
              > > yin-yang. We were unsure how much benefit we would get if we
              > > clustered
              > > beans. There seems to be a lot of overhead associated with
              > > clustered entity
              > > beans since every bean read includes a synch with the
              > > database, and there is
              > > no fail over support. Stateful session beans are not load
              > > balanced and do
              > > not support fail over. There seems to be real benefit for
              > > only stateless
              > > beans and read-only entities. Neither of which we have many
              > > of. We felt
              > > that we would probably get better performance by locating all
              > > of our beans
              > > on the same box as the data source. We are considering creating a two
              > > instance cluster within the single app server box to protect
              > > against a VM
              > > crash. What do you think? Do you recommend a different
              > > configuration?
              > >
              > > Thanks for the servlet performance tip. So you are saying
              > > that running
              > > servlets without clustering is 6-7x faster than with
              > > clustering? Are you
              > > using in-memory state replication for the session? Is this
              > > performance
              > > behavior under 4.5, 5.1, or both? We are planning on
              > > implementing under
              > > 5.1.
              > >
              > > Thanks again Grant.
              > >
              > > Marko.
              > > .
              >
              >
    > Grant Kushida <[email protected]> wrote in message news:[email protected]...
    > > Seems like you'll be OK as far as session clustering goes. The cluster proxies running on your IIS servers are pretty dumb - they just analyze the cookie and determine the primary/secondary IP addresses of the WebLogic web servers that hold the session data for that request. If one goes down, the other is perfectly capable of analyzing the cookie too. As long as one proxy and one of your two clustered WL web servers survives, your users will have intact sessions.
    > > You do, however, have a single point of failure at the app server level, and at the database server level, compounded by the fact that both are on a single machine.
    > > Don't use WebLogic to run the cluster servlet. Its performance is terrible - we experienced a 6-7x performance degradation, and WL support had no idea why. They wanted us to run a version of ClusterServlet with timing code in it so that we could help them debug their code. I don't think so.

  • New Site Server with Distribution Point & Software Update Point Roles not pulling SUGs

    I just set up a new server and installed the DP & SUP roles on it. I am getting the following in the log; this is just a small sample, as it's kind of repetitive:
    Report state message 0x8000094F to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report Body: <ReportBody><StateMessage MessageTime="20140328183445.000000+000" SerialNumber="0"><Topic ID="FPP00002" Type="901" IDType="0"/><State ID="2383" Criticality="0"/><UserParameters
    Flags="0" Count="2"><Param>FPP00002</Param><Param>["Display=\\FPPSCCM02.FPP.WUCON.WUSTL.EDU\"]MSWNET:["SMS_SITE=FPP"]\\FPPSCCM02.FPP.WUCON.WUSTL.EDU\</Param></UserParameters></StateMessage></ReportBody>
     SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report status message 0x8000094F to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Status message has been successfully sent to MP from remote DP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Retry 10 times SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Start to evaluate all packages ... SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Start to evaluate package 'FPP00002' version 0 ... SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report status message 0x4000094C to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Status message has been successfully sent to MP from remote DP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Failed to evaluate package FPP00002, Error code 0x80070002 SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report state message 0x8000094F to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report Body: <ReportBody><StateMessage MessageTime="20140328183445.000000+000" SerialNumber="0"><Topic ID="FPP00002" Type="901" IDType="0"/><State ID="2383" Criticality="0"/><UserParameters
    Flags="0" Count="2"><Param>FPP00002</Param><Param>["Display=\\FPPSCCM02.FPP.WUCON.WUSTL.EDU\"]MSWNET:["SMS_SITE=FPP"]\\FPPSCCM02.FPP.WUCON.WUSTL.EDU\</Param></UserParameters></StateMessage></ReportBody>
     SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report status message 0x8000094F to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Status message has been successfully sent to MP from remote DP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Report status message 0x40000952 to MP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Status message has been successfully sent to MP from remote DP SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    DP monitoring finishes evaluation. SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    Failed to evaluate some packages after 10 retry SMS_Distribution_Point_Monitoring 3/28/2014 1:34:45 PM 6444 (0x192C)
    The console shows that the DP is waiting for content, and I don't see where I can create a prestage package.
    # When I wrote this script only God & I knew what I was doing. # Now, only God Knows! don't retire technet http://social.technet.microsoft.com/Forums/en-US/e5d501af-d4ea-4c0f-97c0-dfcef0192888/dont-retire-technet?forum=tnfeedback

    D:\SMS_DP$\sms\logs\smsdpmon.log
    This is the 2nd server in our SCCM 2012 Sp1 Hierarchy.  It is set up with the following Site System Roles:
    Component Server
    Distribution Point
    Site System
    Software Update Point
    It is prestage-enabled. My intent is that the client systems at the location where this DP sits will use it to pull their Microsoft updates, as well as content for any applications we push to them, rather than going over the WAN to the primary site server.
    In Distribution Point Configuration Status, the console shows "Failed to update package" and "Package Transfer Manager failed to update the package "yxz00040", version 3, on distribution point server.my.domain.com; review pkgxfermgr.log for more information about this failure."
    It also goes on to list 2 possible causes and solutions:
    Site server does not have sufficient rights to the source directory.
    (The site server account is a member of local Administrators on the primary.)
    Not enough disk space available.
    (I have over 1TB of available space, and the primary site server has only 150GB available for the entire content repository, both applications and software updates.)
    # When I wrote this script only God & I knew what I was doing. # Now, only God Knows! don't retire technet http://social.technet.microsoft.com/Forums/en-US/e5d501af-d4ea-4c0f-97c0-dfcef0192888/dont-retire-technet?forum=tnfeedback
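    One small aid when reading smsdpmon.log: the 0x8007xxxx error codes wrap plain Win32 errors, so you can decode them directly. A sketch:

        # 0x80070002 wraps Win32 error 2 - "The system cannot find the file
        # specified", i.e. the content wasn't where the DP expected it.
        [System.ComponentModel.Win32Exception]::new(0x80070002 -band 0xFFFF).Message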

  • Using a custom certificate store for SCCM 2012 clients and primary site server

    I have read what seems to be all the PKI-related documentation out there for SCCM 2012. I have a PKI infrastructure up and running, issuing certificates with an offline root through Group Policy autoenrollment. The problem I'm faced with is that we are migrating from SCCM 2007, which was in native mode, and we chose not to use the CA that we used for the old SCCM environment. When the clients attempt to communicate with the MP, it runs through all of the different certificates, which adds a tremendous amount of overhead to the MP. We will have tens of thousands of clients by migration end. Could someone please point me to a document that goes over how to leverage a custom certificate store that I could then tell the new 2012 environment to use? I know it's in there; I've seen it in the console. The setup is one primary site server with SQL on box, the PKI I just mentioned, and the old 2007 environment that is still live.
    I read that you can try to use the SAN as a method of identifying the new certs, but I haven't found a good document covering exactly how that works. Any info you could provide I would be very grateful for. Thanks.

    Jason, thank you for your reply. I'm getting the impression that you have never been in the situation where you had to deal with 2 different PKI environments. Let me state that I understand what you're saying about trust. We have to configure the trusted root CA via GPO. That simply isn't enough, and I have a valid example to back up this claim. When the new clients got the advertisement and began the ccmsetup process, I used the /pki switch among others. What the client ended up doing was selecting the certificate with the longest validity period, which was issued by our old CA. It checked the authentication chain, found it to be valid, and selected it for communication. At that point the installation failed, period, no caveats as you say. The reason the install failed is that the new PKI infrastructure is integrated into the new environment, and the old one is not. So when you said "they can use *any* cert that is trusted because at the end of the day, there is no difference between two valid certs that have the same purpose as long as they are trusted", that is not correct. Both certs are trusted, and use the same certificate template, but only one certificate would allow the install to complete successfully.
    Once I started using the CCMCERTISSUERS switch, the client install went swimmingly. The only reason I'm still debating this point is that someone might read this thread, see your comments, and assume "well, I've got my new PKI configured as a trusted root CA, I should be all set", and their deployment will fail, just as my pilot did.
    About Intune: I'm looking forward to doing a POC in the lab I built with my Note 3. I'm hoping it goes well, as I really want to have our MDM migrated into ConfigMgr. I think the biggest obstacle, outside of selling it to management, will be the actual device migration from the current MDM solution. From what I understand of the enrollment process, manual install and config is the only path forward.
    Thanks Jason for your post and discussion.
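    For anyone landing here later, the ccmsetup property that resolved it is specified like this; the site code and issuer DN below are placeholders, and the DN must match your issuing CA's subject exactly:

        ccmsetup.exe /UsePKICert SMSSITECODE=P01 CCMCERTISSUERS="CN=New Issuing CA; OU=PKI; O=Contoso"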

  • Connect a primary site as a distribution point to another primary site

    Hi @ all,
    I have a central site and a primary site server running - all is working fine. Now I have to provide another primary site server. I set up the server and connected the new site to the central site - all is working fine.
    But on my first primary site (P1) there are some (many) software packages which I would like to deploy to the new primary site server (P2). Is it possible to establish a connection from P1 to P2 so that I can handle P2 as a distribution point of P1?
    Otherwise, the only way I see is to copy all software packages from P1 to the central site and distribute the packages to P1 and P2.
    Maybe there is a way which I don't see. Every bit of help is very appreciated.
    Many thanks for your attention.
    Rolf

    Hi Jason,
    thanks for your reply. That is not the way I would like to go.
    I'd like to be able to make a decision on whether I send a package from P1 to P2. I'm sorry, but I did not explain that P1 and P2 will work in parallel, because the servers are for two different branches. But the P1 server contains software which can also be used in the other branch office. We are using SCCM 2007 R3 - I forgot that in my first post.
    I hope this helps you understand which problem I have to solve.
    Regards
    Rolf

  • Single points of failure?

    So, we are looking into the xServe RAID, and I'd like some insight into making things as bulletproof as possible.
    Right now we plan to have:
    a load balancer and a failover load balancer (running on cheap BSD hardware, since hardware load balancers are so damned expensive) feeding into
    two application servers, which communicate with
    one back-end server, which serves as both a database server and an NFS server for the app servers
    And the volumes that will be NFS-mounted would be on our xServe RAID, which would be connected directly to the back-end server.
    The networking hardware would all be failover through multiple switches and cards and so forth.
    The idea here is to avoid as many single points of failure as possible. Unfortunately at the moment we don't have a DBA who is fluent in clustering, so we can't yet get rid of the back-end server as a single point of failure. (Which is also why I'm mounting the RAID on it and sharing via NFS... if the database goes down, it won't matter that the file service is down too.) However, in the current setup, there's one other failure point: the RAID controllers on the xServe RAID.
    Performance is less important to us on this than reliability is. We can't afford two RAID units at the moment, but we can afford one full of 500 gig drives, and we really only need about 4 TB of storage right now, so I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc. As far as I understand it, this eliminates the RAID controllers as a single point of failure, and as far as I know they are at least supposedly the only single point of failure in the xServe RAID system. (I could also do RAID 10 that way, but due to the way we store files, that wouldn't buy us anything except added complexity.)
    And later on, down the road, when we have someone good enough to figure out how to cluster the database, if I understand correctly, we can spend the money get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it, thus cutting out the middle man NFS service. (I am under the impression that this sort of volume-sharing is possible via FC... is that correct?)
    Comments? Suggestions? Corrections to my misapprehensions?
    --Adam Lang

    Camelot wrote:
    > A couple of points.
    >> I was thinking of setting up drive 0 on controller 0 and drive 0 on controller 1 as a software RAID mirror, and the same with drive 1, etc.
    > Really? Assuming you're using fourteen 500GB drives, this will give you seven volumes mounted on the server, each a 500GB mirror split across the two controllers. That's fine from a redundancy standpoint, but it suffers from the standpoint of managing seven direct mountpoints on the server, as well as seven NFS shares, and 14 NFS mount points on the clients. Not to mention file allocations between the volumes, etc.
    > If your application is such that it's easy to dictate which volume any particular file should be on and you don't mind managing all those volumes, go ahead; otherwise consider creating two RAID 5 volumes, one on each controller, using RAID 1 to mirror them on the back-end server and exporting a single NFS share to the clients/front-end servers.
    Quite simple, actually. But admittedly, two RAID 5s RAID-1-ed together would be much more efficient, space-wise.
    >> if I understand correctly, we can spend the money to get a fibre switch or hub or whatever they call it and mount the RAID on the two (application server) systems that actually use it
    > Yes, although you'll need another intermediate server as the metadata controller to arbitrate connections from the two machines. It becomes an expensive option, but your performance will increase, as will the ease with which you can expand your storage network (adding more storage as well as more front-end clients).
    But then that means that the metadata controller is a single point of failure...?
    --Adam Lang
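    For the record, the controller-spanning mirror described above can be built with Apple's software RAID tooling; the set name and disk identifiers below are placeholders (one member per controller), and the exact diskutil syntax varies by OS X release:

        # Mirror a drive on controller 0 with its twin on controller 1.
        diskutil appleRAID create mirror Mirror0 JHFS+ disk2 disk10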

  • Linux cluster, no single point of failure

    I'm having difficulty setting up a Business Objects cluster on Linux with no single point of failure. Following the instructions for a custom install, I end up connecting to the CMS on the other server, with no CMS running on the server I'm doing the install on. That gives a cluster, but we only have a CMS running on one server in this scenario, and we can't have a single point of failure. Could someone explain how to set up a 2-server clustered solution on Linux that doesn't have a single point of failure?

    Not working. I can see my other node listed in the config, but the information for the servers states that the SIA is available. I've checked network/port connectivity between the boxes, and the SIA is running and available on each box.
    In the instructions for installing on a system with Windows capabilities, I read about a step to connect to an existing CMS:
    http://wiki.sdn.sap.com/wiki/download/attachments/154828917/Opiton1_add_cms3.jpg
    http://wiki.sdn.sap.com/wiki/display/BOBJ/Appendix1-Deployment-howtoaddanotherCMS-Multipleenvironments
    Via the Linux install.sh script, no matter what I do, I'm not coming across any way that lets me reach that step.
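    A possible avenue, sketched under the assumption of a BI-platform-era install under <install_dir>/sap_bobj: the "connect to an existing CMS" step is typically reached through the post-install admin scripts rather than install.sh itself, and the node name below is a placeholder:

        # Interactive node management: add a node pointing at the existing CMS.
        cd <install_dir>/sap_bobj
        ./serverconfig.sh
        # Then start the new node's SIA:
        ./ccm.sh -start mynode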

  • Single point of failure for web dispatcher

    Hi,
    I need advice on how I can resolve the single point of failure for the web dispatcher. In case the web dispatcher goes down, what are the alternatives which can be used to avoid this?
    In our environment we have a DB server with two application servers, and the web dispatcher is installed on the DB server. I need to know what I can do when the web dispatcher on the DB server crashes and cannot be restarted at all.
    We are running Oracle 10.2.0.2.0 on AIX 5.3.
    Regards,
    Codlick

    Hi Codlick,
    the answer is: you cannot (switch over to two web dispatchers).
    If you want to use two web dispatchers, they need something in front, like a hardware load balancer. This would actually work, as the web dispatchers know their sessions and the sticky servers for those. But remember, you always need a single point for the incoming address (IP).
    Your problem really is about switchover groups. Both web dispatchers would need to run in different switchover groups and switch over to the same third web dispatcher. I'm not sure if your switchover software can handle this (I'm not even sure if anyone can do this...), as this means the third web dispatcher needs to be in two switchover groups at the same time.
    Hope this helps,
    Regards,
    Benny

  • SCCM 2012 Primary Site server disk usage is piling up

    Hello Folks,
    We implemented SCCM in our environment a month ago. It's a stand-alone primary site model running the DB on the primary site server itself, but we have segregated drives for content, the DB, and the C drive holding the SCCM server installation. I hadn't noticed the data growth on the server; as a result the C drive ran out of space yesterday and was expanded by 40 GB. As of now, the C drive has about 120 GB in total and only 25% is left, whereas it had 33% free right after expanding to 120 GB. Digging through the blogs, I suspected the BAD MIF inbox folder (screenshot attached), so I ran a space check on the server, and it looks like the major share of drive space is occupied by the inboxes folder, and it is increasing day by day. I need further directions on how this can be monitored to find the growth rate and fix it.
    I checked basic things like clearing up profile & recycle bin data and the like.
    A detailed step-by-step guide would be most helpful here, and I hope it will help the larger SCCM admin crowd like me.
    I also want to know which settings in the SCCM console are associated with these folders, so they can be streamlined to match best practices (like an inventory schedule of every 1 week & software inventory best practices), in case we set anything too aggressively in the implementation phase to receive fast & large results.
    Thanks, V@s!m

    We are seeing a stable disk size nowadays. With that, hopefully, it is time to sum up the actions taken.
    1. SCCM 2012 was implemented a couple of months ago.
    2. During the implementation phase, client & discovery settings were given quite aggressive schedules to get a good amount of results in a short period.
    3. Implementation was completed successfully and the system was running just fine until I noticed the space crunch.
    4. I thought it was normal and increased the disk space by 50%, which got eaten up in a couple of days; that's when I started suspecting something was going wrong.
    5. As an interim solution, I increased the disk space by 70% and started troubleshooting the SCCM buddy.
    6. Discovery settings were toned down so as not to be aggressive (but this wasn't the issue) - still the issue was going on.
    7. Inventory settings were likewise brought down to nominal (useful: http://www.informit.com/articles/article.aspx?p=1912064&seqNum=8).
    8. By this time, I brought in my colleague who was involved during the implementation. We did the clean-up on the server, went through the Inbox\Auth folders, and were trying to figure out what was still going wrong.
    9. We couldn't see much improvement in the situation, though we gave enough time for each and every fix that we applied.
    10. Through some warnings and errors in the site system status, we remembered that 'Configuration.mof' had been modified during the implementation phase to test some customized inventory classes, which was producing mismatched data from a lot of different clients - potential cause figured out.
    11. We restored the original Configuration.mof, and we started noticing big differences - the space was no longer increasing.
    12. We could still see a lot of old files in subfolders of the Inbox\Auth folder; we expect all these files will try to process the information we added in 'Configuration.mof', fail, go to the respective 'Bad/Corrupt' folders, and eventually be removed as per the clean-up cycle.
    I will update the thread if anything goes wrong beyond this point, but we believe it won't, as we have found the culprit and fixed it.
    I appreciate all your responses helping me with this situation. Happy SCCMing..!
    Thanks, V@s!m
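    Since the original ask was how to monitor the growth, here is a minimal sketch that reports the biggest inbox subfolders; the install path below is an assumption, so adjust it to your site server:

        # Report inbox subfolder sizes, largest first, to spot the one that keeps growing.
        $inboxes = 'C:\Program Files\Microsoft Configuration Manager\inboxes'
        Get-ChildItem $inboxes -Directory | ForEach-Object {
            $mb = (Get-ChildItem $_.FullName -Recurse -File -ErrorAction SilentlyContinue |
                   Measure-Object -Property Length -Sum).Sum / 1MB
            [pscustomobject]@{ Inbox = $_.Name; SizeMB = [math]::Round($mb, 1) }
        } | Sort-Object SizeMB -Descending | Select-Object -First 10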
