Do I need multiple primary sites? Some design questions

I have about a thousand users and devices across two sites. I'm setting up SCCM 2012 R2 and wondering whether I need multiple primary sites.
From everything I've read so far, a single standalone primary site will handle tens of thousands of users/devices, so I'm not sure whether I'd ever need a secondary site or what its function would be: failover, backup, or is
it just best practice to spread roles across multiple servers?
I was originally thinking of a single primary site on a single server, but I'm not sure whether my DP should be separate.
Can someone point me in the right direction to a high-level planning document or blog?
Thanks
Nathan

How many clients are there in total? How many at each location? What's the WAN speed in between?
Multiple primaries are only needed for scale out purposes (>100k clients)!
Torsten Meringer | http://www.mssccmfaq.de
About 500 at each; T1 speeds connect the sites, so I think I also want to enable software metering.
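In case it helps later, a rough sketch of creating a metering rule from PowerShell. The cmdlet exists in the ConfigMgr module, but the exact parameter names and the example product/file below are assumptions to verify with Get-Help New-CMSoftwareMeteringRule:

# Hedged sketch only: the product name, file name and site code are made up,
# and the parameter names should be confirmed against the module's help.
Import-Module "$(Split-Path $env:SMS_ADMIN_UI_PATH)\ConfigurationManager.psd1"
Set-Location "PS1:"   # hypothetical site code / PSDrive

New-CMSoftwareMeteringRule -ProductName "Contoso Notes" `
    -FileName "notes.exe" -FileVersion "1.0" `
    -LanguageId 1033 -SiteCode "PS1"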
Also, if I have a single server with the DP role installed, what kind of RAM/disk requirements are needed? I know this varies with the installed features, but are there any ballpark estimates or starting points posted? Sorry, I know this is probably
on the MS site somewhere, but the volume of information is hard to weed through. Update: I found some good guidelines here:
http://myitforum.com/myitforumwp/2012/06/27/sccm-2012-site-hardware-requirements/
For 1,000 or fewer users, is a single Gb NIC sufficient? It sounds like it might be. Also, when using a virtual machine, do you still need to somehow separate SQL logs and data from the OS?
Update #2: it looks like the post above answered that question too:
If you're in a VM, it's not sufficient to have a single VHD file with the roles split among 4 virtual drives inside that file. It's not sufficient to have that single file on a shared set of remote disks. It's not sufficient to have that
single file on a dedicated set of disks, regardless of the number and size of those disks. Any VM should be configured to only run the OS, and the 3 other spindles should be dedicated sets of disks, attached to the VM. Otherwise, it's like painting
with watercolors in a hurricane: you're spending a lot of time, looking creative, with zero value.
But I'll take any follow-up comments or recommendations you have on proper VM setup.
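For the SQL data/log placement piece, a quick check I plan to run once SQL is installed, just to see where each database file actually lives. This assumes the SqlServer (or older SQLPS) module's Invoke-Sqlcmd is available, and the CM_% filter assumes the default site database naming convention:

# Lists every database file and its physical path so you can confirm that
# data, logs and tempdb sit on the intended spindles/VHDs.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
SELECT DB_NAME(database_id) AS DatabaseName,
       type_desc            AS FileType,
       physical_name        AS PhysicalPath
FROM   sys.master_files
WHERE  DB_NAME(database_id) LIKE 'CM[_]%' OR DB_NAME(database_id) = 'tempdb';
"@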
Thanks
Nathan

Similar Messages

  • Collection Evaluation - multiple primary sites

    Hello: 
    If I have a CAS and multiple primary sites and have a set of collections all created on the CAS, where is the collection evaluation done (I believe collection evaluation is done only on primary sites)? Is it done on each primary site independently of the others,
    and then they replicate to the CAS, and the CAS sorts out possible conflicts/discrepancies?
    Also, if I set the collection evaluation schedule, is that based on the local time of that particular primary site?
    Thank you, 
    Mustafa Hamid, System Center Consultant

    Thank you Jason, 
    I think I understand your comment about the managed systems - the systems that are assigned to that primary site. To take a simple case to help me understand: if I have a device collection with a query based on OU name = Dallas, that collection
    evaluates on PS1 and also on PS2. Both will send their results to the CAS. Generally they should both evaluate to the same result (maybe sometimes slightly different based on the DC they connect to). It seems in this case they are repeating the
    same work, since it's all global data?
    Thank you
    Mustafa Hamid, System Center Consultant
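    A minimal sketch of the kind of OU-based query collection described above. Cmdlet names are from the ConfigMgr PowerShell module, and the OU value A.COM/DALLAS plus the collection/rule names are purely hypothetical examples:

    # Assumes the ConfigurationManager module is imported and the current
    # location is the site's PSDrive (e.g. CAS:). The WQL matches systems
    # whose AD OU path includes the hypothetical DALLAS OU.
    $wql = 'select SMS_R_System.ResourceId, SMS_R_System.Name from SMS_R_System where SMS_R_System.SystemOUName = "A.COM/DALLAS"'

    New-CMDeviceCollection -Name "Dallas Workstations" -LimitingCollectionName "All Systems"
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName "Dallas Workstations" `
        -RuleName "OU = Dallas" -QueryExpression $wql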

  • Multiple Primary Sites connected to a CAS

    On a global deployment of SCCM 2012 we are planning to deploy a CAS in the global domain, primary sites in the sub-domains, and secondary sites in the smaller subsidiaries where needed.
    I understand that all collections, clients and packages come through to the CAS, but could an administrator connected to Primary Site A see the collections and packages in Primary Site B? Are they independent of each other with the exception of the CAS, or
    do all sites in the hierarchy show everything from all sites?
    If the latter, then ideally we would want administrators of Site A to not even be able to connect to Site B, but I am struggling to understand how this could be achieved.

    "The use of a CAS was made due to the scale of the deployment."
    So you mean you have more than 100k devices.  Regardless of having a CAS and multiple primaries or having only 1 primary, if you have (let's say) a total of 3,000 devices; split up into 1k silos of responsibility. 
    you Essentially create 3 Collections, and those 3 silos get rights to their collection of devices.  Connecting to the primary sites' for the console really isn't "normal" in CM12--it's called the Central Administration Site for a
    reason--it really makes it less confusing for those people that need to use the console.  You may not think so initially (coming from a CM07 point of view); but it really is the best in a CM12 world.  If you have people in different locations which
    need console rights, the easiest, IMO, is to have a Citrix-hosted console; and people just connect to that console remotely; where the citrix host is in the same data center as the Central Administration Site.
    Now, if you do NOT have 100k devices, or you are nowhere near that number, please please please, I beg you, PLEASE rethink your perceived need for a cas and primary sites.  T-shooting replication issues is no fun, no joy to be had there at ALL. 
    you need to setup RBA correctly regardless of a CAS and primaries, or just 1 primary--so having a CAS and primaries when there is ZERO NEED for it due to scale--well, all I can say is I sincerely hope you are a contractor, and are just setting this up and
    then bailing never ever to return, and leaving the mess behind for the poor day to day admins to deal with.
    Standardize. Simplify. Automate.
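    A rough sketch of the RBA pattern described above: one collection per silo, then an admin group scoped to only that collection. Cmdlet and parameter names are from the ConfigMgr module and should be verified with Get-Help; the domain group, collection name and role choice are hypothetical:

    # Assumes the ConfigurationManager module is imported and the CAS PSDrive
    # is the current location. "Site A Devices" is the silo's collection.
    New-CMDeviceCollection -Name "Site A Devices" -LimitingCollectionName "All Systems"

    # Grant the Site A admin group an operational role limited to that collection.
    New-CMAdministrativeUser -Name "CONTOSO\SiteA-Admins" `
        -RoleName "Operations Administrator" `
        -CollectionName "Site A Devices"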

  • Unknown Computer collection with multiple Primary sites

    Hi All,
    We have an SCCM 2012 SP1 environment with a CAS and 2 primary sites in separate countries. Last week the primary site server in Site B was down; this affected PXE boot deployments to the Unknown Computers collection across the whole environment, e.g. in Site A. PXE
    booting to existing collections worked fine, but PXE booting to unknown computers would time out, as if the deployment server was waiting for a response from both the Site A and Site B site servers.
    My question is: is this expected behaviour? Do the primary site servers across the whole environment need to be up for the Unknown Computers collection to work properly?
    Another thing I noticed is that the admins for Site B have created their own site-specific Unknown Computers collection, so I'm wondering if this is getting referenced when unknown computers PXE boot in Site A.

    I doubt that there's something happening cross-sites, but - as Jason said - logs would be helpful.
    Torsten Meringer | http://www.mssccmfaq.de
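    For gathering the logs being asked for here, a quick way to watch PXE activity on the affected DP while reproducing the timeout. The path assumes a default remote-DP install and may differ in your environment:

    # Tails the PXE provider log on the distribution point; adjust the drive
    # letter/path to wherever the DP content library was installed.
    Get-Content -Path "D:\SMS_DP$\sms\logs\smspxe.log" -Tail 100 -Wait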

  • Multiple primary site - Discovery issue

    Hi,
    I am working on a scenario with 1 CAS and 3 primary sites (PS1, PS2, PS3). On PS1, Active Directory System Discovery is configured and the OUs for all sites are added. On the other two primary sites this discovery is not configured.
    Is there any issue due to this, or should I configure this discovery on the other sites as well?
    Thanks
    Pallavi

    No, you can still control that. Site assignment is based either on an AD query or on hard-coding the site code during the install process. Client deployment depends on the method; for content, boundary groups and DPs are used.
    Kent Agerlund | My blogs: blog.coretech.dk/kea and SCUG.dk | Twitter: @Agerlund | LinkedIn: Kent Agerlund | Mastering ConfigMgr 2012 The Fundamentals
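    If you do decide to scope discovery per site, a minimal sketch (parameter names per the ConfigMgr 2012 module; verify with Get-Help Set-CMDiscoveryMethod) of enabling AD System Discovery on a second primary site against only its own OUs. The site code and LDAP path are hypothetical:

    # Assumes the ConfigurationManager module is imported and a site PSDrive
    # is the current location.
    Set-CMDiscoveryMethod -ActiveDirectorySystemDiscovery -SiteCode "PS2" -Enabled $true `
        -AddActiveDirectoryContainer "LDAP://OU=Workstations,OU=SiteB,DC=contoso,DC=com"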

  • Multiple WAN site redundancy design review (dark fiber, p2p, DMVPN)

    I'm re-designing a couple of WAN sites. I'm using EIGRP over both some leased dark fiber and p2p provider connections. The attached (pdf) physical topology says it all. I'm thinking of using IP SLA to track and inject routes over preferred connections, but I'm really just looking for feedback if someone is interested in taking a look.
    I've bought 2 2951s with es3g-16-p modules so I can build SVIs and do HSRP between the paths, building redundancy across the 3 available paths back to our enterprise core (1Gbps, 40Mbps, 50Mbps).
    There are multiple VLANs at both sites,
    e.g. WAN site 1 (VLAN 10-15), WAN site 2 (VLAN 16-20).
    Thoughts? Thanks.

    Hi there,
    I'm not sure why you need DMVPN if it is all the same internal network, unless you need all traffic between the sites to be encrypted.
    In general I would say use the direct link to reach the directly connected networks per site:
    for example, use site 1's 100M link to reach the DC and the WAN,
    and use site 2's 50M local link to reach the WAN as the primary path, with the site1-site2 fibre as site 2's primary path to the DC. This could achieve good load sharing and reduce the load on the link between site 1 and site 2.
    IP SLA in a topology like yours can certainly help improve failover time and make the routing more topology-aware.
    Hope this helps.

  • Backup to Tape ; Some Design questions

    Version: We have DBs on 10.2 and 11.2 Enterprise Edition
    Platform: Solaris 5.10 SPARC
    Hello,
    I have never worked with tape before. Currently, all our RMAN backup pieces go to disk, and from there the sysadmin moves them to tape using TSM.
    1. If you were to design a backup strategy, would you back up to disk first and then get the sysadmin to
    copy to tape, or back up directly to tape?
    2. Do you back up the Level 0 backup to Tape A, Level 1 backups to Tape B,
    and archive logs, control file and spfile to Tape C?
    3. RMAN backup to disk or tape: which is faster?

    1. Best practice: back up directly to tape.
       Advantages: a. Longer retention is inexpensive; separate tapes for weekly, monthly and yearly rotation.
       b. Can compress.
       c. Inexpensive for VLDBs.
       Disadvantages: a. Needs a tape vendor (MML) license to write directly to tape.
       b. Storing tapes and tape maintenance.
       c. Slower process.
       Writing directly to disk:
       Advantages: a. Suited to smaller databases with shorter retention.
       b. Faster local backups.
       c. Faster restores.
       Disadvantages: a. Media failure.
       b. Does not work for VLDBs.
    2. It is expensive to use separate tape drives, as tapes come in large sizes. Set maxsetsize or section size to a few GB for faster reads on restore. RMAN can easily identify the files it needs for recovery, and the storage vendor's catalogs help. You can use separate tapes for permanent retention. Deleting obsolete backups is messy, as you have to load the right tape - operational difficulties.
    3. Refer to #1.
    hth.
    Edited by: user11155666 on Jun 2, 2011 5:52 PM

  • Some design questions

              Hello,
              I got two rather unrelated questions regarding the JMS implementation:
              - Is there any guarantee on when messages become available to consumers?
              Suppose my producer is a session bean which posts a message on a
              queue. Is there any guarantee on whether a QueueBrowser who starts right
              after the transaction is committed will see the message I posted ? Or is
              it possible that some delay causes the message to become visible only some
              time later ?
              - Can QueueBrowsers see messages with a scheduled delivery time that hasn't
              arrived yet ?
              Thanks & Regards,
              Francois Staes.
              Francois Staes
              NetConsult BVBA
              [email protected]
              Tel: +32/3/353.44.22
              Mobile: +32/475/73.74.48
              Fax: +32/3/353.44.06
              

    Hi Francois,
              Since you must process messages in order, then the MDB is the
              only consumer, so why not just cache state in the MDB that
              contains the message-id of the last message processed? If
              a newly received message's message-id matches the stored
              message-id from the previous message, then treat the newly
              received message as an error message.
              This seems much simpler than queue-browser/error-queue.
              Tom, BEA
              P.S. Note that an MDB is destroyed and recreated if the
              onMessage throws a Runtime exception or Error. So you will need to
              put in a "try/catch() {ejbctx.setRollbackOnly(); cacheMessageId(msg);}"
              to prevent the destroy in order to preserve the cached message id
              value...
              Francois Staes wrote:
              > Greg Brail wrote:
              >
              >>> - Is there any guarantee on when messages become available to consumers?
              >>>   Suppose my producer is a session bean which posts a message on a queue.
              >>>   Is there any guarantee on whether a QueueBrowser who starts right after
              >>>   the transaction is committed will see the message I posted? Or is it
              >>>   possible that some delay causes the message to become visible only some
              >>>   time later?
              >>
              >> Well, theoretically, there's always going to be "some delay" ;-) In
              >> practice, you should see the message right away. However, if the queue has
              >> a consumer on it, we push messages to that consumer in batches (that's one
              >> of the things that the "messages pending" statistic in the console tells
              >> you about). So, it's possible that the message has already been pushed out
              >> to a consumer, which is why you don't see it.
              >
              > Thanks for the answer.
              >
              > Let me clarify what I want to achieve: I have an MDB receiving messages
              > about customers. These messages need to be handled in the correct order.
              > But it can happen that some message cannot be processed. Those messages need
              > to be pushed onto a seperate queue (an administrative utility can be used
              > afterwards to requeue them on the default queue so they get re-processed).
              >
              > As soon as there is a message about a certain customer on this error queue,
              > no further messages about that customer should be processed. Hence, from
              > within the MDB we peek on the error queue using a QueueBrowser. If we find
              > anything on there for the same customer as the current message, we
              > immediately stop the processing, and enqueue it on the error queue too.
              >
              > Originally, we tried implementing this using the error-queue feature of WLS.
              > However, moving the messages on the error queue is an asynchronous activity
              > in that case. This means that if a message cannot be processed, and another
              > message arrives right afterwards for the same customer, it might be that
              > the first message isn't yet visible on the error queue....
              >
              > That's why I was thinking about moving them to some kind of error queue
              > manually, and I needed some kind of guarantee that they would be visible
              > immediately.
              >
              > If I understand correctly what you're saying, I think there is no problem
              > because a QueueBrowser is not an asynchronous consumer on the error queue.
              >
              > Thanks for your help,
              >
              > Francois Staes.
              >
              

  • Primary site server a single point of failure?

    I'm installing ConfigMgr 2012 R2 and employing a redundant design as much as possible. I have 2 servers, call them CM01 and CM02, in a single primary site, and on each server I have installed the following roles: Management Point, Distribution Point, Software
    Update Point, as well as the SMS Provider. SQL is on a 3rd box.
    I am now testing failover from a client perspective by powering down CM01 and querying the current management point on the client: (get-wmiobject -namespace root\ccm -class ccm_authority).CurrentManagementPoint. The management point assigned to
    the client flips to the 2nd server, CM02, as expected. However, when I try to open the CM management console, I cannot connect to the site, and the SMSAdminUI log reveals this error: "Provider machine not found".
    Is the primary site server a single point of failure?
    Why can't I point the console to a secondary SMS Provider?
    If this just isn't possible, what is the course of action to restore console access once the primary site server is down?
    Many Thanks

    Yes, that is a completely false statement. Using a CAS and multiple primaries in fact introduces multiple single points of failure. The only technical reason for a CAS and multiple primary sites is scale-out; i.e., supporting 100,000+ managed systems.
    HA is achieved from a client perspective by adding multiple site systems hosting the client-facing roles: MP, DP, SUP, App Catalog.
    Beyond that, all other roles are non-critical to client operations and thus have no built-in HA mechanism. This includes the site server itself.
    The real question is: which service that ConfigMgr provides do you need HA for?
    Jason | http://blog.configmgrftw.com
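    A small sketch of the client-side checks described in the question, extended to list every MP the client knows about (both classes live in the client's root\ccm namespace):

    # Currently assigned management point.
    (Get-WmiObject -Namespace root\ccm -Class CCM_Authority).CurrentManagementPoint

    # All management points the client can fail over to.
    Get-WmiObject -Namespace root\ccm -Class SMS_LookupMP | Select-Object Name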

  • Is 100K devices a hard cap for a Primary Site (or a guideline)?

    Hi,
    I'd like to know if the 'supports up to 100,000 devices' per Primary Site is a hard cap on the number of devices that SCCM 2012 R2 can handle or if it is a recommendation?
    We are using SCCM 2012 R2 and currently have a single Primary Site with about 90K devices.  Very soon, we will be looking at adding a CAS (with multiple Primary Sites) to support our ever growing number of devices.
    Until we get there with the CAS, I'd like to know what to expect if we get over 100K devices.
    Will we break SCCM?
    Will any devices over 100K not be added (and therefore not be managed)?
    Nothing much, but the system may perform more slowly?
    Results will be unpredictable?
    Something else?
    Thanks, Joe.

    Actually, it's not really a guideline either; it's an official statement of support from Microsoft, meaning that if you go over this number, you may have issues that Microsoft will not provide support for.
    This is officially documented at https://technet.microsoft.com/en-us/library/gg682077.aspx under the Clients per Hierarchy section.
    Is your org simply close to this number, or fearful of going over in the future?
    Remember that a primary site can be expanded into a CAS with multiple primary sites under it if need be in the future.
    Also note that although the 100,000 client limit has been there since the launch of 2012, that was over three years ago and there are some upcoming releases.
    Jason | http://blog.configmgrftw.com | @jasonsandys
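    A quick, rough way to see how close the hierarchy is to that number, assuming the ConfigMgr PowerShell module is loaded. On a 90K-client site, a SQL count against v_R_System will be far faster than enumerating devices this way:

    # Slow but simple: count every device the hierarchy knows about.
    (Get-CMDevice | Measure-Object).Count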

  • SCCM Primary Site installation fails

    Hello!
    In my organization we have two domains/forests: DomainA.local and DomainB.local.
    In one forest (DomainA.local) we have an SCCM 2012 SP1 CAS site, with a dedicated database server on SQL 2012 SP1 CU5.
    In the other forest (DomainB.local) we want to set up a primary site on SCCM 2012 SP1, with a dedicated database server on SQL 2012 SP1 CU5.
    The forests have a two-way trust.
    All installation accounts have administrative rights on all SC servers in both domains.
    When I try to install the SCCM 2012 primary site in the hierarchy,
    I receive the following errors:
    INFO: Created SQL Server machine certificate for Server [S-SCDB-02.DomainB.local] successfully.
    ERROR: Failed to open certificate store (HRESULT=0x35)    Configuration Manager Setup    9/3/2013 11:56:19 AM    3268 (0x0CC4)
    ERROR: Failed to write S-SCDB-02.DomainB.local SQL Server certificate to store (TrustedPeople) on site server (S-SCDB-01.DomainA.local).
    ERROR: Failed to write certificate of primary site's SQL Server [S-SCDB-02.DomainB.local] to CAS SQL Server [S-SCDB-01.DomainA.local].
    The install user from DomainB.local has administrative rights on S-SCDB-01.DomainA.local and sysadmin rights in SQL Server.
    It also has the Full Administrator role on the CAS. Of course, it has administrative rights on the primary site server and SQL server S-SCDB-02.DomainB.local, and sysadmin rights there.
    Why?

    >Taking a step back: why? Are you using a CAS and multiple primary sites at all? Do you have 100,000+ clients to manage?
    We need a CAS due to our network infrastructure.
    Thank you for your help.
    We solved the problem today.
    We needed to open the Windows ports on the firewall between the SCCM primary site server and the CAS SQL server to give the SCCM
    primary site installation process the ability to install the primary site's SQL Server self-signed certificate into the CAS SQL Server's TrustedPeople local store.
    I did not remember this point from the deployment documentation.
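    For anyone hitting the same pattern (HRESULT=0x35 is Win32 error 53, "the network path was not found"), two quick checks along the lines of the fix above. The server names follow the post; the SMB port shown is an assumption about which "Windows" port mattered here:

    # From the primary site server: can we reach the CAS SQL server over SMB
    # (needed for the remote certificate-store write)?
    Test-NetConnection -ComputerName "S-SCDB-01.DomainA.local" -Port 445

    # On the CAS SQL server: did the primary site's SQL certificate land in
    # the local TrustedPeople store once the firewall was opened?
    Get-ChildItem Cert:\LocalMachine\TrustedPeople |
        Where-Object { $_.Subject -like "*S-SCDB-02*" } |
        Select-Object Subject, Thumbprint, NotAfter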

  • Can we assign 2 IPs to an SCCM 2012 primary site server and use 1 IP for communicating with its 2 DPs and a 2nd one for communicating with its upper-hierarchy CAS, which is in a different domain?

    Hi,
    Can we assign 2 IPs to an SCCM 2012 primary site server and use one IP for communicating with its 2 DPs and the second for communicating with its upper-hierarchy CAS?
    Scenario: We are building 1 SCCM 2012 primary site and 2 DPs in one domain. In the future, this will attach to a CAS server that is in a different domain. Can we assign 2 IPs to the primary site server, with one IP used to communicate with its 2 DPs and the second
    IP used to communicate with the CAS server in the different domain?
    Details:
    1) Server: Windows 2012 R2 Std, VM environment. 2) SCCM: SCCM 2012 R2. 3) SQL: SQL 2012 Std
    Thanks
    Rajesh Vasudevan

    First, it's not possible. You cannot attach a primary site to an existing CAS.
    Primary sites in 2012 are *not* the same as primary sites in 2007, and a CAS in 2012 is completely different from a central primary site in 2007.
    CASes cannot manage clients. Also, primary sites are *not* used for delegation in 2012. As Torsten points out, multiple primary sites are used for scale-out (in terms of client count) only. Placing primary sites for different organizational units provides
    no functional difference but does add complexity, latency, and additional failure points.
    Thus, as the others have pointed out, your premise for doing this is completely incorrect. What are your actual business goals?
    As for the IP addressing, that depends upon your networking infrastructure. There is no way to configure ConfigMgr to use different interfaces for different types of traffic. You could potentially manipulate the routing tables in Windows, but that's asking
    for trouble IMO.
    Jason | http://blog.configmgrftw.com | @jasonsandys
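    Purely as an illustration of the "manipulate the routing tables" caveat above (and not a recommendation), a persistent static route in Windows that forces traffic toward a CAS subnet out a specific interface. Every address and the interface index below are hypothetical:

    # Send traffic for the (hypothetical) CAS subnet via the second NIC.
    New-NetRoute -DestinationPrefix "10.20.0.0/16" -InterfaceIndex 12 `
        -NextHop "192.168.2.1" -PolicyStore PersistentStore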

  • Design Question for table - related columns

    Hi,
    I have a design question about a table I am working on.
    Here are sample fields in the table:
    process_begin_date
    process_approved_by
    process_signed_by
    process_monitor
    process_communication
    I have around 10 groups like this, for example:
    other_begin_date
    other_approved_by
    other_signed_by
    other_email
    other_something
    Question: Is it good to have all 50 fields in the same table, or is there a better approach?

    Hi,
    The number of columns should not be an issue, but proper normalization may be better for your design and scalability. If you can explain what you are storing in this table, you might get help deciding whether you need more than 2 tables in this particular scenario.
    If all these fields are related to a single entity, this single table is probably already normalized and need not be split into two tables.
    Salman

  • LDAP design question for multiple sites

    I'm planning to implement Sun Java System Directory Server 5.2 2005Q1 to replace NIS.
    Currently we have 3 sites with different NIS domains.
    Since NFS over the WAN connection is very unreliable, I would like to implement the following:
    1. 3 LDAP servers plus a replica for each site.
    2. A single username and password for every end user across those 3 sites.
    3. Different auto_master, auto_home and auto_local maps for the three sites, so when a user logs in at a different site, the password is the same but the home directory is different (local).
    So the questions are:
    1. Do I need to have 3 domains for LDAP?
    2. If yes for question 1, how can I keep the username and password in sync across three domains? If no for question 1, what DIT (Directory Information Tree) or directory structure should I use?
    3. How do I make the automount maps work in LDAP as well as mount the local home directory?
    I would really appreciate it if some LDAP experts could shed some light on this project.

    Thanks for your information.
    My current environment has 3 sites with 3 different NIS domain names: SiteA: A.com, SiteB: B.A.com, SiteC: C.A.com (A.com is our company domain name).
    So every time I add a new user account I need to create it in three NIS domains separately. Also, the password gets out of sync if a user changes it at one site.
    I would like to migrate NIS to LDAP.
    I want a single username and password for each user across the 3 sites. However, the home directory is on a local NFS filer.
    Say for userA, his home directory is /user/userA in the passwd file/map. At location X, his home directory will mount FilerX:/vol/user/userA;
    at location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount target is determined by the auto_user map in NIS.
    In other words, there will be 3 different auto_user maps on 3 different LDAP servers.
    So userA logging in to hostX at location X will mount the home directory on local FilerX, and logging in to hostY at location Y will mount the home directory on local FilerY.
    But the username and password will be the same at all three sites.
    That's my goal.
    Some LDAP experts suggested MMR (Multi-Master Replication), but I am still not quite sure how to do MMR.
    It would be appreciated if some LDAP guru could give me a guideline as a starting point.
    Best wishes

  • Database Design for Multiple function site

    Hi
    I am working on a project which involves a multiple-function site, including a company product catalog, a customer support forum, a document exchange engine, etc.
    Normally we would combine ALL tables into one database.
    My questions are:
    1) If I break them into individual databases, will they perform better? That means the product catalog and the forum would have different databases, but they would use the same domain name.
    2) I am worried about breakdown and corruption of the database, so I had the idea of separating them. Is this idea correct or wrong?
    3) I am looking for a better database design, because I know the database will become huge in the future. I would like your ideas and opinions.
    Thank you very much.

    Creating views: not an option, I think. It would involve a lot of programming with 'instead of' triggers, etc.
    Separate databases: a good way if the locations are completely independent and do not share information. This involves more DBA work.
    Separate schemas in one database: this would make public synonyms impossible, and is probably not a good option.
    Adding a location id to tables: the best way, I think, and flexible. You can easily add another location, and locations can easily share information.
