Chatting Best Practices with Large Groups

We have a large group (125 people) involved in a 4-hour training each month. What best practices would you suggest for managing chat with a group this large? Layout options, polling options, any best practices would be appreciated.

I would leave chat alone with a group that large. You can keep it available for open communication between participants (and possibly presenters/hosts) for quick exchanges, but don't rely on it for question-and-answer functionality. The Q&A pod queues up all the questions asked in it so that you (or other presenters/hosts) can answer them, keeping each answer associated with its question, with the option to reply publicly or privately. All questions are asked privately and are not seen by other participants, so duplicate or inappropriate questions can easily be removed or ignored.
Polling is also good for keeping responses in a controlled environment.

Similar Messages

  • IPS Tech Tips: IPS Best Practices with Cisco Remote Management Services

    Hi Folks -
Another IPS Tech Tip is coming up, and this time we will be hearing from some past and current Cisco Remote Services members on their best-practice suggestions. As always, these are about 30 minutes of content followed by Q&A - a low-cost, high-reward event.
    Hope to see you there.
    -Robert
Cisco invites you to attend a 30-45 minute Web seminar on IPS Best Practices delivered via WebEx. This event requires registration.
Topic: Cisco IPS Tech Tips - IPS Best Practices with Cisco Remote Management Services
    Host: Robert Albach
    Date and Time:
Wednesday, October 10, 2012 10:00 am, Central Daylight Time (Chicago, GMT-05:00)
    To register for the online event
    1. Go to https://cisco.webex.com/ciscosales/onstage/g.php?d=203590900&t=a&EA=ralbach%40cisco.com&ET=28f4bc362d7a05aac60acf105143e2bb&ETR=fdb3148ab8c8762602ea8ded5f2e6300&RT=MiM3&p
    2. Click "Register".
3. On the registration form, enter your information and then click "Submit".
Once the host approves your registration, you will receive a confirmation email message with instructions on how to join the event.
    For assistance
    http://www.webex.com
IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and any documents and other materials exchanged or viewed during the session to be recorded. By joining this session, you automatically consent to such recordings. If you do not consent to the recording, discuss your concerns with the meeting host prior to the start of the recording or do not join the session. Please note that any such recordings may be subject to discovery in the event of litigation. If you wish to be excluded from these invitations then please let me know!

    Hi Marvin, thanks for the quick reply.
It appears that we don't have AnyConnect Essentials.
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 100            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    VPN-DES                           : Enabled        perpetual
    VPN-3DES-AES                      : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 250            perpetual
    Total VPN Peers                   : 250            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    This platform has an ASA 5510 Security Plus license.
    So then what does this mean for us VPN-wise? Is there any way we can set up multiple VPNs with this license?

  • FIM R2 - best practice handling large AD groups

On attempting to create a large security group (35k users) in AD, I get a "dropped connection" from the domain controller.
The MS AD guy we have attached here tells me that there are some limitations on LDAP and even some known issues with writing 5k+ objects to a DC.
    Are there any "best practices" for writing large groups to AD?
    /Nicolai

Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory, as they are quite difficult to work with. If it's a one-time thing, you could create it manually in bite-sized chunks with LDIF or the like, so that FIM only has to do small delta changes afterwards.
    The 5,000 member limit mostly applies to groups prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
    Steve Kradel, Zetetic LLC
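As a purely hypothetical sketch of the "bite-sized chunks" idea: members can be added to an existing group in batches via LDIF (imported with ldifde or ldapmodify). All names below are placeholders, not from this thread:
dn: CN=BigGroup,OU=Groups,DC=example,DC=com
changetype: modify
add: member
member: CN=User0001,OU=Users,DC=example,DC=com
member: CN=User0002,OU=Users,DC=example,DC=com
# ...repeat for this batch, keeping each import well under the limits
# mentioned above, then run the next batch as a separate import...
-
Once the group is fully populated this way, FIM should only ever need to write small membership deltas.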

  • Best Practice Regarding Large Mobility Groups

I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers which can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for
    redundancy (N+1 topology for example)."
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility group? This would be easier to manage.
    or
    Would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    Be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul

  • Skype Incredibly Slow with Large Group Chats

I was invited to a very large group chat (170 members, hundreds of posts a day, active for years) and every time I attempt to read it Skype grinds to a halt. It can take minutes to load new messages, during which the entire application is unresponsive. Skype is basically always using < 200MB of RAM during this, but it does seem to use a substantial amount (~1GB) of virtual memory.
It is very frustrating. Is there anything I can do to at least improve the situation?
I'm using 7.2 (412) on Yosemite on a MacBook Air.

    http://heartbeat.skype.com/

  • Best practices with sequences and primary keys

We have a table of system logs that has a column called created_date. We also have a UI that displays these logs ordered by created_date. Sometimes two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order; although in this particular application's case it is not a multi-threaded environment, so it will work, and we are proceeding with it.
The value for created_date is NOT set at the database level (as SYSDATE) but rather by the application when it creates the object, which is persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which creates its own object and persists it with key N; when control returns to thread A, it persists its object with key N+1. In this scenario thread A has an earlier timestamp but a larger key, so key order and timestamp order disagree, and ordering by key is in error.
I like to think of primary keys as solely something to be used for referential purposes at the database level, rather than inferring application-level meaning (like "the larger the key, the more recent the record", etc.). What do you guys think? Am I being too rigorous in my views here? Or perhaps I am even mistaken in how I interpret this?

> I think the chronological order of records should be using a timestamp (i.e. "order by created_date desc" etc.)
Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
    Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
    1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
    2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
    (which now has a value LATER than the value used by session 1) in the INSERT statement.
    3. at time t5 session 2 COMMITs.
    4. at time t7 session 1 COMMITs.
    Tell us: which row was added FIRST?
    If you extract data at time t4 you won't see ANY of those rows above since none were committed.
    If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
For example, if you extract data at 2:01pm for the period 1pm thru 1:59pm, and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm, your extract will NOT include that data.
Even worse - your next extract will pull data for 2pm thru 2:59pm, and that extract will NOT include the data either, since the SYSDATE value in those rows is 1:55pm.
    The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed but the only values that can be queried are the ones that exist AFTER the row is committed.
    About the best you, the user (i.e. not ORACLE the superuser), can do is to
    1. create the table with ROWDEPENDENCIES
    2. force delayed-block cleanout prior to selecting data
    3. use ORA_ROWSCN to determine the order that rows were inserted or modified
    As luck would have it there is a thread discussing just that in the Database - General forum here:
    ORA_ROWSCN keeps increasing without any DML
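A minimal sketch of the ROWDEPENDENCIES/ORA_ROWSCN approach described above; the table and column names are illustrative, not taken from the thread:
-- Track an SCN per row rather than per block
CREATE TABLE system_logs (
    log_id       NUMBER PRIMARY KEY,
    message      VARCHAR2(4000),
    created_date TIMESTAMP
) ROWDEPENDENCIES;

-- After forcing delayed-block cleanout (e.g. with a full scan),
-- ORA_ROWSCN reflects the order in which rows were committed:
SELECT log_id, message, created_date, ORA_ROWSCN
FROM   system_logs
ORDER  BY ORA_ROWSCN;
Note that this gives commit order, which is exactly what a SYSDATE-at-insert value cannot guarantee.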

  • SolMan CTS+ Best Practices for large WDP Java .SCA files

As far as I know, CTS+ allows ABAP change management to steward non-ABAP objects. With ABAP changes, if you have an issue in QA, you simply create a new Transport and correct the issue, eventually moving both transports to Production (assuming no use of ToC).
We use ChaRM with CTS+ extensively to transport .SCA files created from NWDI. Some .SCA files can be very large: 300+ MB. Therefore, if we have an issue with a Java WDP application in QA, I assume we are supposed to create a second Transport, attach a new .SCA file, and move it to QA. Eventually, this means moving both Transports (same ChaRM Document) to Production, each one having a 300 MB file. Is this SAP's best practice, since all Transports should go to Production? We've seen some issues with Production not being too happy about deploying two 300MB files in a row. And what about the fact that .SCA files from the same NWDI track are cumulative, so I truly only need the newest one? Any advice?
    FYI - SAP said this was a consulting question and therefore could not address this in my OSS incident.
    Thanks,
    David

  • SCOM 2012 Agent - Best Practices with Base Images

I've read through the SCOM 2012 agent installation methods TechNet article, as well as how to install the SCOM 2012 agent via command line, but I don't see any best practices in regards to how to include the SCOM 2012 agent in a base workstation image. My understanding is that the SCOM agent's unique identifier is created at the time of client installation; is this correct? I need to ensure that this is a supported configuration before I can recommend it.
If it is supported, and it does work the way I think it does, I'm trying to find a way to strip out the unique information so that a new client GUID will be created after the machine is sysprepped, similar to how the SCCM client should be stripped of unique data when preparing a base image.
Has anyone successfully included a SCOM 2012 (or 2007, for that matter) agent in their base image?
    Thanks, 
    Joe

    Hi
    It is fine to build the agent into a base image but you then need to have a way to assign the agent to a management group. SCOM does this via AD Integration:
    http://technet.microsoft.com/en-us/library/cc950514.aspx
    http://blogs.msdn.com/b/steverac/archive/2008/03/20/opsmgr-ad-integration-how-it-works.aspx
    http://blogs.technet.com/b/jonathanalmquist/archive/2010/06/14/ad-integration-considerations.aspx
    http://thoughtsonopsmgr.blogspot.co.uk/2010/07/active-directory-ad-integration-when-to.html
    http://technet.microsoft.com/en-us/library/hh212922.aspx
    http://blogs.technet.com/b/momteam/archive/2008/01/02/understanding-how-active-directory-integration-feature-works-in-opsmgr-2007.aspx
    You have to be careful in environments with multiple forests if no trust exists.
    http://blogs.technet.com/b/smsandmom/archive/2008/05/21/opsmgr-2007-how-to-enable-ad-integration-for-an-untrusted-domain.aspx
    http://rburri.wordpress.com/2008/12/03/untrusted-ad-integration-suppress-misleading-runas-alerts/
    You might also want to consider group policy or SCCM as methods for installing agents.
    Cheers
    Graham
    Regards Graham New System Center 2012 Blog! -
    http://www.systemcentersolutions.co.uk
    View OpsMgr tips and tricks at
    http://systemcentersolutions.wordpress.com/
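For reference, a hypothetical command-line install that defers management-group assignment to AD Integration might look like the following (names are placeholders; see the agent-installation TechNet article referenced above for the full property list):
msiexec.exe /i MOMAgent.msi /qn USE_SETTINGS_FROM_AD=1 ACTIONS_USE_COMPUTER_ACCOUNT=1
Or, with manually specified settings instead of AD Integration:
msiexec.exe /i MOMAgent.msi /qn USE_SETTINGS_FROM_AD=0 MANAGEMENT_GROUP=MyMG MANAGEMENT_SERVER_DNS=scomms01.example.com ACTIONS_USE_COMPUTER_ACCOUNT=1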

  • What's 'best-practice' with external hard drives?

    Hello folks,
    I just got myself a 500GB LaCie d2 'Quadra' hard drive, and it works great - just as I was led to expect. Now I've connected it to my iMac with a FW400 cable. I've a few questions regarding general usage and 'best practice' when using an external hard drive like this:
1. Do I need to disconnect it (pull out the cable from my iMac) every time I shut down - and reconnect on startup? Or can I leave it in and pretty much just forget about it?
    2. Can I turn it 'on' and 'off' any number of times (using the on/off switch on the back) when working on the iMac? I might like to switch it off if I'm not using it for an extended period of time while still working on the computer. Is this okay?
    3. When I'm not using the drive and the drive switch is 'off', can the drive still remain connected to 'mains' power? Or is it necessary to disconnect it from the 'mains' entirely?
4. I understand it's best to disconnect it when repairing permissions? Can this be confirmed?
    Thanks so much.
    Cheers!
    Steve.

1. Do I need to disconnect it (pull out the cable from my iMac) every time I shut down - and reconnect on startup? Or can I leave it in and pretty much just forget about it?
What I do is shut down my Mac, leaving it connected to the mains: the external HD, external speakers and other peripherals are all connected to a mains switch and I turn these off. There's no need to disconnect the cable: some disks spin down when the computer is shut down, some don't. It probably wouldn't hurt to leave it spinning anyway, though I prefer to shut it off at the mains. Incidentally, I wouldn't disconnect the computer from the mains when you shut down: doing this will run down the PRAM battery and hasten the day it needs replacing, which is expensive.
    2. Can I turn it 'on' and 'off' any number of times (using the on/off switch on the back) when working on the iMac? I might like to switch it off if I'm not using it for an extended period of time while still working on the computer. Is this okay?
I wouldn't do this: the most strain on a hard disk is when it is starting up, not when it is running, so I would leave it running the whole time the Mac is on. If you do switch it off, make sure to unmount it first (drag it to the Trash), otherwise you will have all sorts of problems.
    3. When I'm not using the drive and the drive switch is 'off', can the drive still remain connected to 'mains' power? Or is it necessary to disconnect it from the 'mains' entirely?
    No: I see no problem in leaving it plugged in to the mains: the 'off' switch disconnects it anyway.
    4. I understand it's best to disconnect when 'Repairing Permissions?' Can this be confirmed?
I've never heard this, and I can't see that there's any necessity: the repairing process will be confined to the disk you have nominated to work on in any case.

  • Best practice with WCCP flows for WAAS

    Hi,
    I have a WAAS SRE 910 module in a 2911 router that intercepts packets from this router with WCCP.
    All packets are received by external interface (gi 2/0, connected to a switch with port configured in WCCP vlan), and are sent back to the router via internal interface (gi 1/0 directly connected to the router) :
    WAAS# sh interface gi 1/0
    Internet Address                    : 10.0.1.1
    Netmask                             : 255.255.255.0
    Admin State                         : Up
    Operation State                     : Running
    Maximum Transfer Unit Size          : 1500
    Input Errors                        : 0
    Input Packets Dropped               : 0
    Packets Received                    : 20631
    Output Errors                       : 0
    Output Packets Dropped              : 0
    Load Interval                       : 30
    Input Throughput                    : 239 bits/sec, 0 packets/sec
    Output Throughput                   : 3270892 bits/sec, 592 packets/sec
    Packets Sent                        : 110062
    Auto-negotiation                    : On
    Full Duplex                         : Yes
    Speed                               : 1000 Mbps
    WAAS# sh interface gi 2/0
    Internet Address                    : 10.0.2.1
    Netmask                             : 255.255.255.0
    Admin State                         : Up
    Operation State                     : Running
    Maximum Transfer Unit Size          : 1500
    Input Errors                        : 0
    Input Packets Dropped               : 0
    Packets Received                    : 86558
    Output Errors                       : 0
    Output Packets Dropped              : 0
    Load Interval                       : 30
    Input Throughput                    : 2519130 bits/sec, 579 packets/sec
    Output Throughput                   : 3431 bits/sec, 2 packets/sec
    Packets Sent                        : 1580
    Auto-negotiation                    : On
    Full Duplex                         : Yes
    Speed                               : 100 Mbps
    The default route configured in WAAS module is 0.0.0.0/0 to 10.0.1.254 (router interface).
Would it be better for packets to leave the WAAS module via the external interface (instead of the internal interface)?
Is there a best practice recommended by Cisco on this?
    Thanks.
    Stéphane

    Hi Stephane,
We usually advise the following in such a scenario with an internal module:
"ip wccp 61 redirect in" on the LAN interface.
"ip wccp 62 redirect in" on the WAN one.
"ip wccp redirect exclude in" on the internal interface between the WAAS and the router.
    That way, we are sure that no loops are created because of the WCCP redirection.
    Regards,
    Nicolas
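Put together, a sketch of the router side might look like this (interface names are hypothetical; the SM interface stands in for the router's internal link to the SRE module):
ip wccp 61
ip wccp 62
!
interface GigabitEthernet0/0
 description LAN-facing interface
 ip wccp 61 redirect in
!
interface GigabitEthernet0/1
 description WAN-facing interface
 ip wccp 62 redirect in
!
interface SM1/0
 description Internal interface to the WAAS module
 ip wccp redirect exclude in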

  • Static NAT refresh and best practice with inside and DMZ

I've been out of the firewall game for a while and now have been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation and have a question based on best practice, and would like a sanity check.
This is very basic; I apologize in advance. I just need the cobwebs dusted off.
The scenario is this: if I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside IP (the SQL server in this example) via static to the DMZ, or the DMZ IP (the SQL client in this example) with static to the inside?
I think it's best to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher-security interface is mapped to the lower.
So I would think the same would apply to the inside/DMZ, making 'static (inside,dmz)' the 'proper' method for pre-8.3, and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
It is not related to the security level of the zone; instead, it is about how the behavior should be. What I mean is, for
nat (inside,dmz) static yy.yy.yy.yy
- Any traffic hitting the translated address yy.yy.yy.yy on the dmz zone will be redirected to the host xx.xx.xx.xx on the inside interface.
- Traffic initiated from the real host xx.xx.xx.xx will be translated to yy.yy.yy.yy if the host accesses any resources on the DMZ interface.
If you reverse it to (dmz,inside) the behavior will be reversed as well, so if you need to translate an address from the DMZ interface going to the inside interface you should use (dmz,inside).
For your case I would say, as is common, since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen
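For comparison with the pre-8.3 syntax mentioned in the question, the equivalent single static entry (using the same placeholder addresses) would be:
static (inside,dmz) yy.yy.yy.yy xx.xx.xx.xx netmask 255.255.255.255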

  • Best practice with respect to wcf configuration files for SSIS

So after reading a lot of posts and blogs on how to configure SSIS to read from configuration files, I am still not clear and would like any expert to provide a definitive stance. In my case the WCF service consumption is wrapped into a separate assembly.
I am referencing the assembly in an embedded C# script within my SSIS package.
When I make the helper-class call to the web service, I get the 'endpoint not found' WCF exception.
Keep in mind I am running this from the VS 2012 IDE and did the following to make sure the WCF call works:
1. Googled and found that you need to have config entries in the DtsDebugHost.exe.config file. But it still did not work.
2. Had the same entries in the associated app.config file for the C# script, but it still did not work.
It seems like SSIS is very fragile w.r.t. consuming WCF entries in a config file. Is the best practice to just create the endpoint in code and externalize it as an SSIS variable / xml file, or is there really a way to get these config files working?
    Attached is the wcf snippet of my config file.
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="ITransactionProcessor">
        <security mode="TransportWithMessageCredential" />
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor" binding="basicHttpBinding" bindingConfiguration="ITransactionProcessor" contract="CyberSource.ITransactionProcessor" name="portXML" />
  </client>
</system.serviceModel>
    SM

I have the code working without the use of config files. I am just disappointed that it is not working using the configuration files. That was one of the primary intents of my code refactoring.
Katherine Xiong, if you are proposing this as an answer, does this imply that Microsoft's stance is not to use configuration files with SSIS? Please answer.
SM
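For what it's worth, a minimal sketch of the create-the-endpoint-in-code approach: the address and contract name come from the config snippet above, while the stub interface and credential handling are illustrative assumptions, not the poster's actual code.
using System;
using System.ServiceModel;

// Stub for illustration; in the real project this comes from the generated proxy.
[ServiceContract]
public interface ITransactionProcessor { /* operations omitted */ }

public static class EndpointInCode
{
    public static void Main()
    {
        // Mirror the <basicHttpBinding> security mode from the config above
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
        var address = new EndpointAddress(
            "https://ics2wstest.ic3.com/commerce/1.x/transactionProcessor");

        var factory = new ChannelFactory<ITransactionProcessor>(binding, address);
        // factory.Credentials.UserName.UserName = "...";  // set credentials as needed
        // factory.Credentials.UserName.Password = "...";

        ITransactionProcessor channel = factory.CreateChannel();
        // ... call service operations on 'channel' here ...

        ((IClientChannel)channel).Close();
        factory.Close();
    }
}
The endpoint URL itself can then be externalized as an SSIS variable, which sidesteps the DtsDebugHost.exe.config issue entirely.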

  • Best Practice with States and lots of code lines

    Hi.
    This is my first application in flex.
    I'm ok with as3.
Now, in AS3 we were 'forced' to work mostly with external classes, so we hardly ever had a single code page with lots of lines.
In Flex, using States leads to code with lots of lines IF we think of states as web-site pages.
I'm not sure if I understand it right. You mean: if a user visits a website built with 10 pages but accesses only 2 of them, would all 8 remaining pages still have to be downloaded in the swf the user loads? (this is, considering the usage of states as pages)
I'm building a system where the user logs in to use it.
2 states for now: login page and home page.
I access the db and get the user and password with this event dispatched from the db result (this works, however I find it a too-old-style loop. Is there a better way - and of course, which?)
protected function usersService_resultHandler(event:ResultEvent):void {
    allUsers = event.result as ArrayCollection;
    // Scan for a matching user/password pair; only alert once, after the scan.
    for (var i:uint = 0; i < allUsers.length; i++) {
        if (allUsers[i].user == tx_user.text && allUsers[i].password == tx_password.text) {
            currentState = "home";
            return;
        }
    }
    Alert.show("Fault", "Login");
}
While starting to build the "home" page/state, I realized that my code would dramatically increase. Is this really the best practice? Do I have to call another URL after login (to open a session - please, some session tutorials in Flex)? Or do I keep doing it all in states? I'm afraid my swf would grow too big.
    Thanks

    Ok.
The problem is that I'm not used to PHP, and I had generated the code to deal with the server automatically via Flex.
However, I could add a new function, and I could work out how to fetch values from the db to compare.
It's a Frankenstein function, but it works. For now, there is no way to know whether the user mistyped the password or the username.
public function getUserVerification($user, $pass) {
    $stmt = mysqli_prepare($this->connection,
        "SELECT user, password FROM $this->tablename WHERE user=? AND password=?");
    $this->throwExceptionOnError();
    mysqli_stmt_bind_param($stmt, 'ss', $user, $pass);
    $this->throwExceptionOnError();
    mysqli_stmt_execute($stmt);
    $this->throwExceptionOnError();
    $row = new stdClass();
    mysqli_stmt_bind_result($stmt, $row->user, $row->password);
    // Return 1 if a matching row exists, 0 otherwise.
    if (mysqli_stmt_fetch($stmt)) {
        return 1;
    } else {
        return 0;
    }
}
I also had to update the _Super_UsersService.as class Flex had generated before, when I first created the PHP code to deal with the db.
Finally, I had to assign return and input types for the new function I created.
Amazing... it works.
Now, when pressing the submit button on the login, Flex sends the user and password so PHP compares them, instead of looping through an Array.
Also, I have put all this code inside a "loginView" component, so my main app is clean again.
I guess I understand the idea of using components and reusing them as much as possible. I just have to get used to how to access a component's value from outside and vice-versa.
Now, creationPolicy is something I will look into. This might be interesting.
Thanks a lot.
    Btp~

  • What is the best practice with Keychain re: number of keychains?

    I'm not clear on what is the best way for using keychains. I have two that appear to overlap with similar entries (e.g., passwords for various websites and applications). One of the keychains is named login and other is named with my full name (first name and surname). There are two keychains that have blank boxes in front of them as opposed to the lock icon. One of those has the same name as my account name and the other is named X509Certificates.
Is there a best practice that recommends either consolidating keychains or, if multiple keychains are better, what type of information should be kept in which keychain? I find that I have to enter passwords for a number of apps when I boot up instead of the passwords being automatically retrieved. I would hope that setting up the keychains correctly will address this problem.

You don't need to worry about setting it up properly; OS X will do this for you. The only reason I ever use Keychain Access is if I forget a password or want to delete one. Moving files around in Keychain Access could lead you to some serious problems. If I were you, I would just leave the files be and let OS X take care of the rest.

  • _msdcs subdomain best practice with NS records?

I have the _msdcs subfolder under my domain (the grey folder).
It has only one DC inside of it as an NS server. This DC is old and no longer exists. I checked my test environment and it has the same scenario (an old DC that no longer exists).
    I'm just wondering:
    1) Is this normal, should this folder update itself with other servers?
2) Should I be adding one of my other DCs and removing the original?
I have a single-forest, single-domain setup at 2008 functional level. My normal _msdcs zone does behave as expected and removes and adds the appropriate records. Thanks.

I apologize for the late response. I see you've gone further than what I've recommended.
No, you shouldn't have deleted the _msdcs.parent.local zone!!!!!! I'm not sure why you did that. Are you working with someone else on this who recommended doing that? If not, you're over-thinking it. I provided specifics to fix it by simply updating the NS records, that's it. If you only found that the _msdcs folder had the wrong record, then that's all you had to change.
In cases where DCs are removed, replaced, upgraded, etc., it's also best practice to check a few things to make sure everything is in order, and one of them is to check the NS records on all zones and delegations. A delegation's NS records won't update automatically with changes, but zone NS records will if DCs are properly demoted.
The _msdcs delegated zone is required by Active Directory. And yes, based on your thread subject, it's best practice. When Windows 2000 came out, and IF you had created the initial domain with it, it did not have it this way, but all domains initially created with Windows 2003 and newer are designed this way. If you had upgraded from 2000 to 2003, then one of the steps that we must perform is to create the _msdcs delegation.
Please re-create it in this order:
1. In the DNS console, right-click Forward Lookup Zones, and then click New Zone. Click Next.
2. On the Zone Type page in the New Zone Wizard, click Primary zone, and then select the "Store the zone in Active Directory" check box. Click Next.
3. On the Active Directory Zone Replication Scope page, click "To all DNS servers in the Active Directory forest parent.local".
4. On the Zone Name page, in the Zone Name box, type _msdcs.parent.local
5. Complete the wizard by accepting all the default options.
After you've done that:
1. Delete the _msdcs subfolder under parent.local.
2. Right-click parent.local, choose New Delegation.
3. Type in _msdcs
4. On the Name Servers page, type in the name of your server and its IP address.
5. Complete the wizard. You should now see a grayed-out _msdcs folder under parent.local.
6. Go to the c:\windows\system32\config\ folder.
7. Find netlogon.dns and rename it to netlogon.dns.old.
8. Find netlogon.dnb and rename it to netlogon.dnb.old.
9. Open a command prompt.
10. Run ipconfig /registerdns
11. Run net stop netlogon
12. Run net start netlogon
13. Wait a few minutes, then click on the _msdcs.parent.local zone and press F5 to refresh it. You should see the data populate.
    Ace Fekay
    MVP, MCT, MCITP/EA, MCTS Windows 2008/R2 & Exchange 2007, Exchange 2010 EA, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Technical Blogs & Videos: http://www.delawarecountycomputerconsulting.com/
    This post is provided AS-IS with no warranties or guarantees and confers no rights.
