FIM R2 - best practice handling large AD groups

On attempting to create a large security group (roughly 35k members) in AD, I get a "dropped connection" from the domain controller.
The MS AD specialist we have attached here tells me that there are some limitations in LDAP, and even some known issues with writing 5k+ objects to a DC.
Are there any "best practices" for writing large groups to AD?
/Nicolai

Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory, as they are quite difficult to work with.  If it's a one-time thing, you could create it manually in bite-sized chunks with LDIF or the like, so that FIM only has to make small delta changes afterwards.
The 5,000-member limit mostly applies to groups created prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
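For the LDIF route, here is a minimal sketch (Java, with hypothetical file and DN names) of what those bite-sized chunks might look like: split the member list into 1,000-entry modify batches that ldifde -i can then import one file at a time, so no single write is huge:

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Splits a large member list into small LDIF "modify" batches so the
// DC never has to commit one enormous write. Names are examples only.
public class LdifChunker {
    public static void main(String[] args) throws IOException {
        String groupDn = "CN=BigGroup,OU=Groups,DC=example,DC=com";
        List<String> memberDns = Files.readAllLines(Path.of("members.txt"));
        int chunkSize = 1000;

        for (int i = 0; i < memberDns.size(); i += chunkSize) {
            List<String> chunk =
                memberDns.subList(i, Math.min(i + chunkSize, memberDns.size()));
            Path out = Path.of(String.format("add-members-%03d.ldf", i / chunkSize));
            try (PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {
                w.println("dn: " + groupDn);
                w.println("changetype: modify");
                w.println("add: member");
                for (String dn : chunk) {
                    w.println("member: " + dn); // one member DN per value line
                }
                w.println("-"); // terminates the modify operation
            }
        }
    }
}

Each batch can then be imported with ldifde -i -f add-members-000.ldf (and so on), which also makes it easy to resume if one chunk fails partway through.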
Steve Kradel, Zetetic LLC

Similar Messages

  • Best Practice Regarding Large Mobility Groups

    I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers that can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for redundancy (an N+1 topology, for example)."
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility group? This would be easier to manage.
    or
    Would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    Be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul

  • Chatting Best Practices with Large Groups

    We have a large group (125 people) who are involved in a 4-hour training each month.  What best practices would you suggest for managing chat with a group this large? Layout options, polling options, any best practices would be appreciated.

    I would leave chat alone with a group that large. You can provide that functionality to allow open communication between participants and possibly presenters/hosts for quick exchanges, but don't rely on it for question-and-answer functionality. The Q&A pod will queue up all the questions asked in it, and you (or other presenters/hosts) can answer them while keeping the answers associated with the questions, with the ability to reply publicly or privately. All questions are asked privately and are not seen by other participants, so duplicate or inappropriate questions can be easily removed or ignored.
    Polling is also good for keeping the responses in a controlled environment.

  • SolMan CTS+ Best Practices for large WDP Java .SCA files

    As I understand it, CTS+ allows ABAP change management to steward non-ABAP objects.  With ABAP changes, if you have an issue in QA, you simply create a new Transport and correct the issue, eventually moving both transports to Production (assuming no use of ToC).
    We use ChaRM with CTS+ extensively to transport .SCA files created from NWDI. Some .SCA files can be very large: 300MB+. Therefore, if we have an issue with a Java WDP application in QA, I assume we are supposed to create a second Transport, attach a new .SCA file, and move it to QA. Eventually, this means moving both Transports (same ChaRM Document) to Production, each one carrying a 300MB file. Is this SAP's best practice, given that all Transports should go to Production? We've seen some issues with Production not being too happy about deploying two 300MB files in a row.  And what about the fact that .SCA files from the same NWDI track are cumulative, so I truly only need the newest one? Any advice?
    FYI - SAP said this was a consulting question and therefore could not address this in my OSS incident.
    Thanks,
    David

  • MVC "Best Practice" (handling multiple views per action/event)

    Looking for the best approach for handling multiple views for one action/event class? Background: I have a small application using a basic MVC model: one controller servlet, multiple event classes, and multiple JSP views. For performance reasons, the controller servlet is loaded once, and each event class is an instance within it. Each event has an eventProcess() and an eventForward() method called by the controller, standard stuff.
    However, because event classes should not use instance variables, how should I communicate which view to forward to based upon the eventProcess() logic (e.g. if error, error.jsp; if success, success.jsp)? Currently, there is only one view mapped per event, and I'm having to put error-handling logic in the JSP, which goes against the JSP being for the view only.
    My thought was 1) a session object/variable that eventProcess() sets and eventForward() reads, or 2) have eventProcess() return a mapping key and have the controller look up a view page based upon that key, as opposed to a 1-1 event/view mapping.
    Would like your thoughts!
    Thanks
    bRi

    Your solution seems OK to me, but maybe the Struts framework from Apache, which implements MVC for JSP, is a better solution for you:
    http://jakarta.apache.org/struts/index.html
    You should take a look at it. In addition, it has some useful taglibs that make life much easier.
    We have successfully used it in a project with about 50 pages.
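
    If you prefer to stay framework-free, your option 2 also works well. Here is a minimal sketch (all class and view names invented for illustration): eventProcess() returns a logical outcome key, and the controller owns the outcome-to-JSP mapping, so the events stay stateless and the JSPs stay logic-free:

    import java.io.IOException;
    import java.util.Map;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Each event returns an outcome key instead of storing state in a field.
    interface Event {
        String eventProcess(HttpServletRequest req);
    }

    class SaveOrderEvent implements Event {
        public String eventProcess(HttpServletRequest req) {
            try {
                // ... business logic ...
                return "success";
            } catch (RuntimeException e) {
                req.setAttribute("error", e.getMessage());
                return "error";
            }
        }
    }

    class Controller {
        // The controller, not the event or the JSP, knows the view paths.
        private static final Map<String, String> VIEWS =
            Map.of("success", "/success.jsp",
                   "error", "/error.jsp");

        void handle(Event event, HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            String outcome = event.eventProcess(req);
            req.getRequestDispatcher(VIEWS.get(outcome)).forward(req, res);
        }
    }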

  • Best practice for large form input data.

    I'm developing an HIE system with Flex. I'm looking for a strategy and management layout for pretty close to 20-30 TextInput/ComboBox/Grid controls.
    What is the best way to present this in Flex without a lot of clutter, and without a given panel being way too large and unwieldy?
    The options that I have come up with so far.
    1) Use lots of Tabs combined with lots of accordions, and split panes.
    2) Use popup windows
    3) Use panels that appear, capture the data, then, make the panel go away.
    I'm shying away from the popup windows, as that strategy always results in performance issues.
    Any help is greatly appreciated.
    Thanks.

    In general the Flex navigator containers are the way to go. ViewStack is probably the most versatile, though TabNavigator and Accordion are good. It all depends on your assumed workflow.
    If this post answers your question or helps, please mark it as such.

  • Best practices for large ADF projects?

    I've heard mention (for example, in "ADF Large Projects") of documentation about dealing with large ADF projects. Where exactly is this documentation? I'm interested in questions like whether Fusion web applications can have more than one ViewController project (different names, of course), more than one Model project, the best way to break up applications for ease of maintenance, etc. Thanks.
    Mark

    I'd like to mention something:
    Better to have Unix machines for your development.
    Have at least 3 GB of RAM on Windows machines.
    Create all your commonly used LOVs & VOs first.
    If you use web services extensively, create them as a separate app.
    Make use of popups; they are very user-friendly and fast too, and you don't need to deal with the browser back button.
    If you want to use a common page template, create it at the beginning. It's very difficult to apply one after you have already developed the pages.
    Use declarative components for commonly used forms like address, etc.
    Search the forum; you will find a couple of good util classes.
    When you check in the code, watch out: some of the files, such as connections.xml, don't show up in JDev.
    Make use of this forum; you will get answers immediately from great experts.
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf

  • Data Model Best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to present these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously, this isn't acceptable to give to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model, so they can just click and add the fields they want and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from table A to table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who may bump into this issue.

  • Unity Connection 7.x - Best Practice for Large Report Mailboxes?

    Good morning. We have 150 mailboxes that nurses use to give shift reports. The mailbox quota is 60MB and the message aging policy is on: deleted messages are purged after 14 days. The message aging policy is system-wide, and increasing the quota would cause storage issues. Is there a way to keep the message aging policy but reduce it for one group of users? And is there a way to bulk-admin the mailbox quota changes?
    Version 7.1.3ES9.21004-9
    Thanks

    As for UC 8.x, you're not alone.  I don't typically recommend going to an 8.0 release (no offense to Cisco). Let things get vetted a bit and then start looking for the recommended stable version to migrate to.
    As for bulk changes to mailbox store configurations for users, Jeff (Lindborg) may correct me if I am wrong here, but with the given tools I don't think there is a way to bulk-edit or update the mailbox info for users (i.e., turn the Message Aging Policy on/off). There is no access to those values via Bulk Edit and no associated fields in the BAT format either.
    Now, with that said - no one knows better than Lindborg when it comes to Unity.  So I defer to him on that point.
    Hailey
    Please rate helpful posts!

  • Best practice question for Availability Groups setup

    I have created a 2012 WFC with 2 nodes (server1 and server2). Each node will host 2 instances of SQL Server 2014. I am planning to use 2 Availability Groups.
    So
    Server1\instance A
    Server1\instance B
    Server2\instance A
    Server2\instance B
    Thus "instance A" will have AG_A
    instance B will have AG_B,
    What is the recommend configuration for configuring the Quorum witness?  A file share or disk witness?  Note that the WFC does not have any shared disks since I'm only using AG (not a clustered SQL server service).
    Thanks,
    Pete

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani
    Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker

  • Aperture best practices for large libraries

    Hi,
    I am very new to Aperture and still trying to figure out the best way to take advantage of it.
    I have been using iPhoto for a while, with just under 25,000 images; they take up about 53 GB. I recently installed Aperture and built an Aperture library, leaving the images in the iPhoto library. Still, the Aperture library is over 23 GB. Is this normal? If I turn off the preview, is the integration with iLife and iWork the only functionality lost?
    Thanks,
    BC
    MacBook Pro   Mac OS X (10.4.10)  

    Still, the Aperture library is over 23 GB. Is this normal?
    If Previews are turned on, yes.
    If I turn off the preview, is the integration with iLife and iWork the only functionality lost?
    Pretty much.
    Ian

  • Building Resource Groups Best Practices

    Hi. I'm looking for some advice on best practices when building resource groups to support sales/telesales.
    Any information would be very helpful.
    Thanks,
    Monica

    Based on the sales organization structure the customer maintains, you create sales groups. Use territories for geography-based customer and lead accesses, etc. Please post if you find any alternatives or other good practices.

  • Upscale / Upsize / Resize - best practice in Lightroom

    Hi, I'm using LR 2 and CS4.
    Before I had Lightroom, I would open a file in Bridge, and in ACR I would choose the biggest size it would interpolate to before doing an image resize in CS2 using bicubic interpolation to the size that I wanted.
    Today I went to do an image-size increase, but since I did the last one I have purchased OnOne Perfect Resize 7.0.
    As I had been resizing images before I got Perfect Resize, I didn't think about it too much.
    While the resize ran, it struck me that I may not be doing this the best way.
    Follow this logic if you will.
    Before:
    ACR > select biggest size > image re-size bicubic interpolation.
    Then with LR2
    Ctrl+E to open in PS (not using ACR to make it the biggest it can be) > image re-size bicubic interpolation.
    Now with LR2 and OnOne Perfect Resize
    Ctrl+E to open in PS > Perfect Resize.
    I feel like I might be "missing" the step of using the RAW engine to make the file as big as possible before I use OnOne.
    When I Ctrl+E I get the native image size (for the 5D MkII that is 4368x2912 px, or 14.56x9.707 inches).
    I am making a canvas 24x20".
    If instead I open from LR as a Smart Object in PS and then double-click the smart icon, I can click the link at the bottom and choose the size 6144 by 4096; but when I go back to the main document it is the same size... though maybe if I saved that, then opened the saved TIFF and ran OnOne, I would end up with a "better" resized resulting document.
    I hope that makes sense!?
    Anyway, I was wondering, with the combo of software I am using, what "best practice" for large-scale resizing is. I remember that stepwise resizing fell out of favour a while ago, but I'm wondering what is now considered the best way to do it if you have access to the software that was derived from Genuine Fractals.

    I am indeed. LR3 is a nice-to-have. What I use does the job I need, but I can see the benefits of LR3 - just no cash for it right now.

  • Best Practices for Creating eLearning Content With Adobe

    As agencies are faced with limited resources and travel restrictions, initiatives for eLearning are becoming more popular. Come join us as we discuss best practices and tips for groups new to eLearning content creation, and the best ways to avoid complications as you grow your eLearning library.
    In this webinar, we will take on common challenges that we have seen in eLearning deployments, and provide simple methods to avoid and overcome them. With a little training and some practice, even beginners can create engaging and effective eLearning content using Adobe Captivate and Adobe Presenter. You can even deploy content to your learners with a few clicks using the Adobe Connect Training Platform!
    Sign up today to learn how to:
    -Deliver self-paced training content that won't conflict with operational demands and unpredictable schedules
    -Create engaging and effective training material optimized for knowledge retention
    -Build curriculum featuring rich content such as quizzes, videos, and interactivity
    -Track program certifications required by Federal and State mandates
    Come join us Wednesday May 23rd at 2P ET (11A PT): http://events.carahsoft.com/event-detail/1506/realeyes/
    Jorma_at_RealEyes
    RealEyes Connect

    You can make it happen by creating a private connection for 40 users via a capi script, and when creating the portlet select the 2nd option in the Users Logged In section. With this, the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Another thing: there is an option for entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Audio & Captivate: Looking for best practice

    Good Morning,
    I am looking for some sort of best practice for handling a lot of audio within single Captivate slides. Please take a look at the workflow:
    I write concepts in Word or OpenOffice, describing the content, media (pics, animations, ...), and interactions on a per-slide basis.
    The client reads these concepts and writes a review report with all changes and additions.
    We hold a harmonisation meeting for every 3-5 hours of calculated e-learning concepts, discussing the client's annotations and looking for a stable agreement.
    I produce the first version, in this case with Adobe Captivate, using text-to-speech for the audio.
    The client checks this first version and sends me his audit report.
    I produce version 1.
    So, where is the problem? My problem is the handling of the audio files, up to 10 per slide. In version 1, all audio is spoken by professional speakers from German radio stations, recorded in our own studio. I am looking for a comfortable way to exchange all the synthetic audio without leaving anything behind in Captivate; the final version must be as clean and slim as possible.
    For the handling and tracking I use AlienBrain, because in some projects we have a few hundred thousand assets to watch and track... and it is no problem if anything goes wrong: just a few clicks and I've restored the older version of a picture, an audio file, or a complete project.
    Using other tools, I do not have to care about this. Within the project folders, the audio files are stored in a module-based audio folder, and every single audio file has a unique identifier (A024_37_12_004.mp4: "A" for audio, then chapter_module_page_sequentialnumberperpage.mp4/mp3/wav). After I have received the spoken audio from the studio, I just overwrite the synthetic files, and the "real" audio is automatically embedded in my slides, so when I make a new release everything is fine. The older, synthetic versions are kept by AlienBrain.
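
    To illustrate, that overwrite step is a trivial script outside of Captivate; a minimal sketch in Java (folder names are examples only) that copies every studio recording over its synthetic counterpart of the same name:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Replace each synthetic audio file with the studio recording of the
    // same identifier, e.g. A024_37_12_004.mp3. Folder names are examples.
    public class AudioSync {
        public static void main(String[] args) throws IOException {
            Path studio = Path.of("delivery/studio");
            Path project = Path.of("project/module01/audio");
            try (DirectoryStream<Path> files =
                    Files.newDirectoryStream(studio, "A*.{mp3,mp4,wav}")) {
                for (Path recorded : files) {
                    Files.copy(recorded, project.resolve(recorded.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
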
    In Captivate, everything seems to be... hm, I have to be polite ;-) ... a little more complicated. Or even worse, I am unable to see the solution. Maybe somebody can share a working and fast way to achieve the needed audio procedure? Or something like a proven workaround?
    Kind regards
    Marc :-)

    Additional information:
    Mostly, we have 3-5 audio files per slide/page; 10 is the absolute maximum.
    And because we are talking about thousands of audio files overall, I hope to find a good way to keep them as external sources and have Captivate embed them only at the moment when I have to produce a new release. (That is the main problem - sorry, my English language modules are still asleep after a long and busy weekend.) And as I've learned, object audio can't be used with external audio files.
    At the company, we've talked about the best process (there is another huge project running, and the guys - experienced people - are also new to Captivate). At this moment, the preferred solution is to concatenate all the audio files, 4 for example, so we can "Play/Pause" the audio as needed. It is a little bit like stumbling blindfolded through heavy mist.
    Another idea, brought up by me, was to multiply the slides, so the first slide shows paragraph/pic/audio; after a click, the next slide loads, exactly in the same state as the previous slide ended, with the next text/pic/animation and the next audio. But with that, I would end up with thousands of slides only because of the audio handling.
    Maybe it is a good idea to explain a really typical update process, which may show what we would like to see:
    The module is ready, delivered, and the client is happy.
    After 6 months, there is a technical change in one part and the client needs a single new audio file.
    In reality, in one particular module - part of over 200 hours of learning - after a year we were instructed to record new versions of 35 of the 80 audio files... in just one of some hundred modules.
    In Captivate, we try to avoid touching every single slide again.
    At the moment I own just a standalone version of Captivate (5.5); the eLearning Suite has been ordered and arrives next week.
    Tools we've used in the past (and still use, due to clients' requirements) without any problems... dealing with audio ;-) :
    Toolbook
    Sumatra
    CourseLab
    Individual Flash Solutions
    others
