AD MA Large Group (9000+ users) export failing due to connectivity timeout: cd-connectivity-error

I am currently implementing FIM 2010 in a test environment. I have MAs connected to the FIM Portal, an Oracle view, AD, and eDirectory 8.8 SP6. All agents are working fine, user attribute flow is working great to all connected systems, and groups are flowing into the MV without issue. I have a few criteria-based groups in the FIM Portal that I am synchronizing to AD. All but one of those groups synchronize correctly. The failing group contains by far the most members of any group I've attempted to export so far, numbering around 9000. The only error I receive on an export run on the AD MA is "cd-connectivity-error" for the group object itself, and the agent shows a run status of "dropped-connection." Event Viewer simply shows:
The management agent "Test Active Directory MA" failed on run profile "Export" because of connectivity issues.
Additional Information
Discovery Errors : "0"
Synchronization Errors : "0"
Metaverse Retry Errors : "0"
Export Errors : "1"
Warnings : "0"
User Action
View the management agent run history for details.
While 9000 members in a group is not insubstantial, I can't help but feel this should work. Are there configuration changes I need to make in AD to allow a longer timeout on the connection itself? I have tried increasing the timeout on the export run profile without success. Any guidance would be greatly appreciated.
~Jeff
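
One way to narrow this down outside the sync engine is to time a plain LDAP read of the large group against the DC you intend to use, with a generous client-side timeout. The sketch below is a minimal, hypothetical check assuming Python's ldap3 package; the DC hostname, service account, and group DN are placeholders rather than anything from the actual FIM configuration, and auto_range is enabled because AD returns large member lists in 1500-value ranges.

# Hypothetical latency check against a single DC (ldap3 package).
# Hostname, credentials, and DNs are placeholders -- substitute your own.
import time
from ldap3 import Server, Connection, NTLM

DC_HOST = "dc01.example.com"                      # the DC you plan to point the MA at
GROUP_DN = "CN=BigGroup,OU=Groups,DC=example,DC=com"

server = Server(DC_HOST, port=636, use_ssl=True, connect_timeout=10)
conn = Connection(server,
                  user="EXAMPLE\\svc_fim", password="********",
                  authentication=NTLM,
                  receive_timeout=300,            # generous wait, comparable to the run profile timeout
                  auto_range=True,                # follow AD's member;range=... retrieval
                  auto_bind=True)

start = time.time()
conn.search(GROUP_DN, "(objectClass=group)", attributes=["member"])
members = conn.entries[0].member.values if conn.entries else []
print(f"Read {len(members)} members in {time.time() - start:.1f} s")
conn.unbind()

If this read is slow or drops against a general-purpose DC but not against a quieter one, that points at the directory side rather than the MA configuration.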

@Evgeniy Oddly enough, that is exactly the timeout change I made - 300 seconds. I am currently populating the group through the FIM Portal. It's a criteria-based group. Basically I am taking every employee status type (we have ~12 of them) and creating a sub-group for each. The largest of these is the one I'm timing out on. The second largest has ~2500 members and functions as expected. If I need to change this to provision with .dll code I certainly can - I was just hoping to provide some transparency for our support staff and my fellow admins on these groups.
@Carol Good idea. I will do just that. I have 2 DCs isolated in a site with an exorbitantly high site link cost specifically to service our LDAP clients and other back-end services. Those 2 DCs also happen to host the majority of our FSMO roles here as well. I'll hardcode the link to one of those DCs and report back.
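While testing against that dedicated DC, one more out-of-band probe may be worth trying: add the same ~9000 member DNs to a throwaway test group in small batches rather than in one huge modify. To be clear, this is not how the FIM AD MA performs its export; it is only a manual experiment, sketched below with the assumed ldap3 package, to see whether the sheer size of a single LDAP modify is what drops the connection. The hostname, credentials, test group DN, and members.txt file are all placeholders.

# Manual probe only (not the MA's export path): add members to a *test* group
# in batches so no single LDAP modify carries thousands of values.
# Placeholders throughout: test group DN, DC, credentials, input file.
from ldap3 import Server, Connection, NTLM, MODIFY_ADD

BATCH = 500
TEST_GROUP_DN = "CN=BigGroup-Test,OU=Groups,DC=example,DC=com"

def add_in_batches(conn, group_dn, member_dns, batch=BATCH):
    """Issue several small modifies instead of one 9000-value modify."""
    for i in range(0, len(member_dns), batch):
        chunk = member_dns[i:i + batch]
        if not conn.modify(group_dn, {"member": [(MODIFY_ADD, chunk)]}):
            raise RuntimeError(f"Batch starting at {i} failed: {conn.result}")

server = Server("dc01.example.com", port=636, use_ssl=True, connect_timeout=10)
conn = Connection(server, user="EXAMPLE\\svc_fim", password="********",
                  authentication=NTLM, receive_timeout=300, auto_bind=True)
with open("members.txt") as fh:                    # one member DN per line
    member_dns = [line.strip() for line in fh if line.strip()]
add_in_batches(conn, TEST_GROUP_DN, member_dns)
conn.unbind()

If batched adds complete cleanly but one large modify does not, that at least localizes the failure to the size of the write rather than the DC or the network path.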
As a secondary question, would it be better if I simply used set transitions on the user accounts to add them to these groups individually? I don't plan on using employee status for any other functions beyond this one, so going that route seems more complicated, but it may also be more stable.
Thanks for all the great replies! I'll keep you posted.

Similar Messages

  • OMF export failed due to internal error

    I get this error message whatever I do when I try to export as OMF:
    "OMF export failed due to internal error"
    I tried OMF 1 and 2.
    I tried to uncheck "include audio".
    I tried an extremely small project (one region, one track).
    Always the same error. Problem is there's absolutely no information on what kind of internal error this is.
    All other export options (AFF, Open TL and Final Cut) work.
    Any ideas?

    I suppose that besides CCM 4.1(2) you are using AC 1.4(1); if yes, then you need to upgrade to AC 1.4(1) ES11, or upgrade to at least CCM 4.1(3)ES42 or CCM 4.1(3) sr3c.
    HTH
    //Jorge

  • Creating Protection Group Fails with Error:360 The operation failed due to a virtual disk service error

    Hello
    I'm setting up a DPM server (2012 R2) at a remote site; everything goes well with no issues until a protection group is created, at which point I get the following error:
    Create protection group: Protection Group 1 failed:
    Error 360: The operation failed due to a virtual disk service error
    Error details: The system cannot find the file specified
    Recommended action: Retry the operation.
    The environment is as follows:
    - Virtual machine running Server 2008 R2, fully updated
    - Storage pool is an iSCSI connection, thick provisioned, 1 TB, GPT, and shows in Disk Management with no issues
    - Have connected to Session 0 (console)
    - Error log shows "The provider did not receive a Plug and Play service notification for the volume. volume=10" for the VDS Dynamic Provider
    Can Anyone Help?
    Thanks
    .Adam Langdon

    Hi,
    Disk defrag is initiated when a volume shrink is attempted. See if there is any problem defragging a volume and correct any problems doing that.
    Regards, Mike J. [MSFT]

  • 'OMF export failed due to internal error' message

    Hi,
    Has anyone ever got an 'OMF export failed due to internal error' message when trying to export in OMF format?
    I get that every time I try to export in that format.
    How can I make it work?
    Thanks and best,

    Hello sj holcombe,
    And welcome to Apple Discussions!
    Have you had a chance to check out this Apple support document directly related to this issue? If not, it is a good place to start.
    [iTunes displays a -69 error when syncing iPod|http://support.apple.com/kb/HT1210]
    Hope it helps.
    B-rock

  • Weblogic invoking web service failed due to socket timeout

    Hi,
    I encountered an error when I invoke a web service from OBIEE 11g. The web service resides on WebSphere running on another machine.
    The error says "Invoking web service failed due to socket timeout," and it seems the call stops after just 40 seconds.
    Are there any WebLogic server settings to avoid this? This web service normally runs for more than 60 seconds.
    I have checked several parameters in the WebLogic admin console and changed their values, but I still receive the same errors.
    Regards,
    Fujio Sonehara

    Hey Eason,
    As I previously mentioned, I have checked the FE server certs and noted the signing algorithm used to sign them, which was sha1DSA and not sha1RSA. I even checked my CA's list of issued certs and found all certs are signed the same.
    Signature algorithm: sha1DSA
    Signature Hash Algorithm: sha1
    Public Key:  RSA (1024 bit)
    I could run a request and reinstall all day long; it will still get the same certs signed with that algorithm.
    Doing some research, I attempted to see if I could change the signing cert for the specific cert template being used to issue the Lync FE certs... however, according to
    this, it seems I'd have to completely rebuild my CA before I'd be able to request and issue a cert with the proper signing algorithm?!
    This
    says it's possible but not supported. What do I do in this situation? Is my only option to rebuild the entire CA and cert infrastructure?
    I noticed my CSP is set to Microsoft Base DSS Cryptographic Provider, and under the CSP folder there is no "CNGHashAlgorithm" key, so apparently I'm using a "Next Gen CSP"? Is this CSP good enough to support Lync? Straight up, where is the Lync documentation on the CA setup requirements?
    This Google link doesn't tell you how to set up a CA for Lync, what settings need to be set, etc.
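    Returning to Fujio's original socket-timeout question: whichever client invokes the service needs a read timeout longer than the service's worst-case runtime. The generic sketch below is not WebLogic- or OBIEE-specific; it assumes the Python requests package with a placeholder URL and payload, and only illustrates the connect-timeout versus read-timeout distinction at play.

    # Generic illustration of connect vs. read timeouts (requests package).
    # The endpoint and SOAP body are placeholders, not the actual OBIEE call.
    import requests

    SERVICE_URL = "http://ws-host.example.com:9080/service/endpoint"
    SOAP_BODY = "<soapenv:Envelope>...</soapenv:Envelope>"

    try:
        resp = requests.post(SERVICE_URL,
                             data=SOAP_BODY,
                             headers={"Content-Type": "text/xml"},
                             timeout=(10, 120))   # 10 s to connect, 120 s to wait for a reply
        print(resp.status_code)
    except requests.exceptions.ReadTimeout:
        print("The call still exceeded the configured read timeout")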

  • Access 2010 InfoPath Data Collection Export Fails Due to Date Format That Includes Time Zone

    I created an Access 2010 database that has multiple data collection (InfoPath) forms that were generated from Access and have been in use for about 1.5 years.  Starting in 2013 (for about a week now), the submitted data fails to Export due
    to a "data type conversion error" with the date fields.  Prior to 2013, the date string returned in the InfoPath form looked like this: "2013-01-07T00:00:00", but now it looks like this: "2013-01-07T00:00:00-05:00".  The time zone is appended
    to the string and it kills the Outlook Export feature.
    To test this, I created a new database with one table and one date time field.  I generated an InfoPath template and emailed it to myself.  I entered the date into the template and submitted it (tried manually entering the date as well as
    using the date time picker control - made no difference).  The InfoPath template now contains "2013-01-07T00:00:00-05:00" and will not Export from Outlook to Access.  I tried manually pasting the string into the Access table and it would take it,
    but would show "1/7/2013 5:00:00 AM" in the date time field (which isn't correct either but at least it took).  Note: This problem has appeared at my office (Win 7 with Office 2010), but my testing was done on my personal laptop using Win 8 with Office
    2010.
    It looks like Microsoft has created a bug and now all of my data collection forms are unusable.  Any help will be appreciated.

    Microsoft confirmed that the issue was introduced with MS12-066 as follows:
    ***Start Quote
    We have been able to identify that this issue was introduced with the change made for the hotfix detailed in KB 2687395. This update was included in the security update MS12-066 when you installed KB 2687417.
    2687395          Description of the InfoPath 2010 hotfix package (ipeditor-x-none.msp): August 28, 2012
    http://support.microsoft.com/kb/2687395/EN-US
    2687417           MS12-066: Description of the security update for InfoPath 2010 Service Pack 1: October 9, 2012
    http://support.microsoft.com/kb/2687417/EN-US
    Investigating workarounds I've only come up with using HTML forms or changing the datatype of the control to text.
    ***End Quote
    My own testing also indicates that if you are using InfoPath with SQL Server, you may be able to change the Date/Time picker control in InfoPath to a Date only picker control (if the SQL Server data
    type will support that).
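    Beyond the quoted workarounds (HTML forms or changing the control's datatype to text), a pre-processing pass over the submitted values is conceivable if you extract the XML yourself before importing into Access. The sketch below only shows what stripping the newly appended UTC offset could look like (Python 3.7+ for datetime.fromisoformat); it is an illustration, not one of the workarounds Microsoft suggested.

    # Hypothetical clean-up: strip the UTC offset that post-MS12-066 forms
    # append, so the value matches the earlier "YYYY-MM-DDTHH:MM:SS" shape.
    from datetime import datetime

    def strip_offset(value: str) -> str:
        """'2013-01-07T00:00:00-05:00' -> '2013-01-07T00:00:00' (keeps local wall time)."""
        dt = datetime.fromisoformat(value)        # parses the trailing -05:00 offset
        return dt.replace(tzinfo=None).isoformat()

    print(strip_offset("2013-01-07T00:00:00-05:00"))  # 2013-01-07T00:00:00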

  • Compilation failed due to timing violations, Getting Xflow error and resources over use.

    Hi,
    I am extensively using Fixed point math library functions which i have downloaded from NI website in my FPGA application.I am getting following errors while compiling the code.
    I would like to know is there any limitations in using fixed point functions?
    I am configuring all funtions as a 64 bit word length and 32 bit integer length in cofig parameter set up and all are outside the timed loop.
    Apart from below error it is occupieng LUT's 300% in one simple VI (mathematical calculations using fixed point functions).
    So i would like to know is there any way to optimize the code?.
    Status: Compilation failed due to timing violations.
    The compile process reported a timing violation.
    Suggestions for eliminating the problem:
      * For Timed Loops with timing violations
          - Reduce long arithmetic/combinatorial paths
          - Use pipelining within Timed Loops
          - Reduce the number of nested case structures
      * Reduce clock rates if possible
      * Recompile
    Refer to the LabVIEW Help for more information about resolving compilation errors. Click the Help button to display the LabVIEW Help.
    Compilation Summary
    Device Utilization Summary:
       Number of BUFGMUXs                        2 out of 16     12%
       Number of External IOBs                 214 out of 484    44%
          Number of LOCed IOBs                 214 out of 214   100%
       Number of MULT18X18s                     69 out of 96     71%
       Number of SLICEs                       4387 out of 14336  30%
    Clock Rates: (Requested rates are adjusted for jitter and accuracy)
      Base clock: 40 MHz Onboard Clock
          Requested Rate:      40.408938MHz
          Achieved Rate:       36.974044MHz    <<<=== Timing Violation

    lion-o wrote:
    The Fixed Point Library is certainly not a demo, and we encourage you to use it extensively.
    Really? "Demo" is perhaps too strong a word, but the NI Labs page says that the toolkits there "aren't quite ready for release" and are "experimental prototype"s. My understanding was that they work, but are only meant to show potential future products and get feedback on them. If this is not the case, perhaps the wording needs to be changed.
    I know I wouldn't want to be using something throughout my code and then find out that it is not supported when the next LV version came out because it was only a prototype. Can you promise support into the future for these? If you can't, that should be clearly stated.

  • Failed to open connection error message on Crystal Server

    I have recently upgraded my desktop to Windows 7. Crystal Reports XI Developer works fine on this new system as it did on my earlier Vista system. I can create and run reports with no problems on each of these systems using Developer.
    The problem I am experiencing is this:  when I publish, using the Publishing Wizard, a report on my Crystal Server using my Windows 7 system, the report does not work.  Instead, the error message: 'Failed to open connection' appears from Crystal Report Viewer.
    If I open the same report using my Developer license on my old Vista machine and then re-save it, I can publish it to my CR Server and it runs successfully.
    ODBC is used on all systems and all appear to be 32-bit versions, albeit with different version numbers.
    This problem can be repeated with any of the many reports currently in use.

    Hi Don,
    Let's see if I understand what you're saying:
    When a report is created on a legacy desktop in CR Developer, the ODBC connection on that computer works with direct access to the SQL Server 2008 on my Windows Server 2003 R2 system. I can then publish these same reports to the CRS using the Publishing Wizard, and they run successfully; no connection issues, works great.
    However, when a report is created on my 64-bit Windows 7 desktop in CR Developer, the ODBC connection on that computer also works well with direct access to the SQL Server 2008 on my Windows Server 2003 R2 system. BUT when I then publish these same reports to the CRS using the Publishing Wizard, they do NOT successfully connect to the database, and that is when I get the Crystal Viewer message about the failed connection.
    So I would assume that Crystal Server on my Windows Server 2003 system is similar enough to the older MDAC criteria on legacy systems, but different enough from my Win7 desktop to introduce the problem, even though Developer on that same Win7 system works fine directly against the SQL DB.
    If this is all correct, then I either need a legacy MDAC to work on my Win7 system, or an updated MDAC/WDAC for Crystal Server to use.
    By the way, I'm using ODBC and here are some details from the Component Checker:
    Server:
                 MDAC 2.8 SP2 on Windows Server 2003 SP2
    Desktops:
                 MDAC 2.8 SP1 on Windows XP SP3 - on an XP machine that works fine
                 UNKNOWN on Vista (but it works fine with CRS)
                 UNKNOWN on Windows 7 - (doesn't work with CRS - must be a different UNKNOWN than on Vista)
    Thanks for your assistance as this is rather important -
    Bob.
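    A quick way to compare what the two desktops actually resolve the DSN to is to enumerate the ODBC drivers and open the same DSN from a small script on each machine. The sketch below assumes the pyodbc package; the DSN name and credentials are placeholders. Note that a 32-bit interpreter sees the 32-bit driver list and a 64-bit interpreter the 64-bit one, which is exactly the split suspected here.

    # Comparison aid for the Vista vs. Windows 7 desktops (pyodbc package).
    # DSN, user, and password are placeholders.
    import pyodbc

    print("Installed ODBC drivers visible to this interpreter:")
    for name in pyodbc.drivers():
        print("  ", name)

    try:
        cn = pyodbc.connect("DSN=CrystalReportsSrc;UID=report_user;PWD=********", timeout=10)
        print("Connected via driver:", cn.getinfo(pyodbc.SQL_DRIVER_NAME))
        cn.close()
    except pyodbc.Error as exc:
        print("Connection failed:", exc)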

  • BAT User Export Fails

    Hi,
    I have CM 4.1(3) and am trying to export users; however, only 500 out of 1900 are exported. The log error description is "user has incomplete profile in the directory". Is there a way to "mass" fix this?
    TIA.

    Hi,
    Did you ever upgrade this CCM from an earlier version, such as CCM 4.x or 3.x? If yes, did you also uninstall and reinstall the appropriate version of BAT? If not, this is most likely your problem.
    Compare the version of BAT between your CCM system and your customer's.
    Thanks
    Ovais.

  • Scheduled restart fails due to connected user

    Hi
    Here's another 'feature' Apple have added in Mountain Lion which is really irritating. Under Lion and previous versions, if you scheduled a restart for a set time, you got a restart. Under ML, if any other PC or Mac on my network has copied a file to or from my restarting machine since the previous restart, it cancels the restart because the restart would disconnect other users. I can understand why, in some circumstances, disconnecting another user would be a bad thing and why they have 'fixed' it. But for me it is just another pointless change which breaks something that worked well. They might at least have added another preference control in the Energy Saver/restart system pref to let you decide whether to disconnect users or not!
    I can find various posts online about ways to use cron or launchd on the command line to do a forced restart (which I'm a bit nervous about, not least because none of them come with instructions on how to undo or edit what you did if it isn't effective), and various apps in the App Store which need to be running 24/7 to do it, but nothing which really replaces the old pre-ML functionality.
    Has anyone found a way around this irritation, please? And is there a feature request anywhere on Apple.com so I can bring this to Apple's attention?
    Thanks in advance for any help.
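    For what it's worth, the cron/launchd approaches mentioned above generally come down to invoking the system shutdown command on a schedule. A minimal sketch is below; it must run as root, the five-minute delay is only an example, and it forces the restart regardless of connected users, which is exactly the pre-ML behaviour being asked for but worth keeping in mind.

    # Minimal forced-restart sketch: schedules a reboot five minutes out via
    # the standard shutdown command. Requires root privileges and does not
    # honour the "other users are connected" check.
    import subprocess

    subprocess.run(["/sbin/shutdown", "-r", "+5"], check=True)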

    Hi, I guess your previous change run has not completed fully. Wait and check in the job log; once it's done, restart it and that will solve your problem. Also make sure no other load is updating 0CUSTOMER in some other master data chain.
    SM12 solves it, but be careful if somebody else is editing; check it, since it's a shared object.
    Good day.

  • User authentication failed after ADS connection

    Hi,
    We have a problem where the same user ID exists in both the Portal and LDAP.
    I can see both (from LDAP and the UME DB) in the Portal with the same logon ID. Now the user is not able to log on to the portal with either password. The config for ADS is:
    Data source                  =  Microsoft ADS (Deep Hierarchy) + Database
    Data Source File name  =  dataSourceConfiguration_ads_deep_writeable_db.xml
    Another problem is that I cannot create new users in the Portal or copy from existing users.
    Please help me resolve it.
    Thanks
    Jimmy

    Hi Jimmy,
    First remove the AD server from your portal configuration and give default settings for your data source as given below:
    Data source = Database
    Data Source File name = dataSourceConfiguration_database_only.xml
    Once you do this, restart your portal server, create the users in the portal database, and delete the duplicate entries.
    Remember, once you delete all your duplicate login IDs you can reconfigure your AD server with the old data source settings and everything should be fine.
    Hope it helps.
    Regards
    Rani A

  • AUTOMATED EXPORT FAILED

    Hi Azure team, I received the below error mail for an automated export:
    "We recommend checking that the storage account is available, and that you can perform a manual export to that account. We will attempt to export again at the next scheduled time."
    1. May I know what this message is conveying?
    2. Was the storage not accessible at that time?
    If the answer to the second question is yes, please let me know why it happened and how we can prevent it. Kindly help by providing suggestions.

    Hello,
    Did the automated export work on the SQL Database? You can review the import and export history on the SQL Database server.
    The message shown in the mail indicates that the automated export failed for some reason. If your storage account is not accessible, the export may fail because the .bacpac file cannot be written to the storage account.
    Regards,
    Fanny Liu
    TechNet Community Support
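    The mail's suggestion to check that the storage account is available can be scripted. The sketch below is a hedged example assuming the azure-storage-blob (v12) package, with a placeholder connection string and container name; it only checks reachability and write access for the container that would hold the .bacpac file and does not perform the export itself.

    # Reachability/write probe for the export container (azure-storage-blob v12).
    # Connection string and container name are placeholders; this does not run
    # the export, it only mirrors the "check the storage account" advice.
    from azure.storage.blob import BlobServiceClient

    CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
    CONTAINER = "bacpac-exports"

    service = BlobServiceClient.from_connection_string(CONN_STR)
    container = service.get_container_client(CONTAINER)

    if not container.exists():
        print("Container missing or unreachable; the scheduled export would fail")
    else:
        container.upload_blob("export-probe.txt", b"probe", overwrite=True)
        print("Storage account is reachable and writable")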

  • Lync server call failed due to network issues

    Hi,
    I receive an error while making a Lync call to an internal user:
    "call failed due to network issues"
    I have two IPs installed on my NIC, but when I remove one of the IPs from the NIC, everything seems to work fine.
    Kindly help

    Hello Matt,
    Hope you are keeping well!
    We are communicating with an external customer's Lync server through the federation option set up on our corporate Lync server. We received the federation settings from the customer with the SIP address, which was set up on our corporate Lync servers; after that we were able to browse the customer contact through the corporate Lync account.
    We were also able to chat with the external customer, but voice/video functionality is not working in the same session. Whenever we try to dial out to the external customer's Lync account, it ends with the error message "call ended due to network issue".
    We have checked the settings from the corporate Lync servers and from the network point of view, but cannot find any issue that causes the disconnection of voice/video over Lync. Could you please help or guide us on the way forward?
    Thanks

  • TP error: connect failed due to DbSL load lib failure

    Hi All,
    I have prepared a Quality system from the Production system as a system copy. I have upgraded the kernel from 620 to 640 in the Quality system.
    After the kernel upgrade, the server details are as below:
    Operating System: Windows 2003 IA64
    Database: SQL Server 2000 IA64
    Kernel: 640, compiled for a 64-bit non-Unicode system, patch number 43
    SAP: SAP R/3 4.7
    Before the kernel upgrade, the server details were as below:
    Operating System: Windows 2000 Server
    Database: SQL Server 2000
    Kernel: 620, compiled for a non-Unicode system, patch number 1363
    SAP: SAP R/3 4.7
    I am able to transport requests from Development to Quality, but when I try to import a request in Quality I get the error message "Connect failed due to DBSL load lib failure".
    The full error message is: "connect failed due to DBSL load lib failure
    cannot find the DLL dbmssslib.dll in path (K:\usr\sap\NBQ\SYS\exe\run)
    problem when loading DB-dependent DLLs."
    I have set the environment variable (DBMS_TYPE = mss) for the nbqadm user.
    I have verified that the dbmssslib.dll file exists in the exe/run path.
    Does anyone have an idea how to resolve this issue? Please help me.
    Thanks in advance
    Regards
    Venu

    Hi,
    Thanks to all.
    The issue is resolved; instead of updating the kernel to 640, I downloaded the latest 620 kernel patch for IA64.
    Since my production server is running on x86 with a 620 kernel, I downloaded the 620 kernel for IA64.
    That resolved my issue.
    Regards
    Sai
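    For reference, the two manual checks Venu describes (the DBMS_TYPE environment variable and the presence of dbmssslib.dll in the kernel directory) amount to something like the sketch below; in this thread the actual fix, per the follow-up above, was matching the kernel build to the platform.

    # Sketch of the two checks described in the question; the kernel path is
    # the one quoted in the error message. Run it as the user that executes tp.
    import os

    KERNEL_DIR = r"K:\usr\sap\NBQ\SYS\exe\run"

    print("DBMS_TYPE =", os.environ.get("DBMS_TYPE", "<not set for this user>"))
    print("dbmssslib.dll present:",
          os.path.isfile(os.path.join(KERNEL_DIR, "dbmssslib.dll")))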

  • Unable/failure to start upload in DW CS4 (recent issue, was working) and DW CC 2014. Filezilla also fails but BOTH connect to the server. No known changes to desktop system.

    Started: 7/14/2014 12:44 PM
    cContacts\Copy of 0100.html - Transferring
    cContacts\Copy of 0100.html - error occurred - An FTP error occurred - cannot put Copy of 0100.html.  Internal data error, possibly because of failure to start the upload.
    File activity incomplete. 1 file(s) or folder(s) were not completed.
    File Transfer failed due to following reasons:
    - Internal data error, possibly because of failure to start the upload.
    Files with errors: 1
    cContacts\Copy of 0100.html
    Finished: 7/14/2014 12:45 PM
    FROM FILEZILLA
    Command: TYPE A
    Response: 200 Switching to ASCII mode.
    Command: PASV
    Response: 227 Entering Passive Mode (38,102,237,240,170,242).
    Command: STOR 0000.html
    Response: 550 Permission denied.
    Error: Critical file transfer error

    If you cannot upload files with Filezilla, there's a problem with your server.  Have you exceeded your server's space allotment?  If your server is full, it will reject new file uploads.
    Nancy O.
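    To take both Dreamweaver and FileZilla out of the picture, the same STOR can be reproduced with a few lines of standard-library Python; the host, credentials, and file names below are placeholders. Another 550 reply here would confirm the refusal is server-side (permissions or quota), in line with Nancy's point.

    # Independent reproduction of the failing upload (standard library only).
    # Host, credentials, and file names are placeholders.
    from ftplib import FTP, error_perm

    HOST, USER, PASSWD = "ftp.example.com", "siteuser", "********"

    ftp = FTP(HOST, timeout=30)
    ftp.login(USER, PASSWD)
    ftp.cwd("/cContacts")
    try:
        with open("0000.html", "rb") as fh:
            ftp.storbinary("STOR 0000.html", fh)
        print("Upload succeeded")
    except error_perm as exc:
        print("Server refused the upload:", exc)  # e.g. "550 Permission denied."
    finally:
        ftp.quit()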

Maybe you are looking for

  • My touchpad just randomly stopped working and I don't know why. Can someone please help me?

    I was using a plug-in mouse and now I can't use my touchpad even when the mouse is unplugged. I have an HP Pavilion g series. Can someone please help me? I don't like carrying a mouse around with me everywhere I take my laptop.

  • BlackBerry Wireless Headset HS-700 Problem

    I can't seem to get the audio turn-by-turn directions on. Also, I was under the impression that if the person calling was in your contacts, the HS-700 would say the name of the person calling. Please help.

  • Can't move Layer break point

    Hi, In desperate straits here. I've created a dual layer project and set a break point via the disc inspector drop down menu. Now I need to change the break point but DVDSP won't execute the change. I've deleted the VTS build folders as Apple recomme

  • Mavericks OS X: Memory use is increasing without running programs

    I noticed that under Mavericks, my iMac is getting slower and slower. So I opened Activity Monitor to see what was going on. It appears that the amount of used memory is increasing even though I'm not running a single program. While writing this post,

  • Allow application installations and updates for standard user account

    Good Afternoon. Is there a way to configure/allow standard, non-admin user accounts to have access to only install applications or update applications? Is there a group those user accounts can be added to via dscl or some other method? Appreciate any