OIM 10g: Best practice for updating OIM user status from target recon?

We have a requirement where we need to trigger one or more updates to the OIM user record (including status) based on values pulled in by a target resource reconciliation from an LDAP directory. For example, if an LDAP attribute is set to a particular value (e.g., "disable-flag=123456"), we want to disable the OIM user. Other LDAP attributes may trigger other OIM user attribute changes.
I think I need to write a custom adapter to handle the "recon insert received" and "recon update received" events from the target recon, but I wanted to check with the community to see whether this is the right approach. Would post-insert/post-update event handlers be a better choice?

Thanks Nishith. That's along the lines of what I was thinking. The only wrinkle in my case is that I may need to update additional custom attributes on the OIM user, not just enable/disable it. Because of that requirement, my thought was to call the user APIs directly from my task adapter to do the attribute updates in addition to the enable/disable. Does this seem like a sound approach?
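
The shape I have in mind is below - a minimal sketch against the legacy Thor.API user operations (how the utility handle and user key reach the adapter, the "123456" trigger value, and the "USR_UDF_SOMEVALUE" label are illustrative assumptions, not a vetted recipe):

import java.util.HashMap;
import java.util.Map;

import Thor.API.tcResultSet;
import Thor.API.Operations.tcUserOperationsIntf;

public class ReconUserUpdateLogic {

    // Sketch of the task-adapter method. The tcUserOperationsIntf handle is
    // assumed to be obtained elsewhere (e.g. via tcUtilityFactory), and
    // "USR_UDF_SOMEVALUE" is a hypothetical UDF column label.
    public String updateOimUser(tcUserOperationsIntf userOps, long userKey,
                                String disableFlag, String ldapValue) {
        try {
            // Find the OIM user row the recon event points at.
            Map filter = new HashMap();
            filter.put("Users.Key", String.valueOf(userKey));
            tcResultSet users = userOps.findUsers(filter);
            users.goToRow(0);

            // Disable when the LDAP flag carries the trigger value.
            if ("123456".equals(disableFlag)) {
                userOps.disableUser(userKey);
            }

            // Push the additional attribute updates in the same pass.
            Map attrs = new HashMap();
            attrs.put("USR_UDF_SOMEVALUE", ldapValue);
            userOps.updateUser(users, attrs);

            return "SUCCESS";
        } catch (Exception e) {
            return "ERROR"; // map to the task's response codes as appropriate
        }
    }
}

Doing both the enable/disable and the attribute writes from one adapter also keeps the ordering deterministic, which is harder to guarantee across separate process tasks.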

Similar Messages

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating child UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am trying to loop over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
    is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
    get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. "Yikes" is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (those not utilized much, if at all, should be deleted; also produce an actual script to delete the useless ones identified) and what
    the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that executes the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal, IMO, than just issuing a single update statistics command that samples the same percentage/size for every table (e.g., 10%). That's what we're doing today at 100% (full scan).
    Come on SQL Server Community. Show me some love :)
    Bill Thacker
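
    For what it's worth, the metadata-driven approach described above can be prototyped fairly simply. Below is a rough JDBC sketch that generates (not executes) a per-table UPDATE STATISTICS script, with the sample size shrinking as cardinality grows. The connection URL and the row-count tiers are placeholder assumptions to adapt, and the output should be reviewed before running:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class StatsScriptGenerator {

            // Row counts per table from the catalog views (heap or clustered index).
            private static final String ROWCOUNT_SQL =
                "SELECT s.name AS sch, t.name AS tbl, SUM(p.rows) AS cnt " +
                "FROM sys.tables t " +
                "JOIN sys.schemas s ON s.schema_id = t.schema_id " +
                "JOIN sys.partitions p ON p.object_id = t.object_id " +
                "AND p.index_id IN (0, 1) " +
                "GROUP BY s.name, t.name ORDER BY cnt DESC";

            public static void main(String[] args) throws Exception {
                // Placeholder connection string.
                String url = "jdbc:sqlserver://dbhost;databaseName=MyVldb;integratedSecurity=true";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(ROWCOUNT_SQL)) {
                    while (rs.next()) {
                        long rows = rs.getLong("cnt");
                        String table = "[" + rs.getString("sch") + "].[" + rs.getString("tbl") + "]";
                        // Illustrative tiers: small tables get FULLSCAN, huge ones get sampled.
                        String with = rows < 1000000L ? "WITH FULLSCAN"
                                    : rows < 100000000L ? "WITH SAMPLE 25 PERCENT"
                                    : "WITH SAMPLE 5 PERCENT";
                        System.out.println("UPDATE STATISTICS " + table + " " + with + ";");
                    }
                }
            }
        }

    On later builds (2008 R2 SP2 and up), the same idea can be extended with sys.dm_db_stats_properties to find stale or rarely touched statistics objects, which gets at the "which ones can go" half of the problem.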

  • Best practices for implementing OIM

    We plan on putting the OIM servers behind a hardware load balancer (LB). When I develop an OIM client, I am required to specify the OIM endpoint(s) via the property java.naming.provider.url. In the case of an LB, I'd specify a virtual host there. The question is what the best practice is for configuring the LB - timeout, persistence, monitoring? I don't think the LB vendor is relevant, but just in case, I have a choice of F5 BigIP and Citrix NetScaler.
    My understanding is that the Java class tcUtilityFactory is supposed to be instantiated once (in a web client) and maintain the connection, but the LB will close the connection after the timeout is exceeded. So the other question is: if I want to use an LB, do I have to take care of rebuilding the connection when it expires, or open and close a connection every time tcUtilityFactory is needed? Any advice will be appreciated.
    Thanks,
    Alex

    No, I was not going to sync timeouts - just let the LB close connections after, say, 5 minutes of inactivity. The reason is that the performance numbers are horrible: from my desktop environment, initialization takes almost 9 seconds, while reading data from OIM takes only 150 milliseconds. I can't afford more than 0.5 seconds for the whole OIM operation, as we are talking about customer experience.
    Thanks,
    Alex
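
    One pattern for living with the LB idle timeout is to cache the factory once and rebuild it lazily when a call fails after the connection has been dropped, so the ~9 second initialization is paid only on the first call and after idle periods. A sketch follows; the constructor arguments mirror the usual client bootstrap (an environment Hashtable carrying java.naming.provider.url pointed at the LB VIP), but treat the details and the retry policy as assumptions to verify against your release:

        import java.util.Hashtable;

        import Thor.API.tcUtilityFactory;

        public class OimConnectionHolder {

            private final Hashtable env;      // includes java.naming.provider.url -> LB VIP
            private final String user;
            private final String password;
            private tcUtilityFactory factory; // cached between calls

            public OimConnectionHolder(Hashtable env, String user, String password) {
                this.env = env;
                this.user = user;
                this.password = password;
            }

            // Returns a utility handle, rebuilding the factory once on failure.
            public synchronized Object getUtility(String intfName) throws Exception {
                if (factory == null) {
                    factory = new tcUtilityFactory(env, user, password); // the expensive step
                }
                try {
                    return factory.getUtility(intfName);
                } catch (Exception e) {
                    // Assume the LB closed the idle connection: rebuild and retry once.
                    close();
                    factory = new tcUtilityFactory(env, user, password);
                    return factory.getUtility(intfName);
                }
            }

            public synchronized void close() {
                if (factory != null) {
                    try { factory.close(); } catch (Exception ignored) { }
                    factory = null;
                }
            }
        }

    With something like this in place, the LB timeout mostly stops mattering: a dropped connection costs one failed call plus one re-initialization, rather than a failure surfaced to the user.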

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles, on the other hand, with System Update, are not so forgiving, especially if you have agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can.
    b) Put our software deploys (bundles) on event: user login. Typically the system update is on Device Refresh or a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL plus the system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well for us.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin and user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines should be able to connect to the network, peripherals, and an external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best Practice for Deleted AD Users

    In our environment we are not using AD groups; users are being added individually. We are running the User Profile Service, but I am aware that when a user is deleted in AD, they stay in the content database's UserInfo table so that some metadata (created by/modified by/etc.) can be retained.
    What are best practices for whether or not to get rid of them from the content database(s)?
    What do some of you consultants/admins out there do about this? It was brought up as a concern to me that these users still show up in some list permissions, the people picker, etc.
    Thank you!

    Personally, I would keep them to maintain metadata consistency (Created By, etc., as you say). I've not had it raised as a concern anywhere I've worked.
    However, there are heaps of resources online for deleting such users (even in bulk via PowerShell). As such, I am unaware of deleting them causing major problems.
    w: http://www.the-north.com/sharepoint | t: @JMcAllisterCH | YouTube: http://www.youtube.com/user/JamieMcAllisterMVP

  • Best Practice for Extracting a Single Value from Oracle Table

    I'm using Oracle Database 11g Release 11.2.0.3.0.
    I'd like to know the best practice for doing something like this in a PL/SQL block:
    DECLARE
        v_student_id    student.student_id%TYPE;
    BEGIN
        SELECT  student_id
        INTO    v_student_id
        FROM    student
        WHERE   last_name = 'Smith'
        AND     ROWNUM = 1;
    END;
    Of course, the problem here is that when there is no hit, the NO_DATA_FOUND exception is raised, which halts execution.  So what if I want to continue in spite of the exception?
    Yes, I could create a nested block with EXCEPTION section, etc., but that seems clunky for what seems to be a very simple task.
    I've also seen this handled like this:
    DECLARE
        v_student_id    student.student_id%TYPE;
        CURSOR c_student_id IS
            SELECT  student_id
            FROM    student
            WHERE   last_name = 'Smith'
            AND     ROWNUM = 1;
    BEGIN
        OPEN c_student_id;
        FETCH c_student_id INTO v_student_id;
        IF c_student_id%NOTFOUND THEN
            DBMS_OUTPUT.PUT_LINE('not found');
        ELSE
            (do stuff)
        END IF;
        CLOSE c_student_id;   
    END;
    But this still seems like killing an ant with a sledge hammer.
    What's the best way?
    Thanks for any help you can give.
    Wayne

    Do not design in order to avoid exceptions. Do not code in order to avoid exceptions.
    Exceptions are good. Damn good. As it allows you to catch an unexpected process branch, where execution did not go as planned and coded.
    Trying to avoid exceptions is just plain bloody stupid.
    As for your specific problem: when the SQL fails to find a row and a value to return, what then? This is unexpected - if you did not want a value, you would not have coded the SQL to find one. So the SQL not finding a value is an exception to what you intend with your code, and you need to decide what to do with that exception.
    How to implement it. The #1 rule in software engineering - modularisation.
    E.g.
    create or replace function FindSomething( name varchar2 ) return foo.col1%type is
      id foo.col1%type;
    begin
      select col1 into id from foo where col2 = upper(name);
      return( id );
    exception when NO_DATA_FOUND then
      -- no matching row: the caller gets a null instead of an unhandled exception
      return( null );
    end;
    And that is your problem: modularisation. You are not considering it.
    And that's not the only problem, mind you. It seems like your keyboard has a stuck Caps Lock key. Writing code in all uppercase is just as bloody silly as trying to avoid exceptions.

  • What are the best practices for generating an EPS logo from InDesign?

    Our customer is running into technical issues with the logo we sent them, which was exported from InDesign. Images were not embedded and fonts were missing. I was able to embed the images and fonts. However, we DO NOT want them to be able to make any text changes, so after exporting an EPS I opened the file in Adobe Illustrator and converted all the text to outlines. I hope this works, but I just wanted to ask what the best practices are for doing this.
    The client needs the logo with a transparent background, images embedded, and type in outlines. Also, they need some space around the text; when I exported the EPS, the bounding box was right up on the edge of the type.

    It sounds like you are pretty far from "best practice" with regard to logo design and delivery.
    These days, the very use of the EPS format should be considered bad practice, and some other terms in your post, (i.e., 'images,' 'missing fonts'), make it sound like there is not a seasoned logo designer involved.
    That said, you probably already got the advice you need to get out of the immediate jam. However, without proper logo design, you and the client will soon be facing other problems. You should be delivering a 100% vector graphic, in single-color (black) and corporate-color(s) versions, with no live font data, that has been test-scaled to very small and very large sizes, ensuring it will work at postage-stamp size and on the side of a truck or building, with specific spot color(s) and proportions that will enable it to be offset printed, embroidered and screen-printed on apparel, and cut into signage materials and decals.

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get it updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best practice for Active Directory User Templates regarding Distribution Lists

    Hello All
    I am looking to implement Active Directory user templates for each department in the company to make the process of creating user accounts for new employees easier. Currently, when a user is created, an existing user's Active Directory account is copied, but this has led to problems with new employees being added to groups of which they should not be a part.
    I have attempted to implement this in the past but ran into an issue regarding distribution lists. I would like to set up template users with all the group memberships that are needed for the department, including distribution lists. Previously I set this up but received complaints from users who would send e-mail to distribution lists the template accounts were members of: when sending e-mail to such a list, users received an error because the template account does not have an e-mail address.
    What is the best practice regarding template user accounts as it pertains to distribution lists? It seems like I will have to create a mailbox for each template user, but I can't help but feel there is a better way to avoid this problem. If a mailbox is created for each template user, it will prevent the error messages users were receiving, but messages will simply build up in these mailboxes. I could set a rule for each one that deletes messages, but again I feel there is a better way which I haven't thought of.
    Has anyone come up with a better method of doing this?
    Thank you

    You can just add an arbitrary e-mail address (not a mailbox) to all your templates; that should solve the problem with errors when sending to the distribution lists.
    If you want to further simplify your user creation process, you can have a look at Adaxes (note that it's a third-party app). If you want to use templates, it gives you a slightly better way to do that (http://www.adaxes.com/tutorials_WebInterfaceCustomization_AllowUsingTemplatesForUserCreation.htm),
    and it can also automatically perform tasks such as mailbox creation for newly created users (http://www.adaxes.com/tutorials_AutomatingDailyTasks_AutomateExchangeMailboxesCreationForNewUsers.htm).
    Alternatively, you can abandon templates altogether and use customizable condition-based rules to automatically perform all the needed tasks on user creation, such as OU allocation, group membership assignment, mailbox creation, home folder creation, etc., based on the factors you predefine for them.

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way of doing this using classes/methods rather than RH_INSERT_INFTY?
    If so, please supply an example.
    Thanks...
    ....Mike


  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I just install the latest? I'd appreciate being directed to any best-practices discussions.
    FYI, I will want to install Rosetta for older applications (CS3 & CS4) that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.

  • Best practice for update/insert on existing/non existing rows

    Hello,
    In an application I want to check whether a row exists; if it does I want to update it, and if not I want to insert it.
    Currently I have something like this:
    begin
        select *
        into v_ps_abcana
        from ps_abcana
        where typ = p_typ
        and bsw_nr = p_bsw_nr
        and datum = v_akt_date
        for update;
    exception
        when no_data_found then
            v_update := false;
        when others then
            raise e_error_return;
    end;
    if v_update = false
    then
        /* insert new row */
    else
        /* update locked row */
    end if;
    The problem is that the FOR UPDATE lock has no effect for inserts. So if another session executes this part at exactly the same time, then two rows will be inserted.
    What is the best way to avoid this?

    For me, the first solution is the most efficient one.
    In your second solution, it seems to me that you're going to create a dummy table that will serve as a traffic cop. Well, that's possible, but it's not the proper and clean approach for your requirement; you're absolutely complicating your life where in fact Oracle can do it all for you.
    The first thing you have to consider is your database design. It should correspond to the business rules; don't just handle this at the program level, leaving the database vulnerable to data integrity issues such as direct data access. In your particular example, there's no way you can assure that there will be no duplicate records in the table.
    Just one piece of advice when designing a solution: don't use a "Mickey Mouse" approach in your design!
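
    To make that concrete: with a unique constraint on (typ, bsw_nr, datum) in place, a single atomic MERGE replaces the whole select-for-update / insert-or-update dance. Two concurrent sessions can no longer both insert; the second either takes the UPDATE branch or fails the constraint and can retry. A rough JDBC sketch against the table from the post follows - the value column "wert" and the bind values are hypothetical placeholders:

        import java.sql.Connection;
        import java.sql.Date;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class PsAbcanaUpsert {

            // One statement does the existence check and the insert/update.
            private static final String MERGE_SQL =
                "MERGE INTO ps_abcana t " +
                "USING (SELECT ? AS typ, ? AS bsw_nr, ? AS datum FROM dual) s " +
                "ON (t.typ = s.typ AND t.bsw_nr = s.bsw_nr AND t.datum = s.datum) " +
                "WHEN MATCHED THEN UPDATE SET t.wert = ? " +
                "WHEN NOT MATCHED THEN INSERT (typ, bsw_nr, datum, wert) " +
                "VALUES (s.typ, s.bsw_nr, s.datum, ?)";

            public static void upsert(Connection con, String typ, String bswNr,
                                      Date datum, String wert) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(MERGE_SQL)) {
                    ps.setString(1, typ);
                    ps.setString(2, bswNr);
                    ps.setDate(3, datum);
                    ps.setString(4, wert);  // MATCHED branch
                    ps.setString(5, wert);  // NOT MATCHED branch
                    ps.executeUpdate();
                }
            }
        }

    The same MERGE can of course be issued from a PL/SQL block instead; the point is that the constraint plus one atomic statement, rather than application logic, closes the race.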

  • Best Practice for update to iPhone and iTouch

    OK, when 3.0 comes down the pike, what is the best way to get 3.0 as a "clean" install? Currently 2.2.1 is on both. If I do a restore, will the system only pick up 3.0, or will it see 2.2.1, which is currently on the hard drive? With that in mind, how can I delete the 2.2.1 version of the iPhone and iTouch software? Sorry for two questions in one post.
    Steve H

    When firmware update 2.0 was released, the entire iPhone was erased first, including the existing firmware - just as when restoring an iPhone with iTunes - followed by 2.0 being installed, which was followed by the iPhone's backup being transferred to the iPhone.
    The same may apply with firmware update 3.0, with your iPhone's backup being updated immediately before. If not, firmware version 2.2.1 will be updated to 3.0.
    If 2.2.1 is updated and you want a "clean" install of 3.0, you can follow the initial upgrade by restoring your iPhone with iTunes.
