Controlling the Provisioning Tasks before EventHandler

In our OIM environment we have a post-process create event handler and a post-process update event handler. The order for both of these event handlers is LAST.
During creation of a user, my custom code triggers first, and after that the policies are evaluated and the resources provisioned. During an update of a user, however, the policy evaluation triggers first and my update code runs afterwards; because of this, my trusted recon takes a long time to complete during updates.
Is there a way to have my post-process update code execute first, and only then the policy evaluation that provisions the resources, during updates?
The order of access policy evaluation is 6.
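One approach discussed for OIM 11g is to register the update handler with an explicit numeric `order` instead of `LAST`, since handlers at the same stage run in ascending order and `LAST` sorts after every numbered handler. Whether a custom handler can reliably be placed before the access policy evaluation (order 6 in this environment) depends on the OIM version, so treat this EventHandlers.xml fragment as a sketch; the class and handler name are placeholders, not from this thread:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<eventhandlers xmlns="http://www.oracle.com/schema/oim/platform/kernel">
  <!-- Sketch only: class/name are hypothetical; order="5" is an assumption
       chosen to fall before the access policy evaluation handler (order 6 here). -->
  <action-handler class="com.example.oim.MyPostProcessUpdateHandler"
                  entity-type="User"
                  operation="MODIFY"
                  name="MyPostProcessUpdateHandler"
                  stage="postprocess"
                  order="5"
                  sync="TRUE"/>
</eventhandlers>
```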

Hello,
I was curious about the Neds thing... I've been searching the net for any clue as to the original author. I found the problem in a book entitled "Great Ideas in Computer Science with Java". I wanted to find the original source code for the Neds evolution simulation, but couldn't, so using the little bit of information introduced in the book by its authors (not the student, Joshua Carter, who wrote the original program), I wrote my own simulation called "Neds." I am curious to see how others managed to do it. Would you care to exchange ideas? Or did you perhaps happen to find the original author's source code or notes?
Thomas J. Clancy

Similar Messages

  • How to limit provisioning task to a batch of entries from the queue

    Hi Folks,
    In our IDM system we have 60,000 entries sitting in the provisioning queue for a single repository and a single task. The provisioning task handles this number of entries very slowly, running for days.
    I'm wondering if it's possible to somehow limit the provisioning task to a certain batch of entries from the queue. Say, to force the provisioning task to handle the top 1000 entries at once and then finish. That way it would start multiple times to process the 60,000 entries, but each iteration of the task would handle no more than 1000 entries.
    I have the idea of overriding the default view MXPROV_ENTRIES for the task so that it returns the top 1000 entries only. Do you think this will work?
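    The view-override idea could be sketched as follows, assuming SQL Server syntax and that the real view body is copied from your schema first; the column list and base table here are placeholders, not the actual MXPROV_ENTRIES definition:

    ```sql
    -- Sketch only: replace the SELECT list and FROM clause with the original
    -- MXPROV_ENTRIES view definition from your IdM database.
    ALTER VIEW MXPROV_ENTRIES AS
    SELECT TOP 1000
           MSKEY, ACTIONID, REPOSITORYID, STATE
    FROM   MXP_PROVISION
    ORDER BY MSKEY;
    ```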
    Any other ideas please welcome.
    Thank you a lot!
    Siarhei

    Hi,
    You don't mention what kind of task it is queued for, or perhaps it's even an action? Or the IdM version for that matter. I'm not sure how changing the views would change anything though.
    In case it's a task (ordered, condt., switch etc) the dispatcher has for the latest versions been using the views mxpv_grouptasks_ordered/switch/conditional which already have 1000 row limiters in them.
    If it's an action then the runtime should not be much affected by the size or really care how many are in the mxprov_entries view. It might take a few seconds to get the entire list og 60.000 rows if it doesn't add it's own limitation in addition to the action/repid filter. If they're all for the same actionid/repository combination it will just works through it one entry at a time untill it's done and each entry is enclosed in its own transaction so at this point there would be no effect of having a top 1000 or not It's also not possible to split up the queue action/repository combination between multiple runtimes either.
    Br,
    Per Christian

  • Provisioning tasks not getting initiated when done in bulk

    Hello IDM Gurus,
    Needed your help with an issue we're currently facing. We're having an odd problem with provisioning/deprovisioning to our ABAP repositories. For each repository we are using the Add Member / Remove Member tasks. For all repositories, both the Add Member and Remove Member event tasks trigger a similar task that, by means of a script, checks whether a user already has privileges within the target repository, and then accordingly either adds the new privilege to the existing account or creates a new account and adds the new privilege. After the initial check is made, the decision on whether to add the privilege to the existing account or create a new one is made through a uProvision call from the script itself to the appropriate provisioning task for the specific repository in question; the check for whether the account exists or not is done within the provisioning task itself. The same process is followed for deprovisioning. An example of how this works:
    JohnDoe has no account in Repository A;
    Privilege X (associated with repository A) is added to his account;
    The script is called and a check is made; the provisioning task for repository A is called;
    The provisioning task checks and sees that JohnDoe doesn't have an account in repository A, so an account is created and Privilege X is added to the new account.
    After this, we add two new privileges Y and Z(both associated with repository A) to JohnDoe
    The script is called and a check is made; the provisioning task for repository A is called;
    The provisioning task checks and sees that JohnDoe has an account in repository A, so the two new privileges are simply added to the existing account.
    This all works perfectly as long as we only work with one repository at a time; i.e., only add and remove privileges from one repository, make all changes related to privileges for that repository, hit update, then do the same again for another repository. Whenever we make multiple changes across multiple repositories, random things start happening: some changes go across in full, but some just don't, and there's no discernible pattern to which changes happen and which don't.
    Does this have something to do with working with just one dispatcher? Is it not able to handle that many changes at once? I tried using privilege/assignment grouping for each repository, grouping by repository name, as that should inherently group the add and remove tasks, but even that didn't have any effect. Privilege changes were still going missing.
    Any suggestions / ideas to rectify this behavior?
    I would appreciate any help with the issue! Thanks in advance!
    Best regards,
    Sandeep

    Hey Matt,
    Thanks a lot for your quick response! I tried changing the number of runtime engines from the default of 1 to 4, but it had no effect; I added 3 roles for 3 systems, but only one system got an account provisioned. Is 4 not enough? Should I try a higher value? Is the uProvision script not supposed to be called or used in that fashion for multiple simultaneous calls?
    If looking at the backend to resolve this, would I need to only be looking at the MXP_PROVISION and MXP_AUDIT tables?
    Thanks a lot in advance!
    Best regards,
    Sandeep

  • Retrying provisioning Task error in OIM11g

    Hi to all.
    I'm facing a strange issue when trying to retry a provisioning task. In particular, I have connected OIM 11g to a UNIX server. When the provisioning task is rejected (I'm trying to provision an already provisioned user login, causing an SSH_USER_USEREXISTS_FAIL error), the provisioning task correctly goes into "Rejected" status. With the same user that tried to finalize the provisioning, I try to "Retry the provisioning task". But this retry fails. It tells me "Unable to retry task", and the OIM log says "<XELLERATE.APIS> <BEA-000000> <The logged in user does not have permissions to perform this operation>". But the user is the same one that I used to finalize the provisioning task.
    Could anyone help me? I've already tried assigning the user all the privileges needed to complete the retry of the task, but this did not solve my issue.
    Thank you all in advance.
    Regards,
    Giuseppe.

    Hi All,
    I have the same requirement, so I created a custom role and added the above Data Object permissions to the role. In the organization I added the custom role to the Administrative Roles list with write access, but I am still getting the error 'Logged In user does not have permissions'. I added all the permissions present in Data Object Permissions for this role, but I still get the same error. When I add the System Administrator role as a super role to my custom role, the user who belongs to the custom role is able to retry the task, but I can't give this permission to the role. So please let me know if I missed anything.
    Thanks,
    Rajesh

  • No email notification from manual provisioning task - 11gR2

    I have a disconnected application instance and I am using the standard DisconnectedProvisioning flow for provisioning - modified to assign the provisioning task to a group, using a rule which determines the group via the app. instance name.
    The flow itself works as expected and the Manual Task in the flow is assigned to the correct group - the task shows up in the Inbox of the member of the group in the Self Service console
    I would like to send an email to the user when the task is assigned and I have therefore configured SMTP Notification according to this http://allidm.com/blog/2012/11/configuring-smtp-notifications-in-oracle-identity-manager-11gr2/
    I have confirmed that the SMTP server works as expected - I am able to send and receive message using this server.
    On the Notification Tab of my Manual Task I have configured:
    General:
    Status: Assign / Recipient: Assignees
    Status: Complete / Recipient: Initiator
    Status: Error / Recipient: Owner
    Advanced:
    Remind Once: 0 days, 0 hours, 1 minute "After Assignment"
    Encoding: UTF-8
    Make Notifications Secure: False
    Show worklist/workspace URL in notification: True
    Make notification actionable: True
    Send Task attachment: False
    Group Notification Configuration: "Send individual emails" 
    I have redeployed my DisconnectedProvisioning flow to my SOA server a couple of times - remembering not to change the version number and to force overwriting the existing flow.
    My SOA and IDM servers have been restarted as well.
    When requesting the disconnected application instance using the catalog, the manual provisioning task is assigned to the correct user; however, he does not receive any email, nor do I see any errors or stack traces from my SOA or IDM servers.
    There must be some check box somewhere that I forgot to tick!
    Does anybody have an idea - did I forget to configure something?
    Kind regards,
    - Tom

    Ok - so I did in fact forget to tick a check box!
    After setting up UMS using this: http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/notification.htm#CACCEDGF and enabling notifications via:
    Enterprise Manager, SOA, soa-infra, SOA Infrastructure drop-down, SOA Administration, Workflow Config:
    Notification Mode: Email
    After restarting my SOA and OIM managed servers - I now receive emails as expected.
    Case closed :-)
    Kind regards,
    - Tom

  • Can I control the spacing before and after a signature?

    When using the Mail app, can I control the spacing before and after a signature? It seems like the app adds space, but it doesn't add the same amount every time.
    Thanks.
    Greg

    The only way I can think of to maintain equal spacing between text of varying lengths and the vertical lines is to put each icon/text pair in its own frame fitted to content, make each vertical line a separate object, then select them all and distribute the spacing.

  • Control mult access to the Human Task

    Hi All,
    I have a human task in the BPM process which is assign to the role of the user. I've noticed that the task can be accessed by multiple users at the same time within the BPM workspace task list.
    How can I ensure that when one person has opened the human task, nobody else can open it? Is there a way to set the concurrency level for a BPM human task?
    thanks in advance.

    Guessing you might already know this, but the human tasks have an auto-claim property that is turned on by default. The way this works is that if more than one end user selects a task at the same time, the first one to submit the task will be the one that gets committed. While I'd agree that it's not ideal, the second end user (who also had the form displayed for the task when they simultaneously selected the same task) will not be able to submit the task. The second end user sees this rather cryptic message when they try to submit the form:
    >
    "Insufficient privileges to access the task information for this task. User 9616d414-d4bc-4573-9d39-cc100fda7391 cannot access the task information for task: {1}. Ensure that the user has been granted appropriate privileges to access the task information for this task."
    >
    This doesn't sound like the route you want to go either, but from the Workspace - end users can "Claim" a task from the Actions dropdown. Claim is useful when an end user is working on a work item instance and they want to make sure that no one else assigned also tries to work on the same work item instance. End users do not have to claim tasks in the Workspace, but it's useful to keep another end user from working on (or even seeing in their queue) the same work item instance that they had already started working on.
    Once claimed, a work item instance can only be viewed and worked on by the user who claimed it and anyone assigned to the Process Owner role (it is shown in the Process Owner's "My Staff Tasks" tab - not in their "My Tasks" tab). The instance could be returned back into the general queue by clicking the Actions dropdown and clicking Release.
    Dan
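    The auto-claim behavior described above (the first user to submit wins, a later submit by another user is rejected, and an explicit Claim locks the task to one user) can be modeled as a small sketch. The class and method names here are invented for illustration; they are not the Oracle BPM workflow API:

    ```python
    class HumanTask:
        """Toy model of first-submit-wins plus explicit claim/release."""

        def __init__(self):
            self.claimed_by = None    # set by an explicit Claim action
            self.completed_by = None  # set by the first successful submit

        def claim(self, user):
            # Claim locks the work item to one user until released.
            if self.claimed_by and self.claimed_by != user:
                raise PermissionError(f"{user}: task already claimed by {self.claimed_by}")
            self.claimed_by = user

        def release(self, user):
            # Release returns the work item to the general queue.
            if self.claimed_by == user:
                self.claimed_by = None

        def submit(self, user):
            # First successful submit completes the task; later submits fail.
            if self.completed_by:
                raise PermissionError(f"{user}: insufficient privileges, task already completed")
            if self.claimed_by and self.claimed_by != user:
                raise PermissionError(f"{user}: task is claimed by {self.claimed_by}")
            self.completed_by = user

    task = HumanTask()
    task.submit("alice")       # first submitter wins
    try:
        task.submit("bob")     # second submitter gets the privilege error
    except PermissionError as e:
        print(e)
    ```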

  • What was the exact functionality of "Device Control/Status.vi" (version 5)? Is there any vi in Labview 6.1 which performs the same tasks?

    I have a VI developed in LabVIEW 5.1 and I want to upgrade it to LabVIEW 6.1. So I must replace "Device Control/Status.vi" with a newer one, but I do not know which VI performs the same tasks in v6.1.

    The Device Control/Status.vi is included with LabVIEW 6.1 as part of the serial compatibility VIs. You can find it by opening up and looking at
    Instrument I/O -> I/O Compatibility -> Serial Compatibility -> Bytes At Serial Port.vi
    Also, if you open up the VI found in
    vi.lib/platform/_sersup.llb/serial line ctrl.vi
    it will expose the functionality of Device Control/Status
    Thanks,

  • Can I dynamically control the position of objects on a printed page?

    Forum:
    Oracle Developer Reports Builder (10g)
    I have a need to control the position of printing on a peel-off label in my report. I am currently forced to represent a complex object three times, i.e. LEFT, RIGHT and CENTER, printing whichever is configured in the PRINTER_INFO table of my schema. I am printing to over 200 printers and they all print the location differently.
    Also, since printing through DESTYPE=PRINTER vs. printing from a displayed PDF gives different results, I would have to represent these three objects yet again, slightly lower and to the right, to support label printing from a PDF.
    I have seen that a Word macro can dynamically position this same object by using xy coordinates.
    Does anyone know how to do something like this in Oracle Reports?
    Thx in advance,
    Gary
    [email protected]

    Dora,
    Yes, that is exactly what I am doing. As I mentioned, the label object has many elements and they all sit on top of each other when doing the layout. This by itself is messy to work with.
    This has been workable so far with regard to generating output to the several hundred printers of different models and firmware.
    The output is rarely if ever centered in both x & y axis. We have only been able to use in production by relaxing the rules so that the text only need be within the boundaries of the label.
    I also found that to support printing from the PDF view that the text positioning is well out of the label boundaries. I would be forced to create an additional three complex objects in the 'Y' axis. Needless to say this is quite messy.
    I do appreciate your input, but your recommendation does not meet the requirements of my project.
    I don't believe that Oracle Reports can do this task. I am thinking that BI Publisher can, and am doing "due diligence" before recommending a new reporting platform.
    Thx again.
    Gary

  • Changing Duration, Start/Finish, %comp all result in MS-Project warning that task(s) [normally the last task in the project plan] is being moved to start Sat Jan1, 84...

    Running MS-Project 2007 12.0.4518.1014 MSO (12.0.6607.1000) non-server edition.  The Project was created on July 17, 12.  It has gone through 162 revisions w/o presenting this warning.  The date range is 7/17/12 thru 5/3/13.  The task
    I'm changing (#16) shows a Start of 8/17/12 and Finish of 1/14/13; it has no Predecessors or Successors.  I'm changing the %comp to 100% and I get the warning message "You moved task 264 of 'project name' to start Sat Jan 1, '84. This is before
    the project start date (Tue Jul 17, '12)".  I have tried changing other tasks with the same resulting warning message.  I have tried breaking the Predecessor link to the last task only to be told of another task being moved to Jan 1, '84. 
    If I accept the move, 51 of the 264 rows (both completed and future tasks) are moved to start on 1/1/84, with every one flagged as a Milestone; the top summary task [Outline Seq 1] Start and Finish change from 7/17/12-5/3/13 to 8/17/12-2/19/13.
    Any thoughts as to what is happening behind the scenes to cause this warning and subsequent corruption of the plan?

    I will need to check w/FLS to determine; they control patching and releases. As for your other questions: fixed start; every task is ASAP, constrained by task sequencing only. I've never asked MS-Project to resource load level. Calculations
    are Automatic. There are three tasks that have an SF relationship to a specific event, such that their start is backward-scheduled from when the dependent task starts, depending upon the task sequences that drive it. I have used this approach on many
    other projects and have never run into this issue.
    I agree that it sounds like a file corruption--just looking to nail down which event might have caused the corruption so that I can restore to a prior version and rebuild [obviously HOPING to eliminate the sequence that caused the file to be corrupted in
    the first place].

  • How to control the force return in table cell data?

    I have some XML-format files. When I import them into FrameMaker, they display as table data. But when the data in a table cell is very long, I want to control the line breaks myself. For example, I add some \r\n in my XML file data, expecting FrameMaker to recognize the \r\n as a forced return. I don't know the actual symbol in FrameMaker that means a newline. How can I deal with this problem? Thank you!

    Hi Russ,
    yes, but you have to agree that forcing a return in the SOURCE content is really not a wise thing to do. It would be better to break the content into multiple paragraphs, or use an XSLT to determine the column width and insert your own breaks in a version of the XML for rendering in Frame. If, at a later date, your templates in Frame change to allow wider columns in your table, then you'd have to go back into the source and change every occurrence of the c/r in the data. Yeuch! Better to transform the data once, before importing into Frame; then if the column width changes it is a simple task to change the width in the XSLT. Personally, I would make sure the EDD and DTD allow multiple lines in the table cell and then break up the data to fit the table cell size in an XSLT before importing. Then you don't taint your source content, and it is quite easy to do this in an XSLT.
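    The transform-before-import approach can be sketched in XSLT 1.0. The element names `cell` and `line` and the 40-character width are assumptions for illustration; the real names and break points would follow your EDD/DTD:

    ```xml
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Hypothetical element names: 'cell' in the source, 'line' in the output. -->
      <xsl:template match="cell">
        <cell>
          <xsl:call-template name="wrap">
            <xsl:with-param name="text" select="normalize-space(.)"/>
            <xsl:with-param name="width" select="40"/>
          </xsl:call-template>
        </cell>
      </xsl:template>
      <!-- Recursively emit one 'line' element per chunk of up to $width characters. -->
      <xsl:template name="wrap">
        <xsl:param name="text"/>
        <xsl:param name="width"/>
        <xsl:choose>
          <xsl:when test="string-length($text) &lt;= $width">
            <line><xsl:value-of select="$text"/></line>
          </xsl:when>
          <xsl:otherwise>
            <line><xsl:value-of select="substring($text, 1, $width)"/></line>
            <xsl:call-template name="wrap">
              <xsl:with-param name="text" select="substring($text, $width + 1)"/>
              <xsl:with-param name="width" select="$width"/>
            </xsl:call-template>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>
    </xsl:stylesheet>
    ```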

  • How to know the current Task/Job ID in which a request is waiting for

    Hi Team,
    Is there any table/view in IdM 7.2 that provides the current task/job ID a request is waiting on?
    Even though the last completed phase of the request can be seen from the admin UI, it only shows the description of the completed steps, not technical details like the task ID/job ID of the current step.
    Knowing the current step/job ID would help the admin a lot to go directly to that step when the request has been waiting for a long time, has failed, etc.
    Regards,
    Venkata Bavirisetty

    Hi Matt,
    Thanks for your response.
    I read a few articles on your blog a couple of weeks ago, and they are very informative.
    The information provided in the provisioning queue just shows the task ID and the number of requests (without request numbers) waiting in that task.
    Basically, the question I mentioned in my previous post is not related to any specific task type. It is: given a request number, we want to know the complete request history, especially the current task (task ID) at which the request is waiting.
    Regards,
    Venkata Bavirisetty
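    A query sketch for finding the pending step of one request: the table and column names (MXP_PROVISION, MXP_AUDIT, mentioned elsewhere in these threads) are recalled from typical IdM 7.2 installs and must be verified against your schema before use:

    ```sql
    -- Sketch only: shows pending provisioning entries for one audit (request).
    -- Verify table and column names against your IdM 7.2 database.
    SELECT p.MSKEY,
           p.ACTIONID AS current_task_id,
           p.STATE,
           a.AUDITID
    FROM   MXP_PROVISION p
           JOIN MXP_AUDIT a ON a.AUDITID = p.AUDITID
    WHERE  a.AUDITID = 12345;  -- the request/audit number you are tracing
    ```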

  • How do I control the sampling rate of a PCI-6602 counter

    Labview version: 7.1
    Processor: Pentium 4 1.8 GHz
    Hello All,
    I have two avalanche photo diodes connected to two counter entrances on a 6602 (ctr 0 and ctr 1).
    As can be seen in the attached VI, I'm generating a signal on ctr2 and using that as an external clock for the counters.
    My objective is to accurately read the counter values at a rate of ~100KHz (i.e. 0.01 ms between measurements) and to be able to control the rate via the front panel. This should be possible (80MHz timebase), however when I run the program it is only able to actually sample at ~5 ms intervals. Also I'm encountering error 200141 even though I set the rate in the counter to 1M, which should buffer enough.
    What am I doing wrong? Any help would be greatly appreciated.
    Thanks a lot,
    Attachments:
    FRET_ver_14.vi ‏456 KB

    Hello Aadam,
    After looking at your code, I would not use the DAQ Assistants to implement this. The DAQ Assistants are best used for simple acquisition, and since you are looking to use the lower-level properties of your card, I would suggest moving to the lower-level VIs. It will be easier to understand where the problem is occurring, and it will run better than using the DAQ Assistant.
    I have done some initial research for you to implement this design. I have described each example below and how it will assist you.
    The first example I found is Change Counter Output Frequency While the Task is Running. This example shows you how to change the frequency and duty cycle of a counter output frequency task. One thing to remember about these counters is that the frequency will not change until the current period has finished. For example, if you change the frequency in the middle of a period of a 1 Hz signal, it has to complete the 1 Hz period before changing to the next frequency.
    The next example I would investigate is in the NI Example Finder, called Correlated Dig Write With Counter.vi. The NI Example Finder can be found in Help » Find Examples... This VI explains how to use the counter as a clock for another task. You will need to combine the concepts of the first example with this one to make sure that you can vary the sample clock.
    Finally, in the NI Example Finder, there is an example of how to count digital events with an external clock, called Count Digital Events-Buffered-Continuous-Ext Clk.vi. With this example, I suggest first adding the counter task and making sure you can get the external clock to work. After that, I would implement changing the clock.
    Jim St
    National Instruments
    RF Product Support Engineer

  • How to Deploy the Scheduler Task OIM 11g

    Hi.
    I have deployed a scheduled task in OIM 11g and also configured the scheduled task in the OIM Admin Console, but the Java scheduler class is not invoked when I run the scheduled task.
    I have done the following configuration to develop and deploy the scheduler task in OIM.
    1) Developing the Java Class File.
    package edu.sfsu.oim11g.scheduler;
    import java.util.HashMap;
    import oracle.iam.scheduler.vo.TaskSupport;
    import edu.sfsu.oim11g.logger.SfsuLogger;
    public class SfsuTrustedSourceReconciliation extends TaskSupport {

         //private Logger logger = Logger.getLogger(SfsuTrustedSourceReconciliation.class);
         private SfsuLogger logger = new SfsuLogger("SFSU-LOGGER");
         private String className = this.getClass().getCanonicalName();
         private String methodName = "";

         public SfsuTrustedSourceReconciliation() {
              methodName = "SfsuTrustedSourceReconciliation";
              debug("SfsuTrustedSourceReconciliation() Called");
         }

         @Override
         public void execute(HashMap arg0) throws Exception {
              methodName = "execute";
              debug("SfsuTrustedSourceReconciliation Arguments " + arg0);
         }

         @Override
         public HashMap getAttributes() {
              return null;
         }

         @Override
         public void setAttributes() {
         }

         private void debug(Object message) {
              logger.info(className + " : " + methodName + " : " + message);
         }
    }
    2) Custom Scheduler XML File
    This file is located in
    /home/oracle/confg_files/schedulers/db/SfsuTrustedSourceReconciliation.xml
    <scheduledTasks xmlns="http://xmlns.oracle.com/oim/scheduler">
         <task>
         <name>SfsuTrustedSourceReconciliation</name>
              <class>edu.sfsu.oim11g.scheduler.SfsuTrustedSourceReconciliation</class>
              <description>Reconciliation IDSync Data</description>
              <retry>5</retry>
              <parameters>
                   <string-param required="true" encrypted="false" helpText="Source Data Source">Source Resource Data Source Name</string-param>
                   <string-param required="true" encrypted="false" helpText="Config Data Source">Config Resource Data Source Name</string-param>
                    <string-param required="true" encrypted="false" helpText="Reconciliation Type is Full or Incremental. Default is Incremental">Reconciliation Type</string-param>
                   <number-param required="true" encrypted="false" helpText="Max Records" >Max Records</number-param>
              </parameters>
         </task>
    </scheduledTasks>
    3) Plugin File Configuration.
    <?xml version="1.0" encoding="UTF-8"?>
    <oimplugins>
    <plugins pluginpoint="oracle.iam.platform.kernel.spi.EventHandler">
    <plugin pluginclass="edu.sfsu.oim11g.eventhandlers.SfsuPostProcessEventHandler" version="1.0" name="SfsuPostprocessExtension"/>
    </plugins>
    <plugins pluginpoint="oracle.iam.scheduler.vo.TaskSupport">
    <plugin pluginclass="edu.sfsu.oim11g.scheduler.SfsuTrustedSourceReconciliation" version="1.0" name="SfsuTrustedSourceReconciliation"/>
    </plugins>
    </oimplugins>
    The event handler is successfully deployed and working fine without any issue.
    4) making the scheduler jar file.
    5) Making the scheduler zip with the following directory format.
    plugin.xml
    lib/scheduler.jar
    6) Registering the Schedule task and event handler as a Plugin.
    ant -f pluginregistration.xml register
    7) Importing the SfsuTrustedSourceReconciliation.xml file into the MDS.
    7.1) weblogic.properties
    wls_servername=oim_server1
    application_name=oim
    metadata_from_loc=/home/oracle/configfiles/schedulers
    7.2) Running the weblogicImportMetadata.sh file
    It imported the SfsuTrustedSourceReconciliation.xml file into the MDS Schema and it is available in the MDS_PATH table in MDS schema
    Path_Name : SfsuTrustedSourceReconciliation.xml
    PATH_FULL : /db/SfsuTrustedSourceReconciliation.xml
    8) Imported the EventHandlers.xml file into the MDS Schema.
    9) Run the PurgeCache.sh file
    10) Restarted the OIM Server.
    11) Logged into the OIM Admin Console and created a scheduler job based on the SfsuTrustedSourceReconciliation task listed in the scheduler.
    12) Ran the scheduler job; no entries are logged into the log file. My log file configuration:
    <log_handler name='sfsu-handler' level='FINEST' class='oracle.core.ojdl.logging.ODLHandlerFactory'>
    <property name='logreader:' value='off'/>
    <property name='path' value='/u01/app/wl-10.3.5.0/Oracle/Middleware/user_projects/domains/oim_domain/servers/oim_server1/logs/sfsu-connector.log'/>
    <property name='format' value='ODL-Text'/>
    <property name='useThreadName' value='true'/>
    <property name='locale' value='en'/>
    <property name='maxFileSize' value='5242880'/>
    <property name='maxLogSize' value='52428800'/>
    <property name='encoding' value='UTF-8'/>
    </log_handler>
    <logger name="SFSU-LOGGER" level="FINEST" useParentHandlers="false">
    <handler name="sfsu-handler"/>
    <handler name="console-handler"/>
    </logger>
    Is there any special configuration I need to do to invoke my scheduler Java class file?
    Help is Greatly Appreciated.

    It seems like you did it all right, but there is one piece you could modify and test. The plugin.xml has two artifacts, the event handler and the scheduler. Can you try creating a separate plugin.xml with just the scheduler, zipped up with /lib/scheduleClass.jar, and test that?
    Just deregister everything before trying it, and let us know how it goes.
    Also, the logger, as I see, is a custom logger; is it extending the OOTB Logger? Just put some sysouts in the code and check for those in the server out file to be sure.
    Edited by: bbagaria on Jul 22, 2011 5:15 PM
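    The split suggested above would look roughly like this scheduler-only plugin.xml, reusing the plugin point and class from the original post (the zip layout of plugin.xml plus lib/scheduler.jar stays as described in step 5):

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <oimplugins>
      <!-- Scheduler-only plugin.xml: the EventHandler plugin point from the
           original file is registered separately in its own zip. -->
      <plugins pluginpoint="oracle.iam.scheduler.vo.TaskSupport">
        <plugin pluginclass="edu.sfsu.oim11g.scheduler.SfsuTrustedSourceReconciliation"
                version="1.0"
                name="SfsuTrustedSourceReconciliation"/>
      </plugins>
    </oimplugins>
    ```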

  • How to control the enabling of resources?

    Hello,
    How can we control who can "Enable" resources?
    I have a group of users, and I don't want to allow this group to "Enable" resources for the users this group manages.
    Because we have an approval process for resources, I need to continue to allow this group to submit requests for these resources.
    Thanks
    Khanh
    Edited by: user12049102 on Dec 16, 2009 4:26 PM

    Well that could be achieved in two ways:
    *1* Using two approval processes. Do this as follows:
    1) Go to Rule Designer. Create a Process Determination Rule with following values:
    Type - Process Determination
    Sub-Type - Approval
    Object - 'Your Object Name'
    Process - 'Your process Name'
    Add a rule element with following values:
    Attribute Source - Request Information
    Attribute - Request Object Action
    Operation - '=='
    Attribute value - Enable
    2) Now create two approval processes, App Process1 and App Process2. Allow one process to create all normal requests.
    The other process checks whether the 'Requester' is from your particular group. If true, 'Reject' the request.
    3) Now go to your resource, say ABC Resource. Go to the Process Determination Rules tab. Under 'Approval Processes':
    Add this rule condition and your process accordingly.
    *2* Using a single task which checks the 'Request Object Action' attribute in the request for the 'Enable' action:
    Add a task in your approval workflow which is invoked before the other tasks. This task checks the value of the Request Object Action attribute, and if it is equal to 'Enable', with the requester being from your specified group, then 'Reject' the request.
    Hope it helps.
    Thanks
    Sunny
