Is it good practice?

Hi,
With OIM 11g, I am trying to implement two different UI requests for user creation:
a request for external users and a request for internal users.
For both of them, the UI displays only:
-Last name
-First name
-Birth Date
For the external user request, the email address is generated from the first name and last name and prefixed with an external marker.
For the internal user request, the email address is generated from the first name and last name with no prefix.
To do that, I would like to use the request API (Platform.getService(RequestService.class)) in an event handler to get the request template name and generate the email according to it.
Can I get the request id in the event handler?
Is this good practice?
Regards,
Pierre.

user1214565 wrote:
Thank you very much bbagaria,
Can I use different datasets for user creation, one for external users and one for internal? (I thought I could only modify and use the default dataset /metadata/iam-features-requestactions/model-data/CreateUserDataSet.xml for all creation requests.)
If yes, how? (I tried to import MyCreateInternalUserDataSet.xml but it didn't work)
With the default dataset, I expected to create two request templates, one for internal and one for external, and get the template name in a preprocess event handler.
Regards,
Pierre.

Pierre,
I would suggest that you just modify CreateUserDataSet.xml (do not rename it; import it back to the same location in MDS, over-writing it) to add an additional field for the type of user (hidden if you want), and use a prepopulate adapter to set the type to internal or external based on the template selection. The prepopulate adapter takes a RequestData object, which has a getRequestTemplateName() method. Alternatively, just populate the email based on the template selection.
I haven't tried this, but theoretically it seems that you can use this:
http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e17334/oracle/iam/request/vo/RequestData.html#getRequestTemplateName__
HTH,
BB
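To make the intended rule concrete, here is a minimal plain-Java sketch of the email-generation logic. It is not OIM API code: the template name, the "ext." prefix, and the example.com domain are assumptions for illustration; in a real preprocess event handler the template name would come from the RequestData object's getRequestTemplateName() method.

```java
// Sketch of the email-generation rule discussed above. Plain Java only:
// the template name, prefix, and domain below are illustrative assumptions,
// not OIM constants.
public class EmailGenerator {

    static final String EXTERNAL_TEMPLATE = "Create External User"; // assumed template name
    static final String EXTERNAL_PREFIX = "ext.";                   // assumed prefix
    static final String DOMAIN = "@example.com";                    // assumed mail domain

    /** Builds first.last@domain, prefixed when the external-user template was used. */
    public static String generateEmail(String templateName, String firstName, String lastName) {
        String base = (firstName + "." + lastName).toLowerCase() + DOMAIN;
        if (EXTERNAL_TEMPLATE.equals(templateName)) {
            return EXTERNAL_PREFIX + base;
        }
        return base;
    }
}
```

With this shape, the event handler only needs to branch on the template name; both request types can then share one dataset and one handler.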

Similar Messages

  • Good Practices and Code Conventions

    Hi All,
    What are good practices in Java? Are there coding conventions to follow?
    I would like to improve my coding standards and write better code. Please help me.
    Thanks
    Diana

    Diana12 wrote:
    Then how to make it excellent?

    Are you serious? Do you expect us to give you a 2-line summary of how to make "excellent" code?
    It takes some time to learn to write excellent code and it can't easily be summarized into 1-2 sentences. If it could, then it would be much easier to learn (that would be nice!).
    Experience, making mistakes, realizing they were mistakes, not making the same mistakes later on. Having your code criticized by someone who writes better code than you. That's how you learn to write excellent code.
    Write simple code, keep it clean, make each method do one thing, write code that doesn't need documentation, document where you must, adhere to the open-close principle, adhere to the KISS principle, adhere to the YAGNI principle, ...
    Grab [The Pragmatic Programmer|http://www.pragprog.com/the-pragmatic-programmer] and read it. Then read it again. Do some more exercises and read it a third time.

  • What is SAP "good practice" for testing procedures?

    Dear gurus, could you please share with me good practice for testing procedures? I am first-level FICO support and have been asked for this. Thanks in advance for your kindest help.

    Hi,
    Please use the links below for your reference:
    http://help.sap.com/erp2005_ehp_02/helpdata/en/2f/75ba3bd14a6a6ae10000000a114084/content.htm
    http://www.ebooksquad.com/search/SAPmoduletesting+procedures
    With Regards,
    Lolla.

  • What is the best practice to replicate the change of an object name in HANA AT/AN view where there are dependent views in it?

    Dear All HANA Studio Experts,
    I'm using SAP HANA Studio version 1.0.68 in our current landscape and am looking for an expert and intelligent solution to a problem we encountered recently.
    We have created multiple AT views and AN views, and created CA views on top of these AN views to cater to the different LOBs in our organization. We have also designed a BO universe on top of the CA views from HANA and finally created Webi reports from these universes.
    In one particular scenario, we have one AT view that is used by 10 different AN views.
    Now we had 3 Requirements like below:
    1. To add a new field in this AT view,
    2. To update an existing field in this AT view and
    3. To remove a field from this AT view which is not required anymore.
    Problems Encountered:
    Req 1. When we added a new field (e.g. XX_ADD) to the AT view and activated it, a warning was thrown that there are dependent objects on this AT view. No error was thrown, but it led to rework: checking that the new field is available in all the dependent views and redeploying them all.
    Question: How can we ensure that adding an object will not impact the rest of the dependent views?
    Req 2. When we renamed an existing field (e.g. XX_UPDATE_NEW) in the AT view and activated it, a warning was again thrown about dependent objects, and the BO reports built on top of the dependent AN views became inoperative.
    Question: How can we ensure that renaming an existing object will not impact the rest of the dependent AN views?
    Req 3. Finally, we removed one object from the existing fields as it was no longer required. Again we got warnings that there are dependent AN views on this AT view.
    Question: How can we ensure that deleting an existing object is automatically reflected in all dependent AN views?
    Please suggest: is it good practice to create separate AT, AN and CA views for each LOB and avoid dependent views (so the total number of logical objects gradually increases), or to reuse AT/AN views whenever possible, accepting that any update or deletion requires a lot of rework to check and redeploy the dependent objects again and again?
    Thanks for reading this detailed explanation, and please let me know if you need any other inputs. Don't hesitate to share your expert thoughts on this.

    Hi Avishek,
    Are you using the Development or the Modeler perspective?
    Avishek Chakraborty wrote:
    Now we had 3 Requirements like below:
    1. To add a new field in this AT view,
    2. To update an existing field in this AT view and
    3. To remove a field from this AT view which is not required anymore.
    1) After adding a new field to the AT view and activating it, you have to refresh/check out the dependent views to see whether the change is reflected.
    2) You cannot rename an existing field in the AT view if it is already used in joins in any of the dependent views; activation will not be allowed.
    3) To remove a field that is not used anywhere, make the change in the AT view and refresh/check out the dependent views to confirm they reflect the change.
    Regards,
    Krishna Tangudu

  • What are the best practices to extend the overall lifespan of my MacBook Pro and its battery?

    In general, what are the recommended practices to extend the lifespan of my battery, and to keep general characteristics (such as performance and speed) like new, on my MacBook Pro, which I got this past fall (2011)?

    About Batteries in Modern Apple Laptops
    Apple - Batteries - Notebooks
    Extending the Life of Your Laptop Battery
    Apple - Batteries
    Determining Battery Cycle Count
    Calibrating your computer's battery for best performance
    MacBook and MacBook Pro- Mac reduces processor speed when battery is removed while operating from an A-C adaptor
    Battery University
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility.  For situations DU cannot handle the best third-party utilities are: Disk Warrior;  DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible. Drive Genius provides additional tools not found in Disk Warrior.  Versions 1.5.1 and later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep.  Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced since Tiger.  These utilities have limited or no functionality with Snow Leopard or Lion and should not be installed.
    OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems. For more about malware see Macintosh Virus Guide.
    I would also recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack.  If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line.  Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. There is no confirmation that this version also works with Lion.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    Carbon Copy Cloner
    Data Backup
    Deja Vu
    SuperDuper!
    SyncTwoFolders
    Synk Pro
    Synk Standard
    Tri-Backup
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at CNet Downloads or MacUpdate.
    Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity as free space.

  • What is the best practice to deploy the SharePoint site from test to production environment?

    We are beginning new SharePoint 2010 and 2013 development projects, and will soon be developing new features, lists, workflows, customizations to SharePoint sites, and customizations to list forms. We would like to put good practice (that will help with deployment) in place before going ahead with development.
    What is the best way to go about deploying my site from Development to Production?
    I am using Visual Studio 2012 and also have Designer 2013...
    I have already read that this can be done through PowerShell, through Visual Studio, and via Designer. But at this point I am confused as to which are best practices specifically for lists, configurations, workflows, site customizations, Visual Studio development features, customizations to list forms, etc. You could also point me to links or ebooks covering this topic.
    Thanks in advance for any help.

    Hi Nachiket,
    You can follow the approach below, where the environments are built in a similar fashion:
    http://thesharepointfarm.com/sharepoint-test-environments/
    If you have less data, you can use http://spdeploymentwizard.codeplex.com/
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/b0bdb2ec-4005-441a-a233-7194e4fef7f7/best-way-to-replicate-production-sitecolletion-to-test-environment?forum=sharepointadminprevious
    For custom solutions like workflows, you can always build WSP packages and deploy them across the environments using PowerShell scripts.
    Hope this helps.
    My Blog- http://www.sharepoint-journey.com|
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful
    Hi, can you answer me specifically with regard to the following:
    lists
    configurations
    workflows
    site customizations like changes to css/masterpage
    Visual studio webparts
    customization to list forms
    Thanks.

  • Is VHDX for data drive considered good practice on a client PC?

    Hi!
    I don't like putting users' data files (documents, etc.) inside the user's Documents directory on C:. Instead I prefer having them on a D: disk, separate from the OS. On the other hand, I don't want to create a fixed-size partition, as I consider it a waste of space, especially when everything is on a rather small SSD.
    Therefore, I consider creating a virtual hard disk (VHDX) on my C: drive and making it dynamically expanding. This would allow me to store data on that "separate" disk which is actually an expanding VHDX file on C:. One problem is that for some
    unknown reason Windows 8.1 is not able to auto-attach such disks on startup, but I have seen some workarounds to auto-mount them through tasks.
    My question is the following: Is it considered good practice to put all data files on such a dynamic VHDX instead on a separate partition? Reading the VHDX explanations it looks like this file format is very stable (even in case of power loss) and is widely
    used in virtual servers. Performance should be also very good. Therefore I don't see any reason to not use it for my data drive. Or are there any drawbacks?
    Thanks in advance for any help.
    Best regards,
    Anguel

    Hi,
    Since the VHDX would be created on C:, which is the system partition, I don't think it is any safer than a separate partition.
    Please consider that if the system becomes corrupted and we have to format C: to reinstall the system, it may be difficult to recover the data, whereas a separate partition will easily survive unchanged.
    You can try to shrink the C: volume in Disk Management to create a new partition.
    Just my thought.  
    Kate Li
    TechNet Community Support

  • What is a good practice to handle LOV in jdev apps?

    Hi, experts,
    In jdev 11.1.2.3,
    In our projects, there are many LOVs whose values are stored in a common dictionary table, for example the table refcode:
    refcode(id, low_value,high_value,meaning,domain_no),
    Different LOVs retrieve the value pairs (low_value, meaning) or (high_value, meaning) from the refcode table, using domain_no as the filtering criterion.
    In the end user's UI, the code/number field values should be displayed as the meaning word from refcode.
    To accomplish this, I would create numerous associations between the different tables and refcode,
    and create VOs that have the refcode entity view as a secondary entity view.
    This feels somewhat odd (because of so many associations with the same refcode table).
    Is it good practice to handle LOVs this way?
    Thanks.

    See the Fusion Developer's Guide for Oracle Application Development Framework:
    10.3 Defining a Base View Object for Use with Lookup Tables
    (http://docs.oracle.com/cd/E37975_01/web.111240/e16182/bclookups.htm#BABIBHIJ)
    10.3.3 How to Define the WHERE Clause of the Lookup View Object Using View Criteria
    There is valuable information there, with suggestions on implementing lookup features, especially using view criteria
    (view criteria and view accessors are among the important and great ideas in ADF).
    I think that by using view criteria, the derived attribute displaying FK information can be implemented in a convenient way without defining FK associations.

  • JAR files for SQLJ and JDBC drivers: what is the best practice?

    We are starting a migration from IAS 10 to WebLogic 11g.
    Apparently the JAR files for SQLJ are not on the classpath by default:
    java.lang.NoClassDefFoundError: sqlj/runtime/ref/DefaultContext
    Which is the better practice: putting the SQLJ runtime JAR into the lib subdirectory of the domain directory, or using a shared library reference? (Usage of SQLJ is pretty prevalent in our apps, though we may be moving away from it.)
    Are the Oracle JDBC drivers on the classpath by default?
    If not, the same question: put them into the lib subdirectory of the domain directory, or use a shared library reference?

    I'm looking at setDomainEnv, especially the big warning at the top:
    >
    # WARNING: This file is created by the Configuration Wizard.
    # Any changes to this script may be lost when adding extensions to this configuration.
    >
    and am getting squeamish about editing it...
    http://www.bea-weblogic.com/how-do-i-disable-wls-automatically-adding-to-classpath.html suggests that the default behaviour is for WebLogic to put $DOMAIN/lib;$WL_HOME/common/lib/ext;$WL_HOME/server/lib/ext on the classpath; there is also a reference to setting weblogic.ext.dirs= when starting WebLogic (which means setting the WEBLOGIC_EXTENSION_DIRS environment variable).
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/programming/libraries.html#wp1067450 also refers at the bottom to using the domain /lib subdirectory.
    So am I correct that good practice is to just put the JARs I will globally need into $DOMAIN/lib, rather than putting them in $WL_HOME/common/lib/ext or $WL_HOME/server/lib/ext, or fiddling with the WEBLOGIC_EXTENSION_DIRS environment variable?
    Edited by: user8652010 on Feb 10, 2011 1:08 PM

  • What is the best practice for creating a primary key on a fact table?

    What is the best practice for a primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In document, i can only find "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread states that primary key on fact table is necessary.
    Primary Key on Fact Table.
    But if no business requirement demands uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect of having no primary key on a fact table? And are there any benefits from not creating one?

    Well, natural combination of dimensions connected to the fact would be a natural primary key and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to the fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or OWB insert/update functionality, they won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
    Edited by: Cortanamo on 16.12.2010 07:12

  • JTable: RFC on good practice (SQL queries from cell editor)

    I usually add/remove/edit JTable data from an external panel. But in this scenario, my client would like to be able to click on the first column of an empty row and enter a product number. Within the cell editor, I must make an SQL query to the database to determine whether the product number is valid and, if so, use part of the SQL data to populate other cells of the current row (like the product description).
    My problem is that this just doesn't seem right! Isn't the cell editor executed on the Swing event dispatch thread? Also, if the product number is not valid, I correctly implement the stopCellEditing() method, but for some reason you can still navigate the table (click on any other cell, press the TAB key, etc.)... weird!!
    Does anyone have a good practice for performing the SQL query in a better place and forcing a cell to stay selected until you enter a valid number or press the CANCEL key?
    I was looking at implementing the TableModelListener's tableChanged(...) method, but I'm not sure that would be a better place either.
    I personally would edit outside of the table, but good practice seems hard when the requirement is to edit from a cell editor!!
    Any suggestion would be greatly appreciated!
    Thanks!

    Maybe you could write an input verifier for the column that does the query and rejects invalid entries.
    Maybe you could send the query off in a worker thread.
    As far as making the table so you can't select any cells, hmm, not sure.
    You could disable the table with setEnabled(false) until the query comes back, something like that.
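A minimal sketch of the input-verifier suggestion. The database lookup is replaced here by an in-memory set of valid product numbers (an assumption for illustration); in the real application verify() would consult the SQL result, ideally cached or fetched off the EDT, since verify() runs on the event dispatch thread.

```java
import java.util.Set;
import javax.swing.InputVerifier;
import javax.swing.JComponent;
import javax.swing.JTextField;

// Sketch of the input-verifier idea: focus stays in the editor field while
// verify() returns false, so the user cannot tab away with an invalid value.
// The in-memory Set stands in for the real SQL validity check.
public class ProductNumberVerifier extends InputVerifier {

    private final Set<String> validProducts;

    public ProductNumberVerifier(Set<String> validProducts) {
        this.validProducts = validProducts;
    }

    @Override
    public boolean verify(JComponent input) {
        String value = ((JTextField) input).getText().trim();
        return validProducts.contains(value);
    }
}
```

It would be attached to the cell editor's text field with editorField.setInputVerifier(new ProductNumberVerifier(validSet)); for the real lookup, a SwingWorker could run the query in the background and populate the set before editing starts.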

  • Are the Authorized Practice Test Suppliers supplying identical Materials?

    There appear to be certain concerns that the authorized Practice Test Providers Self Test Software and Transcender are supplying identical materials (at least for recent exams).
    Both organizations are now, to my best understanding, part of Kaplan IT Learning
    By identical materials I mean identical Practice Tests (or where a substantial number of Questions are similar or one is a subset of the other).
    The problem arises when a candidate, understandably wishing to maximize the number of practice questions they have, and trying to stay on the right path by using authorized providers, understandably buys practice tests from both only to find they have bought a lot of duplicate material.
    This in my opinion a very bad outcome from trying to use the authorized practice test providers.
    While the unauthorized sites at the illegal end of the practice exam market are forever supplying the same materials, it is bad if the authorized suppliers are doing the same. (It is not enough, IMHO, for one to supply 20% more than the other but with the rest of the questions in common and a few extra notes besides.)
    I have emailed Transcender and Self Test Software to confirm this is the case or to explain/refute any allegations.
    Please note all due acknowledgements for the poster of Re: OCP practice exam question for bringing this to attention.
    PLEASE BE AWARE I HAVE NO FIRST-HAND EVIDENCE THAT THIS DUPLICATION OF SUPPLIED MATERIAL HAS IN FACT OCCURRED. It is actually possible this poster accidentally obtained material from a spoof site by mistake. But I certainly believe they were acting in good faith.
    I have started a new question on this, as the purpose of the other thread was which supplier is better for 1z0-007.
    Edited by: bigdelboy on 02-Oct-2009 09:52 : Overwrite Placehold with question.
    Edited by: bigdelboy on 02-Oct-2009 10:16 (Minor alterations to add clarity, correct some typos and to make gender neutral).

    Well, I have received an email from Kaplan IT SelfTestSoftware saying that according to their records my complaint has been resolved,
    and I have received an email from Kaplan IT Transcender saying that according to their records my complaint has been resolved.
    Both were from the same person.
    They did give a link for their discussion board: my.kaplanit.com/Support/IT/itsweb/Lists/Service Desk complaints and suggestions/AllItems.aspx (requires username/password).
    I think I can reasonably make the following statement:-
    If you buy practice tests from both SelfTestSoftware and Transcender for a particular exam, you risk the disappointment of finding that a lot of the tests and answers you receive are duplicated.
    IMHO, if you want slightly fewer questions at a cheaper price, go for SelfTestSoftware; for higher quality and slightly more questions, go for Transcender.
    I think there's a big trap here for the unwary. There's nothing strictly wrong in Oracle saying they have two providers without indicating that they may have a common provision source. Anyway, some marketing guy somewhere is dining well on some certification candidates' disappointments :-(
    The word DISHONOURABLE springs to mind.
    Edited by: bigdelboy on 19-Oct-2009 12:56 (Compliments to actions taken by Certification forum moderator about two posts down on this issue and truthful answers by Kaplan and actions being taken to advise Candidates to only purchase one set of testo only. I dont want to add a thankyou post at the end of thread as it obscures the correct answer slightly)

  • What are the best practices to create surrounding borders?

    Good day everyone,
    I was wondering what the best practices are to create a look in my iOS app like the one below. How are they accomplishing the creation of the borders? Is there a tool in Xcode IB to do that?
    Thank you in advance

    Once again, thanks for your input; however, I am still not clear how you accomplished the rounded corners, as you do not mention that in your reply.
    I did some research on my end and I was able to accomplish what I want with a UIView using the code below in an outlet:
    redView.layer.cornerRadius = 10;
    redView.layer.borderColor = [UIColor greenColor].CGColor;
    redView.layer.borderWidth = 5;
    However, I cannot do the same for the UITableView or UITableView cell.
    Thanks

  • What is the Best Practice for A1000 LUN Configuration

    I have a fully populated 12 x 18GB A1000 array. What is the optimal LUN configuration for an A1000 array
    running RAID 5 in a read-intensive Oracle Financials environment?
    1. 1 (10 X 18GB + 2 X 18GB HS ) (Use format to split at OS level) - Current Setting
    2. 1 (10 X 18GB + 2 X 18GB HS ) (Use RM6 to split into 3 LUNS)
    3. 3 (3 X 18GB + 3 X 18GB HS )
    I would like to know if option 2 or 3 will buy me anything more than 3 queues?
    Thanks
    F.A


  • What is the best practice for generating seq in parent

    I'm wondering what the best practice is for generating seq in parent.
    I have the following tables:
    invoice(id, date, ...)
    invoice_line(invoice_id, seq, quantity, price ...)
    They are shown in one UIX page, displaying the invoice in form layout and the invoice lines in tabular layout.
    Now I would like the seq to be generated automatically by ADF Business Components, not entered by the user.
    The 1st record within each invoice should get seq 1.
    Regards,
    Marcel Overdijk

    Marcel,
    I think best practice is to create a database trigger on the tables that obtains the sequence number from a database sequence on insert.
    In JDeveloper, assign DBSequence as the type of the attribute representing the sequence field in the EO. ADF BC then assigns a temporary value to it; the real sequence number gets assigned on submit.
    If you want IDs that don't skip a single number, then database sequences may not be a good option. In this case you would use the table trigger to implement your own sequencing.
    Frank
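The per-invoice numbering Marcel describes (each invoice's first line gets seq 1, then 2, 3, ...) boils down to a max-plus-one rule per parent. Here is a plain-Java sketch of just that numbering logic; it is not ADF-specific, and the class and method names are illustrative. In ADF BC this logic would typically live in the entity's create() method, or be replaced by the trigger approach Frank suggests when a global (non-gapless) sequence is acceptable.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-parent sequencing: each invoice id gets its own counter,
// starting at 1 and incrementing for every new line of that invoice.
public class LineSequencer {

    private final Map<Long, Integer> lastSeqByInvoice = new HashMap<>();

    /** Returns the next seq for the given invoice id, starting at 1. */
    public int nextSeq(long invoiceId) {
        // merge() stores 1 on the first call, then adds 1 on each later call
        return lastSeqByInvoice.merge(invoiceId, 1, Integer::sum);
    }
}
```

Note that an application-side counter like this only works safely for rows created in one session; for concurrent inserts, the trigger-based approach is the robust choice.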
