Great new resources on OTN: best practices and OPM project polishing tips

Two great new resources are now available on OTN.
Oracle Policy Modeling Best Practice Guide
A clearly laid out paper that walks through a series of valuable recommendations. It will help you design and model rules that maximize the advantages of Oracle Policy Automation's unique natural-language approach, and it draws on more than 10 years of practical experience in designing and delivering enterprise policy models using OPA. Highly recommended reading for all skill levels.
Tips for Polishing a Policy Modeling Project
This presentation contains dozens of useful tips for delivering rich and natural-feeling interactive interviews and other decision-making experiences with OPA.
See the links at the top of the New and Featured section on the OPA overview tab, and also at the top of the Learn more section.
http://www.oracle.com/technetwork/apps-tech/policy-automation/overview/index.html
Jasmine Lee has distilled much of her 10 years' experience into these fantastically useful new materials - and they're free!
Davin Fifield

Thanks Davin for posting this info!
Thanks Jasmine, these materials are very nice.

Similar Messages

  • What is the best practice and Microsoft-recommended procedure for placing FSMO roles on a Primary Domain Controller (PDC) and an Additional Domain Controller (ADC)?

    Hi,
    I have Windows Server 2008 Enterprise and two Domain Controllers in my company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My PDC went down due to a hardware failure, but I managed to bring it up briefly and transferred all five FSMO roles from the PDC to the ADC.
    My PDC has now been repaired and is up with the same configuration and settings. (I did not install a new OS or Domain Controller on the existing PDC server.)
    I now want to move the FSMO roles back from the ADC to the PDC so that the PDC is up and operational as the primary again. (Before the disaster, my PDC held all 5 FSMO roles.)
    Here I want to know the best practice and Microsoft-recommended procedure for the placement of FSMO roles across the PDC and ADC. If the primary DC fails, the additional DC should take over automatically without any problem in a live environment.
    For example, should the FSMO role distribution between the two servers be:
    Primary Domain Controller (PDC):
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC):
    RID Master
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft-recommended procedure for FSMO role placement.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice and Microsoft-recommended procedure for the placement of FSMO roles across the PDC and ADC.
    There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend keeping it simple and having a single DC hold all the FSMO roles.
    In case the primary DC fails, the additional DC should take over automatically without any problem in a live environment.
    No, this is not true. Each FSMO role is unique, and if a DC fails, its FSMO roles are not transferred automatically.
    There are two approaches that can be followed when an FSMO role holder is down:
    1) If the DC can be recovered quickly, I would recommend taking no action.
    2) If the DC will be down for a long time or cannot be recovered, I would recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought up and online again once the FSMO roles have been seized. Otherwise, your AD may face serious impacts and side effects.
    This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Coherence Best Practices and Performance

    I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimizations when using it.
    BTW, I haven't had time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even if you are not that new), one of the best things you can do is read this book: http://www.packtpub.com/oracle-coherence-35/book. I know it says Coherence 3.5 and we are currently on 3.7, but it is still very relevant.
    You don't need to go through all the documentation, but at least try the introductions and try out some of the examples. You need to know the basics; otherwise it is harder for people to understand what you want or to give you detailed enough answers to your questions.
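    For illustration, here is a minimal "hello Coherence" sketch along the lines of the basic examples, using the classic NamedCache API (the cache name is arbitrary, chosen just for this example):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class HelloCoherence {
        public static void main(String[] args) {
            // Join (or start) a cluster and obtain a named cache.
            NamedCache cache = CacheFactory.getCache("hello-cache");

            // Basic put/get against the distributed cache.
            cache.put("key", "value");
            System.out.println(cache.get("key"));

            // Release local resources and leave the cluster.
            CacheFactory.shutdown();
        }
    }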
    For performance optimization, it depends a lot on your use cases and what you are doing. There are a number of things you can do with Coherence to help performance, but as with anything there are trade-offs. Coherence on the server side is a Java process, and when tuning or sorting out performance issues I spend a lot of time with the usual Java tools such as VisualVM (or JConsole), tuning GC, and looking at thread dumps and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • Quick question regarding best practice and dedicating NICs for traffic separation

    Hi all,
    I have a quick question regarding best practice and dedicating NICs to separate traffic for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to separate traffic where you can, especially for things like FT, but I wondered if there is a preferred method for achieving this. What I mean is:
    - Is it OK to have everything on one switch but give each respective port group a primary and a failover NIC (e.g. FT, iSCSI, and all the others fail over)? This would sort of give you a backup in situations where you have a limited number of physical NICs.
    - Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate onto its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not put on its own dedicated switch if I could only afford to give it a single NIC, since that seems like a failover risk to me.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules - for example, that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • Best Practices and Usage of Streamwork?

    Hi All -
    Is this forum a good place to inquire about best practices and use of StreamWork? I am not a developer working with the APIs; rather, I have set up a StreamWork activity for my team to collaborate on our activities.
    We are thinking about creating a sort of FAQ in our team activity, and I was thinking of using either a table or a collection for this. I want it to be easy for team members to enter the question and the answer (our team gets a lot of questions from many groups, and over time I would like to build up a sort of knowledge base).
    Does anyone have suggestions for such a concept in StreamWork? Has anyone done something like this who can share their experience?
    Please let me know if I should post this question in another place.
    Thanks and regards,
    Rob Stevenson

    Activities have a limit of 200 items that can be included. If this is the venue you wish to use, it might be better to use a table rather than individual notes/discussions.

  • Best practice for responsive projects

    Does anyone have tips on best practices for responsive projects?
    I understand that 3 different layouts can be created. What happens if a learner is not using one of the 3 devices that were set up in a responsive project, and their screen size is different from any of those?

    Jay,
    Dr. Pooja Jaisingh offered very valuable tips for good practice in responsive design last week in her webinar 'Do's and Don'ts of creating Responsive Projects with Captivate 8'. I don't see the recording On Demand yet, but keep an eye out for it.
    Did you test a responsive project with F11 (Preview in Browser)? You will be able to change the resolution of the browser window and see that well-designed content (you can have absolute positioning and sizing as well) will move and shrink to adapt. The breakpoints (3 devices) allow you to make more invasive changes at those points: dragging some objects off the stage into the scratch area because they take up too much space on phones is one example, or replacing a big, detailed screenshot with a zoomed-in detail screenshot for the mobile breakpoint. That is my way of explaining it: responsive is not just having the three device layouts; it is also adapting between those breakpoints.

  • OIM best practice and E-Business

    I have a business requirement to provision different types of users in EBS. There are different applications developed within EBS, and the user provisioning flow may vary slightly for each.
    What is the best practice with regard to creating resource objects and forms? Should I create a separate resource object and set of forms for each set of users?

    EBS and SAP implementations with complex and varying approval workflows are clearly among the most challenging applications of OIM. There are a number of design patterns, but without a lot of detail about your specific implementation it is very hard to say which pattern is the most appropriate.
    (Feel free to contact me on [email protected] if you want to discuss this in more detail but don't want to put all the detail in a public forum.)
    Best regards
    /M

  • SAP best practice and ASAP methodology

    Hi,
    Can anybody please explain:
    1. What is SAP Best Practice?
    2. What is the ASAP methodology?
    Regards
    Deep

    Dear,
    Please refer these links,
    [SAP best practice |http://www12.sap.com/services/bysubject/servsuptech/servicedetail.epx?context=0DFA5A0C701B93893897C14DC7FFA7D62DC24E6E9A4B8FFC77CA0603A1ECCF58A86F0DCC6CCC177ED84EA76F625FC1E9C6DCDA90C9389A397DAB524E480931FB6B96F168ACE1F8BA2AFC61C9F8A28B651682A04F7CEAA0C4%7c0E320720D451E81CDACA9CEB479AA7E5E2B8164BEC98FE2B092F54AF5F9035AABA8D9DDCD87520DB9DA337A831009FFCF6D9C0658A98A195866EC702B63C1173C6972CA72A1F8CB611798A53C885CA23A3C0521D54A19FD1B3FD9FF5BB48CFCC26B9150F09FF3EAD843053088C59B01E24EA8E8F76BF32B1DB712E8E2A007E7F93D85AF466885BBD78A8187490370C3CB3F23FCBC9A1A0D7]
    [ASAP methodology|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/asap%2bfocus%]
    ASAP is one methodology used in implementing SAP.
    The ASAP methodology adheres to a specific road map that addresses the following five general Phases:
    Project Preparation, in which the project team is identified and mobilized, the project Standards are defined, and the project work environment is set up;
    Blueprint, in which the business processes are defined and the business blueprint document is designed;
    Realization, in which the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined;
    Final Preparation, in which final integration testing, stress testing, and conversion testing are conducted, and all end users are trained; and
    Go-Live and Support, in which the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.
    ASAP incorporates standard design templates and accelerators covering every functional area within the system, as well as supporting all implementation processes. Complementing the ASAP accelerators, the project manager can create a comprehensive project plan, covering the overall project, project staffing plan, and each sub-process such as system testing, communication and data migration. Milestones are set for every work path, and progress is carefully tracked by the project management team.
    Hope it will help you.
    Regards,
    R.Brahmankar

  • New mac - what is best practice for accounts?

    I am about to get a new Mac (iMac G5), and would like it to work well (i.e., file transfer and backup to/from) with my existing PowerBook.
    Is there a best practice for account setup? Should I use the same accounts on the two machines, or can I set up a new account on the new Mac?
    Related to this: what will keychain syncing give me? Does that only work with the same accounts on two Macs?
    thanks
    John

    With Tiger there is a Migration Assistant that will move everything over from your PowerBook to your new iMac G5. All you need is a FireWire cable; when prompted during your first start-up, select Migration Assistant and connect the two computers. You will need to boot up your PowerBook holding down the 'T' key before you connect the two together. Good luck, Jack.

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain) and Lync Phone Edition devices.
    We want to deploy a custom music on hold file. What's the best way of doing this? I'm thinking of:
    Placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? Site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at that site to use that UNC path? Also, how do clients know what site they are in?
    Alternatively, I was thinking of pushing out the WMA file to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated, particularly around how users are assigned to sites, and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    For Lync Phone Edition or other devices that don't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
    Best regards,
    Eric

  • IronPort best practices and configuration guide

    Hi there,
    I manage a Cisco IronPort ESA appliance for my organisation, and last night I made a quick blog post about things I think should be best practice for a new ESA appliance.
    The reason I wrote this is that some of these settings are not configured from the start, or are configured poorly by default.
    Take a look and let me know what you think - I plan to make a part 2, because there are some things I did not have time to go through and it was getting quite long already!
    Remember that your environment will be different from mine, so make sure you understand the changes I suggest before blindly implementing them!
    http://emtunc.org/blog/06/2014/cisco-ironport-e-mail-security-appliance-best-practices-part-1/

    First of all, I think your question is related to WebCenter (Framework) as such, not just OUCSS.
    As for JDev vs. run-time, this question is well discussed in Yannick Ongena's tutorial: http://www.yonaweb.be/webcenter_tutorial/part1_configure_webcenter_portal_application
    "Let me first talk a bit about the architecture of WebCenter and the runtime customizations. ADF (and WebCenter) has an additional component since 11g called the MDS (MetaDataServices). The MDS is a repository that stores all the customizations. The page we just created at runtime is not stored in the project folder of JDeveloper but is instead stored in the MDS."
    I guess the answer to when to use which method depends on what kind of page you want to create.
    I am surprised, however, that you state that
    "Pages created in JDeveloper are not searchable online. It is possible to link it to a Navigation Model but the path needs to be manually entered."
    Could you elaborate on your use case?
    As for navigation models, you can check another tutorial: http://docs.oracle.com/cd/E21764_01/webcenter.1111/e10148/jpsdg_navigation.htm#BABJHFCE
    Maybe what you are looking for is a way to create a navigation model according to your needs?

  • JSP Best Practices and Oracle Report

    Hello,
    I am writing an application that obtains information from the user via a JSP/HTML form and submits it to a database. The JSP page is set up following JSP best practices, with the SQL statements, database connectivity information, and most of the Java source code in a Java bean/Java class. I want to use Oracle Reports to call this bean and generate a JSP page displaying the information the user requested from the database. Would you please offer me guidance for setting this up?
    Thank you,
    Michelle

    JSP Best Practices.
    More JSP Best Practices
    But the most important Best Practice has already been given in this thread: use JSP pages for presentation only.
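    As a sketch of that separation (the class, method, and table names here are hypothetical, not from Michelle's project): all JDBC work lives in the bean, and the JSP only renders the result.

    // Hypothetical bean: owns the SQL and connectivity, exposes plain getters.
    public class ReportBean implements java.io.Serializable {
        private String customerName;

        public void load(javax.sql.DataSource ds, int customerId) throws java.sql.SQLException {
            try (java.sql.Connection con = ds.getConnection();
                 java.sql.PreparedStatement ps =
                     con.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
                ps.setInt(1, customerId);
                try (java.sql.ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        customerName = rs.getString(1);
                    }
                }
            }
        }

        public String getCustomerName() { return customerName; }
    }

    The JSP would then obtain the bean with jsp:useBean and print getCustomerName(), with no SQL in the page itself.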

  • Real time logging: best practices and questions ?

    I have 4 pairs of DS 5.2p6 servers in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then centrally archived later by a log-sender daemon.
    I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory server access logs in real time.
    At first glance, each directory generates about 1.1 MB of access log per second.
    1) I'd like to know if there are known best practices / experiences in such a case.
    2) Also, should I upgrade my DS servers to benefit from any log-management-related feature? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3) In DS 5.2, what is the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off; even if you do, it should be after careful evaluation in your environment. AFAIK, there is no configurable limit on buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size, you receive about 1 MB/sec in writes, and max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from the ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try, but may not be practical.
    Ramdisk on Windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on Solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
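    As a minimal sketch of that once-a-minute sweep job (the paths, the "access.*" rotated-file pattern, and the schedule are illustrative assumptions, not tested against DS 5.2):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class LogSweeper {
        public static void main(String[] args) {
            final Path ramdisk = Paths.get("R:/ds-logs");      // hypothetical ramdisk mount
            final Path archive = Paths.get("D:/log-archive");  // hypothetical durable disk

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                // Move only rotated files (assumed to carry a suffix, e.g.
                // "access.20090723-120000"); the live "access" file stays put.
                try (DirectoryStream<Path> rotated =
                         Files.newDirectoryStream(ramdisk, "access.*")) {
                    for (Path log : rotated) {
                        Files.move(log, archive.resolve(log.getFileName()),
                                   StandardCopyOption.REPLACE_EXISTING);
                    }
                } catch (IOException e) {
                    e.printStackTrace(); // a real job would alert or retry
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }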
    I should ask, how realtime should this log correlation be?

  • DFS Best Practice and domain structures

    We have built a new domain structure for our client, which consists of domains in separate forests with the appropriate trusts in place. The domains are:
    user, resource, legacy resource, and test.
    It is assumed that the resources will be migrated out of the legacy domain to the new resource domain in the medium term.
    Given that we have circa 650 file shares, which causes problems with logon scripts, I want to present those shares via DFS. Internally, our view is that the DFS should be built in the user forest to reference file shares in the resource domain(s).
    Our client, however, believes the DFS is a resource, and therefore should be presented in the new resource domain.
    What are the pros / cons of each, what would be the Microsoft recommendation, and why?

    Hi,
    As the purpose is to migrate shares to a new domain (in a new forest), it seems DFS is not required for this job.
    From your description, it seems that DFS has not been created yet.
    If so, you can simply migrate the files to the new domain. Then you could export the following key and import it on the new file server:
    HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares
    Note: If the folder location has changed, you need to edit the key before importing it on the new server.
    In addition, if DFS already exists, you could export it, edit the configuration files, and import it into the new domain. See:
    Migrating your DFS Namespaces in three (sorta) easy steps
    http://blogs.technet.com/b/askds/archive/2008/01/15/migrating-your-dfs-namespaces-in-three-sorta-easy-steps.aspx
    Note: If you need to keep NTFS permissions, you first need to create a one-way trust (source domain trusts the target domain) and use ADMT to migrate domain users/groups to the new domain so that all NTFS permissions can be recognized.
    Note 2: To copy files with NTFS permissions, you could use Robocopy.

  • Subversion best practices and assumptions?

    I am using SQL Developer 3.0.04, accessing Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production. I am taking over a project and setting up version control using Subversion. The project uses 4 schemas for its tables, PL/SQL objects, etc. When saving a PL/SQL object (a package specification, for example) in SQL Developer (using the File->Save As menu option), the default name is PACKAGE_NAME.sql. The schema name is not automatically made part of the file name, and in the SQL Developer preferences I do not see a way to change this.
    In the version control OBE, which uses files from the HR schema, there is an implicit assumption that the files all affect the same schema, so the repository directory only contains files from that one schema. Is this the normative/best practice for using Subversion with Oracle and SQL Developer? I want to set up our version-control environment to minimize the likelihood of user (programmer) error.
    Thus, in our environment, should I:
    1) set up Subversion sub-directories for each schema within my Subversion project, given that each release (we are an Agile project, releasing every 2 weeks) may contain objects from multiple schemas? (See the sketch after these options.)
    2) rename each object to include the schema name?
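    For option 1, one hypothetical layout (directory names invented for illustration) might be:

    trunk/
        schema_one/
            packages/
            tables/
        schema_two/
            packages/
            tables/

    Each file then lives under its owning schema, so PACKAGE_NAME.sql stays unambiguous even though SQL Developer does not prefix the schema name itself.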
    Any advice would be gratefully appreciated.
    Vin Steele

    Hi
    It makes sense to have the HCM system in the same system as the rest of the components because:
    1) We are able to make use of the tight integration between various components, most importantly Payroll - Finance.
    2) We can manage without tiresome ALE/interface development and management.
    3) Lower hardware cost (probably).
    It makes sense to have HCM in a different system because:
    1) Because of the different sequence of HRSPs/LCPs compared to other systems, we can have a separate strategy for HRSP application independent of the other components. We can save a lot of effort in regression testing, as only HR needs to be retested after patch application.
    2) In many countries there are strict data protection laws, and having HR in a separate system ensures that people from other functions do not have access to HR data even accidentally, as they will not have user IDs in the HR system.
    Hope this is enough to get you started.
