Webcenter Enterprise Capture - MoS Articles

Hi,
A couple of articles have been published on MoS for WebCenter Enterprise Capture (formerly ODC) 11.1.1.8.0.
These articles cover HA / cluster setup.
1587729.1 -  How to Configure Enterprise Capture for High Availability (HA) Environment
1585919.1 -  Changes to Node2 is not Reflected in Node1 of Cluster  Within WebCenter Enterprise Capture
Thanks,
Srinath

According to this document, PDF Searchable should be an option on the General Settings train stop, but I see only “PDF Image-Only” and “Tiff Multi-Page” options there.
Why?

Similar Messages

  • How to get connected to OIPM from Oracle WebCenter Document Capture

    Hi,
    How do we connect from Oracle WebCenter Document Capture (ODC) to Oracle WebCenter Content: Imaging (OIPM)? OIPM is 11g and ODC is 10g. Please suggest any doc or link.
    Thanks and Warm Regards,
    RR.

    Reading your previous question once again, I now think you actually asked a simpler question than the one I answered. Is it that you just need to send data from ODC to IPM somehow?
    If so, that is standard ODC functionality, called a Commit Profile. I have never worked with IPM, but there is one for UCM, and it works like this: you use an administrator's login (such as sysadmin in 10g or weblogic in 11g) for authentication, and then you map the ODC user to a metadata field (in UCM there is a mandatory field called dDocAuthor, which must contain the name of an existing user; you can use the administrator here as well if the ODC user info is not important to you). I believe IPM will behave similarly.
    What you have to check is whether commit profiles are available for IPM 11g, but I believe they are.
    As for the documentation, the link I mentioned before contains both the installation and configuration manuals, so you should find all the info there.
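    For reference, if you want to confirm that the value you map into dDocAuthor is an existing Content Server user, one option is to look at the UCM schema directly. This is only a sketch and assumes direct database access; verify the table and column names against your own schema before relying on it:
    -- List Content Server users so a valid dDocAuthor value can be chosen.
    -- Run as the Content Server schema owner (e.g. the RCU-created *_OCS schema).
    SELECT dName, dFullName
    FROM   Users
    ORDER  BY dName;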

  • WebCenter Content patch and WebCenter Capture patches

    Hi Expert,
    The latest patch for WebCenter Capture is:
    Patch for Bug 19856709  Date:  12/08/2014   CUMULATIVE Oracle WebCenter Enterprise Capture PATCH 11.1.1.8.0 NUMBER 7
    and the latest bundle patch for WebCenter Content is:
    Patch 20022599: WEBCENTER CONTENT BUNDLE PATCH 11.1.1.8.9-01/20/2015
    My question is: does the WebCenter Content bundle patch released on 01/20/2015 include the WebCenter Capture patch or not?

    Hi,
    The answer is NO. The WebCenter Content patch includes only WebCenter Content bug fixes. You may refer to the readme file or search for the patch number on the support site to see the list of bugs fixed. I have verified the list, and it does not include the WebCenter Capture bug 19856709.
    HTH
    - Anand

  • Oracle WebCenter Recognition Tool

    Dear All,
    I am working on Oracle WebCenter Capture and Oracle IPM. I have no idea about the Oracle recognition tool: installation, configuration, or integration with Oracle WCC Capture / Oracle WCC IPM.
    I've got documents of different MIME types, like .tiff and .pdf images, and I don't know how to define OCR, bar code recognition, etc. I tried to find case studies, success stories, and beginner tutorials, but I could not find an easy book on it. Please advise.
    Riaz Ahmed

    Here is a self-study video series on installing, configuring and using WebCenter Enterprise Capture: 
    https://apex.oracle.com/pls/apex/f?p=44785:24:111971865312920::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:10004,16

  • Configure Webcenter Imaging domain in installed Webcenter JSK.

    Dear all,
    I am new to WebCenter. I am preparing an automated processing solution using WebCenter (Imaging & Capture) and EBS. We have installed the WebCenter Jumpstart Kit and we need to configure the WebCenter Imaging domain. How do we perform this action? Do we need to install Imaging separately, or is my set of software downloads the wrong approach?
    I would appreciate your guidance; any pointer towards a step-by-step process for configuring this would be of great help.
    Thanks and Regards,
    Ranganath.

    Hi Vikrant,
    I tried the following steps:
    1. Created a directory in Linux, under a certain mount point, e.g. /u90/ipm/input
    2. Gave it read/write access and got the UNC path as \\<hostname>\ipminput
    3. I could access it from the Windows workstation where OFR is installed.
    4. Before setting the new directory as the input folder for IPM, I wanted to test whether OFR can export directly to this newly created folder.
    5. Next, I mapped this network share to a drive on the Windows workstation, like Z:\
    6. When I enter this path as the export folder for OFR, it fails to export and gives "Invalid export directory".
    My question: can we use a mapped network drive as an export directory for OFR?
    Cheers
    Peter

  • Problem with oracle webcenter 11g installation and configuration

    I have installed WebLogic and WebCenter successfully. After that, I configured WebCenter by creating a new domain using the Configuration Wizard. Three managed servers were created for Spaces, Services, and Portlets, and they can be brought up. The problem is that WebCenter and the other service-related applications are in a failed state; in the server's stage directory there are no EAR or WAR files related to these applications, so I am unable to access WebCenter Spaces. Only the WebLogic console and WebCenter Enterprise Manager can be accessed. No clue from the installation document.
    Is there anything to do to bring them to an active state?

    Problem solved. Just two more steps were required. First, in nodemanager.properties set StartScriptEnabled=true, then start Node Manager and then the Admin Server. Second, start the WLS_Spaces managed server from WebCenter Enterprise Manager. Now Spaces is accessible.

  • Enterprise versus Standard Edition

    Hi All,
    What is the difference between Oracle 10gAS Enterprise & Standard Editions?
    Thanks.

    From the SQL Server installation guide:
    ● SQL Server 2000/2005: Enterprise and Standard Edition
    ● SQL Server 2008: Enterprise (capture or delivery) or Standard Edition (delivery only)
    ● SQL Server 2008 R2: Enterprise (capture or delivery) or Standard Edition (delivery only)

  • Seems like a very clear MOS copyright violation

    On this thread,
    ASM
    The following link was posted, which seems to contain nothing but exact cut/paste of the content of MOS articles. I am not sure what can be done about it, but I thought it best to bring it to notice.
    http://www.myoraclesupports.com
    Aman....

    I've been reading some docs for Oracle EBS (R12, 11i). I can clearly see that most of the blog and website content is from Metalink.
    They just put the Note ID in the heading and paste the contents. I don't know how that is legal.
    BTW, glad to see the new forum. Oops, the space can embed images in a thread/reply; seems to be disabled now.
    Regards!

  • SQL Server 2008 Standard Edition versus Enterprise Edition

    I heard on a call today that Microsoft has different logging capture in these two editions, and that because of this difference, at least in the past, GoldenGate could not support some replication approaches from SQL Server Standard Edition. Can anyone point me to a resource or certification matrix that references Standard versus Enterprise versions of SQL Server?

    From the SQL Server installation guide:
    ● SQL Server 2000/2005: Enterprise and Standard Edition
    ● SQL Server 2008: Enterprise (capture or delivery) or Standard Edition (delivery only)
    ● SQL Server 2008 R2: Enterprise (capture or delivery) or Standard Edition (delivery only)

  • How to learn UCM 11g from scratch

    Where should I learn UCM 11g from? I am new to this product. WebCenter Content

    Hemant,
    You will need a test system; use VirtualBox, as you can undo any disasters by taking snapshots. If you don't know how to configure and administer managed servers in WebLogic Server, then learn that first - at least how to configure UCM and Node Manager so you can start and stop it from the Admin Server console.
    It is worth learning to install WebCenter Content on Linux (I recommend Oracle Linux 6.6), and also on Windows if you want to use Enterprise Capture with Recognition, but if you are impatient to start with UCM, Oracle has a ready-made VM with just about everything you need installed: Oracle WebCenter Portal 11.1.1.8 Virtual Machine | Oracle Technology Network | Oracle. You may need to patch the installation; I don't know what patch level it is set to.
    When reading the Oracle documentation, make sure you get the WCC UCM documentation, not the IPM documentation. Make lots of bookmarks, and when you see "for more information on..." open the link in a new tab, or you will get lost seven levels down when you realise you are back at the same page you started from.
    Learn the security model (a group in WebLogic Server with the same name as a Role in UCM will be granted the Role's access) and how to use content types, rules and profiles. If you have experience with any other DM systems, know that metadata fields in UCM are not attributes of a document class; they are attributes of the system and present for all documents unless you filter them out with Rules.
    Know also that there are two Web UIs, so when you read the latest documentation the screenshots are from the latest Web UI. I don't know if it has been installed in the Portal VM, but if you want to install it, read the Support article 1618305.1; it is not simple.
    Martin

  • Replicate Tables as Staging Tables in a DWH

    Hi
    We are in the process of building a data mart; for the staging area we are considering using GoldenGate as the Change Data Capture tool.
    The idea is that in the source (OLTP database) we have a table called T1. This table has no PK, is partitioned by day, and generates around 20 million rows per day plus a few hundred thousand changes such as updates and deletes. In the staging area we will set up a table called STG_T1 with the same structure as T1 plus a few columns such as:
    @GETENV("TRANSACTION" , "CSN")
    @GETENV("GGHEADER", "COMMITTIMESTAMP")
    @GETENV("GGHEADER", "LOGRBA")
    @GETENV("GGHEADER", "LOGPOSITION")
    @GETENV("GGHEADER", "OPTYPE")
    @GETENV("GGHEADER", "BEFOREAFTERINDICATOR")
    All the changes will be converted to INSERTs using INSERTALLRECORDS in the Replicat. This has a problem: since we don't have a PK in the source, we don't know how to identify a row's change history from the source in STG_T1.
    Has anyone got experience replicating OLTP to a staging area using OGG, and with the ETL basics to propagate the changes from the staging area to the fact tables?
    Thanks

    If there is no primary key on the source, when you do the ADD TRANDATA, all columns will be supplementally logged. This is probably what you want so that you will have all columns when you apply the operation as an insert on the target.
    Even if you don't have a primary key on the target table, you can give Replicat a KEYCOLS on the MAP using one of the target columns - it won't really make any difference which column you pick, since you are only going to be applying inserts, so Replicat does not have to format a WHERE clause. However, with no primary key on the target side, you do want to make sure you have enough information on the record to make each row unique.
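    For illustration only, here is a minimal sketch of what STG_T1 could look like, assuming Oracle datatypes; col1/col2 are hypothetical stand-ins for T1's real column list, and the audit column names are made up, one per token listed in the original post:
    -- Hypothetical staging table: T1's columns plus GoldenGate audit columns.
    -- col1/col2 stand in for the real T1 columns; adjust names and datatypes.
    CREATE TABLE stg_t1 (
      col1             NUMBER,
      col2             VARCHAR2(100),
      gg_csn           NUMBER,         -- @GETENV("TRANSACTION","CSN")
      gg_commit_ts     TIMESTAMP,      -- @GETENV("GGHEADER","COMMITTIMESTAMP")
      gg_log_rba       NUMBER,         -- @GETENV("GGHEADER","LOGRBA")
      gg_log_position  NUMBER,         -- @GETENV("GGHEADER","LOGPOSITION")
      gg_op_type       VARCHAR2(20),   -- @GETENV("GGHEADER","OPTYPE")
      gg_before_after  VARCHAR2(10)    -- @GETENV("GGHEADER","BEFOREAFTERINDICATOR")
    );
    A row's change history can then be reconstructed by ordering on gg_csn (or on gg_commit_ts plus gg_log_rba/gg_log_position) within whatever combination of source columns you treat as the logical key.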
    I would suggest you take a look at the following MOS articles to help guide you:
    What Tokens need to included in the transaction to make it unique for Insertallrecords to be used in the replicat [ID 1340823.1]
    Oracle GoldenGate - Best Practice: Creating History Tables [ID 1314698.1]
    Oracle GoldenGate Best Practice - Oracle GoldenGate for ETL Tools [ID 1371706.1]
    Let us know if you still have further questions.
    Best regards,
    Marie

  • About Portlets and Task Flows

    Customers often ask whether pages in WebCenter should be built with portlets or with ADF task flows. I found a good article on the subject and am sharing it here for reference.
    Original link: https://blogs.oracle.com/ATEAM_WEBCENTER/entry/adf_task_flows_versus_portlets
    If there's a question that we get more often than "When [INSERT_NEXT_PRODUCT_RELEASE_HERE] is going to be available?", it is "Should I use Task Flows or Portlets?".
    I can't remember a single WebCenter engagement that we on the A-Team have worked on where this question hasn't been asked. Let's see what's currently written up on the internet on the subject and build on top of that.
    What's on Task Flow vs Portlets on the interwebs?
    Not surprisingly, if you search for this topic you're bound to find some very good information on the subject. I recommend you also read the following links:
    George's "Task Flow or Portlet: what to choose?"
    Yannick Ongena's "Difference between ADF portlets and task flows"
    Portlets vs Taskflows discussion at the WebCenter Enterprise Methodology Group (EMG) mail list
    ADF Task Flows versus Portlets cheat sheet table
    The table below tries to capture some common points when developing ADF and WebCenter based applications, and how these points reflect in each of the three options of developing and deploying a reusable UI flow.
    How to read this table
    To be straight with you, this is not a decision table. You should use it for reference when you want to know how each technology maps to a specific requirement or feature. This will give you a better idea of what you need to be aware of and some reference information on how to do it.
    User Interface Rendering / Skinning
    Local Task Flow: Inherits ADF Faces geometry management from the parent container and automatically uses the same skin as the consuming application. Rendering of TFs is sequential and can add to the consumer's rendering time depending on the TF's performance.
    Remote Task Flow: Does not support geometry management. Usually remote TFs are opened as browser popup windows or take over the current browser window until it navigates back to the caller. Doesn't know about the skin being used by the caller application. Rendering time is dependent on the server where the remote TF is running; an external URL request is being issued.
    ADF-based Portlet: Renders inside an iframe, so there's no geometry-based support for the portlet, but it is somewhat manageable from the showDetailFrame component that surrounds it (here). There is support for skin detection and synchronization, but the skin needs to be deployed together with the portlet. TIP: never use inline (rich) popups in a portlet. Overall JSF rendering is still sequential, but because portlets are rendered as iframes, browsers can usually request between 4 and 8 iframes in parallel from the same domain, which can sometimes translate to a faster page load time. Please check here.
    Interaction Support
    Local Task Flow: Contextual Events (here, here). Supports transactions and savepoints (when used with ADF BC). Here.
    Remote Task Flow: IN/OUT parameters. No transactional support.
    ADF-based Portlet: Inter-portlet communication and auto-wiring. No transactional support.
    Deployment
    Local Task Flow: Locally within the application's EAR file or as a WLS shared library. Updates require a redeployment and restart of the consuming applications.
    Remote Task Flow: Deployed as a separate application (EAR) or web application.
    ADF-based Portlet: Deployed as a separate application to a server configured to run as a Portlet Producer. Here.
    Memory Scope
    Local Task Flow: Uses the application's memory scope and can fully leverage application, session, request, and view scopes.
    Remote Task Flow: Runs in a different memory scope.
    ADF-based Portlet: Runs in a different memory scope.
    Resource Consumption
    Local Task Flow: Can slow down the containing application if it has a processing bottleneck; likewise, it can cause an out-of-memory condition if the code called by the task flow has a memory leak.
    Remote Task Flow: Does not impact the calling application. Does not offer a time-out mechanism out of the box.
    ADF-based Portlet: Does not impact the calling application. It does offer time-out and caching configurations.
    Architectural Coupling
    Local Task Flow: Tightly coupled modules, composite apps.
    Remote Task Flow: Loosely coupled, but still application-oriented.
    ADF-based Portlet: Loosely coupled; heterogeneous and legacy application integration.
    Security
    Local Task Flow: Fully leverages the ADF Security context for ADF and WebCenter Task Flows.
    Remote Task Flow: Requires single sign-on (here) or identity federation (here and here) for authentication, and correct mapping of enterprise to application roles for seamless authorization.
    ADF-based Portlet: WS-Security with OWSM is used for authentication. Correct mappings between enterprise and application roles are needed for fine-grained authorization. More info here, here, and here.
    Runtime Configuration Features
    Local Task Flow: None, but can be dynamically added to a page using Oracle Composer and the Resource Catalog.
    Remote Task Flow: None.
    ADF-based Portlet: Provides support for personalization (user preferences) and runtime management of Portlet Producer connections.
    Design Time Features
    Local Task Flow: The ADF Library with the consumed TFs needs to be in the classpath. Integration tests can be run locally.
    Remote Task Flow: No importing required, but remote TF information is required: TF URL, IN/OUT parameters. The remote TF needs to be available on a remote server for integration tests. The security infrastructure should be taken into consideration when doing proper integration testing.
    ADF-based Portlet: No importing required. Producers are configured as connections, and portlets define their own metadata/service definition through WSRP (here). Connections can be modified in a post-deployment process from EM or WLST (here).
    What should I consider to pick one over the other?
    Portals, as the name implies, are gateways to other applications. With that in mind, portlets consumed in the portal offer a glimpse - one could say a glassdoor - to take a quick peek into the application that exposes it. Once it grabs your attention, it should offer you a way of going into that system to check for more detailed information and to take an action - again, using the same glassdoor analogy, you have now opened the door and are inside a specific room. Once inside that room, you're not interested in what's going on in the other rooms.
    Ideally, the portlet producer is hosted on the same environment as the application it is exposing. This is so because that application is responsible for managing what it exposes, much like a web service, and can assign adequate resources to run the portlets without having too much impact on its production environment. For example, you might want to expose a "User Profile" portlet, but you want to limit the amount of information and the access to the HR system providing it.
    You should use portlets whenever you are aggregating a set of heterogeneous (UI-based) services onto a common view, and these heterogeneous services are isolated, or at least don't have a lot of interdependence between them. If you need more info, or you need to work on a task, you are taken to the real application. In this scenario, the HR team exposes a "User Profile" portlet while the Sales team exposes a "Your latest sales numbers" portlet, both driven, say, by the user id, but they don't know about each other.
    Composite applications, on the other hand, are assembled by collaborative development of TFs - the TFs are still designed to be modular, but they also work together and often depend on each other, especially when we consider transactions and shared scopes. Not only will they tightly interact, they will leverage the same execution resources. Ultimately, they are contained in the same business domain.
    As you probably noticed, my approach to deciding to use one over the other is from a pure application integration perspective. Oftentimes I have found other decision points that, although technically valid, I don't quite agree with, or at least don't find able to weigh in as much as my application integration approach. So let's go over some of these points and I will try to explain why I don't find them so important.
    "If I use portlets I don't need to stop my application when I release a new version or a fix"
    Yes you do. You need to stop the portlet container no matter what. And that will show up as a portlet timeout on your portal/application. Granted, the main application is still running, but if the portlet is there, it is because it is important. And people will complain.
    If you need 24/7 availability (and, believe me, you don't most of the time in these scenarios), portlets will not help you. A good infrastructure and the correct deployment process will.
    Using a portlet as a patch delivery channel is not a good approach either. If you find yourself constantly needing to redeploy portlets, then you should review your QA process.
    "Because portlets render as iframes I can make my portal run faster"
    This is oftentimes not true. Yes, in some situations you could have the main page consuming the portlets render faster, but you could still have portlets taking time to render, and that's not good from a user experience perspective - showing a "loading..." message or an empty placeholder box is not the best solution. I'd rather engineer locally running Task Flows to meet the required performance numbers than rely on portlets and iframes as my performance boost option.
    Conclusion
    I hope I was able to provide you with enough ammunition to make an informed decision when choosing one technology over the other. Please feel free to follow up with your comments; I'm definitely very interested in your experiences and considerations.

    Great article; very instructive.

  • Psvccrt_retail.msi not available in OVM templates

    Hi All,
    I am new to PeopleSoft. I installed a PeopleSoft instance using OVM for FSCM 9.1 (shipped with PT 8.51.07). I am able to bring up the Database OVM as well as the PIA-AppBatch OVM.
    As per the installation guide, to use Application Designer I should copy /opt/oracle/psft/pt/tools/toolsclient.zip from the PIA-AppBatch virtual machine to a Windows machine. After copying toolsclient.zip, when I launch pside.exe it gives me the error below:
    "The Application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem."
    When I searched for this issue I came to know, via MOS article 950813.1, that I need to run psvccrt_retail.msi from PS_HOME\setup\psvccrt.
    But in my PIA-AppBatch Linux virtual machine there is no psvccrt folder in PS_HOME\setup. Does this mean I need to install PeopleSoft Enterprise PeopleTools 8.51 on my Windows machine?
    Please guide.
    Regards,
    Vishal.

    Old thread, but maybe of interest to read more:
    Do we need have a code to install the virtual image for peoplesoft image(HCM,FSCM,CRM or CS Suite)
    Nicolas.

  • Performance View V$OSSTAT

    I am trying to understand the busy_time column in the v$osstat view.
    I took a before and after snapshot of this view.
    VALUE STAT_NAME
    64 NUM_CPUS
    7776667365 IDLE_TIME
    357220150 BUSY_TIME
    159550984 USER_TIME
    197669166 SYS_TIME
    0 IOWAIT_TIME
    121468398 AVG_IDLE_TIME
    5539550 AVG_BUSY_TIME
    2451063 AVG_USER_TIME
    3046655 AVG_SYS_TIME
    0 AVG_IOWAIT_TIME
    11140800 OS_CPU_WAIT_TIME
    0 RSRC_MGR_CPU_WAIT_TIME
    5.10546875 LOAD
    8 NUM_CPU_CORES
    1 NUM_CPU_SOCKETS
    6.8585E+10 PHYSICAL_MEMORY_BYTES
    1.6960E+10 VM_IN_BYTES
    8192 VM_OUT_BYTES
    49152 TCP_SEND_SIZE_DEFAULT
    1048576 TCP_SEND_SIZE_MAX
    49152 TCP_RECEIVE_SIZE_DEFAULT
    1048576 TCP_RECEIVE_SIZE_MAX
    After :
    VALUE STAT_NAME
    64 NUM_CPUS
    7814976860 IDLE_TIME
    360357230 BUSY_TIME
    160430595 USER_TIME
    199926635 SYS_TIME
    0 IOWAIT_TIME
    122066763 AVG_IDLE_TIME
    5588356 AVG_BUSY_TIME
    2464596 AVG_USER_TIME
    3081702 AVG_SYS_TIME
    0 AVG_IOWAIT_TIME
    11206700 OS_CPU_WAIT_TIME
    0 RSRC_MGR_CPU_WAIT_TIME
    5.21484375 LOAD
    8 NUM_CPU_CORES
    1 NUM_CPU_SOCKETS
    6.8585E+10 PHYSICAL_MEMORY_BYTES
    1.7078E+10 VM_IN_BYTES
    8192 VM_OUT_BYTES
    49152 TCP_SEND_SIZE_DEFAULT
    1048576 TCP_SEND_SIZE_MAX
    49152 TCP_RECEIVE_SIZE_DEFAULT
    1048576 TCP_RECEIVE_SIZE_MAX
    I had an elapsed time of 5 minutes.
    I now attempted to calculate the system CPU Utilization over this period.
    I am using hyperthreading which gives me 64 CPUs
    My calculation is:
    Subtract the Before BUSY_TIME from the After BUSY_TIME:
    360357230 - 357220150 = 3137080 hundredths of a second
    Available CPU time: 64 * 5 * 60 = 19200 seconds
    Now I am trying to calculate CPU utilization:
    U = R/C = 31370.80/19200 = 1.63, or 163%.
    Is this possible or is my calculation incorrect ?
    Thank you in advance
    u

    Jonathan,
    Nice sanity check.
    Elapsed time between the capture start and stop times, based on the BUSY_TIME and IDLE_TIME statistics:
    SELECT
      ( (360357230 - 357220150) + (7814976860 - 7776667365) ) / 100 / 64 / 60 MINUTES
    FROM
      DUAL;
       MINUTES
    107.933789
    The OP stated that the elapsed time was roughly 5 minutes, so the statistics are inconsistent.
    The OS_CPU_WAIT_TIME statistic seems to indicate that processes spent almost 11 minutes in this 5 minute time period waiting to be scheduled to run - that seems to be inconsistent with the 7.6% average CPU utilization, unless the OS nice utility were used, or the processes were caged to a small number of CPUs, or a reporting bug was encountered.
    SELECT
      (11206700 - 11140800) / 100 / 60 OS_CPU_WAIT_TIME_MINUTES
    FROM
      DUAL;
    OS_CPU_WAIT_TIME_MINUTES
                  10.9833333
    Using just the USER_TIME and SYS_TIME statistics, if the elapsed time was 5 minutes, the server's CPUs would have to be 163.4% busy:
    SELECT
      ( (160430595 - 159550984) + (199926635 - 197669166) ) / 100 / 64 / 300 * 100 AVG_CPU_BUSY_PER
    FROM
      DUAL;
    AVG_CPU_BUSY_PER
          163.389583
    The OP might want to verify that the actual elapsed time between the statistics is 5 minutes. After verifying that, the OP might want to check out the following Metalink (MOS) articles:
    Bug 7430365: INCORRECT VALUES FOR USER_TIME IN V$OSSTAT (3.79 hours per CPU per elapsed hour)
    Bug 3574504: INCORRECT STATISTICS IN V$OSSTAT IN HP-UX
    Bug 5933195: NUM_CPUS VALUE IN V$OSSTAT IS WRONG
    Bug 5639749: CPU_COUNT NOT SHOWN PROPERLY FROM THE DATABASE
    Bug 10427553: HOW DOES V$OSSTAT GET IT'S INFORMATION ON AIX
    Bug 9228541: CPU TIME REPORTED INCORRECTLY IN V$SYSMETRIC_HISTORY (3.75 hours per CPU per elapsed hour)
    Doc ID 889396.1: Very large value for OS_CPU_WAIT_TIME FROM V$OSSTAT / AWR Report
    Bug 7447648: OS_CPU_WAIT_TIME VALUE FROM V$OSSTAT IS INCORRECT
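    As a side note, one way to remove the elapsed-time uncertainty in future tests is to record a timestamp together with each snapshot; a minimal sketch (the statistic list is only illustrative):
    -- Capture the wall-clock time along with the cumulative statistics,
    -- so the true elapsed time between two snapshots can be computed.
    SELECT
      SYSTIMESTAMP SNAP_TIME,
      STAT_NAME,
      VALUE
    FROM
      V$OSSTAT
    WHERE
      STAT_NAME IN ('NUM_CPUS','BUSY_TIME','IDLE_TIME','USER_TIME','SYS_TIME');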
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • EM 12c Email Notification Setup

    Hello,
    I've got EM 12c up and running. I've got some notifications already set up. They are for the database, critical/fatal/warning, and I set them up through Incident Manager. Not sure if this was the right way to go about it, but anyway it seems to be working.
    I'm trying to get it set up so I get notified if, say, a filesystem is running out of space - for example, send an email if /u02 is 85% used. Under Enterprise, Monitoring, Monitoring Templates, I've created a new template for the hosts with all the metrics that I want monitored. Now how do I get it to email me those alerts? Is there a document I'm not seeing that will help me set this up?
    Thanks!
    ETA: I can go to the host home page and see that the metrics I'm monitoring are alerting. I just can't seem to figure out how to send email based on those alerts.

    Hello,
    To answer your question, here are the high level steps:
    1) Set up your SMTP gateway. However, if you are already getting email, then perhaps you've done this step already?
    For reference, here is the link to the doc for details:
    http://docs.oracle.com/cd/E24628_01/doc.121/e24473/notification.htm#sthref154
    2) Set up your email addresses. 
    Under your username drop-down menu,  you should specify the email addresses to which  notifications should be sent.
    Here is the link to the doc for details:
    http://docs.oracle.com/cd/E24628_01/doc.121/e24473/notification.htm#autoId2
    3) Set up rules (incident rule sets) that specify the metric alerts for which you want emails to be sent
    The rule sets were designed to provide a lot of flexibility in covering the different types of events that EM12c supports, as well as the many types of notification requirements that data centers have.
    Think of a 'metric alert'  as a type of event.  
    For your specific requirements (sending email for host metric alerts)  you can do this:
    3.1) Create a rule set and specify the targets for the rule.   In your case you can specify all targets of type 'host'  (if you want the email to be sent for all host targets' alerts) or choose the specific hosts you're interested in.   
    3.2) In the Rules tab,   create an event rule (this is the first option in the choice of rules to create).   When asked for a 'Type', choose 'metric alert' as the event type.   Then choose 'Specific events of type Metric Alert',   then choose the filesystem metric and other metrics for which you want notifications to be sent.
    3.3) In the Actions part of the rule, specify the EM user(s) in the 'Email to' field of the Notifications section.
    3.4) Save the rule
    This should get you up and running with email notifications. 
    However, I would also recommend looking into these other areas to optimize your use of EM12c monitoring:
    1) If this notification requirement applies not to all hosts but to a subset of host targets, consider creating a group of those targets and specifying the group as the target of the rule set (step 3.1 above). Once you have a group, you can perform operations on the group instead of on individual targets, which makes it easier to manage later on. As you add more targets to EM, if they need to be monitored in the same way as the other targets in the group, simply add the new target to the group; the rule will automatically apply to the newly added target without requiring changes to the rule.
    2) Use incident management features
    In EM12c, we introduced incident management as a way to focus on and better manage the more important issues that impact your data center. So when an important event comes in, consider creating an incident for it. Once you have an incident, you can leverage features such as specifying an incident owner, setting its priority, resolution status, etc. Using Incident Manager will provide you with in-context diagnostic links and access to MOS articles to help further diagnose/resolve the event.
    Here is the link to the doc for more details on incident management:
    Using Incident Management
    3) Leverage other rule options
    A rule has an option to apply to all metric alerts of specified severities. So if you want to receive email for all critical and warning alerts for the host targets, then instead of choosing specific metrics you can change the rule (step 3.2 above) and choose the option 'severity in critical, warning'. If you do so, then say later on you add thresholds to new metrics (which will generate metric alerts): you won't have to change the rule to include the new metrics, since your rule already covers all metric alerts with critical and warning severities. However, since it will cover all metric alerts for your chosen targets, you need to make sure you have appropriate thresholds for the metrics to avoid unwanted email notifications from unwanted alerts.
    The Incident Manager chapter that I referenced above has an Advanced Topics section that covers other scenarios.   If you have other monitoring requirements, you can take a look at that section to get ideas on how these can be implemented in Enterprise Manager.     Or you can reply back to this post if you have additional questions.
    Finally, you can take a look at this monitoring best practices paper that describes how you can leverage EM12c to set up monitoring in a scalable way:
    http://www.oracle.com/technetwork/oem/sys-mgmt/wp-em12c-monitoring-strategies-1564964.pdf
    Regards,
    Ana
