Some Doubts about Event Handlers

Hi,
I have some doubts about event handlers in OIM 11.1.1.5.
1) I want to use the same event handler for both the post-insert and post-update tasks. Can I use the same event handler for that, and if yes, how?
2) Can I create a single plugin.xml, put all the JAR files in a single folder (say lib), and zip them all together? If yes, what changes do I need to make? Do I only need to add the plugin tags for the different class files in plugin.xml, or is something extra needed as well?
3) If I need to change anything in any event handler class, do I have to unregister the plugin and register it again?
If yes, do I also need to delete the event handler metadata using the weblogicDeleteMetadata command?
4) We import the event handlers from a path like event handler/db/... and keep all the EventHandler.xml files in that folder. Since weblogicImportMetadata recursively imports all the files in that folder, if I change anything in one of the event handler classes and import from the same event handler/db/... folder again, what will it do? Create duplicate copies of all the event handlers, or should I keep only the EventHandler.xml files for the changed classes in that folder?
5) I need the email to be generated when a user is created during recon, and the email ID should also get updated when the first name or last name is updated. What should I use in the event handler XML (entity-type="User" operation="CREATE"), or something else?
Please help me clarify these doubts.

Anil Bansal wrote:
Hi,
I have some doubts about event handlers in OIM 11.1.1.5.
1) I want to use the same event handler for both the post-insert and post-update tasks. Can I use the same event handler for that, and if yes, how?
Yes, you can. Just have two event handlers in the same MDS file, with operation CREATE for one and MODIFY for the other; the class, version, and name remain the same.
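For example, a minimal EventHandlers.xml sketch along these lines (the class name, handler name, and order value below are made up for illustration, not taken from your environment) registers the same class for both operations:

<?xml version="1.0" encoding="UTF-8"?>
<eventhandlers xmlns="http://www.oracle.com/schema/oim/platform/kernel"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- same class fired after user creation... -->
  <action-handler class="com.example.oim.EmailPostProcessHandler"
                  entity-type="User" operation="CREATE"
                  name="EmailPostProcessHandler"
                  stage="postprocess" order="1000" sync="TRUE"/>
  <!-- ...and again after user modification -->
  <action-handler class="com.example.oim.EmailPostProcessHandler"
                  entity-type="User" operation="MODIFY"
                  name="EmailPostProcessHandler"
                  stage="postprocess" order="1000" sync="TRUE"/>
</eventhandlers>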
2) Can I create a single plugin.xml, put all the JAR files in a single folder (say lib), and zip them all together? If yes, what changes do I need to make? Do I only need to add the plugin tags for the different class files in plugin.xml, or is something extra needed as well?
Yes, in a single plugin.xml you can define multiple event handlers, and the JAR can contain multiple event handler classes.
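For instance, a single plugin.xml roughly like this (class names again only illustrative) can declare several event-handler classes, and the plugin zip then just carries the metadata file plus the JAR under lib/:

<?xml version="1.0" encoding="UTF-8"?>
<oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <plugins pluginpoint="oracle.iam.platform.kernel.spi.EventHandler">
    <plugin pluginclass="com.example.oim.EmailPostProcessHandler"
            version="1.0" name="EmailPostProcessHandler"/>
    <plugin pluginclass="com.example.oim.DepartmentPreProcessHandler"
            version="1.0" name="DepartmentPreProcessHandler"/>
  </plugins>
</oimplugins>

with a zip layout along the lines of:

plugin.zip
  plugin.xml
  lib/
    my-handlers.jar   (contains both classes above)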
3) If I need to change anything in any event handler class, do I have to unregister the plugin and register it again?
If yes, do I also need to delete the event handler metadata using the weblogicDeleteMetadata command?
No, if you are just changing the class, then you only need to update the class in the plugin. For this, first delete the plugin, then register the updated plugin, and then run PurgeCache.
4) We import the event handlers from a path like event handler/db/... and keep all the EventHandler.xml files in that folder. Since weblogicImportMetadata recursively imports all the files in that folder, if I change anything in one of the event handler classes and import from the same event handler/db/... folder again, what will it do? Create duplicate copies of all the event handlers, or should I keep only the EventHandler.xml files for the changed classes in that folder?
It won't create duplicate copies, but it will overwrite the ones that are already in MDS at the same location. So effectively, if the XML is not changing, you should not be worried about overwriting.
5) I need the email to be generated when a user is created during recon, and the email ID should also get updated when the first name or last name is updated. What should I use in the event handler XML (entity-type="User" operation="CREATE"), or something else?
For recon and the event handler, you will need a post-process event handler on User CREATE and MODIFY. On CREATE, construct the email address and populate it in the email field. On MODIFY, check whether the first name or last name is changing and, if so, update the email ID on the profile.
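As a very rough sketch (the class, the attribute names, and the helper method below are my own illustration; please check the exact kernel SPI signatures against the 11.1.1.5 Javadoc), the post-process handler described above could look something like this:

package com.example.oim;

import java.io.Serializable;
import java.util.HashMap;

import oracle.iam.platform.kernel.spi.PostProcessHandler;
import oracle.iam.platform.kernel.vo.AbstractGenericOrchestration;
import oracle.iam.platform.kernel.vo.BulkEventResult;
import oracle.iam.platform.kernel.vo.BulkOrchestration;
import oracle.iam.platform.kernel.vo.EventResult;
import oracle.iam.platform.kernel.vo.Orchestration;

// Fires on both User CREATE and User MODIFY (see the EventHandlers.xml sketch above).
public class EmailPostProcessHandler implements PostProcessHandler {

    public EventResult execute(long processId, long eventId, Orchestration orchestration) {
        HashMap<String, Serializable> params = orchestration.getParameters();
        if ("CREATE".equals(orchestration.getOperation())) {
            // Recon create: build the email from first and last name, then write it to the
            // user's email attribute, e.g. via the UserManager API (call omitted in this sketch).
            String email = buildEmail((String) params.get("First Name"),
                                      (String) params.get("Last Name"));
        } else if (params.containsKey("First Name") || params.containsKey("Last Name")) {
            // MODIFY where first or last name is changing: recompute the address and update
            // the email on the profile, again via UserManager (call omitted).
        }
        return new EventResult();
    }

    public BulkEventResult execute(long processId, long eventId, BulkOrchestration orchestration) {
        // Bulk recon events arrive here; loop over the bulk parameters in the same way.
        return new BulkEventResult();
    }

    private String buildEmail(String first, String last) {
        return (first + "." + last + "@example.com").toLowerCase();
    }

    public void initialize(HashMap<String, String> params) { }
    public void compensate(long processId, long eventId, AbstractGenericOrchestration orchestration) { }
    public boolean cancel(long processId, long eventId, AbstractGenericOrchestration orchestration) { return false; }
}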
Please help me clarify these doubts.

Similar Messages

  • Hi all, I have some doubts about this scenario

    please explain step by step this scenario:
    1) Involved in two activities: Monitoring and Production Support.
    2) Actively and regularly involved in load monitoring of daily, weekly, and monthly data loads using process chains.
    3) Actively involved in rectification of load failure errors like master data loads and transaction data loads.
    4) Supported the client by providing long-term solutions to tickets by doing root cause analysis.
    5) Monitoring of InfoPackages and analyzing the reasons for frequent failures of InfoPackages.

    Hi,
    Here are some typical responsibilities in production support.
    Answer to the first question: maintain documentation (in MS Word) for all support-related activities, like
    a) frequently occurring data-load monitoring errors that come up as part of process chains
    b) how to analyze whether a load has been running for a long time
    c) which tickets to raise and with what priority. For example, if a failed load affects more reports, we have to raise a P2 ticket; it depends upon the impact of the loads on report availability.
    Answer to the second question:
    In production support, we have to monitor InfoPackage groups or process chains because the loads are automated using process chains or InfoPackage groups. Whenever a load fails as part of a process chain or InfoPackage group, we have to analyse why it failed. Loads may fail for reasons like memory issues, attribute change run locking issues, SID issues, etc.
    Answer to the third question:
    In the Peregrine ticketing tool, we can check tickets like system-generated tickets and customer-generated tickets. For example, Tivoli tickets are system-generated: whenever a process chain is not triggered or any process of a chain has failed, the system automatically raises the ticket.
    Customer-generated tickets means that, as part of the daily loads, if any failure happens we raise a ticket with the proper priority to the appropriate resolution group (incident management tickets).
    RMs, or request management tickets, are requests raised by the client. For example, if they want ad-hoc loads for the Mexico markets, they raise an RM and send it to the offshore queue.
    Answer to the fourth question:
    We can maintain an Excel sheet comprising which loads have failed, with the error information, as part of the process chains or InfoPackage groups, and maintain logs for the manual loads.
    How many loads failed or were successful each day, and what the total loads were, can be checked in RSMO; that information we can maintain in the Excel sheet.
    Root cause analysis is nothing but giving a permanent solution to the frequent errors.
    Ali.

  • Some doubt about use of < >

    Hello everyone,
    I am a newbie in Java programming and need some help. Recently I was looking at the code of some existing open-source Java projects and I noticed the use of the < > braces when defining classes, interfaces, constructors, and objects. As a student I do not know much about this syntax, and I am not very familiar with data structures, in case they have anything to do with it. I use Java 2: The Complete Reference for learning Java and have not come across this syntax so far, but I would like to know what it is called, what it is used for, and anything more about it, such as its advantages. I am looking forward to help from you all.
    If anybody has links to tutorials that teach this syntax, or any books, please don't forget to mention them.
    thanks in advance

    By the way, one thing I would like to know: to learn generics, do I have to have knowledge of data structures? I am not familiar with data structures, and since I found references to the Collections Framework I thought I might have to learn DS anyway. Please tell me what you think. Thanks in advance.
    No, it is not necessary to know about data structures at all in order to use generics.
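    In case it helps, the < > syntax is Java generics (type parameters). A tiny self-contained illustration, unrelated to any particular project:
    import java.util.ArrayList;
    import java.util.List;
    public class GenericsDemo {
        public static void main(String[] args) {
            // The <String> says this list may only hold Strings, so the compiler
            // rejects wrong element types and no cast is needed when reading.
            List<String> names = new ArrayList<String>();
            names.add("Alice");
            String first = names.get(0);   // no cast required
            System.out.println(first);
        }
    }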

  • Some doubt about Bdb XML Normal Recovery

    Hi, everyone
    I have read the document "Getting Started with Transaction Processing for Java" shipped with BDB XML 2.4.13. In the book, there is something about normal recovery:
    "Normal recovery is run only against those log files created since the time of your last checkpoint." To test this, I have designed a scenario as below:
    The ENVIRONMENT directory is under E:/bdb-xml/environment, and the BACKUP directory is under E:/bdb-xml/backup, the CONTAINER name is entry.dbxml, and there is already a document 1.xml in this container.
    1. run db_recover against ENVIRONMENT.
    2. copy entry.dbxml to BACKUP.
    3. create a document 2.xml.
    4. run checkpoint against ENVIRONMENT.
    5. modify document 1.xml.
    6. run checkpoint against ENVIRONMENT.
    7. copy log.0000000001 (there is only one log file in ENVIRONMENT) to BACKUP. Note that I didn't copy the entry.dbxml in ENVIRONMENT.
    8. run db_recover against BACKUP (now there are 2 files: entry.dbxml, log.0000000001). After that, I used BACKUP as the environment directory and tried to query 2.xml, and I retrieved the document correctly, which I find very curious. As the document says, the last checkpoint was created by step 6; after that no other modifications happened, so the modifications made at step 3 and step 5 should not take effect when db_recover is executed. But the two changes have been committed to entry.dbxml.
    So, which is the last checkpoint? And what are "those log files created since the time of your last checkpoint"?
    I also want to know where the checkpoint is written: in the db files or in the log files.
    Thanks in advance.
    Regards,
    John Kao.

    John,
    You really do want to know the gory details don't you? :-)
    Running recovery in your backup directory will cause the container there to pick up all changes from the log file that it does not yet have. The checkpoint on the original container doesn't mean anything to the backup container.
    Let me point you to even more interesting documentation that is in the Berkeley DB documentation set. This page has all of the BDB documentation, including links that are not included in the BDB XML doc:
    http://www.oracle.com/technology/documentation/berkeley-db/db/index.html
    The "Getting Started with Transaction Processing" documents on that page have the sort of information you seem to want.
    Regards,
    George

  • Some doubts about XMLP 5.6.2

    hi,
    a question: can it, or must it, be installed in a separate $ORACLE_HOME? I have a 10g instance (**no** APPS)
    and a problem: I have installed it in a separate ORACLE_HOME and created a user with all the XML Publisher roles except admin, and a "security violation error" occurs when I try to create a folder.
    any tip?
    thanks,
    Sandro.

    Hi Brett,
    yes, I understand that this is the first version without the APPS dependency.
    Like I said, I installed it in a separate ORACLE_HOME; the install guide is not very clear about this, but I believe that is not a problem.
    any hints about the error?
    thanks,
    Sandro.

  • Some doubts about RAID 5

    Hello Joe,
    Yes, I have a backup!
    I'm not really sure whether there was a hot-spare disk or not. I assumed there wasn't because I looked through all the controller logs and didn't see any reference to a hot-spare drive. But, I have to say, the logs of this cabinet lack a lot of information.
    So, if you tell me that RAID5 doesn't have any weird "rebuild" functionality when a drive is missing (except in case there's a hot spare drive), then probably there was a hot-spare disk. I will try to contact the former IT guy in case he remembers how he configured that cabinet.
    Thanks for your reply!

    Hello everyone! I started a new job in a company that spent 6 months without any IT worker. I'm still trying to discover how the entire infrastructure was built. Obviously, there's a terrible lack of documentation, but at least I have no pressure from my workmates. Anyway, the thing is that the biggest storage cabinet (the one that holds the users' home directories and other scientific data) is based on RAID 5 (bad!) on cheap Infortrend hardware (very bad!). The web interface sucks, as usual, and it looks like a black box where you can't touch anything. Just a week after my arrival, one of the drives failed. I removed that drive and have already ordered a new one (actually two, just in case). The weird thing is that once I removed the bad drive, and without adding the new one (still not delivered), the controller started to "Rebuild" the...
    This topic first appeared in the Spiceworks Community

  • Some doubts about Combo Drives on the T60 and T61

    Hello guys,
    I own a ThinkPad T60 (200742u) and a ThinkPad T61 (7663b93), and I would like to know if I can use the combo drive (part 39T2737) that came with my ThinkPad T60 in the newer ThinkPad T61. Is it compatible?
    Thanks in advance.
    Ricardo
    ThinkPad T60 and ThinkPad R61
    Dell Vostro 1400 and HP Pavilion DV2100t
    Atom Life and HTC 710
    Nokia 5610 and SE 750i

    The R6x uses the Ultrabay Enhanced drive, which is a thicker version of the Ultrabay Slim drive used in the T4x and T6x systems; they all have the same shape and connector layout, just different thicknesses. You can use an Ultrabay Slim drive in an R6x system, but not the other way around (the Enhanced drive won't physically fit into the Slim slot). If you don't use an Ultrabay Slim-to-Enhanced drive adapter, there will be a small gap between the Slim drive and the Ultrabay Enhanced drive bay in the R61; as long as you don't move the laptop it will be okay, but otherwise the Slim drive may fall out.

  • Some doubts about automatically created search feature

    Hi everybody,
    if I create a report page, APEX provides a basic search feature (a single textbox the user can use to search various fields).
    The code automatically created (in APEX 2.1.x) is something like:
    SELECT ...field lists...
    FROM tablename
    WHERE
    instr(upper("FIRST_NAME"),upper(nvl(:P1_REPORT_SEARCH,"FIRST_NAME"))) > 0 or
    instr(upper("LAST_NAME"),upper(nvl(:P1_REPORT_SEARCH,"LAST_NAME"))) > 0 or
    instr(upper("CITY"),upper(nvl(:P1_REPORT_SEARCH,"CITY"))) > 0 or
    ...and so on, one statement for each field we want to search for.
    I don't understand:
    1) why APEX uses instr instead of a standard LIKE '%..%'. Does the choice depend on performance issues?
    2) instead of using the nvl "trick", IMHO this should be enough:
    :P1_REPORT_SEARCH IS NULL OR
    instr(upper("FIRST_NAME"),upper(:P1_REPORT_SEARCH)) > 0 or
    instr(upper("LAST_NAME"),upper(:P1_REPORT_SEARCH)) > 0 or
    instr(upper("CITY"),upper(:P1_REPORT_SEARCH)) > 0 or
    which is clearer and maybe faster; am I right?
    3) the nvl "trick" doesn't work for empty fields: if a record has null values for all the fields you search on, that record won't be displayed in the report even if the user leaves the search box blank and clicks "Go". This seems to me to be a bug; has it been resolved in the latest release of APEX?
    4) Last question, not actually related to the "basic" search: if I need to verify whether a search field (imagine an advanced search form) is left blank, I can just use ":FIELD_NAME IS NULL". This works for textboxes but doesn't work for list boxes, which need something like ":FIELD_NAME = '%null%'", because %null% seems to be the value returned by that kind of item when left blank. Am I right? What is the meaning of %null%?
    Thank you very much and sorry for the long post.
    Ciao.
    Eugenio

    Hi Eugenio,
    Firstly, you are welcome to alter the generated query however you wish. If you don't like what is created by APEX, change it to suit your purposes.
    To answer your questions:
    1. It's easier for customers to understand INSTR than LIKE. With LIKE, you have to be careful to include the wildcard characters. Performance should be roughly equivalent.
    2. The NVL is necessary. If it were not there, then no rows would ever be returned when no search criterion was specified. Try your suggested SQL out yourself.
    3. You're correct - this won't work if you have null values for all fields. That's not a bug in APEX, though. I don't know too many application requirements where you would want to maintain null values for all fields. Do you?
    4. You are correct. For List Manager and other items which use a List of Values, if you don't specify a string to use for NULL, then '%null%' will be used. You can override this in the LOV settings for the item, but keep in mind - you have to specify something, otherwise, '%null%' will be used. Patrick Wolf posted a nice, generic solution to this: Re: Null value handling in LOVs
    I hope this helps.
    Joel

  • Record mode - some doubts

    Good morning … bom dia …
    I have some doubts about RECORDMODE ... as follows ...
    Is it necessary to develop some ABAP code in the transformation, or in the start/end routine?
    In the DTP from the DSO to the cube, is it mandatory to choose the "change log" option, or should I add a filter to select only valid records?
    Is there some configuration needed on the query?
    Or is the goal only to understand how RECORDMODE works, based on the table ROOSOURCE?
    Thanks in advance ... Obrigado ...
    Kokeny, Marcio
    P.S.: I have already read these documents:
    http://scn.sap.com/people/swapna.gollakota/blog/2007/12/27/how-does-a-datasource-communicates-delta-with-bw
    (How does a datasource communicates "DELTA" with BW?)
    http://scn.sap.com/docs/DOC-54330
    (Recordmode Importance in SAP BI along with Delta Handling)
    http://scn.sap.com/docs/DOC-29927
    (Record mode Concept in SAP BI)
    - Record Mode Concept in Delta Management
    - 0RECORDMODE and Delta type Concepts in Delta Management

    Hi,
    Is it necessary to develop some ABAP code in the transformation, or in the start/end routine?
    Why do you have this doubt?
    Routines are used as per need, not based on 0RECORDMODE.
    In the DTP from the DSO to the cube, is it mandatory to choose the "change log" option, or should I add a filter to select only valid records?
    Not mandatory. If we run the DTP for the first time with the delta mode option, then there is no need to select the change log option.
    If we perform the DTP as init and later as delta, then we need to choose the change log option for the delta.
    Is there some configuration needed on the query?
    No. 0RECORDMODE is only relevant up to the DSO level, to track changes.
    Or is the goal only to understand how RECORDMODE works, based on the table ROOSOURCE?
    0RECORDMODE works based on the DataSource delta type.
    The DataSource delta type you can see in table ROOSOURCE, field deltyp (values like ABB, ABD, AIE, etc.).
    Thanks

  • Doubts about BP number in SRM and SUS

    Hello everyone,
    I have some doubts about the BP number, especially for vendors.
    I am working on the implementation of SRM 5.0 with SUS in an extended classic scenario. We will use one server for SRM and another for SUS, and we will use self-registration for vendors (in SUS). My questions are:
    - Can I have the same BP number in SRM and SUS, or is it going to be different?
    - When a vendor accesses the site to self-register in SUS, the information is sent to SRM as a prospect (via XI) and there the prospect is changed to a vendor. After that, is it necessary to send something from SRM back to SUS again (to change the prospect to a vendor)?
    - When is it necessary to replicate vendors from SRM to SUS?
    Thanks
    Ivan

    Dear Ivan,
    Here is the answer to all your questions. Follow these steps for the ROS configuration:
    Please note:
    1. There is no need to have separate clients for ROS and SUS. Create two clients: one for EBP and one for (SUS+ROS).
    2. There is no need for XI to transfer a newly registered vendor from ROS to EBP.
    Steps to configure scenario:
    1. Make entries in SPRO --> "Define backend system" on both clients.
        You will have to specify the logical systems of both clients (ROS as well as EBP).
    2. Create RFCs on both clients to communicate with each other.
    3. In the ROS client, create a service user for the supplier registration service with the roles:
        SAP_EC_BBP_CREATEUSER
        SAP_EC_BBP_CREATEVENDOR
        Grant the "S_A.SCON" profile to the user.
    4. Maintain the service user in the "Logon Data" tab of the service ros_self_reg in the ROS client.
    5. Create the purchasing and vendor organizational structure in the EBP client and maintain the necessary
        attributes. Create the vendor org structure in the ROS client.
    6. Create your ROS registration questionnaires and assign them to product categories - in the ROS client.
    7. To transfer suppliers from the registration system to the EBP/bidding system, supplier pre-screening has to be
        defined as the supplier directory in the SRM server - EBP client.
        Maintain your pre-screen catalog in IMG --> Supplier Relationship Management -> SRM Server ->
        Master Data -> Define External Web Services (Catalogs, Vendor Lists etc.)
    8. Maintain this catalog ID in the purchasing org structure under the attribute "CAT" - in the EBP client.
    9. Modify the purchaser role in the EBP client:
        Open the node for "ROS_PRESCREEN" and maintain the parameter "sap-client" with the ROS client number.
    10. Maintain organizational data in "Make Settings for the Business Partners":
    Supplier Relationship Management -> Supplier Self-Services -> Master Data -> Make Settings for the Business Partners. This information is actually stored in table BBP_MARKETP_INFO.
    11. Using the Manage Business Partner node with the purchaser's login (BBPMAININT), newly registered vendors are pulled from the pre-screen catalog and the BP is created in the EBP client. If you have the SUS scenario, ensure you maintain the "portal vendor" role here.
    I hope this clarifies all your doubts.
    Pls reward points for helpful answers
    Regards,
    Prashant

  • Doubt about uses of OBIEE

    I have some doubts about the possible uses of OBIEE. Using OBIEE, users sometimes demand reports of an "analytical" type, that is, aggregated analysis through OBIEE's Answers, selecting data from dimension tables and measures from fact tables. That's the ordinary purpose of business intelligence tools!
    At other times, though, users ask to perform through Answers analyses of an "operational" type, that is, simple extractions of some fields belonging to dimension tables, linked to each other through joins (hence without querying fact tables): that happens because some of the tables brought into the data warehouse are not directly linked to any fact table. In this way users want to use Answers to visualize data even for this kind of extraction (or operational report).
    Is this a correct use of the tool, or is it just a "twisted" way of using it that will eventually lead to incorrect extractions? If that's the case, is it possible to use BI Publisher instead, extracting the dataset through the "SQL Query" mode in a visual manner? The problem with the latter solution, in my case, is that the users are not skilled enough from a technical point of view: they would prefer to use Answers for every extraction, belonging both to the first type (aggregations) and to the second (extractions) that I just described. Can you suggest a methodology to clarify this situation?

    Hi,
    I understand your point... But I think OBIEE doesn't allow having dimensions "on their own"; they must be joined to a fact table somehow. This way, when you run a query in Answers using fields from two dimension tables, a fact table is always involved. When dimensions are conformed, several fact tables may be usable, and OBIEE uses the "best" one in terms of performance. However, there are some tricks you can use to make sure a particular fact table is used, like the "implicit fact column" setting in the presentation layer.
    So, back to your point: using OBIEE for "operational" reporting, as you call it, is a valid option in my experience, but you have to make sure that the underlying star schema supports the logic that your end users expect when they use just dimension fields.
    Regards,

  • Doubts about use of REPORTS_SERVERMAP with Forms11g HA

    Hi,
    I'm configuring a Linux 64-bit Forms/Reports 11g HA environment. The point is that I have two nodes, each with its own Forms and Reports servers; let's say FormsA and ReportsA on the first node and FormsB and ReportsB on the second node.
    I want FormsA to be able to call reports from ReportsB, and FormsB to be able to call reports from ReportsA.
    I've been reading about REPORTS_SERVERMAP:
    http://docs.oracle.com/cd/E12839_01/bi.1111/b32121/pbr_conf003.htm#autoId5
    But i have some doubts about its use:
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node. I'm wondering if, just by configuring this parameter, I will be able to get the effect mentioned above?
    2. The link provided says "Using RUN_REPORT_OBJECT. If the call specifies a Reports Server cluster name instead of a Reports Server name, the REPORTS_SERVERMAP environment variable must be set in the Oracle Forms Services default.env file"
    In fact I'm using RUN_REPORT_OBJECT, but
    what is the Reports Server cluster name, and where do I find that name?
    3. Is this configuration well defined:
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    4. In Forms applications, when using RUN_REPORT_OBJECT, can I assume that the report server name will be the cluster name specified in REPORTS_SERVERMAP?
    5. Which files should I modify: rwservlet.properties or default.env?
    Hope you can help me :)
    Regards
    Carlos

    Hi,
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node. I'm wondering if, just by configuring this parameter, I will be able to get the effect mentioned above?
    --> In such a case, what could go wrong is this:
    suppose RUN_REPORT_OBJECT executed the job successfully on ReportsA,
    but the web.show_document call for getjobid failed (because ReportsA went down by that time)
    --> you will not get the output shown (even though the job was successful).
    If a shared cache were enabled, then even if ReportsA is down, another cluster member (say ReportsB)
    would respond to web.show_document.
    Point 2:
    --> Under HA it is highly recommended to use web.show_document (a servlet call) to execute reports. This helps use all the HA features at the HTTP, Web Cache, or load balancer level.
    However, if there is migrated code or RUN_REPORT_OBJECT is a must, then the recommendations you see in the referenced document are a must.
    The REPORTS_SERVERMAP setting needs to be configured in the rwservlet.properties file and also in the default.env Forms configuration file, to map the Reports Server cluster name to the Reports Server running on the mid-tier where the load balancer forwarded the report request.
    For example FormsA, ReportsA, cluster name say rep_cluster
    default.env file
    REPORTS_SERVERMAP=rep_cluster:ReportsA
    Where "rep_cluster" is the Reports Server cluster name and "ReportsA" is the name of the Reports Server running on the same machine as FormsA
    rwservlet.properties file
    <reports_servermap>rep_cluster:ReportsA</reports_servermap>
    At default.env this is not a valid entry
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    What is the Reports Server cluster name, and where do I find that name?
    --> This is created via EM on the Reports Server side.
    I would recommend referring to the following documents in the My Oracle Support repository:
         How to Setup Reports HA (High Availability - Clusters) in Reports 11g [ID 853436.1]
         REP-52251 and REP-56033 Errors When Calling Reports From Forms With RUN_REPORT_OBJECT Against a Reports Cluster in 11g. [ID 1074804.1]
    Thanks

  • Event handler inheritance

    Do you plan to add some inheritance to event handlers? Example: I have an event defined in a UIX page and I have an event with the same name registered at the global level through the PageBroker.
    Do you know how I can handle the event with the same name before the globally defined one?

    This isn't the way UIX currently works; if you need functionality of this sort, you'll want to implement a custom registration mechanism. One thing you could do is add a <javaClass> element to your pages:
    <page ...>
      <javaClass name="yourPackage.YourClass"/>
    </page>
    ...where YourClass extends DefaultUINodePageDescription and overrides getEventHandler() to return a custom EventHandler that first calls through to super.getEventHandler().handleEvent(...), then looks up a globally defined event handler. That's a pretty sketchy description, but it would give you what you're looking for.

  • Doubt About ASM

    Hi All,
    I have some doubts about ASM.
    I have installed ASM on my Linux box and created two databases which entirely use ASM. I created the controlfiles on the local file system only.
    1. Suppose I want to remove one database. I just shut abort the database and
    remove the controlfile. Now I need to remove all files belonging to DB_1 from ASM.
    I am using one single diskgroup for both databases, so I cannot drop the diskgroup.
    Now, how can I identify the files of the DB_1 database?
    2. If I create a normal-redundancy diskgroup using two failgroups, it shows me total_mb
    and free_mb as the available space. Whenever my database takes one extent of 100 MB,
    it reduces free_mb by 200 MB, which is also understandable.
    But I am not able to figure out what the column required_mirror_free_mb is saying.
    I have read the Oracle documentation but am still not able to understand it.
    Can someone please shed some light on this?
    Regards,
    Js

    If you are using 10.2 then just use asmcmd.
    It is very similar to a regular Unix command-prompt interface, when in fact it runs SQL commands against the ASM instance.
    If you are using 10.1 then you have to run those SQL commands manually.
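    For example (just a sketch; the diskgroup name DATA and the directory DB_1 are illustrative), Oracle-managed files in ASM are grouped into a directory per database, so you can browse and then remove the files belonging to the dropped database:
    asmcmd
    ASMCMD> cd +DATA/DB_1
    ASMCMD> ls -l
    ASMCMD> rm <filename>     (repeat once you are sure the file belongs to DB_1)
    The same information is visible from SQL*Plus on the ASM instance via V$ASM_ALIAS and V$ASM_FILE, if you prefer queries over asmcmd.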

  • Can anybody tell me about the event container in workflows

    hi Experts,
    could you please tell me some details about the event container concept? Is there a way to see the event container in the SWO1 transaction? How can we create the event container when we create new events?

    Hi Praveen,
    The event container is the container which stores the values of the BO attributes that get assigned to that instance of the BO.
    If you go to the SWO1 transaction and check the BO for which the event is triggered, you will see all the possible attributes that can be assigned.
    To pass the values of the BO event container to the workflow, you need to maintain the necessary binding between the BO event container and the workflow container. This can be done in the 'Basic Data', Start Events tab. If you click on automatic binding, the system will propose a default binding. You can opt to use that or create your own binding. Make sure that the data types match.
    After mapping the event container to the workflow container, you can check the workflow log for the values passed to the workflow container. If you would like to check the values stored in the event container directly, go to SWO1 and simulate an instance of the BO. You can then check the values there.
    Hope this helps!
    Regards,
    Saumya
