Replace mappings in Process Flows

Hi,
Is there an easy way of replacing an existing mapping in a PF with a new version of the same?
thanks
mahesh

OK, here is some sample code you can play with. What it does is drop the mapping activity from the process flow and then replace it with a fresh version, rebuilding parameters and transitions.
I used an additional DB connection to query for problem activities. You can see the logic in the $v_mapquery query: alter it if you need something different. This is much faster than pure scripting: scanning all process flows with OMBPlus alone would run into RAM and speed problems.
So I cache all the needed info in some lists and then print/reconcile it.
Having an extra open connection can cause some concurrency problems, but this was designed to run as a batch and send back an e-mail with the results.
Some extra notes:
1 - if a mapping has been dropped, I print the unbound activity but can't reconcile it
2 - activities are replaced, so they will be out of position when you enter the process flow editor
3 - there was a bug in scripting related to SQL*Loader mappings. I don't know whether it has been fixed, so you may have problems with those.
The following 3 procedures cache the process flow info.
# Retrieve transitions of a proc. flow activity
proc get_map_pf_transitions { p_conn p_process_flow_id p_map_activity_id p_pf_act_trans } {
     upvar $p_pf_act_trans v_pf_act_trans
     set v_query "select
          cast(decode(tr.source_activity_id, ?, 'OUTGOING', 'INCOMING') as varchar2(50)) transition_direction,
          process_id, process_name,
          transition_id, transition_name, business_name, description, condition,
          source_activity_id, source_activity_name,
          target_activity_id, target_activity_name
     from all_iv_process_transitions tr
     where tr.process_id = ?
       and (tr.source_activity_id = ? or tr.target_activity_id = ?)
     order by source_activity_id"
     set v_stmt [ $p_conn prepareCall $v_query ]
     # bind 1 drives the direction decode; 2-4 restrict to this process/activity
     $v_stmt {setLong int long} 1 $p_map_activity_id
     $v_stmt {setLong int long} 2 $p_process_flow_id
     $v_stmt {setLong int long} 3 $p_map_activity_id
     $v_stmt {setLong int long} 4 $p_map_activity_id
     set v_resultset [ $v_stmt executeQuery ]
     set v_temptran [list ]
     while { [ $v_resultset next ] == 1 } {
          lappend v_temptran \
               [ list \
                    [$v_resultset getString TRANSITION_NAME] \
                    [$v_resultset getString DESCRIPTION] \
                    [$v_resultset getString CONDITION] \
                    [$v_resultset getString TRANSITION_DIRECTION] \
                    [$v_resultset getString SOURCE_ACTIVITY_NAME] \
                    [$v_resultset getString TARGET_ACTIVITY_NAME] ]
     }
     # one nested entry per activity: a list of its transitions
     lappend v_pf_act_trans $v_temptran
}
# Retrieve parameters of a proc. flow mapping activity
proc get_map_pf_parameters { p_conn p_map_activity_id p_pf_act_parameters } {
     upvar $p_pf_act_parameters v_pf_act_parameters
     set v_query "select
          parameter_owner_id, parameter_owner_name,
          parameter_name, data_type, default_value parameter_value,
          business_name, description
     from all_iv_process_parameters pa
     where pa.parameter_owner_id = ?"
     set v_stmt [ $p_conn prepareCall $v_query ]
     $v_stmt {setLong int long} 1 $p_map_activity_id
     set v_resultset [ $v_stmt executeQuery ]
     set v_tempparam [list ]
     while { [ $v_resultset next ] == 1 } {
          lappend v_tempparam \
               [ list \
                    [$v_resultset getString PARAMETER_NAME] \
                    [$v_resultset getString DATA_TYPE] \
                    [$v_resultset getString PARAMETER_VALUE] \
                    [$v_resultset getString DESCRIPTION] ]
     }
     # one nested entry per activity: a list of its parameters
     lappend v_pf_act_parameters $v_tempparam
}
# Retrieve and cache all info needed to upgrade process flows
# all parameters except the connection are the names of caller-side lists,
# which this proc appends to
proc get_map_pf_unbound {
     p_conn
     p_upd_types
     p_maps
     p_pf_paths
     p_pf_proc_names
     p_pf_act_names
     p_pf_act_parameters
     p_pf_act_trans } {
     upvar $p_upd_types v_upd_types
     upvar $p_maps v_maps
     upvar $p_pf_paths v_pf_paths
     upvar $p_pf_proc_names v_pf_proc_names
     upvar $p_pf_act_names v_pf_act_names
     upvar $p_pf_act_parameters v_pf_act_parameters
     upvar $p_pf_act_trans v_pf_act_trans
# query to retrieve unbound mappings (actually, I use views in the DB ... )
     set v_mapquery "with proc_maps as (
     select
          '/'||md.project_name||'/'||information_system_name||'/'||
               pk.package_name pf_fqual_procpath,
          md.project_id pf_project_id,
          md.project_name pf_project_name,
          md.information_system_id pf_module_id,
          md.information_system_name pf_module_name,
          pk.package_id pf_package_id,
          pk.package_name pf_package_name,
          pr.process_id pf_process_id,
          pr.process_name pf_process_name,
          a.activity_id pf_activity_id,
          a.activity_name pf_activity_name,
          a.business_name pf_act_business_name,
          a.description pf_act_description,
          a.activity_type pf_act_activity_type,
          a.bound_object_id pf_act_bound_object_id,
          a.bound_object_name pf_act_bound_object_name
     from all_iv_process_activities a,
          all_iv_processes pr,
          all_iv_packages pk,
          all_iv_process_modules md
     where a.activity_type in (
               'PlSqlMapProcessNoteTag', /* type for PLSQL mappings */
               'SqlLdrProcessNoteTag'    /* SQLLOADER mappings */)
       and a.process_id = pr.process_id
       and pk.package_id = pr.package_id
       and md.information_system_id = pk.schema_id
     ),
     maps as (
     select
          '/'||md.project_name||'/'||md.information_system_name||'/'||
               mp.map_name mp_fqual_mapname,
          md.project_id mp_project_id,
          md.project_name mp_project_name,
          md.information_system_id mp_module_id,
          md.information_system_name mp_module_name,
          mp.map_id mp_map_id,
          mp.map_name mp_map_name,
          mp.business_name mp_business_name,
          mp.description mp_description
     from all_iv_xform_maps mp,
          all_iv_information_systems md
     where mp.information_system_id = md.information_system_id
     )
     select * from (
     /* case 1: mapping name has changed */
     select '1-CHANGEDNAME' changetype, a.*, m.*
     from proc_maps a, maps m
     where a.pf_act_bound_object_id = m.mp_map_id
       and a.pf_act_bound_object_name <> m.mp_map_name
     union all
     /* case 2: there's a new mapping with the old name... I'll reconcile only if
        the old mapping was dropped: otherwise you'll be in case 1.
        IMPORTANT NOTE: I'll reconcile with a new mapping with the same name even
        if found in a different module. */
     select '2-REPLACED' changetype, a.*, mnew.*
     from proc_maps a, maps mnew, maps mold
     where a.pf_act_bound_object_id <> mnew.mp_map_id
       and a.pf_act_bound_object_name = mnew.mp_map_name
       /* verify that mapping is in the current project */
       and mnew.mp_project_id = a.pf_project_id
       and a.pf_act_bound_object_id = mold.mp_map_id (+)
       and mold.mp_map_id is null
     union all
     /* case 3: no matching mapping. I'll warn the user that the activity is neither bound nor bindable */
     select '3-MISSING' changetype, a.*, mnew.*
     from proc_maps a, maps mnew, maps mold
     where a.pf_act_bound_object_name = mnew.mp_map_name (+)
       and a.pf_project_id = mnew.mp_project_id (+)
       and a.pf_act_bound_object_id = mold.mp_map_id (+)
       and mnew.mp_map_id is null
       and mold.mp_map_id is null
     )
     order by changetype, pf_fqual_procpath, pf_process_name, pf_activity_name"
     set v_mapstmt [ $p_conn prepareCall $v_mapquery ]
     set v_resultset [ $v_mapstmt executeQuery ]
     # index of the current activity in the cached lists
     set v_mapindex -1
     while { [ $v_resultset next ] == 1 } {
          incr v_mapindex
          lappend v_upd_types [$v_resultset getString CHANGETYPE]
          lappend v_maps [$v_resultset getString MP_FQUAL_MAPNAME]
          set v_pf_activity_id [$v_resultset getLong PF_ACTIVITY_ID]
          set v_pf_process_id [$v_resultset getLong PF_PROCESS_ID]
          lappend v_pf_paths [$v_resultset getString PF_FQUAL_PROCPATH]
          lappend v_pf_proc_names [$v_resultset getString PF_PROCESS_NAME]
          lappend v_pf_act_names [$v_resultset getString PF_ACTIVITY_NAME]
          puts "Retrieving activity parameters..."
          # pass the names of the local aliases: the helpers upvar them one level up
          get_map_pf_parameters $p_conn $v_pf_activity_id v_pf_act_parameters
          puts "Retrieving activity transitions..."
          get_map_pf_transitions $p_conn $v_pf_process_id $v_pf_activity_id v_pf_act_trans
          puts "All data retrieved for activity:"
          puts "[lindex $v_pf_paths $v_mapindex]/[lindex $v_pf_proc_names $v_mapindex]/[lindex $v_pf_act_names $v_mapindex]"
          puts "Type: [lindex $v_upd_types $v_mapindex]"
     }
}
And here's some example client code to load the problem activities info, print it and reconcile (replace) the activities.
# open extra connection to access OWB public views
# replace user/password/host with your repository credentials
set v_connstr "OWBREP/password@host:1521:ORCL"
set v_jdbcconnstr "jdbc:oracle:thin:$v_connstr"
java::call java.sql.DriverManager registerDriver [java::new oracle.jdbc.OracleDriver ]
set v_conn [java::call java.sql.DriverManager getConnection $v_jdbcconnstr ]
# retrieve and cache activity data
set v_upd_types [list ]
set v_maps [list ]
set v_pf_paths [list ]
set v_pf_proc_names [list ]
set v_pf_act_names [list ]
# activity parameters - will be a nested list
set v_pf_act_parameters [list ]
# activity transitions - will be a nested list
set v_pf_act_trans [list ]
get_map_pf_unbound $v_conn \
     v_upd_types \
     v_maps \
     v_pf_paths \
     v_pf_proc_names \
     v_pf_act_names \
     v_pf_act_parameters \
     v_pf_act_trans
$v_conn close
# print results
foreach \
     v_upd_type $v_upd_types \
     v_map $v_maps \
     v_pf_path $v_pf_paths \
     v_pf_proc_name $v_pf_proc_names \
     v_pf_act_name $v_pf_act_names \
     v_pf_act_parameterz $v_pf_act_parameters \
     v_pf_act_tranz $v_pf_act_trans {
     puts "*** Reconcile type: $v_upd_type"
     puts "Activity: $v_pf_path/$v_pf_proc_name/$v_pf_act_name"
     puts "Candidate mapping: $v_map"
}
# types of activities I can reconcile
set v_reconc_possible_types [ list "1-CHANGEDNAME" "2-REPLACED" ]
set v_currentpath ""
OMBCONNECT $v_connstr
# reconcile
foreach \
     v_upd_type $v_upd_types \
     v_map $v_maps \
     v_pf_path $v_pf_paths \
     v_pf_proc_name $v_pf_proc_names \
     v_pf_act_name $v_pf_act_names \
     v_pf_act_parameterz $v_pf_act_parameters \
     v_pf_act_tranz $v_pf_act_trans {
     if { [lsearch $v_reconc_possible_types $v_upd_type ] == -1 } {
          # skip non-reconcilable activities
          continue
     }
     puts "Reconciling  $v_pf_path/$v_pf_proc_name/$v_pf_act_name"
     puts "with mapping $v_map ..."
     # change context only when the path changes
     if { $v_pf_path != $v_currentpath } {
          OMBCC '$v_pf_path'
          set v_currentpath $v_pf_path
     }
     # drop and replace activity
     puts "Dropping activity..."
     OMBALTER PROCESS_FLOW '$v_pf_proc_name' \
          DELETE ACTIVITY '$v_pf_act_name'
     puts "Re-creating activity..."
     # don't change the activity name - maybe it should inherit the mapping name (if no collisions)
     OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD MAPPING ACTIVITY '$v_pf_act_name' \
          SET REF MAPPING '$v_map'
     # add transitions
     puts "Adding transitions..."
     foreach v_tran $v_pf_act_tranz {
          set v_TRANSITION_NAME [lindex $v_tran 0 ]
          set v_DESCRIPTION [lindex $v_tran 1 ]
          set v_CONDITION [lindex $v_tran 2 ]
          set v_SOURCE_ACTIVITY_NAME [lindex $v_tran 4 ]
          set v_TARGET_ACTIVITY_NAME [lindex $v_tran 5 ]
          OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD TRANSITION '$v_TRANSITION_NAME' \
               FROM ACTIVITY '$v_SOURCE_ACTIVITY_NAME' \
               TO '$v_TARGET_ACTIVITY_NAME' \
               SET PROPERTIES (TRANSITION_CONDITION, DESCRIPTION) VALUES \
                    ('$v_CONDITION','$v_DESCRIPTION')
     }
     # set parameters
     puts "Setting parameters..."
     foreach v_param $v_pf_act_parameterz {
          set v_PARAMETER_NAME [lindex $v_param 0 ]
          set v_PARAMETER_VALUE [lindex $v_param 2 ]
          set v_DESCRIPTION [lindex $v_param 3 ]
          OMBALTER PROCESS_FLOW '$v_pf_proc_name' MODIFY ACTIVITY '$v_pf_act_name' \
               MODIFY PARAMETER '$v_PARAMETER_NAME' SET \
               PROPERTIES (VALUE,DESCRIPTION) VALUES ('$v_PARAMETER_VALUE','$v_DESCRIPTION')
     }
     puts "Reconcile complete for $v_pf_path/$v_pf_proc_name/$v_pf_act_name"
}
I hope I haven't lost too much in cutting and pasting! Anyway, if something is not clear, ask freely. I don't use OWB built-in process flows any more, but I should remember enough to explain my own code.
Antonio

Similar Messages

  • Scheduling Mappings and Process Flows

    Can you please tell me how to schedule mappings and process flows in a little detail, if anyone has done that? I think OEM can be used for this, but I don't have a clear-cut idea of how to do it. Can the mappings/process flows be scheduled without using the OEM console?
    Waiting for your suggestions.
    rgds
    -AP

    Hi
    You can schedule your mappings and process flows with OEM or a database job.
    You will find script templates in the OWB_HOME/owb/rtp/sql directory
    and you can schedule them.
    If you want to do it in OEM, use the oem_exec_template.sql file, creating an OEM job. Read the OWB documentation about it. If you have any questions about it, ask; I have done this procedure many times.
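    As a sketch of the database-job route with DBMS_SCHEDULER (everything here is a placeholder: the job name, the schedule, and above all the PL/SQL you put in job_action - for example the runtime call that oem_exec_template.sql makes for your process flow):
    begin
      dbms_scheduler.create_job(
        job_name        => 'RUN_MY_PROCESS_FLOW',       -- hypothetical job name
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin my_pf_wrapper; end;', -- placeholder for your runtime call
        start_date      => systimestamp,
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => TRUE);
    end;
    /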
    Ott Karesz
    http://www.trendo-kft.hu

  • How to schedule mappings to process flows?

    Hi,
    I have scheduled a calendar (job) which refers to a process flow. But how can I make sure that the mappings refer to the same process flow?
    E.g. I have scheduled a job at 10 AM, and I have created the process flow referring to that same scheduled job.
    My understanding is that there is a hierarchy: scheduled jobs > process flows > mappings.
    I have configured the process flow to run at a scheduled job; now I want the mappings to know to run at the same time as that schedule.
    And also, when I start the process flow, all the mappings should get executed.
    Is there any parameter to tell the process flow that all these mappings fall under it?
    Hope I have made myself clear.
    Can anyone please look into this query?
    Thanks in adv..

    When I double-click and open my process flow I am not able to see any mapping. We have stored procedures written:
    ln_exists NUMBER;
    LS_ERROR VARCHAR2(200);
    LD_START_PERIOD_DT DATE;
    LD_END_PERIOD_DT DATE;
    EX_PF_NOT_VALID EXCEPTION;
    EX_SUB_PF_NOT_VALID EXCEPTION;
    EX_LAYER_NOT_VALID EXCEPTION;
    EX_MODULE_NOT_VALID EXCEPTION;
    EX_DATE_FORMAT_ERR EXCEPTION;
    BEGIN
      --1: Check the Process Flow parameter value
      IF IP_PF IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_process_flow_par
         where process_flow = IP_PF;
        IF ln_exists = 0 THEN
          RAISE EX_PF_NOT_VALID;
        END IF;
      END IF;
      --2: Check the Sub Process Flow parameter value
      IF IP_SUB_PF IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_sub_pf_par
         where sub_pf_code = IP_SUB_PF;
        IF ln_exists = 0 THEN
          RAISE EX_SUB_PF_NOT_VALID;
        END IF;
      END IF;
      --3: Check the Layer Code parameter value
      IF IP_LAYER IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_lookup_code
         where lookup_type = 'LAYER_CODE'
           and lookup_code = IP_LAYER;
        IF ln_exists = 0 THEN
          RAISE EX_LAYER_NOT_VALID;
        END IF;
      END IF;
      --4: Check the Module Code parameter value
      IF IP_MODULE IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_lookup_code
         where lookup_type IN ('SOURCE_SYSTEM','SUBJECT_CODE')
           and lookup_code = IP_MODULE;
        IF ln_exists = 0 THEN
          RAISE EX_MODULE_NOT_VALID;
        END IF;
      END IF;
      --5: Check Start Period Date & End Period Date format
      BEGIN
        IF IP_START_PERIOD_DT IS NOT NULL THEN
          LD_START_PERIOD_DT := TO_DATE(IP_START_PERIOD_DT,'YYYY-MM-DD');
        END IF;
        IF IP_END_PERIOD_DT IS NOT NULL THEN
          LD_END_PERIOD_DT := TO_DATE(IP_END_PERIOD_DT,'YYYY-MM-DD');
        END IF;
      EXCEPTION
        WHEN OTHERS THEN
          RAISE EX_DATE_FORMAT_ERR;
      END;
    EXCEPTION
      WHEN EX_DATE_FORMAT_ERR THEN
        LS_ERROR := 'Date Format is not valid, please check (FORMAT: YYYY-MM-DD HH24 /YYYYMMDDHH24)';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
      WHEN EX_PF_NOT_VALID THEN
        LS_ERROR := 'The Process Flow Value is not valid, please check table adm_process_flow_par';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
      WHEN EX_SUB_PF_NOT_VALID THEN
        LS_ERROR := 'The Sub Process Flow Value is not valid, please check table adm_sub_pf_par';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20003,LS_ERROR);
      WHEN EX_LAYER_NOT_VALID THEN
        LS_ERROR := 'The Layer Code Value is not valid, please check adm_lookup_code (lookup_type = "LAYER_CODE")';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20004,LS_ERROR);
      WHEN EX_MODULE_NOT_VALID THEN
        LS_ERROR := 'The Module Code Value is not valid, please check adm_lookup_code (lookup_type IN ("SOURCE_SYSTEM","SUBJECT_CODE"))';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20005,LS_ERROR);
    END;
    Can anyone throw some light on this issue?

  • Input Parameter in process flow

    Hi, I can't pass a parameter into a process flow. I'm following these steps:
    To accomplish the task you should do the following:
    1. Define input parameters for all your mappings.
    2. Collect the mappings within a process flow.
    3. Define the process parameters.
    To do this, within the Process editor point somewhere on the process diagram - don't select any activities. In the bottom-left window you will see the list of all the process' activities. Select the "Start" activity and press the Add button below. A new parameter line should appear under the Start activity. Edit the name and type of the parameter, and repeat to create the remaining input process parameters.
    4. Select a mapping with input parameters. The selected activity will appear in the bottom-left window. Click on the plus sign to expand the mapping parameters. For each parameter, set its Binding to the corresponding process parameter. Continue with the other mappings.
    The process flow receives the input parameter, but it is not passed through to the mappings.
    If I run the maps one by one, the result is OK.
    Thanks and regards

    Hi
    How do you know the process flow is receiving the input, and that the mapping is not?
    So you followed this post
    http://blogs.oracle.com/warehousebuilder/2009/01/process_flow_parameters_1.html
    but with a map as the activity?
    Cheers
    David

  • ORA-01017 invalid username/password on execution of process flow

    Hi, there are one or two similar issues mentioned in the forum, but I think none are resolved.
    Basically we have target repositories set up with a single user (not the schema owner) used for the location credentials, e.g.
    REP1_LOCATION pointing to schema REP1 but username OWBRT
    REP2_LOCATION pointing to schema REP2 but also username OWBRT
    The user OWBRT is defined as a user within OWB and is also defined as a target repository user.
    The objects deploy OK without errors, but when running any process flow or mapping we get an ORA-01017 invalid username/password.
    If we set the location details to use the schema owner instead of OWBRT, mappings and process flows work for that schema.
    Are there any special privileges/grants that need to be set up for the username attached to a location in order for this to work?
    We tried granting DBA privileges as a test (which you would think would work), but it didn't.
    Thanks
    Paul

    Hi Paul,
    under what user did you try to start the process flows/mappings? Did you use a configuration with split design and runtime OWB repositories (i.e. different DBs for the design and runtime repositories)?
    You should start process flows/mappings under an OWB user (created/registered in the target DB), and
    you need to register the target locations (database and workflow locations) under this OWB user.
    Another variant - you can enable the preference options "Persist location password in metadata" and "Share location password during run time" and then register the locations under any OWB user.
    I had a similar problem and resolved it; also look at this thread:
    Re: Complex condition don't work with variables in OWB 10.2
    Regards,
    Oleg

  • Passing parameters from shell script to OWB process flow

    Hi all,
    I am running an OWB process flow (using the template script provided by Oracle) and I want to pass two date parameters as shown below:
    sqlplus -s $SQL_USER/$SQL_PWD@$ORACLE_SID @$HOME_DIR/src/vmc_oem_exec_script.sql OWB_OWNER VMC_POST_DW_PF_LOC_SIT PROCESS VMC_NM1_PORT_MAIN "," "P_DATE_FROM=$DATE_FROM,P_DATE_TO=$DATE_TO"
    How do I catch those values in the process flow and pass them on to the mappings in the process flow?
    Do I need to create PF variables with the same names, or will any name do?
    Thanks in advance

    This document explains how to pass data between activities in a process flow.
    I am passing parameters from a shell script.
    Any ideas how to pass parameters from a shell script, initialize the process flow variables based on those values, and then pass them further to the mappings?
    Thanks

  • Error time of running process flow.

    Hi All,
    I am new to data warehousing. I have installed 11gR2 on a Linux machine without any error.
    Now I have imported all objects of ocdm_sys and ocdm_sample_sys. When I try to start the process flow,
    it throws the error RTC-5190 'There is no location associated with current module, operation abandoned'.
    Can you please also tell me the steps for running the ETL process? I am trying with the ocdm_sample_sys user and all the sample data.
    Please tell me the steps I need to do after installation to run the ETL process, and whether any changes need to be made.
    At the time of workflow creation I got the error below - is the above error related to this?
    WorkflowCA: Workflow component container deployed in OC4J: /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -Doc4j.autoUnpackLockCount=-1 -Doracle.security.jazn.config=/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_Workflow_Component_Container/config/jazn.xml -Djava.security.properties=/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/config/jazn.security.props -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/oc4j.jar -userThreads -config /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_Workflow_Component_Container/config/server.xml
    WorkflowCA: Executing :/u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/admin.jar ormi://v-agilent29:6041 oc4jadmin welcome -application WFALSNRSVCApp -testDataSource -location jdbc/WorkflowDS -username OWF_MGR
    WorkflowCA: :nullError: Could not connect to the remote server. Please check if the server is down or the client is using invalid host, ORMI port or password to connect: Connection refused for app:WFALSNRSVCApp
    WorkflowCA: Executing: /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/admin.jar ormi://v-agilent29:6041 oc4jadmin welcome -application WFALSNRSVCApp -installDataSource -url jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=v-agilent29)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ocdm))) -username OWF_MGR -password ->pwForOwfMgr -className com.evermind.sql.DriverManagerDataSource -location jdbc/WorkflowDS -xaLocation jdbc/xa/WorkflowDS -ejbLocation jdbc/WorkflowDS -connectionDriver oracle.jdbc.driver.OracleDriver
    WFCA OUT: Error: Could not connect to the remote server. Please check if the server is down or the client is using invalid host, ORMI port or password to connect: Connection refused
    WorkflowCA: Exit Val: 2
    WorkflowCA: Created a redirected data source with application WFALSNRSVCApp :
    WorkflowCA: Executing :/u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/admin.jar ormi://v-agilent29:6041 oc4jadmin welcome -application WFMLRSVCApp -testDataSource -location jdbc/WorkflowDS -username OWF_MGR
    WorkflowCA: :nullError: Could not connect to the remote server. Please check if the server is down or the client is using invalid host, ORMI port or password to connect: Connection refused for app:WFMLRSVCApp
    WorkflowCA: Executing: /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/admin.jar ormi://v-agilent29:6041 oc4jadmin welcome -application WFMLRSVCApp -installDataSource -url jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=v-agilent29)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ocdm))) -username OWF_MGR -password ->pwForOwfMgr -className com.evermind.sql.DriverManagerDataSource -location jdbc/WorkflowDS -xaLocation jdbc/xa/WorkflowDS -ejbLocation jdbc/WorkflowDS -connectionDriver oracle.jdbc.driver.OracleDriver
    WFCA OUT: Error: Could not connect to the remote server. Please check if the server is down or the client is using invalid host, ORMI port or password to connect: Connection refused
    WorkflowCA: Exit Val: 2
    WorkflowCA: Created a redirected data source with application WFMLRSVCApp :
    WorkflowCA: Executing: /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -Djava.security.properties=/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/config/jazn.security.props -Doracle.security.jazn.config=/u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/OC4J_Workflow_Component_Container/config/jazn.xml -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/jazn.jar -user oc4jadmin -password welcome -adduser jazn.com pwForOwfMgr <WFCA WF PASSWORD>
    The specified user already exists in the system.
    WorkflowCA: Created obfusticated password for redirect datasource:
    WorkflowCA: Executing: /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java -jar /u01/app/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/home/admin.jar ormi://v-agilent29:6041 oc4jadmin welcome -shutdown
    WFCA OUT: Error: Could not connect to the remote server. Please check if the server is down or the client is using invalid host, ORMI port or password to connect: Connection refused
    WorkflowCA: Exit Val: 2
    WorkflowCA: Executed OC4J Admin script to shut down the OC4J instance :
    WorkflowCA: Mon Jun 21 18:03:30 IST 2010
    WorkflowCA: Workflow Configuration has completed with error.
    WorkflowCA: Terminating...
    I dropped the owf_mgr schema 3-4 times and tried again, but I am still getting the error.

    Hi
    If you check out the post here you will see the script sqlplus_exec_template.sql mentioned; this can be used to execute mappings and process flows from the command line:
    http://blogs.oracle.com/warehousebuilder/2008/11/using_3rd_party_schedulers_with_owb_1.html
    Cheers
    David

  • Problems scheduling process flow

    Using OWB I have created a number of mappings and also a process flow using the defined mappings. All the mappings and the process flow are successfully deployed and can be executed without problems from the Deployment Manager. I can even successfully execute my process from the OS command line using the sqlplus_exec_template.sql file:
    sqlplus vlad_proiect/vlad@proiect @F:\Oracle\OraWB92\owb\rtp\sql\sqlplus_exec_template.sql vlad_runtime ORACLE_WORK_FLOW PROCESS INCARC "," ","
    where :
    vlad_proiect/vlad my user name and passwd
    proiect my database SID
    vlad_runtime is the Runtime Repository owner
    ORACLE_WORK_FLOW is the location of the process flow
    INCARC is the process name.
    I tried to do the same thing using OEM. I followed the example
    http://otn.oracle.com/products/warehouse/htdocs/oem_scheduling_viewlet_swf.html
    and the indications from the WB User Guide chapter Scheduling Mappings and Process Flows (13-19), but the scheduled job fails with the following reason:
    VNI-2015 : The Node preferred credentials for the target node are either invalid or do not have sufficient privileges to complete the operation.
    On Windows platforms, the Node credentials specified for the Windows target should have the "Logon as a batch job" privilege.
    I have created the job as follows:
    Tab General
     Job name: TEST
    Selected target: Proiect
    Tab Tasks
    Tasks: Run SQL*Plus script
    Tab Parameters
    Parameters: vlad_runtime ORACLE_WORK_FLOW PROCESS INCARC "," ","
    Override preferred credentials: checked
    User name: vlad_runtime_acc the Runtime Repository access user
    Password: vlad the Runtime Repository acc user passwd
    I have imported the script oem_exec_template.sql
    And in the Preferred credentials (from Configuration->Preferences->Administrator preferences) I set :
    Database : Proiect with user vlad_proiect and my passwd.
    Node : localhost with user vlad_runtime and his passwd.
    Is something somewhere set wrong (or not set at all?), or ...?

    Razvan,
    When OEM executes a scheduled job, it will log on to the node (i.e. your machine) as the user specified in the preferred credentials of OEM. That user (provided there is one... which is not necessarily set by default) is an operating system user and must have the 'logon as batch job' privilege. You can set this option (on Windows 2000) by going into Control Panel, Administrative Tools, Local Security Policy, Local Policies, User Rights Assignment. Look for 'logon as batch job' and make sure the user being used to log on to the node has the privilege.
    This setup is actually provided in the installation guide as well.
    Hope this helps,
    Mark.

  • OWB Process Flows

    Hi,
    We have a lot of process flows already created. They used the Transformation object in many process flows, but they didn't write any PL/SQL code for these transformations; they wrote shell scripts, which are causing a lot of problems. My question is: can we write shell scripts for transformations? What I know is that we have to write PL/SQL code for transformation objects in process flows. If anybody has PL/SQL code for process flow transformations, can you share it?
    Thanks
    SS

    HI SS,
    sounds like you've got some issues there.
    You say they did not write PL/SQL. But an OWB transformation is PL/SQL. For example, it could be an entry point in a package, like WB_RT_CONVERSIONS.to_date(date). It could simply be a procedure, like WB_Disable_All_Constraints(p_table VARCHAR2). In either case, it is compiled PL/SQL. And of course mappings become compiled packages in the target schema location.
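    For illustration, a minimal sketch of such a stand-alone procedure (the name and the dynamic SQL here are hypothetical; the point is that any compiled procedure can serve as a transformation):
    -- hypothetical example of a procedure usable as an OWB transformation
    CREATE OR REPLACE PROCEDURE wb_truncate_stage (p_table IN VARCHAR2) IS
    BEGIN
      -- truncate a staging table whose name is passed in
      EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_table;
    END;
    /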
    I'm puzzled. If there's no PL/SQL, then what does the actual work?
    Which leads me on to my next point. On the process flow palette, there is a Transformation object. You say they used many transformation objects. Then you say "they wrote shell scripts for transformation". But that's operating system code. You don't run OS code inside process flows (except in some unusual circumstances).
    What do these shell scripts do? What do they execute?
    Finally, you write: "we have to write PL/SQL code for transformation objects in process flows". But surely those are your mappings, no? And you don't write them, you generate them, no? And you wouldn't begin working on process flows until your mapping streams were complete, would you? Sorry, but I'm baffled by this comment.
    I'd be obliged if you could describe your software stack in a lot more detail, please. For examples, please see this thread: 10gR2: How do you run OWB from Enterprise Manager (OEM) and Scheduler? It is a discussion on running Mappings from Process Flows from Scheduler.
    I have to confess that right now, I cannot even see the shape of the stack you are trying to run, let alone conceptualize a solution, nor even grasp the meaning of your questions. More, and more detailed descriptions, would help enormously.
    Cheers,
    Donna

  • How to Schedule OWB Process Flows through OEM

    Hi,
    I have successfully created mappings and process flows for my project.
    But now I want to schedule the process flows through OEM.
    Following are my DB configurations..
    OWB Version : 10g R1 (Oracle SID LVDSGDEV ) on windows
    Runtime Owner: ATS_RUN_OWNER (Oracle SID LVDSGDEV )
    Runtime User : ATS_RUN_USER (Oracle SID LVDSGDEV )
    Work Flow Schema: OWF_MGR (Oracle SID LVDSGDEV)
    Source & Target DB : 10g (Oracle SID LVRUNDEV) on windows
    Process which I want to Schedule is....
    Process Flow Location: ATS_LOC
    Process Flow Name: PKG_DM
    Process Name: BASE_LOAD
    Can someone please suggest me, the procedure to schedule this process flow.....

    You may choose just to schedule the mapping directly through OEM. Create an OEM SQLPlus job that calls the script:
    @c:\owb_home\owb\rtp\sqlplus_exec_template.sql;
    Check the beginning of sqlplus_exec_template.sql - it shows examples of the parameters it accepts and has some documentation. It works nicely.
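    For example, following the parameter format used elsewhere in this thread (the connecting user, password and script path are placeholders), the process above could be started with something like:
    sqlplus <runtime_access_user>/<password>@LVDSGDEV @c:\owb_home\owb\rtp\sql\sqlplus_exec_template.sql ATS_RUN_OWNER ATS_LOC PROCESS BASE_LOAD "," ","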

  • Mappings not triggered from Process flow?

    Hi,
    I have some mappings under a process flow. They are triggered daily, but today they were not triggered. Can anyone tell me how to go about this?

    Hi,
    If you have deployed the schedule then it will initially be in the DISABLED state.
    You can check that by logging into SQL*Plus as the target user and running:
    select job_name, state from user_scheduler_jobs;
    Use the Schedule tab on the Control Center Jobs panel in the Control Center Manager to schedule
    the job. Highlight and right-click the job, and from the pop-up menu select "Start". Use the same query to check the current status:
    select job_name, state from user_scheduler_jobs;
    The query will show whether the scheduled job is SCHEDULED, RUNNING or SUCCEEDED.
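    If you prefer to stay in SQL*Plus, a minimal sketch of doing the same with DBMS_SCHEDULER (substitute the job name returned by the query above):
    exec dbms_scheduler.enable('MY_OWB_JOB');   -- take the job out of the DISABLED state
    exec dbms_scheduler.run_job('MY_OWB_JOB');  -- run it immediately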
    Thanks,
    Sutirtha

  • Export and Import of mappings/process flows etc

    Hi,
    We have a single repository with multiple projects for DEV/UAT and PROD of the same logical project. This is a nightmare for controlling releases to PROD, and in fact I suspect we have a corrupt repository as a result. I plan to split the repository into 3 separate databases so that we have a design repository for each of DEV, UAT and PROD. To control code migrations between these I plan to use metadata export and subsequent import into UAT and then PROD once tested. I have used this successfully before on a project, but am worried about inherent bugs with metadata export/import (been bitten before with Oracle Portal). So can anyone advise what pitfalls there may be with this approach, and in particular whether anyone has experienced loss of metadata between export and import? We have a complex warehouse with hundreds of mappings, process flows, SQL*Loader flat-file loads etc. I have experienced process flow imports that seem to lose their links to the mappings they encapsulate.
    Thanks for any comments,
    Brandon

    This should do the trick for you: it looks for "PARALLEL", therefore it only removes the APPEND PARALLEL hint and leaves other hints as-is....
    # set current location
    set path "C:/TMP"
    # Project parameters
    set root "/MY_PROJECT"
    set one_Module "MY_MODULE"
    set object "MAPPINGS"
    # OMBPlus and Tcl related parameters
    set action "remove_parallel"
    set datetime [clock format [clock seconds] -format %Y%m%d_%H%M%S]
    set timestamp [clock format [clock seconds] -format %Y%m%d-%H:%M:%S]
    set ext ".log"
    set sep "_"
    set ombplus "OMBplus"
    set omblogname $path/$one_Module$sep$object$sep$datetime$sep$ombplus$ext
    set OMBLOG $omblogname
    set logname $path/$one_Module$sep$object$sep$datetime$ext
    set log_file [open $logname w]
    set word "PARALLEL"
    set i 0
    # Connect to OWB Repository
    OMBCONNECT .... your connect string
    # Ignore errors that occur in any command that is part of the script and move on to the next command
    set OMBCONTINUE_ON_ERROR ON
    OMBCC "'$root/$one_Module'"
    # Search the mappings for Parallel in loading/extraction operators
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel"
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
    # the same check-and-fix applies to all four operator types that carry
    # hints, and to both the loading and the extraction hint property
    foreach mapName [OMBLIST MAPPINGS] {
        foreach opType [list TABLE DIMENSION CUBE VIEW] {
            foreach opName [OMBRETRIEVE MAPPING '$mapName' GET $opType OPERATORS] {
                foreach hint [list LOADING_HINT EXTRACTION_HINT] {
                    foreach prop [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES ($hint)] {
                        if { [regexp $word $prop] == 1 } {
                            incr i
                            puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Operator: $opName, $hint: $prop"
                            puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Operator: $opName, $hint: $prop"
                            OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES ($hint) VALUES ('')
                            OMBCOMMIT
                        }
                    }
                }
            }
        }
    }
    if { $i == 0 } {
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Not found any Loading/Extraction Operators set at Parallel"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Not found any Loading/Extraction Operators set at Parallel"
    } else {
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel"
    }
    close $log_file
    Enjoy!
    Michel

  • Process Flow - ordering of mappings

    hello group,
    I've developed a simple process flow which runs several mappings in a sequence.
    It is required that mapping A is loaded before mapping B.
    When I run this process flow I get a little bit confused:
    the job details show mapping B loaded first,
    and the Repository Browser shows the same information.
    Does this really mean that mapping B is loaded before mapping A?
    How can I check whether the ordering was OK?
    Is there some mechanism inside the process flow editor to check this?
    thanks for your infos,
    s.v.e.n

    Hi,
    OWB has a problem when you place a new operator (e.g. a mapping) into an existing workflow or change the transitions. After this, the outgoing transitions of the source object of the new object get a wrong, duplicate number.
    As rep_owner you can find these problems with the following script:
    select *
      from all_iv_process_transitions
     where (source_activity_id, transition_order) in
           (select source_activity_id, transition_order
              from all_iv_process_transitions
             group by source_activity_id, transition_order
            having count(*) > 1)
     order by source_activity_id, transition_order;
    You must change the number for these transitions.
    PS: To check the correct order you can use the Workflow Manager. There you can see the runtime ordering of a workflow.
    Regards,
    Detlef

  • How to check mappings execution time in Process flow

    Hi All,
    We created one process flow and scheduled it. It completed successfully after 30 minutes.
    The process flow contains 3 mappings: the first mapping completes successfully, then the second mapping starts; after the second completes successfully, the third mapping starts and completes successfully. Success e-mails are then generated.
    I would like to know which mapping is taking a long time to execute.
    Could you please suggest how we can find which mapping is taking a long time to execute?
    I don't want to run each mapping individually and check the execution time.
    Regards,
    Ava.

    Execute the query below in the OWB owner or user schema.
    In place of '11111', give the execution id from the Control Center.
    select map_run.number_records_inserted,
           map_run.number_records_merged,
           map_run.number_records_updated,
           exe.execution_audit_id, exe.elapse_time, exe.execution_name,
           exe.execution_audit_status, map_run.map_name
      from all_rt_audit_map_runs map_run, all_rt_audit_executions exe
     where exe.execution_audit_id = map_run.execution_audit_id(+)
       and exe.execution_audit_id > '11111'
     order by exe.execution_audit_id desc;
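    Since the question is which mapping runs longest, a variant of the same join can be sorted by elapsed time instead (same views and columns as above; '11111' is again the execution id to start from):
    select map_run.map_name, exe.elapse_time
      from all_rt_audit_map_runs map_run, all_rt_audit_executions exe
     where exe.execution_audit_id = map_run.execution_audit_id(+)
       and exe.execution_audit_id > '11111'
     order by exe.elapse_time desc;
    Cheers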
    Nawneet

  • Tools for running process flows and mappings

    The operations/production area is responsible for running process flows and mappings on a day-by-day basis. As a developer, I need to implement a solution that allows them to run these artifacts. For this purpose, is there any tool apart from the Control Center?
    Thanks

    The scripts you mentioned (sqlplus_exec_background_template.sql and sqlplus_exec_template.sql) can be used for command line execution of mappings.
    We do not run these in Oracle Workflow, as we already have an enterprise scheduling platform - Redwood Cronacle in our case. (One also finds AppWorx and others in this area; see e.g. http://www.bmc.com/USA/Corporate/attachments/BMC_Article2.pdf)
    Regards, Erik Ykema
