Process Flow - ordering of mappings

hello group,
i've developed a simple process flow which loads several mappings in a sequence.
the requirement is that mapping A has to be loaded before mapping B.
when i run this process flow i get a little confused:
in the job details mapping B is shown as loaded before mapping A,
and the repository browser shows the same information.
does this really mean that mapping B is loaded before mapping A?
or how can i check whether the ordering was ok?
is there some mechanism inside the process flow editor to check this?
thanks for your infos,
s.v.e.n

Hi,
OWB has a problem when you place a new operator (e. g. a mapping) into an existing workflow or change the transitions. After this, the outgoing transitions of the source object of this new object get a wrong and duplicate number.
You can find these problems as rep_owner with the following script:
select *
  from all_iv_process_transitions
 where (source_activity_id, transition_order) in
       (select source_activity_id, transition_order
          from all_iv_process_transitions
         group by source_activity_id, transition_order
        having count(*) > 1)
 order by source_activity_id, transition_order;
You must change the number for these transitions.
PS: To verify the correct order you can use the Workflow Manager. There you can see the runtime ordering of a workflow.
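If you just want to review the ordering for one process flow at design time, a simple check against the same public view (the process flow name below is only a placeholder) could look like this:
select source_activity_name, transition_order, target_activity_name
  from all_iv_process_transitions
 where process_name = 'MY_PROCESS_FLOW'
 order by source_activity_id, transition_order;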
Regards,
Detlef

Similar Messages

  • How to check mappings execution time in Process flow

    Hi All,
We created one process flow and scheduled it. It completed successfully after 30 minutes.
The process flow contains 3 mappings: the first mapping completes successfully, then the second mapping starts; after the second mapping completes successfully, the third mapping starts and completes successfully. Success emails are then generated.
I would like to know which mapping is taking a long time to execute.
Could you please suggest how we can find which mapping is taking a long time to execute?
I don't want to run each mapping individually and check the execution time.
    Regards,
    Ava.

Execute the query below in the OWB owner or user schema.
In place of '11111', give the execution id from the Control Center.
select Map_run.NUMBER_RECORDS_INSERTED,
       map_run.NUMBER_RECORDS_MERGED,
       map_run.NUMBER_RECORDS_UPDATED,
       exe.execution_audit_id,
       Exe.ELAPSE_TIME,
       exe.EXECUTION_NAME,
       exe.EXECUTION_AUDIT_STATUS,
       map_run.MAP_NAME
  from ALL_RT_AUDIT_MAP_RUNS Map_run, ALL_RT_AUDIT_EXECUTIONS Exe
 where exe.EXECUTION_AUDIT_ID = map_run.EXECUTION_AUDIT_ID(+)
   and exe.execution_audit_id > '11111'
 order by exe.execution_audit_id desc;
Cheers
    Nawneet
    Edited by: Nawneet on Feb 22, 2010 4:26 AM
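If you want to see the slowest mappings first rather than scanning by execution id, a variation of the same query (same OWB public audit views as above; 11111 is again just a placeholder execution id) is:
select map_run.MAP_NAME,
       exe.ELAPSE_TIME,
       exe.EXECUTION_AUDIT_STATUS,
       map_run.NUMBER_RECORDS_INSERTED,
       map_run.NUMBER_RECORDS_UPDATED,
       map_run.NUMBER_RECORDS_MERGED
  from ALL_RT_AUDIT_MAP_RUNS map_run, ALL_RT_AUDIT_EXECUTIONS exe
 where exe.EXECUTION_AUDIT_ID = map_run.EXECUTION_AUDIT_ID
   and exe.EXECUTION_AUDIT_ID > 11111
 order by exe.ELAPSE_TIME desc;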

  • How to schedule mappings to process flows?

    Hi,
I have scheduled a calendar (job) which refers to a process flow. But how can I make sure that the mappings refer to the same process flow?
E.g. I have scheduled a job at 10 AM, and I have created the process flow for 10 AM referring to the same scheduled job.
My understanding here is that there is a hierarchy: Scheduled jobs > Process Flows > Mappings.
I have configured the process flow to run at a scheduled job; now I want the mappings to run at the same time as that schedule.
And also, when I start the process flow, all the mappings should get executed.
Is there any parameter to tell the process flow that all these mappings fall under it?
Hope I have made myself clear.
    Can anyone please look into this query?
    Thnks in adv..

    When I double click and open my process flow I am not able to see any mapping. We have stored procedures written:
    ln_exists NUMBER;
    LS_ERROR VARCHAR2(200);
    LD_START_PERIOD_DT DATE;
    LD_END_PERIOD_DT DATE;
    EX_PF_NOT_VALID EXCEPTION ;
    EX_SUB_PF_NOT_VALID EXCEPTION ;
    EX_LAYER_NOT_VALID EXCEPTION ;
    EX_MODULE_NOT_VALID EXCEPTION ;
    EX_DATE_FORMAT_ERR EXCEPTION ;
    BEGIN
    --1: Check the Process Flow parameter value
    IF IP_PF IS NOT NULL THEN
    select count(*)
    into ln_exists
    from adm_process_flow_par
    where process_flow = IP_PF;
    IF ln_exists =0 THEN
    RAISE EX_PF_NOT_VALID;
    END IF;
    END IF;
    --2: Check Sub Process Flow Parameters value
    IF IP_SUB_PF IS NOT NULL THEN
    select count(*)
    into ln_exists
    from adm_sub_pf_par
    where sub_pf_code = IP_SUB_PF;
    IF ln_exists = 0 then
    RAISE EX_SUB_PF_NOT_VALID;
    END IF;
    END IF;
    --3:Check Layer Code Parameter Value
    IF IP_LAYER IS NOT NULL THEN
    select count(*)
    into ln_exists
    from adm_lookup_code
    where lookup_type='LAYER_CODE'
    and lookup_code= IP_LAYER;
    IF LN_EXISTS =0 THEN
    RAISE EX_LAYER_NOT_VALID;
    END IF;
    END IF;
    --4: Check Module Code Parmeter Value
    IF IP_MODULE IS NOT NULL THEN
    select count(*)
    into ln_exists
    from adm_lookup_code
    where lookup_type IN ('SOURCE_SYSTEM','SUBJECT_CODE')
    and lookup_code= IP_MODULE;
    IF LN_EXISTS =0 THEN
    RAISE EX_MODULE_NOT_VALID;
    END IF;
    END IF;
    --5: Check start Period date & End Period Date Format
    BEGIN
    IF IP_START_PERIOD_DT IS NOT NULL THEN
    LD_START_PERIOD_DT := TO_DATE(IP_START_PERIOD_DT,'YYYY-MM-DD');
    END IF;
    IF IP_END_PERIOD_DT IS NOT NULL THEN
    LD_END_PERIOD_DT := TO_DATE(IP_END_PERIOD_DT,'YYYY-MM-DD');
    END IF;
    EXCEPTION
    WHEN OTHERS THEN
    RAISE EX_DATE_FORMAT_ERR;
    END;
    EXCEPTION
    WHEN EX_DATE_FORMAT_ERR THEN
    LS_ERROR := 'Date Format is not valid ,please check (FORMAT: YYYY-MM-DD HH24 /YYYYMMDDHH24)';
    SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
    RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
    WHEN EX_PF_NOT_VALID THEN
    LS_ERROR := 'The Process Flow Value is not valid ,please check table adm_process_flow_par';
    SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
    RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
    WHEN EX_SUB_PF_NOT_VALID THEN
    LS_ERROR := 'The Sub Process Flow Value is not valid ,please check table adm_sub_pf_par';
    SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
    RAISE_APPLICATION_ERROR(-20003,LS_ERROR);
    WHEN EX_LAYER_NOT_VALID THEN
    LS_ERROR := 'The Layer Code Value is not valid ,please check adm_lookup_code(lookup_type="LAYER_CODE")';
    SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
    RAISE_APPLICATION_ERROR(-20004,LS_ERROR);
    WHEN EX_MODULE_NOT_VALID THEN
LS_ERROR := 'The Module Code Value is not valid ,please check adm_lookup_code(lookup_type IN ("SOURCE_SYSTEM","SUBJECT_CODE"))';
    SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
    RAISE_APPLICATION_ERROR(-20005,LS_ERROR);
    END;
    Can anyone throw some light on this issue?
    Edited by: user11001347 on May 11, 2010 11:46 PM

  • Process flows are not running in OWB

I have a process flow which has a main process, and this main process calls 3 subprocesses and 1 external process. This workflow was running fine, but it suddenly stopped working, and when I try to run it from the WF monitor it gives the following error.
    Activity CALWINDW_INC_LOAD_PF_WE
    Result Exception
    Error Name -20001
    Error Message ORA-20001: Task CALWINDW_INC_DELETE_PF does not exist. Please check that the ProcessFlow has been deployed successfully. ORA-01403: no data found
    Error Stack Wf_Engine_Util.Function_Call(WB_RT_WORKFLOW_UTIL.EXECUTE_TASK, CLWNPKG, 200509200915, 37461, RUN).
I tried running it from the OWB deployment manager but it didn't work.
I have checked all the process flows and the mappings; all are deployed and all the mappings run separately. I tried dropping and deploying the process flow again and it deployed successfully without any error. We have tried rebooting the server, creating a new PF location and new process flows, but nothing worked out. Because of this issue we can't run the whole process. Any help will be appreciated, please.

Maybe your chain was waiting at the scheduled time, as there were no adequate processes available on the app server to take up your job.
Check with your Basis team (and also your users) about the optimal time to schedule the job, and reschedule your jobs.
    Ravi Thothadri

  • Process flow stuck while execution - Fork being used

    Hi,
We are using the fork feature in the process flow. Three mappings are forked; if all 3 complete successfully, then 'END_SUCCESS' needs to be reached. This is done by routing the success transition from the 3 mappings to the 'AND' process and then from 'AND' to 'END_SUCCESS'. If any one of the mappings fails, 'END_FAILURE' needs to be reached. This is done by routing the success transition from the 3 mappings to the 'OR' process and then from 'OR' to 'END_FAILURE'.
    When this process flow is executed, 2 mappings are successfully completed; the third one is completed with errors. While viewing from workflow monitor, the success routes are clearly shown but the failure route is not shown. In the runtime audit browser, the mapping details are correctly shown (2 mappings with success and the third one with failure). The process flow is shown having a status of 'BUSY'. The process flow keeps on waiting and does not get completed/aborted automatically.
    Is there something which I have missed in the process flow?
    Thanks in advance,

    Reposting as there was a mistake in my previous posting:
    Hi,
We are using the fork feature in the process flow. Three mappings are forked; if all 3 complete successfully, then 'END_SUCCESS' needs to be reached. This is done by routing the success transition from the 3 mappings to the 'AND' process and then from 'AND' to 'END_SUCCESS'. If any one of the mappings fails, 'END_FAILURE' needs to be reached. This is done by routing the failure transition from the 3 mappings to the 'OR' process and then from 'OR' to 'END_FAILURE'.
    When this process flow is executed, 2 mappings are successfully completed; the third one is completed with errors. While viewing from workflow monitor, the success routes are clearly shown but the failure route is not shown. In the runtime audit browser, the mapping details are correctly shown (2 mappings with success and the third one with failure). The process flow is shown having a status of 'BUSY'. The process flow keeps on waiting and does not get completed/aborted automatically.
    Is there something which I have missed in the process flow?
    Thanks in advance,

  • Process flow?????

Hello,
   I'm new to the Financial area and currently I'm learning finance.
   I have got some theoretical knowledge about financial accounting,
   so I'm in need of the process flow of finance.
   I also want the flow with some practical scenario or examples.
   Where can I find this document?
   Hope to get some tips.
     Thanking you.
Regards,
Pravi.

    You need to set up a process flow with your mappings defined as activities, and then configure the transitions between them to depend on success/fail.
    What you are asking is the most basic of things you can do with process flows, so might I suggest reading the manual?
    http://download-east.oracle.com/docs/cd/B31080_01/doc/owb.102/b28223/processflows.htm#i1158939

  • Process Flow log shows RPE-01003 on mapping execution

The Process Flow log shows the following error during mapping execution:
    oracle.wh.runtime.platform.adapter.InfrastructureException: RPE-01003: An infrastructure condition prevented the request from completing.
    - no rows found for select into statement
The Process Flow never recovers from this error - owb_owner.all_rt_audit_executions shows the mapping in BUSY and I have to manually abort the process flow to recover.
    Steps to Reproduce Problem:
We recently upgraded our test system from 10.2 to 11.1 and the OWB client from 10.2.0.3 to 10.2.0.4.
During the upgrade the DBA updated the OWB location setting to 11.1.
After the upgrade I re-deployed several Process Flows, but I did not re-deploy the mappings that get called by the PF.
When I run the PF, the mapping gets stuck in 'BUSY' status (owb_owner.all_rt_audit_executions).
When I check the log it shows the following error:
    oracle.wh.runtime.platform.adapter.InfrastructureException: RPE-01003: An infrastructure condition prevented the request from completing.
    - no rows found for select into statement
    Full log:
    2009/04/23-12:33:10-CDT [1FA1BB6] Initializing execution for auditId= 2702852 parentAuditId= 2702837 topLevelAuditId=2702837 taskName= NIHUB_NI_ZWEB_PF:NIHUB_NIUP_ADDR_TYPES_ZWEB_MAP
    2009/04/23-12:33:10-CDT [1FA1BB6] oracle.wh.runtime.platform.adapter.InfrastructureException: RPE-01003: An infrastructure condition prevented the request from completing.
    - no rows found for select into statement
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.initialize(ExecutionContextImpl.java:1505)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.initialize(ExecutionController.java:32)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:50)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:23)
         at oracle.wh.runtime.platform.service.ExecutionManager.run(ExecutionManager.java:36)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.sql.SQLException: no rows found for select into statement
         at sqlj.runtime.error.Errors.raiseError(Errors.java:118)
         at sqlj.runtime.error.Errors.raiseError(Errors.java:60)
         at sqlj.runtime.error.RuntimeRefErrors.raise_NO_ROW_SELECT_INTO(RuntimeRefErrors.java:62)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.initialize(ExecutionContextImpl.java:1482)
         ... 5 more
    java.sql.SQLException: no rows found for select into statement
         at sqlj.runtime.error.Errors.raiseError(Errors.java:118)
         at sqlj.runtime.error.Errors.raiseError(Errors.java:60)
         at sqlj.runtime.error.RuntimeRefErrors.raise_NO_ROW_SELECT_INTO(RuntimeRefErrors.java:62)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.initialize(ExecutionContextImpl.java:1482)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.initialize(ExecutionController.java:32)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:50)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:23)
         at oracle.wh.runtime.platform.service.ExecutionManager.run(ExecutionManager.java:36)
         at java.lang.Thread.run(Thread.java:534)
    2009/04/23-12:33:10-CDT [1FA1BB6] Thread terminating due to fatal exception of type oracle.wh.runtime.platform.adapter.InfrastructureException
    2009/04/23-12:33:10-CDT [1FA1BB6] oracle.wh.runtime.platform.adapter.InfrastructureException: RPE-01003: An infrastructure condition prevented the request from completing.
    - null
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.createMessage(ExecutionContextImpl.java:3686)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.reportMessage(ExecutionContextImpl.java:1084)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:116)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:23)
         at oracle.wh.runtime.platform.service.ExecutionManager.run(ExecutionManager.java:36)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.NullPointerException
         at oracle.wh.runtime.platform.service.controller.AdapterContextImpl.getPlatformConnection(AdapterContextImpl.java:83)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.createMessage(ExecutionContextImpl.java:3639)
         ... 5 more
    2009/04/23-12:33:10-CDT [1FA1BB6] {Cause Exception with null message}
    java.lang.NullPointerException
         at oracle.wh.runtime.platform.service.controller.AdapterContextImpl.getPlatformConnection(AdapterContextImpl.java:83)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.createMessage(ExecutionContextImpl.java:3639)
         at oracle.wh.runtime.platform.service.controller.ExecutionContextImpl.reportMessage(ExecutionContextImpl.java:1084)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:116)
         at oracle.wh.runtime.platform.service.controller.ExecutionController.execute(ExecutionController.java:23)
         at oracle.wh.runtime.platform.service.ExecutionManager.run(ExecutionManager.java:36)
         at java.lang.Thread.run(Thread.java:534)

First deploy all dependent objects of the Process Flow (tables, mappings, etc.), then try to execute the PF again.
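If you need to find executions that are still hanging in BUSY (as described above), a simple check against the same audit view (owb_owner prefix as used above) is:
select execution_audit_id, execution_name, execution_audit_status
  from owb_owner.all_rt_audit_executions
 where execution_audit_status = 'BUSY'
 order by execution_audit_id;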

  • ORA-12154: TNS:could not ... for imported pl-sql function in Process Flow

    Hi
    I included in a process flow a PL-SQL function as user defined transformation.
    This PL-SQL function resides in my source database, from where I imported it in OWB.
When I try to run the Process Flow (from the Control Center Manager) I get the following message upon executing that function.
    ORA-12154: TNS:could not resolve the connect identifier specified
When I don't include that function my process flow works. The mappings in my process flow read from the same source database as the one where the function resides,
so I don't think it is caused by a problem with tnsnames.ora.
Thanks in advance for anybody's help.
    Hans

    Hi Oleg,
    Thanks for your reply. Indeed I didn't create a runtime repository.
However, I don't like to do that for the source DB.
I wonder if I can get around it by executing a SQL*Plus script instead.
I tried this but my script never completes when it is started from the process flow. If I run it in SQL*Plus it runs fine and completes instantly.
    My script does:
    update 1 column in one record of a table.
    commit;
    exit;
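For reference, the script is essentially just the following (table, column and key names are placeholders):
update my_table
   set my_column = 'new_value'
 where my_key = 1;
commit;
exit;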
    What do you think?
    Thanks,
    Hans

  • OWB : Runtime Repo for Target and Process Flows on Different M/C

    Hi,
    I have an environment where the Runtime Repositories for the Target Warehouse and Process Flows ( Workflows ) are on different machines. I have deployed :
    * the mappings and Target Tables on the Target Warehouse
    * Process Flows on the Workflow Server which resides on a seperate M/C.
    When I try executing the Process Flows for the Mappings , it errors out indicating that the objects may not have been deployed. Looks like the Process Flows cannot see the Mappings deployed.
    Can somebody help me here ?????

I think the target schema and the runtime repository should be in the same instance.

  • How to execute a process flow?

I created a process flow that contains mappings; it generates XML code.
I want to know how to execute a process flow and how to know whether it succeeds or fails.
Do you have any idea?
    thank you

    Hi
You can also use 3rd party tools or batch scripts; there is a SQL script for executing any OWB executable (sqlplus_exec_template.sql in owb/rtp/sql). See the post here:
    http://blogs.oracle.com/warehousebuilder/2008/11/using_3rd_party_schedulers_with_owb_1.html
    Cheers
    David

  • Replace mappings in Process Flows

    Hi,
Is there an easy way of replacing an existing mapping in a PF with a new version of the same?
    thanks
    mahesh

    OK, here is some sample code you can play with. What it does is drop the mapping activity from the process flow and then replace it with a fresh version, rebuilding parameters and transitions.
I used an additional DB connection to query for problem activities. You can see the logic in the $v_mapquery query: alter this if you need something different. This is much faster than using pure scripting: there would be problems of RAM and speed while scanning all process flows using OMBPlus.
So I cache all the needed info in some lists and then print/reconcile it.
Having an extra open connection can cause some concurrency problems, but this was designed to run in batch and send back an e-mail with results.
    Some extra notes:
1-if a mapping is dropped I print the unbound activity but can't reconcile it
2-activities are replaced so they will be out of position when you enter the process flow editor
3-there was a bug in scripting related to SQLLoader mappings. I don't know if this has been fixed, so you may have problems with these.
    The following 3 procedures cache the process flow info.
    # Retrieve transitions of a proc. flow activity
    proc get_map_pf_transitions {\
         p_conn \
         p_process_flow_id \
         p_map_activity_id \
     p_pf_act_trans } {
     upvar $p_pf_act_trans v_pf_act_trans
         set v_query "select
         cast(decode (tr.source_activity_id, ?, 'OUTGOING','INCOMING') as varchar2(50)) TRANSITION_DIRECTION,
    process_id, process_name,
      transition_id, transition_name, business_name, description, condition,
                           source_activity_id, source_activity_name,
                            target_activity_id, target_activity_name
    from all_iv_process_transitions tr
           where tr.process_id = ?
           and (tr.source_activity_id = ?
                  or tr.target_activity_id = ?)
                order by source_activity_id";     
         set v_stmt [ $p_conn prepareCall $v_query ]
         $v_stmt {setLong int long} 1 $p_map_activity_id
         $v_stmt {setLong int long} 2 $p_process_flow_id
         $v_stmt {setLong int long} 3 $p_map_activity_id
         $v_stmt {setLong int long} 4 $p_map_activity_id
         set v_resultset [ $v_stmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_transindex -1
         set v_temptran [list ]
     while { [ $v_resultset next ] == 1  } {
         incr v_transindex;
         lappend v_temptran \
              [ list \
                   [$v_resultset getString TRANSITION_NAME] \
                   [$v_resultset getString DESCRIPTION] \
                   [$v_resultset getString CONDITION] \
                   [$v_resultset getString TRANSITION_DIRECTION] \
                   [$v_resultset getString SOURCE_ACTIVITY_NAME] \
                   [$v_resultset getString TARGET_ACTIVITY_NAME] ]
     }
     lappend v_pf_act_trans $v_temptran
}
    # Retrieve parameters of a proc. flow mapping
    proc get_map_pf_parameters {\
         p_conn \
         p_map_activity_id \
     p_pf_act_parameters } {
     upvar $p_pf_act_parameters v_pf_act_parameters
         set v_query "select
         parameter_owner_id, parameter_owner_name,
         PARAMETER_OWNER_ID, PARAMETER_NAME, DATA_TYPE, DEFAULT_VALUE parameter_value,
         BUSINESS_NAME, DESCRIPTION
         from ALL_IV_PROCESS_PARAMETERS pa
           where pa.parameter_owner_id = ?";     
         set v_stmt [ $p_conn prepareCall $v_query ]
         $v_stmt {setLong int long} 1 $p_map_activity_id
         set v_resultset [ $v_stmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_paramindex -1
         set v_tempparam [list ]
     while { [ $v_resultset next ] == 1  } {
         incr v_paramindex;
         lappend v_tempparam \
              [ list \
                   [$v_resultset getString PARAMETER_NAME] \
                   [$v_resultset getString DATA_TYPE] \
                   [$v_resultset getString PARAMETER_VALUE] \
                   [$v_resultset getString DESCRIPTION] ]
     }
     lappend v_pf_act_parameters $v_tempparam
}
    # Retrieve and cache all info needed to upgrade process flows
    # all parameters are lists which are appended, except the connection
    proc get_map_pf_unbound { \
         p_conn \
         p_upd_types \
         p_maps \
         p_pf_paths \
         p_pf_proc_names \
         p_pf_act_names \
         p_pf_act_parameters \
     p_pf_act_trans } {
     upvar $p_upd_types v_upd_types
         upvar $p_maps v_maps
         upvar $p_pf_paths v_pf_paths
         upvar $p_pf_proc_names v_pf_proc_names
         upvar $p_pf_act_names v_pf_act_names
         upvar $p_pf_act_parameters v_pf_act_parameters
         upvar $p_pf_act_trans v_pf_act_trans
    # query to retrieve unbound mappings (actually, I use views in the DB ... )
         set v_mapquery "with proc_maps as (
          select
       '/'||md.project_name||'/'|| information_system_name || '/' ||
                                pk.package_name pf_fqual_procpath,
       md.project_id pf_project_id,
       md.project_name pf_project_name,
       md.information_system_id pf_module_id,
       md.information_system_name pf_module_name,
       pk.package_id pf_package_id,
       pk.package_name pf_package_name,
       pr.process_id pf_process_id,
       pr.process_name pf_process_name,
       a.activity_id pf_activity_id,
       a.activity_name pf_activity_name,
       a.business_name pf_act_business_name,
       a.description pf_act_description,
       a.activity_type pf_act_activity_type,
       a.bound_object_id pf_act_bound_object_id,
       a.bound_object_name pf_act_bound_object_name
        from all_iv_process_activities a,
                                     all_iv_processes pr,
                                all_iv_packages pk,
                                all_iv_process_modules md
                 where
                 a.activity_type in (
                    'PlSqlMapProcessNoteTag', /* type for PLSQL mappings */
                    'SqlLdrProcessNoteTag' /* SQLLOADER mappings */)
                 and a.process_id = pr.process_id
                 and pk.package_id = pr.package_id
             and md.INFORMATION_SYSTEM_ID = pk.schema_id ),
  maps as (
           select
         '/'||md.project_name||'/'||md.information_system_name||
         '/'||  mp.MAP_NAME mp_fqual_mapname,
           md.PROJECT_ID mp_project_id,
           md.PROJECT_NAME mp_project_name,
           md.INFORMATION_SYSTEM_ID mp_module_id,
           md.INFORMATION_SYSTEM_NAME mp_module_name,
           mp.MAP_ID mp_map_id,
           mp.MAP_NAME mp_map_name,
           mp.BUSINESS_NAME MP_BUSINESS_NAME ,
           mp.DESCRIPTION MP_DESCRIPTION
            from all_iv_xform_maps mp,
                      all_iv_information_systems md
                where mp.INFORMATION_SYSTEM_ID = md.INFORMATION_SYSTEM_ID )
select * from (
    /* case 1: mapping name has changed */
    select
      '1-CHANGEDNAME' changetype,
    a.*,m.* from proc_maps a, maps m
                    where a.pf_act_bound_object_id = m.mp_map_id
                      and a.pf_act_bound_object_name <> m.mp_map_name
                      union all
    /* case 2: there's a new mapping with the old name... I'll reconcile only if
       the old mapping was dropped: otherwise you'll be in case 1:
       IMPORTANT- NOTE: I'll reconcile with a new mapping with the same name even
                  if found in a different module. */
    select
      '2-REPLACED' changetype,
    a.*,mnew.* from proc_maps a,
                                     maps mnew,
                                maps mold
                    where a.pf_act_bound_object_id <> mnew.mp_map_id
                 and a.pf_act_bound_object_name = mnew.mp_map_name
                 /* verify that mapping is in the current project */
                 and mnew.mp_project_id = a.pf_project_id
                 and a.pf_act_bound_object_id = mold.mp_map_id (+)
                 and mold.mp_map_id is null
                         union all
    /* case 3: no matching mapping. I'll warn the user that the activity is not bound nor bindable */
       select
       '3-MISSING' changetype,
       a.*,mnew.* from proc_maps a, maps mnew, maps mold
                    where
                     a.pf_act_bound_object_name = mnew.mp_map_name (+)
                 and a.pf_project_id = mnew.mp_project_id (+)
                 and a.pf_act_bound_object_id = mold.mp_map_id (+)
                 and mnew.mp_map_id is null
                 and mold.mp_map_id is null)
                 order by changetype, pf_fqual_procpath, pf_process_name, pf_activity_name";
    # query to retrieve connections between pflow activities
         set v_transquery "select
    process_id, process_name,
      transition_id, transition_name, business_name, description, condition,
                           source_activity_id, source_activity_name,
                            target_activity_id, target_activity_name
    from all_iv_process_transitions tr
           where tr.process_id = ?
           and (tr.source_activity_id = ?
                  or tr.target_activity_id = ?)
                order by source_activity_id";
         set v_mapstmt [ $p_conn prepareCall $v_mapquery ]
         set v_resultset [ $v_mapstmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_mapindex -1
         while { [ $v_resultset next ] == 1  } {
             incr v_mapindex;
             lappend v_upd_types [$v_resultset getString CHANGETYPE]
             set v_fqualmapname [$v_resultset getString MP_FQUAL_MAPNAME]
             lappend v_maps $v_fqualmapname
             set v_pf_activity_id [$v_resultset getLong PF_ACTIVITY_ID]
             set v_pf_process_id [$v_resultset getLong PF_PROCESS_ID]
              lappend v_pf_paths [$v_resultset getString PF_FQUAL_PROCPATH]
              lappend v_pf_proc_names [$v_resultset getString PF_PROCESS_NAME]
              lappend v_pf_act_names [$v_resultset getString PF_ACTIVITY_NAME]
              puts "Retrieving activity parameters...";
              get_map_pf_parameters $p_conn $v_pf_activity_id $p_pf_act_parameters
              puts "Retrieving activity transitions...";
              get_map_pf_transitions $p_conn $v_pf_process_id $v_pf_activity_id $p_pf_act_trans
    #          lappend v_pf_act_properties
    #          lappend v_pf_act_trans
               puts "All data retrieved for activity:";
               puts "[lindex $v_pf_paths $v_mapindex]/[lindex $v_pf_proc_names $v_mapindex]/[lindex $v_pf_act_names $v_mapindex]";     
               puts "Type: [lindex $v_upd_types $v_mapindex]";
     }
}
And here's some example client code to load the problem-activity info, print it and reconcile (replace) the activities.
    # open extra connection to access OWB public views
    set v_connstr "OWBREP/[email protected]:1521:ORCL"
    set v_jdbcconnstr "jdbc:oracle:thin:$p_connstr"
    java::call java.sql.DriverManager registerDriver [java::new oracle.jdbc.OracleDriver ]
    set v_conn [java::call java.sql.DriverManager getConnection $v_jdbcconnstr ]
    # retrieve and cache activity data
    set v_upd_types [list ]
    set v_maps [list ]
    set v_pf_paths [list ]
    set v_pf_proc_names [list ]
    set v_pf_act_names [list ]
    # activity parameters - will be a nested list
    set v_pf_act_parameters [list ]
    # activity transitions - will be a nested list
    set v_pf_act_trans [list ]
    get_map_pf_unbound $v_conn \
    v_upd_types \
    v_maps \
    v_pf_paths \
    v_pf_proc_names \
    v_pf_act_names \
    v_pf_act_parameters \
v_pf_act_trans
$v_conn close
    #print results
foreach \
     v_upd_type $v_upd_types \
     v_map $v_maps \
     v_pf_path $v_pf_paths \
     v_pf_proc_name $v_pf_proc_names \
     v_pf_act_name $v_pf_act_names \
     v_pf_act_parameterz $v_pf_act_parameters \
     v_pf_act_tranz $v_pf_act_trans {
     puts "*** Reconcile type: $v_upd_type";
     puts "Activity: $v_pf_path/$v_pf_proc_name/$v_pf_act_name"
     puts "Candidate mapping: $v_map"
}
    # types of activities I can reconcile
    set v_reconc_possible_types [ list "1-CHANGEDNAME" "2-REPLACED" ]
    set v_currentpath ""
    OMBCONN $v_connstr
    #reconcile
foreach \
     v_upd_type $v_upd_types \
     v_map $v_maps \
     v_pf_path $v_pf_paths \
     v_pf_proc_name $v_pf_proc_names \
     v_pf_act_name $v_pf_act_names \
     v_pf_act_parameterz $v_pf_act_parameters \
     v_pf_act_tranz $v_pf_act_trans {
     if { [lsearch $v_reconc_possible_types $v_upd_type ] == -1 } {
          # skip non-reconcilable activities
          continue;
     }
     puts "Reconciling  $v_pf_path/$v_pf_proc_name/$v_pf_act_name "
     puts "with mapping $v_map ..."
     if { $v_pf_path != $v_currentpath } {
          OMBCC '$v_pf_path'
          set v_currentpath $v_pf_path
     }
     # drop and replace activity
     puts "Dropping activity...";
     OMBALTER PROCESS_FLOW '$v_pf_proc_name' \
          DELETE ACTIVITY '$v_pf_act_name';
     puts "Re-creating activity...";
     # don't change activity name - maybe should inherit mapping name (if no collisions)
     OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD MAPPING ACTIVITY '$v_pf_act_name' \
               SET REF MAPPING '$v_map';
     # add transitions
     puts "Adding transitions...";
     foreach v_tran $v_pf_act_tranz {
          set v_TRANSITION_NAME [lindex $v_tran 0 ]
          set v_DESCRIPTION [lindex $v_tran 1 ]
          set v_CONDITION [lindex $v_tran 2 ]
          set v_SOURCE_ACTIVITY_NAME [lindex $v_tran 4 ]
          set v_TARGET_ACTIVITY_NAME [lindex $v_tran 5 ]
          OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD TRANSITION '$v_TRANSITION_NAME' \
               FROM ACTIVITY '$v_SOURCE_ACTIVITY_NAME' \
               TO '$v_TARGET_ACTIVITY_NAME' \
               SET PROPERTIES (TRANSITION_CONDITION, DESCRIPTION) VALUES \
                    ('$v_CONDITION','$v_DESCRIPTION');
     }
     # set parameters
     puts "Setting parameters...";
     foreach v_param $v_pf_act_parameterz {
          set v_PARAMETER_NAME [lindex $v_param 0 ]
          set v_PARAMETER_VALUE [lindex $v_param 2 ]
          set v_DESCRIPTION [lindex $v_param 3 ]
          OMBALTER PROCESS_FLOW '$v_pf_proc_name' MODIFY ACTIVITY '$v_pf_act_name' \
               MODIFY PARAMETER '$v_PARAMETER_NAME' SET \
               PROPERTIES (VALUE,DESCRIPTION) VALUES ('$v_PARAMETER_VALUE','$v_DESCRIPTION') ;
     }
     puts "Reconcile complete for $v_pf_path/$v_pf_proc_name/$v_pf_act_name";
}
I hope I haven't lost too much in cutting and pasting! Anyway, if something is not clear, ask freely. I don't use OWB built-in process flows any more, but I should remember enough to explain my own code.
    Antonio

  • Mappings not triggered from Process flow?

    Hi,
    I have some mappings under a process flow..they are triggered daily but today they were not triggered can anyone tell me how to go about this?

    Hi ,
If you have deployed the schedule then it will be in the DISABLED state.
    You can check that by logging into SQL*PLUS as the Target user and running
    select job_name, state from user_scheduler_jobs;
Use the Schedule tab on the Control Center Jobs panel in the Control Center Manager to schedule
the job. Highlight and right-click the job, and from the pop-up menu select "Start". Use the same query to check the current status:
    select job_name, state from user_scheduler_jobs;
    The above query will show whether the scheduled job is SCHEDULED, RUNNING or SUCCEEDED.
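If the job shows as SCHEDULED but did not fire when expected, you can also look at the standard scheduler columns for the last and next run times (plain Oracle data dictionary columns, nothing OWB-specific):
select job_name, state, enabled, last_start_date, next_run_date
  from user_scheduler_jobs;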
    Thanks,
    Sutirtha

  • Export and Import of mappings/process flows etc

    Hi,
    We have a single repository with multiple projects for DEV/UAT and PROD of the same logical project. This is a nightmare for controlling releases to PROD and in fact we have a corrupt repository as a result I suspect. I plan to split the repository into 3 separate databases so that we have a design repos for DEV/UAT and PROD. Controlling code migrations between these I plan to use the metadata export and subsequent import into UAT and then PROD once tested. I have used this successfully before on a project but am worried about inherent bugs with metadata export/imports (been bitten before with Oracle Portal). So can anyone advise what pitfalls there may be with this approach, and in particular if anyone has experienced loss of metadata between export and import. We have a complex warehouse with hundreds of mappings, process flows, sqlldr flatfile loads etc. I have experienced process flow imports that seem to lose their links to the mappings they encapsulate.
    Thanks for any comments,
    Brandon

This should do the trick for you, as it looks for "PARALLEL"; therefore it only removes the APPEND PARALLEL hint and leaves other hints as is.
    #set current location
    set path "C:/TMP"
    # Project parameters
    set root "/MY_PROJECT"
    set one_Module "MY_MODULE"
    set object "MAPPINGS"
    set path "C:/TMP
    # OMBPLUS and tcl related parameters
    set action "remove_parallel"
    set datetime [clock format [clock seconds] -format %Y%m%d_%H%M%S]
    set timestamp [clock format [clock seconds] -format %Y%m%d-%H:%M:%S]
    set ext ".log"
    set sep "_"
    set ombplus "OMBplus"
    set omblogname $path/$one_Module$sep$object$sep$datetime$sep$ombplus$ext
    set OMBLOG $omblogname
    set logname $path/$one_Module$sep$object$sep$datetime$ext
    set log_file [open $logname w]
    set word "PARALLEL"
    set i 0
    #Connect to OWB Repository
OMBCONNECT .... your connect string
    #Ignores errors that occur in any command that is part of a script and moves to the next command in the script.
    set OMBCONTINUE_ON_ERROR ON
    OMBCC "'$root/$one_Module'";      
    #Searching Mappings for Parallel in source View operators
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel";
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel";
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
    foreach mapName [OMBLIST MAPPINGS] {
    foreach opName [OMBRETRIEVE MAPPING '$mapName' GET TABLE OPERATORS] {
    foreach prop1 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (LOADING_HINT)] {
    if { [ regexp $word $prop1] == 1 } {
    incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (LOADING_HINT) VALUES ('');
    OMBCOMMIT;
    foreach prop2 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (EXTRACTION_HINT) ] {
    if {[regexp $word $prop2] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (EXTRACTION_HINT) VALUES ('');
    OMBCOMMIT;
    foreach opName [ OMBRETRIEVE MAPPING '$mapName' GET DIMENSION OPERATORS ] {
    foreach prop1 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (LOADING_HINT) ] {
    if {[regexp $word $prop1] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (LOADING_HINT) VALUES ('');
    OMBCOMMIT;
    foreach prop2 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (EXTRACTION_HINT) ] {
    if {[regexp $word $prop2] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (EXTRACTION_HINT) VALUES ('');
    OMBCOMMIT;
    foreach opName [ OMBRETRIEVE MAPPING '$mapName' GET CUBE OPERATORS ] {
    foreach prop1 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (LOADING_HINT) ] {
    if {[regexp $word $prop1] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (LOADING_HINT) VALUES ('');
    OMBCOMMIT;
    foreach prop2 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (EXTRACTION_HINT) ] {
    if {[regexp $word $prop2] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (EXTRACTION_HINT) VALUES ('');
    OMBCOMMIT;
    foreach opName [ OMBRETRIEVE MAPPING '$mapName' GET VIEW OPERATORS ] {
    foreach prop1 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (LOADING_HINT) ] {
    if {[regexp $word $prop1] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Loading Operator: $opName, Property: $prop1"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (LOADING_HINT) VALUES ('');
    OMBCOMMIT;
    foreach prop2 [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES (EXTRACTION_HINT) ] {
    if {[regexp $word $prop2] == 1 } {
         incr i
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Extraction Operator: $opName, Property: $prop2"
    OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES (EXTRACTION_HINT) VALUES ('');
    OMBCOMMIT;
    if { $i == 0 } {
              puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
              puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Not found any Loading/Extraction Operators set at Parallel";
              puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
              puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Not found any Loading/Extraction Operators set at Parallel";
         } else {
              puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
              puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel";
              puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping";
              puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel";
    close $log_file;
    Enjoy!
    Michel

  • Make to Order Scenario Process Flow

    Hi Friends -
    I am looking for simple Make to Order scenario with BOM Components with Routing.
    The process flow I want to execute -
    1. Create a Sales Order for a Finished Product (Product should have a BOM for Production)
    2. Do the product Materials Planning for the Sales Order through MD50
    3. Get all the dependent material requirements (both for products as Planned Order and also for BOM Components - all the BOM components should also be planned)
    4. Convert Planned Order for Product to Production Order
    5. Convert Planned Orders for BOM components to Purchase Requisition
    6. Confirm Production Order
Pls correct me if my process flow is wrong or requires modification.
    I will appreciate if you can validate the above process flow and guide me to execute the same in the system.
    Pls help me on configuring the system and setting up the materials and related master data for the same.
    Thanks in advance
    Purnendu

    Dear GSL - Pls check the following details
    1. I have created a Material Master data for Finished Product (FERT).
    2. I have assigned Item Category Group - 0004, and General Item Category group as 0004 ( Make to Order / Assembly).
    3. I have assigned MRP Type PD in MRP 1 View and Strategy Group 20 (Make to Order Production) in MRP 3 View.
    4. Now created a Universal (Type 3) BOM for the product with one material component.
    5. Now also created a Routing for the Material with two operations having PP01 a control key.
    6 When I try to create a Sales Order for the finished material it is giving message
    Configuration not possible for material 747 : Reason 3 --> Help
    Message no. V1360
    Diagnosis
    This may have been caused by one of the following:
    1. The configuration profile for the material allows or requires the bill of materials to be exploded during order processing. However, a plant has not been specified in the item.
    2. The configuration profile of the material allows or requires the bill of materials to be exploded during order processing, but the order quantity in the item must be greater than 0.
    3. A configuration profile has not been maintained
    or
    A configuration is not permitted for the material
    or
    4. The configuration profile of the material allows or requires the bill ofmaterials to be exploded during order processing. However, the system could not determine a date because important data is missing from the item (see incompletion log).
This message is also coming in the incompletion log of the sales order.
    I have not checked the Configurable Indicator for the material in the Basic Data 2 view of the materials master, but still the message related to Configuration is coming.
Pls help me on this.
    Thanks and warm regards
    Purnendu

  • Internal Order process config details and process flow

    Hi All,
    Please help me,
    I need process flow and configuration for assets to buy from vendors and store it and sell to our stores.
    how to config Internal order configuration step by step and post to Asset management. (Instead of normal sales for revenue account).
    Thanks and regards

    Dear Ravi,
    1. first create an internal order through KO01. That internal order type should be for investment purpose.
2. Give the relevant details, go to the investment tab and give the investment profile.
3. Then in the toolbar go to extras and select "asset under construction". An asset under construction will be created; give the details of this asset.
4. Then release the internal order in KO02.
5. Then go to GB60 and post an invoice with this internal order as the account assignment object.
6. Go to KO02 and in extras see the order balance; that amount will be reflected in the order.
7. Now go to KO88 and execute the internal order, through which your internal order will be credited and the AUC which you created will be debited.
8. Now go to KO02 and see the order balance; it will show 0 as it has moved to the AUC.
9. Now go to the settlement rule, give the asset to which you want to transfer this amount, take the category FXA and give the details.
10. Now go to KO02 and change the status of the internal order to "TECO complete".
11. Now go to KO88 and execute the order; now the AUC is credited and your asset is debited.
These are the exhaustive steps which I think will solve your purpose.
    But make sure the configuration about the order type and investment profile are properly maintained.
    Thanks
    sap firdo

    I want to use Mpeg Streamclip to convert VOB files to a quicktime file. I'm successful in doing this when I burn a DVD-R from Mac software. But when I put in a dvd-r into my Mac that was recorded "on-the-fly" on my Panasonic DVD recorder, it shows th