Scheduling Mappings and Process Flows

Can you please tell me how to schedule mappings and process flows in a little detail, if anyone has done that? I think OEM can be used for this, but I don't have a clear idea of how to do it. Can the mappings/process flows be scheduled without using the OEM console?
Waiting for your suggestions.
rgds
-AP

Hi
You can schedule your mappings and process flows with OEM or with a database job.
You will find script templates in the OWB_HOME/owb/rtp/sql directory,
and you can schedule them.
If you want to do it in OEM, use the oem_exec_template.sql file to create an OEM job. Read the OWB documentation about it. If you have any questions, ask; I have done this procedure many times.
Ott Karesz
http://www.trendo-kft.hu
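
To sketch the database-job route: this is only an illustration, where MY_RUN_MAPPING is a hypothetical wrapper procedure you would build from the templates in OWB_HOME/owb/rtp/sql, and the exact call it wraps depends on your OWB version.

```sql
-- Hedged sketch: submit a daily DBMS_JOB that executes an OWB mapping.
-- MY_RUN_MAPPING and its arguments are assumptions, not a real OWB API;
-- wrap whatever call your sqlplus_exec_template.sql makes.
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'MY_RUN_MAPPING(''MY_LOCATION'', ''MAP_SALES_LOAD'');',
    next_date => TRUNC(SYSDATE) + 1 + 2/24,      -- first run tomorrow at 02:00
    interval  => 'TRUNC(SYSDATE) + 1 + 2/24');   -- then daily at 02:00
  COMMIT;
END;
/
```

On 10g and later, DBMS_SCHEDULER.CREATE_JOB is the newer alternative to DBMS_JOB for the same purpose.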

Similar Messages

  • How to schedule mappings to process flows?

    Hi,
    I have scheduled a calendar (job) which refers to a process flow. But how can I make sure that the mappings refer to the same process flow?
    E.g. I have scheduled a job at 10 AM, and I have created the process flow for 10 AM referring to the same scheduled job.
    My understanding here is that there is a hierarchy: Scheduled Jobs > Process Flows > Mappings.
    I have configured the process flow to run as a scheduled job; now I want the mappings to run at the same time as the schedule.
    Also, when I start the process flow, all the mappings should get executed.
    Is there any parameter to tell the process flow that all these mappings fall under it?
    Hope I have made myself clear.
    Can anyone please look into this query?
    Thanks in advance.

    When I double-click and open my process flow I am not able to see any mappings. We have stored procedures written like this:
    ln_exists           NUMBER;
    LS_ERROR            VARCHAR2(200);
    LD_START_PERIOD_DT  DATE;
    LD_END_PERIOD_DT    DATE;
    EX_PF_NOT_VALID     EXCEPTION;
    EX_SUB_PF_NOT_VALID EXCEPTION;
    EX_LAYER_NOT_VALID  EXCEPTION;
    EX_MODULE_NOT_VALID EXCEPTION;
    EX_DATE_FORMAT_ERR  EXCEPTION;
    BEGIN
      --1: Check the Process Flow parameter value
      IF IP_PF IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_process_flow_par
         where process_flow = IP_PF;
        IF ln_exists = 0 THEN
          RAISE EX_PF_NOT_VALID;
        END IF;
      END IF;
      --2: Check the Sub Process Flow parameter value
      IF IP_SUB_PF IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_sub_pf_par
         where sub_pf_code = IP_SUB_PF;
        IF ln_exists = 0 THEN
          RAISE EX_SUB_PF_NOT_VALID;
        END IF;
      END IF;
      --3: Check the Layer Code parameter value
      IF IP_LAYER IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_lookup_code
         where lookup_type = 'LAYER_CODE'
           and lookup_code = IP_LAYER;
        IF ln_exists = 0 THEN
          RAISE EX_LAYER_NOT_VALID;
        END IF;
      END IF;
      --4: Check the Module Code parameter value
      IF IP_MODULE IS NOT NULL THEN
        select count(*) into ln_exists
          from adm_lookup_code
         where lookup_type IN ('SOURCE_SYSTEM','SUBJECT_CODE')
           and lookup_code = IP_MODULE;
        IF ln_exists = 0 THEN
          RAISE EX_MODULE_NOT_VALID;
        END IF;
      END IF;
      --5: Check the Start Period Date & End Period Date format
      BEGIN
        IF IP_START_PERIOD_DT IS NOT NULL THEN
          LD_START_PERIOD_DT := TO_DATE(IP_START_PERIOD_DT,'YYYY-MM-DD');
        END IF;
        IF IP_END_PERIOD_DT IS NOT NULL THEN
          LD_END_PERIOD_DT := TO_DATE(IP_END_PERIOD_DT,'YYYY-MM-DD');
        END IF;
      EXCEPTION
        WHEN OTHERS THEN
          RAISE EX_DATE_FORMAT_ERR;
      END;
    EXCEPTION
      WHEN EX_DATE_FORMAT_ERR THEN
        LS_ERROR := 'Date Format is not valid, please check (FORMAT: YYYY-MM-DD HH24 / YYYYMMDDHH24)';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
      WHEN EX_PF_NOT_VALID THEN
        LS_ERROR := 'The Process Flow value is not valid, please check table adm_process_flow_par';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20002,LS_ERROR);
      WHEN EX_SUB_PF_NOT_VALID THEN
        LS_ERROR := 'The Sub Process Flow value is not valid, please check table adm_sub_pf_par';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20003,LS_ERROR);
      WHEN EX_LAYER_NOT_VALID THEN
        LS_ERROR := 'The Layer Code value is not valid, please check adm_lookup_code (lookup_type = "LAYER_CODE")';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20004,LS_ERROR);
      WHEN EX_MODULE_NOT_VALID THEN
        LS_ERROR := 'The Module Code value is not valid, please check adm_lookup_code (lookup_type IN ("SOURCE_SYSTEM","SUBJECT_CODE"))';
        SP_ERROR_REC(NULL,IP_PF,IP_SUB_PF,IP_MODULE,IP_LAYER,NULL,NULL,LS_ERROR,'SP_CHECK_PARAMETER_VALID',NULL);
        RAISE_APPLICATION_ERROR(-20005,LS_ERROR);
    END;
    Can anyone throw some light on this issue?

  • Analysing Task Audit, Data Audit and Process Flow History

    Hi,
    Internal Audit dept has requested a bunch of information that we need to compile from the Task Audit, Data Audit and Process Flow History logs. We do have all the info available, however not in a format that allows proper "reporting" of log information. What is the best way to handle HFM logs so that we can quickly filter and export the required audit information?
    We do have housekeeping in place, so the logs are partly "live" db tables and partly purged tables that were exported to Excel to archive the historical log info.
    Many Thanks.

    I thought I posted this Friday, but I just noticed I never hit the 'Post Message Button', ha ha.
    The info below will help you translate some of the information in the tables, etc. You could report on it from the audit tables directly or move them to another appropriate data table for analysis later. The consensus, though I disagree, is that you will suffer performance issues if your audit tables get too big, so you want to move them periodically. You can do this using a scheduled task, manual process, etc.
    I personally just dump it to another table and report on it from there. As mentioned above, you'll need to translate some of the information as it is not 'human readable' in the database.
    For instance, if I wanted to pull Metadata Load, Rules Load, Member List load, you could run a query like this. (NOTE: strAppName should be equal to the name of your application .... )
    The main tricks to know, at least for the task audit table, are figuring out how to convert the times and determining which activity code corresponds to which user-friendly name.
    -- Declare working variables --
    declare @dtStartDate as nvarchar(20)
    declare @dtEndDate as nvarchar(20)
    declare @strAppName as nvarchar(20)
    declare @strSQL as nvarchar(4000)
    -- Initialize working variables --
    set @dtStartDate = '1/1/2012'
    set @dtEndDate = '8/31/2012'
    set @strAppName = 'YourAppNameHere'
    --Get Rules Load, Metadata, Member List
    set @strSQL = '
    select sUserName as "User", ''Rules Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (1)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Metadata Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (21)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + '''
    union all
    select sUserName as "User", ''Memberlist Load'' as Activity, cast(StartTime-2 as smalldatetime) as "Time Start",
          cast(EndTime-2 as smalldatetime) as ''Time End'', ServerName, strDescription, strModuleName
       from ' + @strAppName + '_task_audit ta, hsv_activity_users au
       where au.lUserID = ta.ActivityUserID and activitycode in (23)
            and cast(StartTime-2 as smalldatetime) between ''' + @dtStartDate + ''' and ''' + @dtEndDate + ''''
    exec sp_executesql @strSQL
    In regards to activity codes, here's a quick breakdown on those ....
    ActivityID     ActivityName
    0     Idle
    1     Rules Load
    2     Rules Scan
    3     Rules Extract
    4     Consolidation
    5     Chart Logic
    6     Translation
    7     Custom Logic
    8     Allocate
    9     Data Load
    10     Data Extract
    11     Data Extract via HAL
    12     Data Entry
    13     Data Retrieval
    14     Data Clear
    15     Data Copy
    16     Journal Entry
    17     Journal Retrieval
    18     Journal Posting
    19     Journal Unposting
    20     Journal Template Entry
    21     Metadata Load
    22     Metadata Extract
    23     Member List Load
    24     Member List Scan
    25     Member List Extract
    26     Security Load
    27     Security Scan
    28     Security Extract
    29     Logon
    30     Logon Failure
    31     Logoff
    32     External
    33     Metadata Scan
    34     Data Scan
    35     Extended Analytics Export
    36     Extended Analytics Schema Delete
    37     Transactions Load
    38     Transactions Extract
    39     Document Attachments
    40     Document Detachments
    41     Create Transactions
    42     Edit Transactions
    43     Delete Transactions
    44     Post Transactions
    45     Unpost Transactions
    46     Delete Invalid Records
    47     Data Audit Purged
    48     Task Audit Purged
    49     Post All Transactions
    50     Unpost All Transactions
    51     Delete All Transactions
    52     Unmatch All Transactions
    53     Auto Match by ID
    54     Auto Match by Account
    55     Intercompany Matching Report by ID
    56     Intercompany Matching Report by Acct
    57     Intercompany Transaction Report
    58     Manual Match
    59     Unmatch Selected
    60     Manage IC Periods
    61     Lock/Unlock IC Entities
    62     Manage IC Reason Codes
    63     Null
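
    As a side note on the time handling above, the "- 2" works because (as far as I can tell) HFM stores StartTime/EndTime as OLE automation dates, whose day 0 is 1899-12-30, while SQL Server's datetime day 0 is 1900-01-01; subtracting 2 bridges the two epochs. A quick sanity check:

    ```sql
    -- OLE automation date 2.5 = 1899-12-30 + 2.5 days = 1900-01-01 12:00.
    -- Subtracting 2 and casting should reproduce that as a SQL Server datetime.
    select cast(2.5 - 2 as smalldatetime) as converted;  -- 1900-01-01 12:00:00
    ```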

  • OWB : Runtime Repo for Target and Process Flows on Different M/C

    Hi,
    I have an environment where the runtime repositories for the target warehouse and process flows (workflows) are on different machines. I have deployed:
    * the mappings and target tables on the target warehouse
    * the process flows on the workflow server, which resides on a separate machine.
    When I try executing the process flows for the mappings, it errors out, indicating that the objects may not have been deployed. It looks like the process flows cannot see the deployed mappings.
    Can somebody help me here?

    I think the target schema and the runtime repository should be in the same instance.

  • Ways of creating contract and process flow of contracts

    hi friends
    What are the ways of creating contracts, and what is the process flow of contracts?
    Thanks for your help.
    regards
    krishna

    hi,
    In the MM Purchasing component, a contract is a type of outline purchase agreement against which release orders (releases) can be issued for agreed materials or services as and when required during a certain overall time-frame.
    Contracts can take the following forms:
    Quantity contracts
    Use this type of contract if the total quantity to be ordered during the validity period of the contract is known in advance. The contract is regarded as fulfilled when release orders totaling a given quantity have been issued.
    Value contracts
    Use this type of contract if the total value of all release orders issued against the contract is not to exceed a certain predefined value. The contract is regarded as fulfilled when release orders totaling a given value have been issued.
    You can also set up corporate buying contracts with your vendors. These are valid for all plants and company codes within a client (see Centrally Agreed Contract).
    You can create a contract as follows:
    Manually
    You enter all data relating to the contract manually.
    Using the referencing technique
    As reference document (the document you copy from), you can use:
    Purchase requisitions
    RFQs/quotations
    Other contracts
    CREATION OF CONTRACT MANUALLY:
    Choose Outline agreement --> Contract --> Create(ME31K)
    The initial screen appears.
    Enter the necessary data. If you make any specifications under the group heading Default data, this data will appear as the default data in each item.
    In the Agreement type field, specify whether you are creating a quantity or value contract, for example.
    Press ENTER .
    The header data screen appears.
    Enter the contract validity period. Check the other fields on this screen and make any necessary changes (e.g. the terms of payment) and define the header conditions.
    Press ENTER .
    The item overview screen appears.
    On this screen, enter the information for each item (material number, target quantity, price, receiving plant, or account assignment, etc.) using the same procedure as with purchase orders.
    Material without a master record: leave the field for the material number empty and enter the following:
    – Short description of the relevant material or service in the Short text field
    – Material group to which the material belongs, in the Material group field
    – Account assignment category
    You can enter U (unknown) or the category of an account assignment.
    – The target quantity and the order unit
    If you specify an account assignment category other than U (field A), you must enter the relevant account assignment data for the item. To do so, choose Item -> Account assignments (see also Account Assignment).
    If necessary, review the details for each item. Select the item(s) to review. Then select Item -> Details.
    Enter the desired conditions.
    Enter further text for the item if any additional instructions to the vendor or Goods Receiving are necessary. Choose Item -> Texts -> Text overview.
    Save the contract.
    Hope it helps..
    Regards
    Priyanka.P

  • TAS and process flow

    Dear All,
    We are trying to use TAS for an existing implementation.
    We are not able to assign a load-id to the sales order manually, even though the config settings are done for the manual one.
    Please advise whether the process flow below is correct for this:
    Create Load ID --> Assign to SO --> Create Outbound Delivery --> Create Bulk Shipment --> Create Loading Confirmation --> Create Delivery Confirmation.
    The information in SAP Help and in the TAS PDF file is quite confusing; the process flow is not clear.
    Thanks and Regards,
    Vijay Mundke

    Hi Vijay,
    the process can have different variations, but the document you want to send to the TAS system has to have the load-id assigned. The load-id triggers the TAS relevance.
    If you want to send the sales order to an external system, you need to assign the load-id to the sales order.
    usually, the sales order is sent to a TPI system (transportation planning system) and the transportation system sends back the delivery and bulk shipment (IDoc OILSHL). This means that this order goes on this truck. The shipment represents the truck on which the delivery is scheduled. Each document that is posted after-event is a function module (one for the delivery and one for the bulk shipment).
    Then, you would send the bulk shipment to the terminal (refinery or depot). This means that a load-id needs to be assigned to the bulk shipment (IDoc OILSHL). The TAS system receives the shipment, and the loading must be done with reference to this shipment/load-id. Then, the TAS sends back the loading data (IDoc OILLDD); mainly, the OILLDD file contains the reference, the material data, the loaded quantity, the unit, the density, and the temperature. The OILLDD IDoc has many fields, but only a few are mandatory. I can send you a list of the mandatory fields if you can use it.
    Then, the system posts the load confirmation automatically based on these data. depending on the set-up, you can either have the system post the load confirmation only or the load confirmation and the delivery confirmation. Both documents have standard function modules. I strongly recommend to use the standard function modules. They work. I have used them on several projects and all you need are some exits. If you post only the load confirmation, the delivery confirmation can be posted manually. If the customers are invoiced based on arrival quantities, the delivery confirmation can have different quantities (temperature can be quite different, especially in India). If invoicing is based on loaded quantities, you can post both the load and delivery confirmation with the same data or set up the customizing that a load confirmation will be the end of the process (based on the incoterm).
    before, you can do the customizing, you need to check which processes you have. To know whether you have a shipment or a pick-up process triggers whether you should send a sales order or a shipment to the terminal. Most oil companies have their own fleet and deliver their fuel via a truck to the customer -> in this case, you need to send a bulk shipment to the terminal. But, often, the customers also pick-up the fuel at the refinery. Then, you only need a sales order and a delivery and goods issue is sufficient. However, if you need to manage the plate number at the refinery, you might want to process both of these variants via a bulk shipment. In all cases, the loadings will come back with an OILLDD idoc.
    Depending on the answer to this question, you can set-up the tas relevance customizing. If you choose to send bulk shipments, you set up tas relevance for shipments. This defines simply in which cases a load-id is required in the shipment. Say you have 2 refineries which are represented in the transportation planning point (ref1 and Ref2). If only one refinery will have an automatic interface, you set up only that one to require a load-id. If you have several product groups (say Bitumen, white fuels) and some of them are automated (typically fuel) and other not (bitumen) and you have set them up as bulk shipment types, then you can further differentiate. Then, you need one load-id type for each process variation.
    I like to use the following:
    PU-ORDER (you send an order and the tas posts the delivery and goods issue)
    SH-LC (you send a bulk shipment and the TAS posts the load confirmation only)
    SH-LCDC (you send a bulk shipment and the TAS posts load and delivery confirmation)
    SH-DC (the load confirmed shipment is sent to the arrival terminal and the arrival terminal sends back the delivery confirmation)
    The most commonly used is the SH-LC.
    After that, you call the control structures and the LID function groups the same name. (Types are A for pick-up, B for load confirmation and E for delivery confirmation). It makes the whole thing much clearer. Then, you just have to assign to each function group the inbound function modules of the standard.
    Standard load confirmation for SH-LC for example and standard delivery confirmation for SH-DC.  I hope this helps.
    Regards, Petra

  • OWB Error: Connection lost when assign a schedule to a process flow

    I have:
    - OWB 11Gr2
    - Oracle DB 10Gr2 Source/Target
    I have created a schedule module and set the location to be one of the target DB locations.
    I created a one-off schedule, set for daily execution.
    I opened the configuration of a process flow and assigned the schedule to it.
    After this, I tried to save the repository, and this error message occurred:
    "Repository Connection Error: The connection to the repository was lost, because of the following database error: Closed Connection
    Exit OWB without committing."
    The connection with the repository was lost and I can't do anything more.
    I tried to create a separate location for the schedule, but it didn't make a difference.
    What's happening?
    Thanks,
    Afonso

    We are running 11.2.0.2, and when looking at the trace log:
    Dump continued from file: /data02/oramisc/diag/rdbms/dpmdbp/dpmdbp/trace/dpmdbp_ora_13503.trc
    ORA-00600: internal error code, arguments: [15570], [], [], [], [], [], [], [], [], [], [], []
    with this statement in the dump: "========= Dump for incident 16022 (ORA 600 [15570]) ========
    *** 2011-10-18 09:52:25.445
    dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
    ----- Current SQL Statement for this session (sql_id=7a76281h0tr2p) -----
    SELECT usage.locuoid, usage.locid, cal_property.elementid, cal_property.classname,
           schedulable.name || '_JOB' name, schedulable.name || '_JOB' logicalname,
           cal_property.uoid, cal_property.classname, cal_property.notm,
           cal_property.editable editable, cal_property.customerrenamable,
           cal_property.customerdeletable, cal_property.seeded, cal_property.UpdateTimestamp,
           cal_property.strongTypeName, cal_property.metadatasignature signature, 0 iconobject
    FROM Schedulable_v schedulable, CMPStringPropertyValue_v cal_property,
         CMPCalendarInstalledModule_v cal_mod, CMPPhysicalObject_v sched_phys, CMPCalendar_v cal,
         ( select prop.firstclassobject moduleid, prop.value locuoid, 0 locid
             from CMPPhysicalObject_v phys, CMPStringPropertyValue_v prop
            where prop.logicalname = 'SCHEDULE_MODULE_CONFIG.DEPLOYMENT.LOCATION'
              and prop.propertyOwner = phys.elementid and phys.namedconfiguration = 42636
           union
           select installedmodule, '' locuoid, location locid
             from CMPLocationUsage_v locUse
            where deploymentdefault = 1
              and not exists ( select null
                                 from CMPPhysicalObject_v phys, CMPStringPropertyValue_v prop
                                where prop.logicalname = 'SCHEDULE_MODULE_CONFIG.DEPLOYMENT.LOCATION'
                                  and prop.firstclassobject = locUse.installedmodule
                                  and prop.propertyOwner = phys.elementid
                                  and phys.namedconfiguration = 42636 ) ) usage
    WHERE cal_mod.elementid = usage.moduleid
      and cal_mod.elementid = cal.calendarmodule
      and substr(cal_property.value, 0, 32) = cal.uoid
      and cal_property.logicalname = 'SCHEDULABLE.PROPERTY'
      and (cal_property.firstclassobject = schedulable.elementid
           or cal_property.firstclassobject = sched_phys.elementid
              and sched_phys.logicalobject = schedulable.elementid)
      and cal_mod.owningproject = 42631
      and cal_property.propertyowner = sched_phys.elementid
      and sched_phys.namedconfiguration = 42636
    ORDER BY schedulable.name || '_JOB'"
    Could it possibly be this bug?
    Bug 12368039 ORA-600 [15570] with UNION ALL views with Parallel steps
    This note gives a brief overview of bug 12368039.
    The content was last updated on: 18-MAY-2011
    Fixed:
    This issue is fixed in 11.2.0.3

  • Replace mappings in Process Flows

    Hi,
    Is there an easy way of replacing an existing mapping in a PF with a new version of the same?
    thanks
    mahesh

    OK, here is some sample code you can play with. What it does is drop the mapping activity from the process flow and then replace it with a fresh version, rebuilding parameters and transitions.
    I used an additional DB connection to query for problem activities. You can see the logic in the $v_mapquery query: alter this if you need something different. This is much faster than using pure scripting: there would be problems of RAM and speed while scanning all process flows using OMB Plus.
    So I cache all the needed info in some lists and then print/reconcile it.
    Having an extra open connection can cause some concurrency problems, but this was designed to run in batch and send back an e-mail with the results.
    Some extra notes:
    1-if a mapping is dropped I print the unbound activity but can't reconcile it
    2-activities are replaced, so they will be out of position when you enter the process flow editor
    3-there was a bug in scripting related to SQL*Loader mappings. I don't know if this has been fixed, so you may have problems with these.
    The following 3 procedures cache the process flow info.
    # Retrieve transitions of a proc. flow activity
    proc get_map_pf_transitions {\
         p_conn \
         p_process_flow_id \
         p_map_activity_id \
     p_pf_act_trans } {
         upvar $p_pf_act_trans v_pf_act_trans
         set v_query "select
         cast(decode (tr.source_activity_id, ?, 'OUTGOING','INCOMING') as varchar2(50)) TRANSITION_DIRECTION,
    process_id, process_name,
      transition_id, transition_name, business_name, description, condition,
                           source_activity_id, source_activity_name,
                            target_activity_id, target_activity_name
    from all_iv_process_transitions tr
           where tr.process_id = ?
           and (tr.source_activity_id = ?
                  or tr.target_activity_id = ?)
                order by source_activity_id";     
         set v_stmt [ $p_conn prepareCall $v_query ]
         $v_stmt {setLong int long} 1 $p_map_activity_id
         $v_stmt {setLong int long} 2 $p_process_flow_id
         $v_stmt {setLong int long} 3 $p_map_activity_id
         $v_stmt {setLong int long} 4 $p_map_activity_id
         set v_resultset [ $v_stmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_transindex -1
         set v_temptran [list ]
         while { [ $v_resultset next ] == 1  } {
             incr v_transindex;
             lappend v_temptran \
                  [ list \
                       [$v_resultset getString TRANSITION_NAME] \
                       [$v_resultset getString DESCRIPTION] \
                       [$v_resultset getString CONDITION] \
                       [$v_resultset getString TRANSITION_DIRECTION] \
                       [$v_resultset getString SOURCE_ACTIVITY_NAME] \
                        [$v_resultset getString TARGET_ACTIVITY_NAME] ]
         }
         lappend v_pf_act_trans $v_temptran
    }
    # Retrieve parameters of a proc. flow mapping
    proc get_map_pf_parameters {\
         p_conn \
         p_map_activity_id \
     p_pf_act_parameters } {
         upvar $p_pf_act_parameters v_pf_act_parameters
         set v_query "select
         parameter_owner_id, parameter_owner_name,
         PARAMETER_OWNER_ID, PARAMETER_NAME, DATA_TYPE, DEFAULT_VALUE parameter_value,
         BUSINESS_NAME, DESCRIPTION
         from ALL_IV_PROCESS_PARAMETERS pa
           where pa.parameter_owner_id = ?";     
         set v_stmt [ $p_conn prepareCall $v_query ]
         $v_stmt {setLong int long} 1 $p_map_activity_id
         set v_resultset [ $v_stmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_paramindex -1
         set v_tempparam [list ]
         while { [ $v_resultset next ] == 1  } {
             incr v_paramindex;
             lappend v_tempparam \
                  [ list \
                       [$v_resultset getString PARAMETER_NAME] \
                       [$v_resultset getString DATA_TYPE] \
                       [$v_resultset getString PARAMETER_VALUE] \
                        [$v_resultset getString DESCRIPTION] ]
         }
         lappend v_pf_act_parameters $v_tempparam
    }
    # Retrieve and cache all info needed to upgrade process flows
    # all parameters are lists which are appended, except the connection
    proc get_map_pf_unbound { \
         p_conn \
         p_upd_types \
         p_maps \
         p_pf_paths \
         p_pf_proc_names \
         p_pf_act_names \
         p_pf_act_parameters \
     p_pf_act_trans } {
         upvar $p_upd_types v_upd_types
         upvar $p_maps v_maps
         upvar $p_pf_paths v_pf_paths
         upvar $p_pf_proc_names v_pf_proc_names
         upvar $p_pf_act_names v_pf_act_names
         upvar $p_pf_act_parameters v_pf_act_parameters
         upvar $p_pf_act_trans v_pf_act_trans
    # query to retrieve unbound mappings (actually, I use views in the DB ... )
         set v_mapquery "with proc_maps as (
          select
       '/'||md.project_name||'/'|| information_system_name || '/' ||
                                pk.package_name pf_fqual_procpath,
       md.project_id pf_project_id,
       md.project_name pf_project_name,
       md.information_system_id pf_module_id,
       md.information_system_name pf_module_name,
       pk.package_id pf_package_id,
       pk.package_name pf_package_name,
       pr.process_id pf_process_id,
       pr.process_name pf_process_name,
       a.activity_id pf_activity_id,
       a.activity_name pf_activity_name,
       a.business_name pf_act_business_name,
       a.description pf_act_description,
       a.activity_type pf_act_activity_type,
       a.bound_object_id pf_act_bound_object_id,
       a.bound_object_name pf_act_bound_object_name
        from all_iv_process_activities a,
                                     all_iv_processes pr,
                                all_iv_packages pk,
                                all_iv_process_modules md
                 where
                 a.activity_type in (
                    'PlSqlMapProcessNoteTag', /* type for PLSQL mappings */
                    'SqlLdrProcessNoteTag' /* SQLLOADER mappings */)
                 and a.process_id = pr.process_id
                 and pk.package_id = pr.package_id
                  and md.INFORMATION_SYSTEM_ID = pk.schema_id
       ),
       maps as (
           select
         '/'||md.project_name||'/'||md.information_system_name||
         '/'||  mp.MAP_NAME mp_fqual_mapname,
           md.PROJECT_ID mp_project_id,
           md.PROJECT_NAME mp_project_name,
           md.INFORMATION_SYSTEM_ID mp_module_id,
           md.INFORMATION_SYSTEM_NAME mp_module_name,
           mp.MAP_ID mp_map_id,
           mp.MAP_NAME mp_map_name,
           mp.BUSINESS_NAME MP_BUSINESS_NAME ,
           mp.DESCRIPTION MP_DESCRIPTION
            from all_iv_xform_maps mp,
                      all_iv_information_systems md
                     where mp.INFORMATION_SYSTEM_ID = md.INFORMATION_SYSTEM_ID
       )
     select * from (
    /* case 1: mapping name has changed */
    select
      '1-CHANGEDNAME' changetype,
    a.*,m.* from proc_maps a, maps m
                    where a.pf_act_bound_object_id = m.mp_map_id
                      and a.pf_act_bound_object_name <> m.mp_map_name
                      union all
    /* case 2: there's a new mapping with the old name... I'll reconcile only if
       the old mapping was dropped: otherwise you'll be in case 1:
       IMPORTANT- NOTE: I'll reconcile with a new mapping with the same name even
                       if found in a different module. */
    select
      '2-REPLACED' changetype,
    a.*,mnew.* from proc_maps a,
                                     maps mnew,
                                maps mold
                    where a.pf_act_bound_object_id <> mnew.mp_map_id
                 and a.pf_act_bound_object_name = mnew.mp_map_name
                 /* verify that mapping is in the current project */
                 and mnew.mp_project_id = a.pf_project_id
                 and a.pf_act_bound_object_id = mold.mp_map_id (+)
                 and mold.mp_map_id is null
                         union all
    /* case 3: no matching mapping. I'll warn the user that the activity is not bound nor bindable */
       select
       '3-MISSING' changetype,
       a.*,mnew.* from proc_maps a, maps mnew, maps mold
                    where
                     a.pf_act_bound_object_name = mnew.mp_map_name (+)
                 and a.pf_project_id = mnew.mp_project_id (+)
                 and a.pf_act_bound_object_id = mold.mp_map_id (+)
                 and mnew.mp_map_id is null
                 and mold.mp_map_id is null)
                 order by changetype, pf_fqual_procpath, pf_process_name, pf_activity_name";
    # query to retrieve connections between pflow activities
         set v_transquery "select
    process_id, process_name,
      transition_id, transition_name, business_name, description, condition,
                           source_activity_id, source_activity_name,
                            target_activity_id, target_activity_name
    from all_iv_process_transitions tr
           where tr.process_id = ?
           and (tr.source_activity_id = ?
                  or tr.target_activity_id = ?)
                order by source_activity_id";
         set v_mapstmt [ $p_conn prepareCall $v_mapquery ]
         set v_resultset [ $v_mapstmt  executeQuery ]
         # set to -1 so it goes to 0 when entering loop
         set v_mapindex -1
         while { [ $v_resultset next ] == 1  } {
             incr v_mapindex;
             lappend v_upd_types [$v_resultset getString CHANGETYPE]
             set v_fqualmapname [$v_resultset getString MP_FQUAL_MAPNAME]
             lappend v_maps $v_fqualmapname
             set v_pf_activity_id [$v_resultset getLong PF_ACTIVITY_ID]
             set v_pf_process_id [$v_resultset getLong PF_PROCESS_ID]
              lappend v_pf_paths [$v_resultset getString PF_FQUAL_PROCPATH]
              lappend v_pf_proc_names [$v_resultset getString PF_PROCESS_NAME]
              lappend v_pf_act_names [$v_resultset getString PF_ACTIVITY_NAME]
              puts "Retrieving activity parameters...";
              get_map_pf_parameters $p_conn $v_pf_activity_id $p_pf_act_parameters
              puts "Retrieving activity transitions...";
              get_map_pf_transitions $p_conn $v_pf_process_id $v_pf_activity_id $p_pf_act_trans
    #          lappend v_pf_act_properties
    #          lappend v_pf_act_trans
               puts "All data retrieved for activity:";
               puts "[lindex $v_pf_paths $v_mapindex]/[lindex $v_pf_proc_names $v_mapindex]/[lindex $v_pf_act_names $v_mapindex]";     
               puts "Type: [lindex $v_upd_types $v_mapindex]";
     }
And here's some example client code to load problem activities info, print it and reconcile (replace) activities.
    # open extra connection to access OWB public views
    set v_connstr "OWBREP/[email protected]:1521:ORCL"
    set v_jdbcconnstr "jdbc:oracle:thin:$p_connstr"
    java::call java.sql.DriverManager registerDriver [java::new oracle.jdbc.OracleDriver ]
    set v_conn [java::call java.sql.DriverManager getConnection $v_jdbcconnstr ]
    # retrieve and cache activity data
    set v_upd_types [list ]
    set v_maps [list ]
    set v_pf_paths [list ]
    set v_pf_proc_names [list ]
    set v_pf_act_names [list ]
    # activity parameters - will be a nested list
    set v_pf_act_parameters [list ]
    # activity transitions - will be a nested list
    set v_pf_act_trans [list ]
    get_map_pf_unbound $v_conn \
    v_upd_types \
    v_maps \
    v_pf_paths \
    v_pf_proc_names \
    v_pf_act_names \
    v_pf_act_parameters \
     v_pf_act_trans
     $v_conn close
    #print results
     foreach \
          v_upd_type $v_upd_types \
          v_map $v_maps \
          v_pf_path $v_pf_paths \
          v_pf_proc_name $v_pf_proc_names \
          v_pf_act_name $v_pf_act_names \
          v_pf_act_parameterz $v_pf_act_parameters \
          v_pf_act_tranz $v_pf_act_trans {
          puts "*** Reconcile type: $v_upd_type";
          puts "Activity: $v_pf_path/$v_pf_proc_name/$v_pf_act_name"
          puts "Candidate mapping: $v_map"
     }
    # types of activities I can reconcile
    set v_reconc_possible_types [ list "1-CHANGEDNAME" "2-REPLACED" ]
    set v_currentpath ""
    OMBCONN $v_connstr
    #reconcile
     foreach \
          v_upd_type $v_upd_types \
          v_map $v_maps \
          v_pf_path $v_pf_paths \
          v_pf_proc_name $v_pf_proc_names \
          v_pf_act_name $v_pf_act_names \
          v_pf_act_parameterz $v_pf_act_parameters \
          v_pf_act_tranz $v_pf_act_trans {
          if { [lsearch $v_reconc_possible_types $v_upd_type ] == -1 } {
               # skip non-reconcilable activities
               continue;
          }
          puts "Reconciling  $v_pf_path/$v_pf_proc_name/$v_pf_act_name "
          puts "with mapping $v_map ..."
          if { $v_pf_path != $v_currentpath } {
               OMBCC '$v_pf_path'
               set v_currentpath $v_pf_path
          }
         # drop and replace activity
         puts "Dropping activity...";
         OMBALTER PROCESS_FLOW '$v_pf_proc_name' \
              DELETE ACTIVITY '$v_pf_act_name';
         puts "Re-creating activity...";     
         # don't change activity name - maybe should inherit mapping name (if no collisions)
         OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD MAPPING ACTIVITY '$v_pf_act_name' \
                   SET REF MAPPING '$v_map';
         # add transitions
         puts "Adding transitions...";          
          foreach v_tran $v_pf_act_tranz {
               set v_TRANSITION_NAME [lindex $v_tran 0 ]
               set v_DESCRIPTION [lindex $v_tran 1 ]
               set v_CONDITION [lindex $v_tran 2 ]
               set v_SOURCE_ACTIVITY_NAME [lindex $v_tran 4 ]
               set v_TARGET_ACTIVITY_NAME [lindex $v_tran 5 ]
               OMBALTER PROCESS_FLOW '$v_pf_proc_name' ADD TRANSITION '$v_TRANSITION_NAME' \
                    FROM ACTIVITY '$v_SOURCE_ACTIVITY_NAME' \
                    TO '$v_TARGET_ACTIVITY_NAME' \
                    SET PROPERTIES (TRANSITION_CONDITION, DESCRIPTION) VALUES \
                         ('$v_CONDITION','$v_DESCRIPTION');
          }
         # set parameters
         puts "Setting parameters...";
          foreach v_param $v_pf_act_parameterz {
               set v_PARAMETER_NAME [lindex $v_param 0 ]
               set v_PARAMETER_VALUE [lindex $v_param 2 ]
               set v_DESCRIPTION [lindex $v_param 3 ]
               OMBALTER PROCESS_FLOW '$v_pf_proc_name' MODIFY ACTIVITY '$v_pf_act_name' \
                    MODIFY PARAMETER '$v_PARAMETER_NAME' SET \
                    PROPERTIES (VALUE,DESCRIPTION) VALUES ('$v_PARAMETER_VALUE','$v_DESCRIPTION') ;
          }
          puts "Reconcile complete for $v_pf_path/$v_pf_proc_name/$v_pf_act_name";
     }
I hope I haven't lost too much in cutting and pasting! Anyway if something is not clear, ask freely. I don't use OWB built-in process flows any more, but I should remember enough to explain my own code.
    Antonio

  • What is the OMB+ script to apply a schedule to one process flow?

     Let's say the process flow name is 'MY_PROCESSFLOW'
     and the schedule name is 'MY_SCHEDULE'.
     So what is the script?
    thanks so much.

    http://blogs.oracle.com/warehousebuilder/2007/07/more_process_flow_basics_for_l.html
    OMBALTER PROCESS_FLOW_MODULE
    Purpose
    Alter the Process Flow Module by renaming it, and/or reset its properties.
    Prerequisites
    Should be in the context of a project.
    Syntax
    alterProcessFlowModuleCommand =  OMBALTER ( PROCESS_FLOW_MODULE
         "QUOTED_STRING" ( "renameClause" [ "alterPropertiesOrReferenceClause"
         ] | "alterPropertiesOrReferenceClause" ) )
    renameClause =  RENAME TO "QUOTED_STRING"
    alterPropertiesOrReferenceClause =  SET ( "setPropertiesClause" [ SET
         "setReferenceClause" [ UNSET "unsetReferenceClause" ] | UNSET
         "unsetReferenceClause" [ SET "setReferenceClause" ] ] |
         "setReferenceClause" [ UNSET "unsetReferenceClause" ] ) | UNSET
         "unsetReferenceClause" [ SET "setReferenceClause" ]
    setPropertiesClause =  PROPERTIES "(" "propertyNameList" ")" VALUES "("
         "propertyValueList" ")"
    setReferenceClause =  ( "setReferenceLocationClause" [ SET
         "setReferenceIconSetClause" ] | "setReferenceIconSetClause" )
    unsetReferenceClause =  ( "unsetReferenceLocationClause" [ UNSET
         "unsetReferenceIconSetClause" ] | "unsetReferenceIconSetClause" )
    propertyNameList =  "UNQUOTED_STRING" { "," "UNQUOTED_STRING" }
    propertyValueList =  "propertyValue" { "," "propertyValue" }
    setReferenceLocationClause =  ( REFERENCE | REF ) LOCATION "QUOTED_STRING"
    setReferenceIconSetClause =  ( REFERENCE | REF ) ICONSET "QUOTED_STRING"
    unsetReferenceLocationClause =  ( REFERENCE | REF ) LOCATION
         "QUOTED_STRING"
    unsetReferenceIconSetClause =  ( REFERENCE | REF ) ICONSET
    propertyValue =  ( "QUOTED_STRING" | "INTEGER_LITERAL" |
         "FLOATING_POINT_LITERAL" )
    Keywords And Parameters
    alterProcessFlowModuleCommand
    This command modifies an existing process flow module.
    renameClause
    Rename an existing process flow module.
    setPropertiesClause
    Set values of properties of a process flow module.
    Base properties for PROCESS_FLOW_MODULE:
    Name: BUSINESS_NAME
    Type: STRING(200)
    Valid Values: N/A
    Default: NAME
    Business name of a Process Flow Module
    Name: DESCRIPTION
    Type: STRING(4000)
    Valid Values: N/A
    Default: ''
    Description of a Process Flow Module
    propertyNameList
    Comma-delimited list of property names. Property names are not in
    quotation marks.
    propertyValueList
    Comma separated list of property values.
    setReferenceLocationClause
    Set a location to a supported workflow engine.
    unsetReferenceLocationClause
    Unset the location of the process flow module.
    propertyValue
    Value of a property.
    Examples
    OMBALTER PROCESS_FLOW_MODULE 'process_module' RENAME TO 'p_module' SET
    PROPERTIES (DESCRIPTION, BUSINESS_NAME) VALUES ('This becomes a process
    flow module.', 'process module')
    This will rename the Process Flow Module "process_module" to "p_module",
    and set its description to "This becomes a process flow module", set its
    business name to "process module".
    See Also
    OMBALTER, OMBCREATE PROCESS_FLOW_MODULE, OMBDROP PROCESS_FLOW_MODULE
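    The OMBALTER PROCESS_FLOW_MODULE reference above works at the module level; for the original question (attaching an existing schedule to a single process flow), a minimal OMB+ sketch would look roughly like the following. Note this is a sketch under assumptions: 'MY_PROJECT', 'MY_PF_MODULE', 'MY_PACKAGE' and 'MY_SCHEDULE_MODULE' are hypothetical names, and the exact CALENDAR reference syntax should be verified against the OMB*Plus reference for your OWB release.

    ```tcl
    # change context to the package that contains the process flow
    # (hypothetical project/module/package names)
    OMBCC '/MY_PROJECT/MY_PF_MODULE/MY_PACKAGE'
    # bind the schedule (a CALENDAR object in OMB+) to the process flow
    OMBALTER PROCESS_FLOW 'MY_PROCESSFLOW' \
        SET REFERENCE CALENDAR 'MY_SCHEDULE_MODULE/MY_SCHEDULE'
    OMBCOMMIT
    ```

    After the commit, redeploy the process flow so the attached schedule is generated as a scheduler job in the target.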

  • Multiple Schedules of Objects (Process Flows)

    Hello,
    How can you assign a process flow to two different schedules (and jobs), e.g. a weekday job and a weekend job? Do I need to make a copy of the object, since you assign an object to a schedule?
    Regards,
    Edwin

    A process flow can only refer to one calendar at a time, so for your requirement you will have to copy the process flow and attach the weekend calendar to one process flow and the weekly calendar to the other.
    This is precisely the reason why a lot of companies use external schedulers like Autosys or Control-M: their calendar facility can run the same job on different schedules, rather than requiring two different process flows and calendars.
    You might be better off using "sqlplus_exec_template.sql", which can be found in the OWB_HOME/owb/rtp/sql folder, together with DBMS_SCHEDULER to run the process flow. In that case you can use the same process flow but create two jobs to run at different times, rather than creating two process flows and two different schedules.
    Regards
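    The DBMS_SCHEDULER suggestion above can be sketched as two jobs with different repeat intervals driving the same process flow. This is only a sketch under assumptions: my_run_pf_wrapper is a hypothetical PL/SQL wrapper around whatever execution entry point your runtime repository provides (e.g. the logic of sqlplus_exec_template.sql); replace the names and times with your own.

    ```sql
    BEGIN
      -- weekday job (hypothetical job and wrapper names)
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'WEEKDAY_PF_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN my_run_pf_wrapper(''MY_PROCESSFLOW''); END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=22',
        enabled         => TRUE);
      -- weekend job running the same process flow on a different calendar
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'WEEKEND_PF_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN my_run_pf_wrapper(''MY_PROCESSFLOW''); END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYDAY=SAT,SUN; BYHOUR=8',
        enabled         => TRUE);
    END;
    /
    ```

    Only the two job definitions differ; the process flow itself exists once, which is the advantage over duplicating it per calendar.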

  • Internal Order process config details and process flow

    Hi All,
    Please help me,
    I need the process flow and configuration for buying assets from vendors, storing them, and selling them to our stores.
    How do I configure internal orders step by step and post to Asset Management (instead of the normal sales posting to a revenue account)?
    Thanks and regards

    Dear Ravi,
    1. First create an internal order through KO01. The internal order type should be for investment purposes.
    2. Give the relevant details, then go to the investment tab and give the investment profile.
    3. Then in the toolbar go to Extras and select "Asset under Construction". An asset under construction (AUC) will be created; give the details of this asset.
    4. Then release the internal order in KO02.
    5. Then go to FB60 and post an invoice with this internal order as the account assignment object.
    6. Go to KO02 and under Extras see the order balance; that amount will be reflected in the order.
    7. Now go to KO88 and settle the internal order, through which your internal order will be credited and the AUC which you created will be debited.
    8. Now go to KO02 and see the order balance; it will show 0, as the amount has moved to the AUC.
    9. Now go to the settlement rule and give the asset to which you want to transfer this amount; take the category FXA and give the details.
    10. Now go to KO02 and change the status of the internal order to "TECO" (technically complete).
    11. Now go to KO88 and execute the order; now the AUC is credited and your asset is debited.
    These are the exhaustive steps, which I think will solve your purpose.
    But make sure the configuration of the order type and investment profile is properly maintained.
    Thanks
    sap firdo

  • HFM Process Control and Process Flow History

    Hi,
    When I review the Process Flow History for top parent members of the consolidation, the POV shows <Scenario> <Year> <Period> <Entity> <Entity Currency>. For all other applications and entities the <Entity Currency> member shows correctly, but in a new application I've built, <Entity Currency> shows as "EUR Total" (EUR is one of the currencies), even though EUR is not the default currency of the application or of the entity's value. Could anyone explain why it shows that way, or where it could be coming from?
    I'm using HFM v11.1.1.3.
    Thanks in advance.

    did you add new metadata?

  • Scheduling Mapping or Process Flows- How to?

    Hi
    I am able to successfully deploy and execute mappings & PFs using the Control Center. Now I want to test the scheduling part. How should I schedule a mapping to run at regular intervals?
    Regards
    Vibhuti

    Hi, I figured out a way to schedule my mappings, as explained in the link below:
    Scheduled Map Error: ORA-20001: Internal error ...
    but when I deploy my scheduled job it gives me the following errors:
    ORA-01749: you may not GRANT/REVOKE privileges to/from yourself
    RPE-02224: Internal Error: Cannot grant privilege EXECUTE on object SPRO_OWNER.WB_RTI_WORKFLOW_UTIL to user SPRO_OWNER.
    What does this mean? Do I have to grant any privileges on the SPRO_OWNER.WB_RTI_WORKFLOW_UTIL package? If yes, then which privileges do I have to give?
    Please let me know.
    Regards
    Vibhuti

  • Open Quantities / relation between Sales Schedule LInes and Doc flow tables

    Hi Experts,
    I have a requirement to develop a report of the open quantities for sales order items.
    The issue is that the report has to show data at schedule line level, i.e. VBEP.
    So after pulling the records from VBEP according to the selection criteria, I have to pull the matching records from VBFA for the Post Goods Issue.
    Then I have to implement the following formula:
    Open Quantity = VBEP schedule line quantity - (minus) the quantity RFMNG of the first saved corresponding order item in VBFA for the PGI document type.
    As we know, VBEP is at schedule line level, whereas VBFA is at item level!
    So please let me know:
    1 - Can we really build a reliable match between the VBEP and VBFA tables, i.e. will the data always be pulled correctly?
    2 - If so, is the following statement OK?
    LOOP AT IT_VBFA WHERE VBELV = IT_VBEP-VBELN AND POSNV = IT_VBEP-POSNR AND ERDAT LE IT_VBEP-MBDAT
    3 - Is there a better statement for my requirement?
    It looks like the functional guy doesn't have much of an idea here either.
    Thanks

    no reply

  • Where jobs are stored in database after executing mapping and process flow?

    Hi Gurus,
    I am new to OWB, and I would like to know where the OWB jobs are stored in the database. I am planning to execute jobs on the database directly, bypassing OWB. Could anybody give me the steps to take the code into the database and execute it?
    Thanks & Regards
    Vicky

    Hi Vicky,
    read chapter "3 Using SQL*Plus to Schedule and Execute Jobs" from "Oracle Warehouse Builder API and Scripting Reference":
    http://download.oracle.com/docs/cd/B31080_01/doc/owb.102/b28225/api_4sqlforjobs.htm#i1064950
    Hope this helps,
    Oleg
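    To give a flavour of what that chapter describes: the template is run from SQL*Plus with positional arguments naming the runtime repository owner, the location, the task type and the task name. The values below are only placeholders, and the exact argument list varies by release, so check the header comments of sqlplus_exec_template.sql itself before running it.

    ```
    # run from the command line as the runtime access user; all names are placeholders
    # task type is e.g. PROCESS for a process flow (see the template header for valid values)
    sqlplus rt_access_user/password @sqlplus_exec_template.sql \
        OWB_RT_OWNER MY_LOCATION PROCESS MY_PROCESSFLOW "," ","
    ```

    Wrapping this command in a database or OS-level scheduler job is the usual way to run OWB tasks without the OEM console.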

Maybe you are looking for

  • Delete songs from the purchased list

    Is there any way to delete songs from the purchased library list?

  • Connecting HDTV to powermac G5 as 2nd monitor?

    I just got a new flat panel LCD HDTV (Polaroid FLM232B) and it has a VGA input and I was wondering how I would go about hooking it up to my mac (if possible). I know I would need some sort of adapater, but ??? to VGA? I don't know much about video ca

  • Programmed I/O

    Hy, In DMA data transfer mechanism we have got a correct sinus signal but in Programmad I/O we have got this wrong signal. Why? Attachments: theproblem.png ‏5 KB

  • BUG: foreign key not altered when on delete is changed

    JDeveloper 10.1.2.0.0 Problem: create a master/detail relation in the database diagram. Set the foreign key to restrict references. Generate DDL directly in the database to create the tables and FK. Alter the FK to cascade on delete and regenerate th

  • LC ES and CF8

    Does anyone have a sample of calling the LC ES Web services using CF (8 in my case).  I constantly receive the error:<br /><br />Web service operation renderPDFForm with parameters {urlSpec={contentrooturi=C:\\Adobe&applicationwebroot=http://testLCse