Unusual mapping execution behaviour

Hi
Has anybody come across the following intermittent problem when executing mappings as part of a job schedule?
A mapping, which may be part of a schedule of say 50 mappings, is executed in the typical manner using
the sqlplus exec template.
An entry is written to the OWB audit tables indicating the mapping has been called, but then no further action is carried out -
no SQL executed, no errors returned or reported, no activity on the database, nothing.
If the mapping is then rerun, it works fine!
The worst thing about this problem is that it happens completely at random, for different mappings which ran successfully and without issue during the previous batch run.
Any suggestions much appreciated.
Environment details:
Db: 9.2.0.6.0
Owb Client: 9.2.0.2.8
Repository: 9.2.0.2.0
Platform: HP-UX B.11.11 U
thanks

Maybe it's a timeout in your DB? That's what happened to me yesterday...
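When this happens again, one quick check is whether the stalled run is still sitting in the audit table with a 'running' status, and whether any session is actually doing work for it. A minimal sketch, assuming the 9.2 runtime audit table WB_RT_AUDIT and the RTA_STATUS decode discussed further down this page; the runtime user name is a placeholder:
SELECT a.*,
       DECODE (a.rta_status, 0, 'Running', 1, 'Completed', 2, 'Error', 'Else') AS status_text
  FROM wb_rt_audit a
 WHERE a.rta_status = 0;   -- runs that started but never finished
-- Cross-check that something is really executing on the database:
SELECT sid, serial#, status, module, last_call_et
  FROM v$session
 WHERE username = 'RUNTIME_USER';   -- placeholder: your runtime access user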

Similar Messages

  • How to measure mapping execution speed

    Hi,
    Currently I'm trying to measure the performance difference between Interface Mappings that contain one single Message Mapping and Interface Mappings that contain 2 or 3 Message Mappings.
    I already tried to do this with the RWB and Performance Monitoring, but Performance Monitoring shows the processing time through the whole of XI, not only the mapping execution time. So it is difficult to get a clean measurement there, without influence from queueing and so on.
    The Test tab in the Integration Builder has too big a step (one second) for this; the mapping execution time is slower.
    Do you have any ideas to measure this?
    Or do you have experience with performance differences between those two kinds of Interface Mappings?
    regards,
    ms
    P.S. I'm using XI 3.0.

    Hi, Manuel:
    For the two scenarios whose performance you want to compare, trigger them separately.
    Take the following steps to measure each scenario:
    Go to SXMB_MONI, find the message, and go to a pipeline step after your "Request Message Mapping" step,
    e.g. select the "Technical Routing" step, expand it, and navigate to SOAP Header -> Performance Header.
    You will see the timestamps for each step executed up to the current step.
    Locate your mapping program, note its begin and end timestamps, and the difference tells you how long the mapping program takes.
    For the scenario with several mapping programs, make sure you take the begin timestamp of the first mapping program and the end timestamp of the last one; the difference is the total time the mapping programs take.
    Hope this helps.
    Liang

  • OWB11gR2: Mapping execution in a process flow not visible in OWB Browser

    When a mapping is executed inside a process flow, execution details are not visible in the OWB Repository Browser (Control Center reports) - rows processed, errors etc. The mapping row is missing from the log, as if the execution never happened (but it did).
    This auditing information is very important for monitoring (for our customers too), and I just don't understand how this functionality was lost in this version. Another serious bug?

    Hi David,
    I was rather tired and frustrated last evening, so today I noticed some things I didn't notice yesterday. Your reply gave me new motivation.
    The conclusion is: a mapping execution in a process flow is logged, but the way activities are displayed in the OWB Browser is now different from previous versions. If I click 'Execution Job Report' on a process flow, I see all the activities listed except mappings (transformations, assign, file exists, subprocess etc.). If I want to see a mapping execution row, I must click a plus (expand) sign.
    This kind of behavior will make processes with a complex hierarchy (we usually have more than 5 levels of subprocesses) rather unwieldy to monitor. In 10gR2, drilling down was done by opening a new browser tab (Execution Job Report link) for each subprocess/mapping activity. Now it all stays on one huge list that keeps expanding.
    But if that is the new behavior, we shall live with it. If our customers don't like it, they will have to get used to it.
    Thank you for your reply!

  • Error in Mapping Execution

    Hi Experts,
    I am working on a .dat file to RFC scenario; the development is done.
    While testing I am facing a problem with mapping execution. I have tested with both a single-record structure and a multiple-record structure, but I get the same error. Please see the error below.
    <!--  Request Message Mapping
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>Application</SAP:Category>
      <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
      <SAP:P1>com/sap/xi/tf/_MM_DATA2RFC_</SAP:P1>
      <SAP:P2>com.sap.aii.utilxi.misc.api.BaseRuntimeException:</SAP:P2>
      <SAP:P3>Fatal Error: com.sap.engine.lib.xml.parser.ParserE</SAP:P3>
      <SAP:P4>xception: Invalid char #0x0 (:main:, row:5, col:2~</SAP:P4>
      <SAP:AdditionalText />
      <SAP:Stack>Runtime exception occurred during application mapping com/sap/xi/tf/_MM_DATA2RFC_; com.sap.aii.utilxi.misc.api.BaseRuntimeException:Fatal Error: com.sap.engine.lib.xml.parser.ParserException: Invalid char #0x0 (:main:, row:5, col:2~</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    I have maintained FCC (file content conversion) in the configuration.
    Can anyone suggest what the problem could be?
    Thanks,
    Swetha

    Hi Stefen,
    You are absolutely right. I saved the file in another editor with ANSI encoding and triggered the interface, but now I am facing a problem with the message mapping. Please see the error below.
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Request Message Mapping
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>Application</SAP:Category>
      <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
      <SAP:P1>com/sap/xi/tf/_MM_EMFDATA2RFC_</SAP:P1>
      <SAP:P2>com.sap.aii.mappingtool.tf7.MessageMappingExceptio</SAP:P2>
      <SAP:P3>n: Runtime exception when processing target-field</SAP:P3>
      <SAP:P4>mapping /ns1:Z_H_EMF_RFC/IPFILE1/item[2]/TRMDA; r~</SAP:P4>
      <SAP:AdditionalText />
      <SAP:Stack>Runtime exception occurred during application mapping com/sap/xi/tf/_MM_EMFDATA2RFC_; com.sap.aii.mappingtool.tf7.MessageMappingException: Runtime exception when processing target-fieldmapping /ns1:Z_H_EMF_RFC/IPFILE1/item[2]/TRMDA; r~</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Can you please advise on the above error?
    Thanks,
    Swetha

  • Skipping mapping execution in process flow

    I have a process flow that calls multiple mappings. Based on some condition I want a mapping not to execute, e.g. when rerunning the process flow after a failure. Currently I keep track of mapping executions and store the status in a control table. When a mapping has completed, it updates the status to completed. When I need to rerun, the SQL of the table join in the mapping has a where clause that returns 0 rows (mapping_completed = N). So the mapping is executed, but no rows are added or updated.
    The flip side of this approach is that with a complex join, especially one using views, the SQL takes a long time to run only to return 0 rows. So the logic is okay, but I want to save time by avoiding the execution of the mapping itself.
    I would like to know how others are implementing this scenario.
    Regards
    Sandeep

    Can you not have two different mappings, one for running it the first time and one that you run on failure?
    In your process flow you can have a parameter, i.e. 'F' for failure and 'I' for initial, and based on this condition you can decide which mapping to invoke and the path to be followed.
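    A minimal sketch of the control-table check described above, assuming a hypothetical MAP_CONTROL table (map_name, mapping_completed) and a hypothetical SHOULD_RUN_MAP function; the function could be called from a Transformation activity in the process flow and its return value used in the outgoing transition condition, so the mapping activity is bypassed entirely instead of running a 0-row load:
    -- Hypothetical helper; table, column and function names are illustrative only.
    CREATE OR REPLACE FUNCTION should_run_map (p_map_name IN VARCHAR2)
      RETURN VARCHAR2
    IS
      l_completed VARCHAR2 (1);
    BEGIN
      SELECT NVL (MAX (mapping_completed), 'N')
        INTO l_completed
        FROM map_control
       WHERE map_name = p_map_name;
      -- 'RUN' only when the previous run did not complete successfully.
      RETURN CASE WHEN l_completed = 'Y' THEN 'SKIP' ELSE 'RUN' END;
    END should_run_map;
    /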

  • Unexpected error during mapping execution

    Hello,
    We are implementing business intelligence for Siebel 8.1 and are having one strange issue; hopefully somebody can help us out.
    We are using OWB 10.2.0.1. A certain error appears during execution of the mapping LOAD_SR, which basically loads service requests from the Siebel DB into the data warehouse.
    The mapping uses two tables as input (S_SRV_REQ and S_SRV_REQ_X), joined by S_SRV_REQ.ROW_ID = S_SRV_REQ_X.PAR_ROW_ID (+) in a Joiner operator, because I always need the extension table (S_SRV_REQ_X) row either filled or containing only nulls (when there is no corresponding extension row). Sadly, this does not work; during execution it generates:
    Error:
    ORA-00997: illegal use of LONG datatype
    ORA-06512: at "DWH_ADM.LOAD_SR", line 32
    ORA-06512: at "DWH_ADM.LOAD_SR", line 3507
    ORA-06512: at "DWH_ADM.LOAD_SR", line 4553
    ORA-06512: at "DWH_ADM.LOAD_SR", line 9984
    ORA-06512: at line 1
    Warning:
    ORA-00997: illegal use of LONG datatype
    In summary, these join conditions generate same error as above:
    S_SRV_REQ.ROW_ID = S_SRV_REQ_X.PAR_ROW_ID (+)
    S_SRV_REQ.ROW_ID (+) = S_SRV_REQ_X.PAR_ROW_ID (+)
    S_SRV_REQ.ROW_ID = S_SRV_REQ_X.PAR_ROW_ID
    Strangely, when I use the join condition S_SRV_REQ.ROW_ID (+) = S_SRV_REQ_X.PAR_ROW_ID, it works. But then a right outer join is used and some records get rejected by the joiner operation (those that do not have corresponding rows in the extension table).
    Could someone help me with this issue?
    Any feedback would be greatly appreciated. Thank you for your time reading this!

    We were still not able to solve this issue and would greatly appreciate any help.
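    One workaround that is sometimes used for ORA-00997, assuming the error comes from a LONG column in one of the Siebel source tables ending up in the generated set-based SQL: copy that table into a staging table first, converting the LONG to a CLOB with TO_LOB (which is only allowed in INSERT ... SELECT / CREATE TABLE AS SELECT), and point the Joiner at the staging table instead. The column name below is an illustrative assumption.
    -- Hypothetical staging step; LONG_ATTRIB stands in for whatever LONG column the table really has.
    CREATE TABLE stg_s_srv_req_x AS
      SELECT par_row_id,
             TO_LOB (long_attrib) AS long_attrib_clob   -- LONG converted to CLOB
        FROM s_srv_req_x;
    -- The mapping's joiner can then use
    --   S_SRV_REQ.ROW_ID = STG_S_SRV_REQ_X.PAR_ROW_ID (+)
    -- without involving a LONG column.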

  • Logging start & end time of map execution

    Hello,
    I want to log the start and end time of the execution of my map (OWB 11g), so I created a table for this purpose and use it twice in every map that I want to time: once for logging the start time, and once for the end time.
    I pass a Constant with the SYSTIMESTAMP value into my log table, along with the name of my map. The problem is that both records' times (start and end) are only milliseconds apart, even though my map runs for more than 2 minutes! So I changed my map's Target Load Order to: [log table for start time] + [main tables of my map] + [log table for end time], and I set the map's Use Target Load Ordering option to True as well.
    Why doesn't it work? Is there any better solution for logging every map's execution time in a table?
    Please help me ...
    Thanks.

    To do that, I have created a view that lists all processes that are running or finished. The view contains fields:
    process_name
    process_type (plsqlmap, plsqlprocedure, processflow, etc)
    run_status (success, error, etc)
    start_time
    end_time
    elapse_time
    inserted
    updated
    deleted
    merged
    You could insert into your log table using a select from this view after every map, or, as I do it, insert into the log table after every process flow. That is, after my process flow is complete I select all of the details for the maps of the process flow and insert those details into my log table.
    Here is the SQL for my view (this is for 10.2.0.3):
    CREATE OR REPLACE FORCE VIEW BATCH_STATUS_LOG_REP_V
    AS
      (SELECT PROCESS_NAME,
              PROCESS_TYPE_SYMBOL,
              (CASE
                 WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_OK', 'COMPLETE') THEN 'SUCCESS'
                 WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_FAILURE') THEN 'ERROR'
                 WHEN RUN_STATUS_SYMBOL IN ('COMPLETE_OK_WITH_WARNINGS') THEN 'WARNINGS'
                 ELSE 'NA'
               END) RUN_STATUS_SYMBOL,
              START_TIME,
              END_TIME,
              ELAPSE_TIME,
              NUMBER_RECORDS_INSERTED,
              NUMBER_RECORDS_UPDATED,
              NUMBER_RECORDS_DELETED,
              NUMBER_RECORDS_MERGED
         FROM OWB_RUN.RAB_RT_EXEC_PROC_RUN_COUNTS
        WHERE TRUNC (START_TIME) >= TRUNC (SYSDATE) - 3)
    ORDER BY START_TIME DESC;
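    As a usage sketch of the 'insert after every process flow' approach (the log table MY_MAP_LOG and its columns are assumptions, not part of OWB):
    INSERT INTO my_map_log
      (process_name, run_status, start_time, end_time, elapse_time,
       inserted, updated, deleted, merged)
    SELECT process_name, run_status_symbol, start_time, end_time, elapse_time,
           number_records_inserted, number_records_updated,
           number_records_deleted, number_records_merged
      FROM batch_status_log_rep_v
     WHERE start_time >= TRUNC (SYSDATE);   -- restrict to today's runs, for example
    COMMIT;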

  • Mapping Execution Status

    Hi,
    When I want to see the mapping execution status, I used to look into WB_RT_AUDIT to get the mapping name, map run id, start time, end time and status. There the status field gives the information as COMPLETE/SUCCESS. That was on 9.0.4. Recently, after upgrading to OWB 10g, the same view shows the status as 1, and even for an error it shows 1. How can I know exactly whether a mapping execution was successful or not? Please mention the table name and column name for checking the status.
    Kishan

    Hello Kishan,
    When you look at the audit table WB_RT_AUDIT and check the column RTA_STATUS, you'll get a number. When you decode the number you will see whether the mapping has completed, is running, or ended in an error.
    This can be done like this:
    DECODE (rta_status, 0, 'Running', 1, 'Completed', 2, 'Error','Else')
    Please keep in mind that if you kill a mapping, the status will still be 'Running'. This is because the process is not able to update Oracle's repository (because you killed it).
    There is quite a lot you can extract from the audit tables. I do not quite know what you mean by mentioning the table and column name to know the status... As far as I understood, you are looking for the mapping status, right?
    Regards
    Moscowic
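    Putting that together, a minimal status query against the runtime repository could look like this (only RTA_STATUS is referenced by name; the remaining columns are taken as-is from the table, since their names vary between versions):
    SELECT a.*,
           DECODE (a.rta_status,
                   0, 'Running',
                   1, 'Completed',
                   2, 'Error',
                   'Else') AS run_status_text
      FROM wb_rt_audit a;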

  • Mapping execution statistics

    Hello,
    Normally, using the Repository Browser, one can see the DML statistics of a mapping execution, but it shows the total number of records selected, inserted or updated by the mapping.
    If there is more than one target table in a mapping and I want to see the number of records inserted/updated/deleted for each of the target tables, where do I need to check?
    Thnks
    MD

    Hi,
    the query below will give you all the necessary details.
    Execute it from the repository owner schema.
    select x.TASK_NAME,
           y.TARGET_NAME,
           z.NUMBER_ERRORS,
           z.NUMBER_RECORDS_INSERTED INSERTED,
           z.NUMBER_RECORDS_UPDATED UPDATED,
           z.NUMBER_RECORDS_MERGED MERGED,
           z.NUMBER_RECORDS_SELECTED SELECTED,
           z.ELAPSE_TIME,
           z.RUN_STATUS,
           x.EXECUTION_AUDIT_ID,
           a.MAP_RUN_ID
      from ALL_RT_AUDIT_STEP_RUNS z,
           ALL_RT_AUDIT_STEP_RUN_TARGETS y,
           ALL_RT_AUDIT_MAP_RUNS a,
           ALL_RT_AUDIT_EXECUTIONS x
     where x.EXECUTION_AUDIT_ID = a.EXECUTION_AUDIT_ID
       and a.MAP_RUN_ID = z.MAP_RUN_ID
       and z.STEP_ID = y.STEP_ID
       and x.TASK_NAME like 'Mapping_name'
     order by x.EXECUTION_AUDIT_ID, a.MAP_RUN_ID

  • Regarding default select clauses executing during OWB Mapping execution -

    All-
    While observing the statements executed during an OWB (11gR2) mapping execution, by monitoring the session with SQL Developer,
    I am finding the following statements executed multiple times, possibly consuming a noticeable share of the mapping execution time:
    SELECT MAX(EXECUTION_AUDIT_ID) FROM ALL_RT_AUDIT_MAP_RUNS WHERE MAP_NAME = :B1
    SELECT SYS_CONTEXT('owb_workspace' , 'workspaceID' ) FROM DUAL
    SELECT nvl2(translate(20040101, 'A1234567890','A'), 'F', 'T') FROM DUAL
    SELECT USER FROM SYS.DUAL
    These statements have no relation to the business logic being implemented and seem to be generated by OWB default settings.
    Can anyone please let me know how to reduce the frequency of the above statements or, if possible, remove them from the OWB mapping execution,
    so that the mapping runs faster?

    Hi,
    these statements are required to record the runtime audit data. Usually they do not really impact the performance of a mapping, so there is no need to worry about them.
    What causes performance problems is usually the business logic part of the mappings.
    You may purge old runtime metadata manually. Look at the script
    %ORACLE_HOME%\owb\rtp\jrtaudit\owbsys\purge_audit_tables.sql. Documentation is included.
    Regards,
    Carsten.
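    Before purging, it can be useful to see how much runtime audit history has built up. A rough sketch, assuming ALL_RT_AUDIT_EXECUTIONS exposes a CREATED_ON timestamp (verify the column name against your repository version):
    SELECT TRUNC (created_on, 'MM') AS run_month,
           COUNT (*)                AS executions
      FROM all_rt_audit_executions
     GROUP BY TRUNC (created_on, 'MM')
     ORDER BY run_month;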

  • Mapping Execution hangs...

    Hi,
    When I execute a single mapping (load_table_name) from the Deployment Manager it hangs. However, the data appears to get loaded. When I try to cancel the execution window, it freezes all OWB windows (Deployment Manager and the OWB projects window). Then I have to kill the OWB application from the Task Manager.
    I'm using the OWB client on Windows 2000 to access the repositories/instances on Oracle 9.2 on an AIX box.
    Your help would be appreciated,
    Bechir

    Igor,
    The mapping is configured as parallel. It uses APPEND PARALLEL(TARGET_TABLE_NAME, DEFAULT, DEFAULT) as a loading hint for the target table.
    The mapping is used for the initial load of data from source to target, with INSERT as the loading type. I should add that the mapping execution does not fail; it loads the data successfully. However, the execution window doesn't close after the load is complete, and it freezes all open OWB windows.
    Thanks,
    Bechir

  • Exception during mapping Execution

    Hi Experts
    A MATMAS05 XML is coming from the MDM system and it should be posted as a MATMAS05 IDoc in R/3.
    I imported the MDM content into the IR, but the field order in the incoming MATMAS05 XML file and the field order in the MATMAS05 IDoc are not the same.
    While executing the mapping it throws 'Exception during mapping execution'.
    Please suggest how I can solve this error; it is urgent.
    thanks & regards
    swapna

    HI swapna,
    The XML coming from MDM has a different sequence than the original MATMAS05 IDoc that you imported into the IR.
    Put another way, the original MATMAS IDoc structure does not match the sender-side structure.
    You can change the sequence of the IDoc XSD file with a tool like Altova XMLSpy to make it the same as the sender side, then import the modified XSD file as an external definition and use it on the sender side.
    On the receiver side, your MATMAS05 structure sequence should remain the original one.
    Now map the corresponding fields and execute the scenario.
    thanks
    Swarup

  • How to get last date of mapping execution?

    Hi all,
    Does anybody know how to get the last run date of a mapping? I am trying to develop an incremental load, but I don't know how to get the last date of mapping execution.
    Can anybody help me? Thanks in advance.

    Hi,
    In our project we also use last_run_date to detect the delta for incremental loading.
    We created a table which holds the run date and time for each mapping. When a mapping is new (initial load) the run date is set to 01/01/1900.
    In each mapping we added a post-mapping process that inserts a record into the process table with the mapping name and the last run date.
    We also added a Constant operator to all mappings, calling a procedure that gets the last_run_date out of our own process table.
    The constant attribute is then used in a FILTER operator to compare the dates and select only rows after the last run date.
    ==============================
    Another option is to get this information out of the runtime audit tables,
    e.g. ALL_RT_AUDIT_EXECUTIONS, which reside in the runtime owner schema.
    Regards,
    Ilona
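    A minimal sketch of that approach; the MAP_RUN_LOG table and GET_LAST_RUN_DATE function below are illustrative names, not OWB objects:
    -- Control table holding one row per mapping run.
    CREATE TABLE map_run_log (
      map_name VARCHAR2 (30),
      run_date DATE
    );
    -- Function called from the Constant operator in the mapping;
    -- falls back to 01/01/1900 for the initial load.
    CREATE OR REPLACE FUNCTION get_last_run_date (p_map_name IN VARCHAR2)
      RETURN DATE
    IS
      l_last DATE;
    BEGIN
      SELECT NVL (MAX (run_date), DATE '1900-01-01')
        INTO l_last
        FROM map_run_log
       WHERE map_name = p_map_name;
      RETURN l_last;
    END get_last_run_date;
    /
    -- The post-mapping process then records the current run, e.g.:
    --   INSERT INTO map_run_log (map_name, run_date) VALUES ('MY_MAPPING', SYSDATE);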

  • Logging OWB mapping execution in Shell script

    Hi,
    I am executing an OWB mapping from a shell script like this:
    $OWB_SQLPLUS MY_WAREHOUSE plsql MY_MAPPING "," ","
    I want to log this mapping execution process into a file.
    Please let me know if this will work:
    $OWB_SQLPLUS MY_WAREHOUSE plsql MY_MAPPING "," "," >> LOGFIL.log
    I will just be using this log file to track all the executions, for logging purposes.
    If this won't work, please tell me the proper way to do it.
    Thanks.

    Avatar,
    ">>" is the Unix operator that will redirect output and append to a particular file, so what you have should work if you're executing it from the shell prompt. Although I don't know specifically what OWB_SQLPLUS and MY_WAREHOUSE are.
    In my company, we have the call to the owb script inside another script. For example, file x contains the following line:
    sqlplus repository_user/pwd@database @sqlplus_exec_template.sql repository_owner location task_type task_name custom_params system_params
    Then at the prompt, we enter:
    nohup x > x.log &
    And the mapping or workflow executes.
    Jakdwh,
    Are you redirecting your output to a file so you can see why it's returning a '3'? The log file will usually tell you where the error occurred. I don't know what the input parameters for your mapping are, but the script is pretty picky about the date format. Also, even if you don't have any input parameters, the "," still has to be passed into the script.
    Hope this helps,
    Heather
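    As a complement to the shell log file, the outcome of the run can also be checked from the runtime audit views once the script finishes. A sketch (RETURN_RESULT is an assumption about the view's columns; verify it against your repository version):
    SELECT execution_audit_id,
           task_name,
           return_result          -- e.g. OK / FAILURE
      FROM all_rt_audit_executions
     WHERE task_name = 'MY_MAPPING'
     ORDER BY execution_audit_id DESC;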

  • OWB Performance issue (mapping execution always takes min 60 sec)

    Hi All,
    For any OWB mapping we execute in one of our environments, the execution seems to hang for some time before it reaches the Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL' statement. The log file shows a constant gap of 30 seconds before the <map>.main() function executes. The data volume extracted is very low, something like 10-1000 records.
    Action taken: increased the SGA pool size at the DB level to allow more resources, and
    changed -Xms64M -Xmx256M to -Xms335M -Xmx440M.
    But no help.
    Extraction from the owb log is as follows.
    2006/03/15-09:29:09-WST [1E0BF3BF] Initializing execution for auditId= 28339 parentAuditId= null topLevelAuditId=28339 taskName=XXIF_OUT_CSV_TRANS
    2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create adapter 'class.RuntimePlatform.0.NativeExecution'
    2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'
    2006/03/15-09:29:39-WST [1E0D73BF] PLSQL callspec: declare l_env wb_rt_mapaudit.wb_rt_name_values; l_IN_BATCH_ID null........
    Kindly note the difference of 30 seconds between creating the native operator and the actual execution of the PL/SQL code.
    The same set of mappings works fine (5-15 secs) in our DEV environment but takes additional time (around 1-3 mins) in the TEST environment, and the mapping executions do not run in parallel (is this expected behaviour?). If we have 10 separate executions of the mapping, it takes 30 mins to complete in the TEST environment compared to 3 mins in DEV.
    The noticeable difference between these two environments is that
    the mappings were created using the 10.1.0.2.0 client and deployed on a 10.1.0.1.0 repository in the DEV environment,
    but the TEST environment uses a 10.1.0.4.0 repository.
    When I check the audit browser, I can see the total elapsed time is 61 sec but the actual mapping execution time is only 1 sec.
    Are there any configuration settings which could cause this delay?
    Any pointers on this will be of great help.
    Regards,
    njain

    I am having exactly the same issue as you have described here, i.e. my mappings take some time to initialize before they run. Did you or anyone else find a solution to this problem?
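    To quantify how much of the wall-clock time goes into initialization rather than the PL/SQL itself, the audit views can be compared per run, along the lines of the statistics query earlier on this page. This sketch assumes both views expose an ELAPSE_TIME column, which should be verified for your repository version:
    SELECT x.task_name,
           x.elapse_time AS total_elapse,     -- whole execution, including startup
           a.elapse_time AS map_run_elapse    -- the PL/SQL map run itself
      FROM all_rt_audit_executions x,
           all_rt_audit_map_runs a
     WHERE x.execution_audit_id = a.execution_audit_id
     ORDER BY x.execution_audit_id DESC;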
