Post Table/Target Load processing capability

OK - I know I can use the Post Mapping Process operator to execute code/functions after the entire map has run. But what about executing custom code/functions after each target is loaded AND ran without warnings? Is there a feature in OWB to do this? My maps contain many target loads, and I would like to update flags on the source records after each one succeeds.
thanks
OBX

One idea....
After each step in your map, add another step that checks the status of the previous step in the current execution and, based on a condition, does what you want.
You can add ALL_RT_AUDIT_STEP_RUNS (a public view; either import it or create an unbound table operator with numeric columns MAP_RUN_ID, NUMBER_RECORDS_INSERTED, NUMBER_RECORDS_MERGED, NUMBER_RECORDS_UPDATED and string column STEP_NAME) into your map, then have a filter which does something like:
INOUTGRP1.MAP_RUN_ID = get_runtime_audit_id
AND INOUTGRP1.STEP_NAME ='SALES_STEPA'
AND INOUTGRP1.NUMBER_RECORDS_INSERTED > 0
Here the previous step has a target operator named SALES_STEPA which is bound to the SALES table; the filter checks whether, in the current execution, step SALES_STEPA inserted any records, for example. You can then put whatever logic you need after the filter, and you would add this logic after each step in the map.
Here is a graphic that should help:
http://blogs.oracle.com/warehousebuilder/pictures/viewer$191
ALL_RT_AUDIT_STEP_RUNS also has a NUMBER_ERRORS column, so whatever condition you need, you should be able to do what you want.
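For reference, the same check expressed as a standalone query against the public view might look like the sketch below (get_runtime_audit_id resolves inside the mapping's runtime context, and SALES_STEPA is just the example step name from above):
SELECT step_name,
       number_records_inserted,
       number_errors
FROM   all_rt_audit_step_runs
WHERE  map_run_id = get_runtime_audit_id
AND    step_name  = 'SALES_STEPA'
AND    number_records_inserted > 0
AND    number_errors = 0;
Adding NUMBER_ERRORS = 0 is one way to express the "ran without warnings" part of the requirement.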
Cheers
David

Similar Messages

  • Adding pre-mapping process breaks target load order

    OWB 11.2.0.2 on Oracle database 11.2.0.2
    I created a mapping that has 4 sources (views on external tables) and 4 targets (3 distinct tables).
    V1 => T1 (truncate/insert)
    V2 => T2 (truncate/insert)
    V3 => T3 (truncate/insert)
    V4 => T3 (update/insert)
    The above is the target load order. It tested fine.
    I added a pre-mapping process (packaged procedure) that is unrelated to any of these source/target tables, but just exits or raises a failure to control whether the mapping should continue to run or not.
    When I run it, the selected/inserted/merged counts are identical, but what I'm seeing in T3 suggests the actual order was update/insert followed by truncate/insert. I verified that the target load order itself remained the same.
    Has anyone else run into this problem?

    Hi,
    In OWB 11.2.0.2 the target load order property is set to 'false' by default.
    If that is the case, you cannot guarantee that the targets will be loaded in the order specified; it should be set to 'true'.
    Right-click your map and select Configure -> Code Generation Properties -> Use Target Load Ordering, and set it to true.
    Please note that you need an ODI EE licence and must have installed OWB with the Enterprise option to set this property.
    Regards,
    Pnreddy

  • Table to view the details of Load Process

    Hi All,
    Can anyone give me the table name(s) to see the time taken by each component involved in the extraction and loading process, i.e. the start and end times of extraction, transfer, update to PSA, transfer rules, update rules, and update to data target? Or else suggest the best way to get these details (apart from the Details tab strip in the Monitor/RSMO). I need to note down the times for more than 30 cubes over 30 days.
    Thanks in Advance
    Regards,
    -Adhi.

    Hi,
    If you need more details regarding the complete setup of tables for the WHM activities etc., let me know your email address or mail me at [email protected]
    regards,
    ravi

  • Table to download processing times

    Hello all,
    Can anyone tell me which table we can use to download the processing times of InfoPackages on a given day?
    Thank you so much.

    Hi:
    If the above table does not have all the data, try this table: RSREQDONE.
    Any load in BW will have an entry in this table, including LOGDPID = InfoPackage.
    It has times in UTC, local time, and the load status.
    Ram Chamarthy
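    A minimal sketch of that lookup (via SE16 or any SQL tool; LOGDPID is the only column named above, and the InfoPackage ID is a placeholder):
    -- List the load requests RSREQDONE holds for one InfoPackage.
    SELECT *
    FROM   rsreqdone
    WHERE  logdpid = 'ZPAK_MY_INFOPACKAGE';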

  • Load into PSA, then into target - in process chains

    Hi
    I have a question regarding first loading into the PSA and then into the target, using process chains.
    It can be done by executing an InfoPackage that only loads into the PSA, then using the process type "Read PSA and Update Data Target".
    My problem is that I want to split this into two separate chains, i.e. loading all data into PSAs before loading into targets. But when I put these two tasks in separate chains, I get the syntax error shown below. Can it really be true that you cannot do this in separate chains?
    Thanks
    Karsten
    Process LOADING variant ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 is referenced but is not in the chain
    Message no. RSPC060
    Diagnosis
    Process LOADING, variant ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 must precede the current process. This process is not however available in the chain.
    System response
    The system is unable to activate the chain.
    Procedure
    Change the current process so that it no longer references LOADING ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7, or schedule process LOADING ZPAK_3WI2VMPZM3FE8Y1ELC0TKQHW7 in the chain so that it precedes the current process.

    Hello Karsten
    I've discussed this with SAP, and even though the response was not clear, it doesn't seem to be possible. We also wanted to do this because we have a source system working 24/7. Since master data and transaction data can be created at any time, it was the only way to make sure all master data was available for the load of transaction data (loading transaction data into the PSA, then loading master data, then PSA to target). Is it for the same reason that you want to separate the loads?
    What I eventually did was load into an ODS that is not BEx-relevant (so no SID required), with no master data check in the InfoPackage, so the ODS acts like the PSA. Then load master data and activate, then load from this ODS to the target. It's working fine, and I am not considering changing it.
    Good luck
    Philippe

  • How to see what a specific load process is doing?

    Hi all, I'm having a bit of a problem: a load (not replication) job seems to be hanging. It is not failing, but it is also not loading any data into HANA and never finishes.
    - Attempted to load table STXL in full; 440k or so rows loaded into HANA.
    - Created an include for the BOR event to filter all but 1 record; this worked fine in about 3 minutes, with 1 record present in HANA.
    - Added additional code to decompress a compressed column format, shown below. This was tested standalone as a program in an ECC system and then migrated to SLT as an include. In this case the load job just "hangs": it neither fails nor finishes. In the monitor I can see it running "In Process" for the last, say, 15 minutes.
    From here, no data ever makes it into HANA, and the job in SLT doesn't seem to finish.
    How do I diagnose where this is getting hung up, or where the problem is? Normally if the code is bad I get a hard error in the application logs, which I don't get here.
    Code for the include:
    *&  Include           Z_TEST_IMPORT_TEXT
    types: begin of ty_stxl_raw,
             clustr type stxl-clustr,
             clustd type stxl-clustd,
           end of ty_stxl_raw.
    DATA: lt_stxl_raw type standard table of ty_stxl_raw,
          wa_stxl_raw type ty_stxl_raw,
          lt_tline type standard table of tline,
          wa_tline type tline.
    IF <WA_S_STXL>-TDNAME <> '0020874233'.
      SKIP_RECORD.
    ENDIF.
    clear  lt_tline[].
    wa_stxl_raw-clustr = <WA_S_STXL>-clustr.
    wa_stxl_raw-clustd = <WA_S_STXL>-clustd.
    append wa_stxl_raw to lt_stxl_raw.
    import tline = lt_tline from internal table lt_stxl_raw.
    READ TABLE lt_tline into wa_tline INDEX 1.
    <WA_R_STXL>-TEXT = wa_tline-tdline.

    Frank, awesome to hear you worked through the issues.
    Regarding DMC_STARTER, can you share how you were able to find the issue? Specifically, with such a large dataset, did you just step through until you hit a dump? When I referred to the documentation on debugging (which I had not yet created!), here is what I gathered just last week as a rough start. I'll probably repost this as a blog if you can confirm it is similar to your steps.
    - Ensure that whatever you would like to test has already been started.
    - For example, if you want to test the initial load, start the initial load, let it fail, and follow the steps below. When DMC_STARTER kicks up, it will start the initial load again in debug.
    - If you want to test the replication phase (picking up logging table records to process), get a table to replication status, stop the master job, then make a change in the source to log new entries in the logging table, and follow the steps below. When DMC_STARTER kicks up, it will start in the replication phase, picking up logging table records for processing.
    After the above is set up for testing, do the following (some of the function names will differ based on your implementation):
    - SE38, DMC_STARTER
    - Use Migration Object Z_<TABLE_NAME>_<CONFIGURATION_ID>, Z_STXL_001 for example, access plan = 1, test mode = X
    - /h to enable debug
    - Find OLC function module call CALL FUNCTION olc_rto_ident, set breakpoint, continue to enter the FM
    - Enter /1CADMC/OLC_100000000000*, find CALL FUNCTION '/1CADMC/IL_100000000000769', set a breakpoint, continue to enter the FM
    - Enter /1CADMC/IL_100000000000769, find PERFORM _BEGIN_OF_RECORD_0001_, set breakpoint, continue to enter
    - Enter PERFORM _RULE_BOR_C; the include code should be found here and you can step through it.
    Regarding your issues discovered
    1) When I referred to the code changes I need to tag onto the other blog, I was referring to the following:
    - Check for <WA_S_STXL>-SRTF2 = '0' to only grab the first line (as you mention)
    - The CLUSTR size as you mention is a helpful check.
    - You also need to put a check above the record processing to handle DELETEs, or you'll get a short dump: when a deletion occurs, only the key columns are passed through this logic, and since we are working on (importing) a non-key field, it will be empty and cause a dump.
    if <WA_S_STXL>-CLUSTD is not initial.
    * put source fields into internal table for the IMPORT statement to work on; main logic here
    endif.
    2) The specific texts we were targeting would never be this large; they only consist of one single line, so we never hit this issue. In the case where you exceed 345 lines, are you just truncating the remainder?
    All in all, great stuff - good to see someone taking this concept further. If you want a deeper look at the folks who are doing much heavier text processing with additional logic (and much stronger ABAPers than I), see Mass reading standard texts (STXH, STXL)
    Happy HANA!
    Justin

  • What would make my Cp7 course get hung up during the loading process when launching on LMS?

    In Cp7, I created a SCORM project to post on our LMS and submitted it for testing. Our LMS department confirmed it worked successfully, and in fact I tested it myself – it worked great. However, my work team asked me to make some changes to the content before pushing it out of the testing phase to launch company-wide. Per company policy, making changes to a course means resubmitting it once more to be tested before making it available to everybody. So I made the requested changes and resubmitted it for testing. Now our LMS department reports the course will not launch properly – it gets stuck in the loading process. It shows “Loading...” endlessly but never loads.
    When I submitted the updated version for testing, I kept all the project settings that worked successfully the first time. The changes I made to the project were:
    I added a slide toward the beginning (slide 2) to give the user navigation tips.
    I set slides so that each one must play all the way through before the user can proceed to the next slide (I think I did this simply by removing “play” from the playbar).
    Under table of contents settings, I checked “navigate visited slides only,” so the user can navigate backwards using the contents bar at left, but can only navigate forward to slides that have already played or to the next slide in the queue.
    I broke up a couple of lengthy multiple choice questions into shorter ones (for an additional two slides).
    Is there any reason one of these changes would make the course get hung up during loading?
    Is there a size limit for Cp7 projects which, if exceeded, might be causing such an issue?
    Or does anyone have ideas about what else might be making it get hung up in the loading process?
    Thank you, any and all, for your feedback.

    Back up the iPhoto library like any other backup: make a copy of the iPhoto library in case of problems.
    Then depress the Option (Alt) and Command keys and launch iPhoto (anywhere you can launch iPhoto, you can do this); keep the keys down until you get the rebuild window.
    LN

  • Load Process in 11g

    Hi,
    How do you model a load process in BPM 11g using the new BPMN palette? The load process queries an external Oracle database table and creates tasks for the end users in the workspace. Each task has a user interface that displays the data passed in.
    thanks.

    OracleStudent,
    I am not going to recommend fiddling with your redo log size; that would be my last option if I had to.
    number of record = 8413427
    Txt fiel size = 3.59GB
    columns = 91
    You said remote server; is that the case? I have no idea what 'remote server' means; could you tell me, and how can I check it? I recently joined this company; I asked the developer, who showed me the code where he is using direct = true. This loading process is very annoying for me; please tell me what I need to check.
    A couple of questions:
    1. How are you loading this data? You mentioned using some .NET application; does this .NET application reside on the same server as your database, or does it run from a different machine? Also, if you are invoking sqlldr (as you mentioned), please post your sqlldr control file. During the load it should also generate a log file; check it and look for the following lines to verify and confirm you are using the direct path:
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Continuation:    none specified
    Path used:      Direct
    2. Do you have any indexes on this table? If yes, how many and what type? I mean regular b-tree, bitmap, or both?
    3. Is this table in LOGGING or NOLOGGING state?
    Regards
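    For questions 2 and 3, a quick way to check from SQL*Plus (a sketch; the table name is a placeholder):
    -- Which indexes exist, and of which type (NORMAL = b-tree, BITMAP = bitmap)?
    SELECT index_name, index_type
    FROM   user_indexes
    WHERE  table_name = 'YOUR_TABLE';
    -- Is the table in LOGGING or NOLOGGING state?
    SELECT table_name, logging
    FROM   user_tables
    WHERE  table_name = 'YOUR_TABLE';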

  • Repository corrupted/loading process is taking a long time

    The repository load process is getting stuck at the message below for a long time. We have disabled sort indexes on all lookup tables, but it is still taking a huge amount of time to load the repository. Any information on this would be a great help.
    94      2011/08/26 20:16:55.523     Report Info     Background_Thread@Accelerator     Preload.cpp               Processing sort indices for 'Products'... (98%)
    81      2011/08/27 02:08:23.298     Report Info     Background_Thread@Accelerator     Preload.cpp               Processing sort indices for 'Products'... (100%)
    Regards,
    Nitin

    Hi Nitin,
    There are many performance improvement steps you can take here, including verifying your repository.
    But I think this is only a concern if the problem reoccurs.
    The accelerators get created when there is a change in a table, and Update Indices recreates them; possibly there were multiple changes and a load with Update Indices has not taken place for some time, which is why it is creating them now.
    For better performance you can do the following:
    Make a judicious choice of which fields to track in change tracking.
    Disk space has a huge impact on the smooth functioning of MDM; if disk space is insufficient, the MDM server cannot even load the repositories.
    Have a closer look at the data model and field properties.
    Verify that your MDS.INI has this parameter: Session Timeout Minutes (a number; it causes MDM Console, CLIX, and applications based on the new Java API to expire after the specified number of minutes. The default is 14400 (24 hours); when set to 0, sessions never time out). When you have many open connections, this can generate performance issues on the MDM server.
    Refer to SAP Note 1012745.
    Hope this helps.
    Thanks,
    Ravi

  • Target Load order is not working properly in 11.2.0.2

    Hi,
    Mappings migrated from OWB 10.2 to 11.2 that contain multiple targets are not being loaded in accordance with the target load order given in the mapping.
    The order in which the targets are loaded changes between executions.
    To test whether this is a migration problem, we created a sample mapping containing 3 targets and specified the loading order on each target table as well as in the mapping property.
    The order in which the targets were loaded was random. With 2 targets it seems to work, but for migrated maps even that is not working.
    Is there any bug reported in 11.2.0.2 for this issue? Any help is highly appreciated.
    Thanks and Regards,
    Pnreddy

    Target load ordering is set to false by default in 11.2.
    You need to set it to true if you want your mapping to use it.

  • How can I control the Target Load plan in a mapping?

    Hi, I have a mapping (attachment) that does inserts and updates to the same table in 3 different situations. It is a simple mapping that inserts new rows or updates ones that already exist, but it also keeps the instance as it was before the update: if a record already exists, it inserts a new instance of that record and updates the old one with the modification date. But I'm having an issue because of the order in which each target is loaded. I know I can build 3 independent pipelines and then use the target load plan, but I want to avoid reading the source more than once. Regards, Matías.

    kglad, thank you very much for taking a look.
    Perhaps it would be better to view the entire site here:
    http://www.katodesignstudio.net/linda/index.html
    On the News page, I'd like to have a button that will load the Screening page (screening.swf) where the News page is loaded. At the link above, the button has been removed until I can find a solution, but it would be located where the "see screening page for details" text is now.
    As you can see by going to any other page, all of the pages in the site load into a container clip. From what I can tell so far, I need the button to unload News.swf from the container clip and load Screening.swf into that same container clip, similar to what the top nav menu is doing. The trick seems to be getting the button to unload the swf that the button is located in...
    How can this be done?
    thanks again for your help!

  • Fact tables not loading

    Hi guys,
    We have installed ODI 11 with OBIA, and so far the installation has gone without any problems. But the problem here is that none of the fact tables is getting loaded; what could be the root cause?
    Most of the dimension tables got loaded. By default only 1 fact table got loaded, but there are 50 fact tables in that module (HRMS & Finance); when we execute, the process runs but the tables are not loading.
    It's an urgent requirement and I have no clue what's happening. Any ideas, guys?
    TIA,
    Kranthi.

    First of all, you need to check whether all the dimensions got loaded.
    If a crucial dimension, such as customer or something else that is used in all fact loads, is not getting loaded, then obviously none of your facts will be loaded.
    Hope it helps.
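    A minimal sanity check along those lines (a sketch; the dimension names are just examples from the OBIA warehouse schema, substitute the ones your fact loads join to):
    -- Row counts for the dimensions the fact loads depend on;
    -- empty dimensions usually explain empty facts.
    SELECT 'W_CUSTOMER_D' AS dim, COUNT(*) AS row_count FROM w_customer_d
    UNION ALL
    SELECT 'W_EMPLOYEE_D', COUNT(*) FROM w_employee_d;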

  • Btree vs Bitmap.  Optimizing load process in Data Warehouse.

    Hi,
    I'm working on fine-tuning a data warehousing system. I understand that bitmap indexes are very good for OLAP systems, especially if the cardinality is low and the WHERE clause has multiple fields, each with its own bitmap index.
    However, what I'm fine-tuning is not a query but the load process: I want to minimize the total load time. If I create a bitmap index on a field with a cardinality of one million, and the table has one million rows (each row has a distinct field value), then my understanding is:
    total size of the bitmap index = number of rows * (cardinality / 8) bytes
    (because there are 8 bits in a byte).
    Hence the size of my bitmap index will be
    million * million / 8 bytes = 116 GB.
    Also, does anyone know what the size of my B-tree index would be? I'm thinking:
    total size of the B-tree index = number of rows * (field length + 20) bytes
    (assuming the rowid takes 20 characters).
    Hence the size of my B-tree index will be
    million * (10 + 20) bytes = 0.03 GB (assuming my field length is 10 characters).
    That means the B-tree index is much smaller than the bitmap index.
    Is my math correct? If so, the disk activity will be much higher for the bitmap index than for the B-tree index, and creating the bitmap index should take much longer when the cardinality is high.
    Please let me know your opinions.
    Thanks
    Sankar

    Hi Jaffar,
    Thanks to you and Jonathan. This is the kind of answer I have been looking for.
    If I understand your reply correctly, for the scenario from my original email the bitmap index will be 32 MB whereas the B-tree will be 23 MB. Is that right?
    Suppose there is an order table with 10 orders and four possible values of OrderType. Based on your reply, I now understand that the bitmap index is organized as shown below.
    Data Table:
    RowId     OrderNo     OrderType
    1     23456     A
    2     23457     A
    3     23458     B
    4     23459     C
    5     23460     C
    6     23461     C
    7     23462     B
    8     23463     B
    9     23464     D
    10     23465     A
    Index table:
    OrderType     FROM     TO
    A     1     2     
    B     3     3     
    C     4     6     
    B     7     8     
    D     9     9     
    A     10     10     
    That means you might have more entries in the index than the cardinality, right? So the size of the index cannot be EXACTLY determined from the cardinality: in our example the cardinality is 4 while there are 6 entries in the index.
    In an extreme example, if no two adjacent records have the same OrderType, then there will be 10 entries in the index as well, as shown in the example below.
    Data Table (second example):
    RowId     OrderNo     OrderType
    1     23456     A
    2     23457     B
    3     23458     C
    4     23459     D
    5     23460     A
    6     23461     B
    7     23462     C
    8     23463     D
    9     23464     A
    10     23465     B
    Index table (second example):
    OrderType     FROM     TO
    A     1     1     
    B     2     2     
    C     3     3     
    D     4     4     
    A     5     5     
    B     6     6     
    C     7     7
    D     8     8
    A     9     9
    B     10     10
    That means the size of the index will be somewhere between the cardinality (minimally) and the number of rows in the table (maximally).
    Please let me know if I make sense.
    Regards
    Sankar
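    One way to settle the size question empirically rather than by formula is to build a scratch table and measure the index segments (a sketch; all object names are made up):
    -- Build a 1,000,000-row test table where every row has a distinct value.
    CREATE TABLE t_card_test AS
      SELECT level AS id, 'V' || TO_CHAR(level) AS val
      FROM   dual
      CONNECT BY level <= 1000000;
    -- Measure the bitmap index first...
    CREATE BITMAP INDEX ix_card_val ON t_card_test (val);
    SELECT ROUND(bytes / 1024 / 1024) AS mb
    FROM   user_segments
    WHERE  segment_name = 'IX_CARD_VAL';
    -- ...then drop it and measure a b-tree index on the same column.
    DROP INDEX ix_card_val;
    CREATE INDEX ix_card_val ON t_card_test (val);
    SELECT ROUND(bytes / 1024 / 1024) AS mb
    FROM   user_segments
    WHERE  segment_name = 'IX_CARD_VAL';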

  • Process Capability

    Hello,
    Please note that we generate a control chart for the required characteristics.
    I would like to know the table and fields for the process capability indices, i.e. Cp and Cpk.
    Please let me know ASAP.
    I need to develop a report on the above.
    With regards,
    shridhar.d
    9880644496

    Dear SM
    The Cp and Cpk that appear in MCXD have no connection with the SPC characteristic selection in the control indicator:
    1) The MIC should be single recording
    2) To get the Cp/Cpk you have to drill down to either Plant/Master Insp. Char. or Inspection Characteristic using the switch drill-down option
    3) The MIC should have upper and lower limits
    The SPC indicator in the MIC is used only when you need a control chart.
    There is another method by which you can get Cp/Cpk:
    Go to QS28, input the plant and the MIC, and execute. You will get the MIC details. Choose the MIC and click on Result History. A popup titled Result History Limit Selection will appear; in the Time tab you can set the inspection lot selection dates From and To. Execute.
    You will get the inspection lot list; select all, click on Histogram, and there you have a tab for statistical values.
    Hope this helps
    Regards
    Gajesh

  • Help Please : Cron Job to automate load process

    Hi
    I am trying to automate a data load process. I am loading data into a number of tables from flat files.
    Currently I have a UNIX (SunOS) file with a bunch of SQLLDR commands, and I changed the permissions on this file to make it executable. Every morning I execute this file to load data.
    Now I want to automate this by writing a cron job and scheduling it, but I am running into a number of problems. I exported ORACLE_SID, ORACLE_HOME and PATH, yet cron is still unable to find SQLLDR.
    What else am I missing? Here are my command file and cron file.
    Please help!
    oraenv variables
    export ORACLE_HOME=/export/opt/oracle/product/8.1.6
    export ORACLE_SID=fid1
    export PATH=$PATH:$ORACLE_HOME/bin
    .profile
    . $HOME/oraenv
    daily_full.sql file
    export ORACLE_SID ORACLE_HOME PATH
    sqlldr userid=user/pwd control=acct.ctl log=acct.log
    sqlldr .......
    Cron Job
    16 11 * * 1-5 /apps/fire/data/loadscripts/daily_full.sql >> /apps/fire/data/loadscripts/fulllog.log 2>&1
    Output fulllog.log file
    /apps/fire/data/loadscripts/daily_full.sql: sqlldr: not found
    /apps/fire/data/loadscripts/daily_full.sql: sqlldr: not found
    Thanks
    Shanthi

    Hi Ramayanapu,
    First: you have written a shell script, not a SQL script. Please rename your file from daily_full.sql to daily_full.sh.
    I suggest you run the cron job as a user whose environment sets the ORACLE_SID and ORACLE_HOME variables.
    In this case cron will operate from the $HOME of that user.
    Your export may clobber the settings from .kshrc; in any case the statement has no effect in your script, so please remove it.
    Change your sqlldr statement as follows:
    $ORACLE_HOME/bin/sqlldr userid=user/pwd control=<path>acct.ctl log=acct.log
    where <path> is replaced with the path to your control file.
    user/pwd must correspond to an Oracle user who has the right to insert into the destination table.
    Your log file will be placed in the $HOME directory.
    Hope I could help solve your problems
    with kind regards
    Hans-Peter
