Parallel Processing in ODI with Unique Requirement

Hi All,
I have a unique requirement: I have a scenario which I have to run multiple times in parallel, with different user inputs.
For example, if the user inputs A, B, C, then I have to run the same scenario three times in parallel, with the input filter set to A, B, and C respectively.
If the user inputs A, B, C, and D, then I have to run the same scenario four times in parallel, with inputs A, B, C, and D.
Does anyone know a way to achieve this?
Thanks in advance for your suggestions...
Regards

Hi,
Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
http://odiexperts.com/interface-parallel-execution-a-new-solution
http://odiexperts.com/processing-multiple-interface-through-single-package

Similar Messages

  • Parallel processing in ODI

    Hi experts,
    I have 30+ files in text format that hold partitions of the same table. These partitions can/must go through the same processing in parallel, and then must be appended back into a single large file.
    I'd have no problem designing the transformations in ODI (since I have the SQL code), but I'm looking for an "elegant" way to tackle the problem. By elegant I mean modular, with minimal replication of work, etc.
    Thank you very much.
    Joan

    1. Create a dummy datastore in ODI, under your file model, with a column structure mimicking your file structure.
    2. Duplicate this datastore in the Oracle model.
    3. Duplicate the loader-based "LKM File to Oracle" and modify it in such a way that both the target table and the data file/path come from KM options (add these options if they are not there already).
    4. Design one interface with these datastores and the new LKM. Use an IKM that does a straight create-target-table and load, without bothering to create an I$ table (IKM SQL Control Append seems ideal, but you may need to modify it to make sure the target table comes from a variable). Use two variables (var2 and var3) to supply the new options in the LKM (and IKM if required).
    5. Create a scenario.
    6. Create an Oracle table that contains a sequence, target_table_name and source_file_with_path.
    7. Create a package that sets a variable (say var1) to 1, uses the other two variables to store target_table_name and source_file_with_path based on var1, and calls your scenario. Check for errors and, upon success, increment var1 and loop.
    8. At the end, create your big partitioned table and write an ODI procedure to loop through the target tables and do a partition exchange.
    That is the most modular approach I can think of.

  • Parallel processing in workflow with fork

    Hello,
    I have a case in a production system: the workflow has parallel processing with a fork, and this fork has 2 branches as inputs.
    It has 2 mandatory branches, with no other condition.
    Does anyone know of any scenario where the workflow proceeds ahead with only one branch executed, even though both branches are mandatory?
    Thanks.

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel Processing - In conjunction with TWS scheduler.

    We have a .Bat file that uses upshell.exe to execute a custom script we've created.
    This custom script looks for files in the inbox and moves them to the relevant OpenBatches/OpenBatchesML folder. Once the files have been moved, it runs a parallel process up-to-load for all data. However, what we're finding is that, because we're running this in parallel, the .bat script completes while FDM is still processing in the background. This is evident, as the TBATCH table shows the batch as not 100% complete. This unfortunately causes us a problem, as the scheduler (TWS) thinks all processing has finished and subsequent downstream processing kicks off. As I understand it, this wouldn't be an issue if we'd used serial processing rather than parallel.
    We've chosen parallel because of the volumes and the limitation of our batch window. Am I correct in assuming that, in principle, parallel will process data faster than serial? I am currently running specific performance tests on my data to prove this.
    The real issue is that we somehow need to ensure TWS doesn't kick off downstream processes, and I need to somehow have FDM create a log file for batches that complete successfully. I'm assuming it should be done within an event script, but I'm not sure which one, as this is quite new to me.
    Has anyone come across this issue themselves? If so, I'm looking for some guidance/examples of how you've managed to get around the problem.
    Thanks in advance.

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel Process Option for Optimization in Background.

    Hi,
    I am testing the SNP Optimizer with various settings this week, on a demo version from SAP, for a client. I am looking for information that anyone might have on the SNP parallel processing option when executing the Optimizer in the background. The information that I could find is very thin. I would be interested in any documentation or experience that you have.
    Sincerely,
    Michael M. Stahl
    [email protected]

    Hello,
    While running transaction /SAPAPO/SNPOP - Supply Network Optimization in the background, you can enter a parallel processing profile in the variant, in the field "Paral. Proc. Profile".
    You will need to define this profile in Customizing (SPRO) before using it in the variant.
    The path to maintain it is as follows. Use transaction SPRO:
    Advanced Planning and Optimization --> Supply Chain Planning --> Supply Network Planning (SNP) --> Profiles --> Define Parallel Processing Profile
    Here you need to define your profile, e.g. as below:
    Paral. Proc. Profile: SNP_OPT
    Description:          SNP OPTIMIZER PP PROFILE
    Appl. (Parallel Pr.): Optimization
    Parallel Processes:   2
    Logical System:
    Server Group:
    Block Size:
    You will need your Basis team's help to enter values for Server Group and Block Size.
    I hope the above information is helpful.
    Regards,
    Anjali

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running an APO DP process chain with parallel processing in our company. We are experiencing some issues regarding the run time of the process chain, and need your help on the points below:
    - What are the ways we can optimize process chain run time.
    - Special points we need to take care of in case of parallel processing profiles used in process chain.
    - Any specific sequence to be followed for different processes in process chain - if there is some best practice followed.
    - Any notes suggesting ways to improve system performance for APO version 7 with different enhancement packs 1 and 2.
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    Firstly, I would recommend that you identify the pain areas (steps) with the longest runtimes. Each kind of step then has its own approaches for reducing the runtime.
    For example, you may end up with steps such as InfoPackage executions, DTPs, or DP mass processing jobs that run with long runtimes. Target each of them differently and find the ways to optimize them. At the same time, the approach you follow should be feasible from the Basis perspective (system load and utilization) as well.
    As for parallel processing, you can use it for many different jobs and it is worth exploring further: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
    For example if I use a profile with 4 parallel processes, and I run a network heuristic to process 5 location products I get an incomplete planning answer.
    Is this expected behaviour? What are the 'good practices' for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement it must first plan the distribution center and then the supplying plant, and in the case of in-house production it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can upset the planning sequence: for example, before the final product is planned in one block, the component is already planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually is good practice: put related location products in one job, and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so using RFC function modules (with STARTING NEW TASK and similar methods) but didn't succeed: this approach requires RFC-enabled function modules, and RFC-enabled function modules do not allow generic data types (STANDARD TABLE), which are needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large chunk of data in two internal tables; one of them is formed dynamically, and hence its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table, whose structure is pre-defined. This processing is taking a very long time, as the number of records is close to a million.
    3. I need to divide the dynamic internal table into chunks of (say) 1000 records each, pass each chunk to a function module, and submit it to run in another task. Many such tasks will be executed in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, due to the limitation of not allowing generic data types in RFC, I'm unable to do this. Does anyone have any idea how to implement parallel processing with dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    Try the code below...
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE c.
      DATA: ref_tab TYPE REF TO data,
            ref_wa  TYPE REF TO data.
      FIELD-SYMBOLS: <f1>    TYPE STANDARD TABLE,
                     <f1_wa> TYPE any.
    * Build the infotype structure name dynamically, e.g. infty '0002' -> 'P0002'
      CONCATENATE 'P' infty INTO w_infty.
    * Create the dynamic internal table and assign it to a field symbol
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create a matching dynamic work area (use a separate reference)
      CREATE DATA ref_wa TYPE (w_infty).
      ASSIGN ref_wa->* TO <f1_wa>.
    * Default the date range if not supplied
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
    * The generically typed <f1> can be passed as a TABLES parameter
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.
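    One possible workaround for the RFC limitation described in the question: serialize the dynamically typed table into an XSTRING (which RFC does allow) and rebuild the table inside the function module. The sketch below makes assumptions: Z_PARALLEL_CHUNK, IV_TYPENAME and IV_DATA are hypothetical names, and Z_PARALLEL_CHUNK must be an RFC-enabled function module.
    * Caller side: serialize the dynamic table <f1> built above
      DATA: lv_xml TYPE xstring.
      CALL TRANSFORMATION id SOURCE tab = <f1> RESULT XML lv_xml.
      CALL FUNCTION 'Z_PARALLEL_CHUNK'
        STARTING NEW TASK 'CHUNK_001'
        EXPORTING
          iv_typename = w_infty    " e.g. 'P0002', used to rebuild the type
          iv_data     = lv_xml.
    * Inside Z_PARALLEL_CHUNK (IMPORTING iv_typename TYPE tabname,
    * iv_data TYPE xstring): recreate the table type and deserialize
      DATA: lr_tab TYPE REF TO data.
      FIELD-SYMBOLS: <lt_tab> TYPE STANDARD TABLE.
      CREATE DATA lr_tab TYPE STANDARD TABLE OF (iv_typename).
      ASSIGN lr_tab->* TO <lt_tab>.
      CALL TRANSFORMATION id SOURCE XML iv_data RESULT tab = <lt_tab>.
    * ...process <lt_tab> and persist the result to a database table...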

  • BPM Parallel Process with Exclusive Gateway

    Hi,
    I am facing an issue with an Exclusive Gateway in a parallel process.
    The issue is that the process always stays in the In-Progress state at the Parallel Join: the process stops at the Parallel Join, and moreover there are no errors in the process. If I delete the Exclusive Gateway in the parallel branch, the process moves on through the Parallel Join to the next-level human task; in other words, it works fine.
    I have designed my process in such a way that the first task is a human task, then comes a Parallel Split with 2 human tasks, of which one is performed through an Exclusive Gateway and the other is just a simple approval. Finally, I merge these two human tasks using a Parallel Join and then trigger an approval human task, closing the process.
    Appreciate your quick suggestions to fix this issue.
    Thanks in advance,
    Dev...

    Hi Unni,
    Thanks for your reply.
    I have checked all the parallel tasks and all are in the completed state. No errors.
    If I delete the Exclusive Gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
    Please let me know if I have missed anything.
    Thanks in advance,
    Dev

  • Parallel processes with reentrant VI have the same value in both threads

    Has it been so long since I programmed LabVIEW that I forgot some basic stuff??
    I have a main VI which originally called dynamic processes in parallel.
    I then called the sub-VIs directly and still run them in parallel, but they now have separate names.
    I use a QSM. Each parallel thread now has its own QSM because, although I was using a queue name for each dynamic queue, the data being extracted was shared between the two threads. If I confused everyone with that statement, I shall explain.
    Two parallel processes, calling a QSM (re-entrant).  They have the same number of elements and matching sets.
    EX:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task2
    Task3                 Task3
    Task4                 Task4
    Task5                 Task5
    Task6                 Task6
    I was expecting each thread (each process) to extract from the queue the list of tasks as entered (from Task1 to Task6).  What the processes were getting was the following:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task3
    Task4                 Task5
    Task6                 default
    Each process was sending a different queue name to the QSM. Each queue should have its own name.
    I need to get this running by tomorrow with no excuse! So I decided to do a lame workaround by also having 2 QSMs. That fixed it.
    In each parallel process (which are copies of each other with different names) there is a call to open a telnet session. I probed and placed breakpoints in the code. Although each process has a different name and the call to the function that opens the telnet session is re-entrant, the very same telnet reference number is assigned to both processes.
    Why? Why would they get the same reference number? I made all VIs down to (and including) Telnet Open Connection re-entrant (although it was not needed) and it still assigns the same reference number to each telnet session. What am I not seeing? What am I missing?
    Unfortunately, I cannot post the code. But it is not complicated code: just 2 sessions with different IP addresses. I would expect different telnet session references...
    As a matter of fact, I need to try something silly...

    I should provide more details with the solution....
    I just have to stop saying "D'Oh!!"
    Okay... here goes...
    In the LVOOP, I am using Notifiers and Semaphores to ensure that a race condition cannot occur.  Works well with previously written code.
    In this particular implementation, I have the same / similar object being created more than once (twice at this time).
    Where I went wrong (D'Oh!!!!) was to have a static name for a given object when creating the Notifier and Semaphore references. Since the same name was given, so was the reference. Since the references were the same, so was the data, and so on.
    D'Oh!!!!!!
    D'Oh!!!!!!
    Now I know why a particular bird was called D'Oh-D'Oh bird... 
    D'Oh!!!  Such a silly mistake...
    D'Oh!!!
    Hope it makes a few people laugh...  or help another bird....

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English^^:
    I have been working for days on my project, where I have to show parallel processing while transferring information in different ways, along with the associated problems (like timing and so on).
    I chose to transmit information to a parallel process by (1) a queue, and by (2) a file (.txt). (Other ways are welcome; do you have 1-2 other ideas?)
    To solve this problem I made three while loops: the first one is the original one, where the original information (a signal) is created and sent by queue and by file to the other two while loops, where this information is evaluated to recreate the same signal.
    So in the end you can compare all the signals to see if they are the same, and thereby answer the question about the parallelism of the processes.
    But in my VI file I have some problems:
    the version with the queue works pretty fine - it's almost parallel;
    but the version with the file doesn't work in parallel, and I have no idea how to solve it - -
    I'm a newbie^^
    Can someone correct my file so that both (file and queue version) run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi (23 KB)

    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods include Events, Action Engines and Notifiers (and why not web services) for information transfer between processes.
    Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend that approach. If you use a ramdisk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Use of global data in transformation with parallel processing

    Hi,
    In an upgrade, I have a global variable in a 3.5 transfer rule routine. The global variable keeps count of an ID.
    The global variable is used in every data package, because the corresponding InfoPackage is set to PSA only, where the help says:
    "If you select this processing option and then request processing is done serially during loading, the global data are maintained as long as the process with which the data were processed remains."
    In 7.0 I also use a global variable to do the same thing, but an InfoPackage is no longer available, because we use DTPs now.
    How can I store my global variable so that all parallel processes for the data packages use the same global variable? Or can I set an option so that the DTP does serial processing, similar to the InfoPackage setting in 3.5?
    As a workaround I increased the package size to a very large figure, but I would be more comfortable with a sounder solution.
    thanks

    Hi Max,
    Try declaring your global variable in the start routine, between the "begin of global" and "end of global" comment lines, as sketched below.
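    A minimal sketch of the global part of the start routine (gv_id_counter is a hypothetical name):
      *$*$ begin of global - insert your declaration only below this line  *-*
      * Hypothetical counter; it lives as long as its work process, i.e. it is
      * shared across all data packages handled by that process.
      DATA: gv_id_counter TYPE i.
      *$*$ end of global - insert your declaration only before this line   *-*
    Note that with parallel DTP processing each parallel process keeps its own copy of such a global variable, so a single shared counter still requires the DTP to run with one process (serial processing), or the state to be kept in a database table.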
    Best Regards.
    Javier Gómez

  • Strange response time with query triggering parallel processing

    Hi,
    Is there anyone who can run the SQL below on a RAC 11g R2 and give me feedback on how long it took?
    In my environment it takes more than a minute, and I suspect there is a problem with the database and parallel processing.
    If I collect the values from the inner select and turn them into a list of comma-separated values, it runs fast.
    Replace SCHEMA_NAME with an existing schema name on your database before running.
    SELECT sa.*
      FROM gv$sqlarea sa
      WHERE sa.sql_id in (SELECT sql_id from gv$sqlarea where parsing_schema_name like '%SCHEMA_NAME%' and rownum < 100)
    Thanks!

    You are using the hint a bit incorrectly.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements006.htm#SQLRF50805
    <quote>
    The NO_PARALLEL hint overrides a PARALLEL parameter in the DDL that created or altered the table. For example:
    SELECT /*+ NO_PARALLEL(hr_emp) */ last_name
    FROM employees hr_emp;
    </quote>
    Check Metalink note 267330.1 as well.

  • Job fails with Timeout for parallel process (for SID Gener.): 006000

    Hello all,
    I'm getting the error below and am not able to find any issue on the Basis side. Can anyone help with this?
    Job started
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Background process BCTL_4XU7J1JPLOHYI3Y5RYKD420UL terminated due to missing confirmation
    Process 000006 returned with errors
    Data pkgs 000001; Added records 1-; Changed records 0; Deleted records 0
    Log for activation request ODSR_4XUG2LVXX3DH4L1WT3LUFN125 data package 000001...000001
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Activation is running: Data target CRACO20A, from 1,732,955 to 1,732,955
    Overlapping check with archived data areas for InfoProvider CRACO20A
    Data to be activated successfully checked against archiving objects
    Parallel processes (for Activation); 000005
    Timeout for parallel process (for Activation): 006000
    Package size (for Activation): 100000
    Task handling (for Activation): Backgr Process
    Server group (for Activation): No Server Group Configured
    Parallel processes (for SID Gener.); 000002
    Timeout for parallel process (for SID Gener.): 006000
    Package size (for SID Gener.): 100000
    Task handling (for SID Gener.): Backgr Process
    Server group (for SID Gener.): No Server Group Configured
    Activation started (process is running under user *****)
    Not all data fields were updated in mode "overwrite"
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Report RSODSACT1 ended with errors
    Job cancelled after system exception ERROR_MESSAGE

    Thanks for the link, TSharma. I will try that today.
    UPDATE:
    I ran a non-parallel Data Pump export and just let it run overnight. This time it finished after 9 hours. In this run I set the STATUS=300 parameter in the PARFILE, which echoes STATUS updates to standard out every 300 seconds (5 minutes).
    As before, after 2 hours it had finished 99% of the export and just spat out WAITING status for the last 7 hours until it finished. The remaining tables it exported (a few hundred) were all very small or had zero rows. There is clearly something going on that is not normal. I've done this expdp before on clones of this database and it usually takes about 2-2.5 hours to finish.
    The database is about 415 gigabytes in size.
    I will update what the TRACE finds and I'm also opening a case with MOS.

  • Parallel processing issue within the same server

    Hi,
    I need to perform parallel processing within the same server, using the work processes available on that server.
    Please suggest whether this can be accomplished, and explain the design if possible.

    Hello Venkata,
    You can achieve parallel processing by using CALL FUNCTION ... STARTING NEW TASK <task name>.
    In this case the function module runs in asynchronous mode without stopping the calling program.
    For more details you can refer to the following link:
    https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing
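    For illustration, here is a minimal sketch of this pattern. The function module name Z_PROCESS_CHUNK, its parameter IV_CHUNK_ID and the callback form name are assumptions; the function module must be RFC-enabled:
      DATA: gv_sent TYPE i,
            gv_done TYPE i,
            gv_task(12) TYPE c.
    * Dispatch one unit of work to a free work process
    * (in a real program this block sits in a loop over the work packages).
      gv_sent = gv_sent + 1.
      gv_task = gv_sent.
      CONDENSE gv_task NO-GAPS.
      CONCATENATE 'TASK' gv_task INTO gv_task.
      CALL FUNCTION 'Z_PROCESS_CHUNK'
        STARTING NEW TASK gv_task
        DESTINATION IN GROUP DEFAULT   " or a configured RFC server group
        PERFORMING chunk_done ON END OF TASK
        EXPORTING
          iv_chunk_id = gv_sent
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2
          resource_failure      = 3
          OTHERS                = 4.
      IF sy-subrc <> 0.
    *   No free work process or RFC error: retry later or process locally
      ENDIF.
    * The calling program continues immediately; wait for all callbacks
      WAIT UNTIL gv_done >= gv_sent.

    * Callback, executed in the calling session when a task finishes
    FORM chunk_done USING p_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_CHUNK'
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2
          OTHERS                = 3.
      gv_done = gv_done + 1.
    ENDFORM.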
    Thanks,
    Augustin.
