Parallel Processing with PowerShell 2.0

Say I want to loop through a collection of servers and copy a bunch of files to each of them. That part is simple, but how would I do it so that the files are copied to the servers in parallel rather than serially? I know version 3.0 can do this with workflows (ForEach -Parallel),
but version 2.0 can't.
I can't use Invoke-Command because PSRemoting is not set up and most likely isn't allowed (I know, dumb, but I don't make the rules).

Workflows require PowerShell 3.0.  Jobs should do the trick, as long as they're used smartly.  There's quite a bit of overhead involved in starting a new job, so I wouldn't use them for very small individual tasks, but for long file transfers like you're describing, you probably won't notice the difference.
Do keep in mind that each job runs in its own powershell.exe process, which eats up 50 MB or more of RAM until it finishes, so you don't want too many of them going at once.
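Here's a rough sketch of the job-based approach; the server names, source path, destination share, and throttle value below are just examples, so adjust them for your environment:

    # Copy a folder to several servers in parallel using background jobs (works on 2.0).
    $servers = 'server01', 'server02', 'server03'   # example server names
    $source  = 'C:\Deploy\*'                        # example source path
    $maxJobs = 3                                    # each job is its own powershell.exe process

    foreach ($server in $servers) {
        # Throttle: wait for a free slot before starting another job.
        while (@(Get-Job -State Running).Count -ge $maxJobs) {
            Start-Sleep -Seconds 2
        }
        Start-Job -Name "copy_$server" -ScriptBlock {
            param($src, $dest)
            Copy-Item -Path $src -Destination $dest -Recurse -Force
        } -ArgumentList $source, "\\$server\c$\Deploy"   # example destination (admin share)
    }

    # Wait for all jobs to finish, collect their output, then clean up.
    Get-Job | Wait-Job | Receive-Job
    Get-Job | Remove-Job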
An alternative would be to use runspaces.  They're faster, but quite a bit more complicated to use properly.  There are a few functions available on the internet that wrap this concept up and make it easier to call from a script; try searching for "Invoke-Parallel" and "ForEach-Parallel" if you're interested in that.
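For comparison, here's a minimal runspace-pool sketch, reusing the $servers and $source placeholders from the job example above. The pool caps concurrency at the second argument of CreateRunspacePool, and everything runs inside the one process instead of spawning a powershell.exe per task:

    # Run the copies on a runspace pool (in-process, PowerShell 2.0 compatible).
    $pool = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspacePool(1, 3)
    $pool.Open()

    $work = @()
    foreach ($server in $servers) {
        $ps = [PowerShell]::Create()
        $ps.RunspacePool = $pool
        [void]$ps.AddScript({
            param($src, $dest)
            Copy-Item -Path $src -Destination $dest -Recurse -Force
        }).AddArgument($source).AddArgument("\\$server\c$\Deploy")
        # BeginInvoke starts the copy without blocking the loop.
        $work += @{ Shell = $ps; Handle = $ps.BeginInvoke() }
    }

    foreach ($item in $work) {
        # EndInvoke blocks until that runspace finishes.
        $item.Shell.EndInvoke($item.Handle) | Out-Null
        $item.Shell.Dispose()
    }
    $pool.Close()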

Similar Messages

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English^^
    I have been working for days on a project where I have to demonstrate parallel processes while transferring information in different ways, along with their problems (like timing and so on).
    I chose to transmit the information to a parallel process by (1) a queue and (2) a file (.txt). (Other ways are welcome; do you have one or two other ideas?)
    To solve this problem I made three while loops. The first one is the original, where the original information (a signal) is created and sent via the queue and the file to the other two while loops, where the information is evaluated to recreate the same signal.
    At the end you can compare all the signals to see whether they are the same, which answers the question about the parallelism of the processes.
    But my VI has some problems:
    The queue version works pretty well; it is almost parallel.
    The file version, however, does not run in parallel, and I have no idea how to solve it.
    I'm a newbie^^
    Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi (23 KB)

    Technically a queue is never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, and Notifiers (and why not web services).
    Due to limitations of the disk system, you can only read/write one file at a time from one process, so I wouldn't recommend the file approach. With a RAM disk it might work.
    /Y

  • BPM Parallel Process with Exclusive Gateway

    Hi,
    I am facing an issue with an Exclusive Gateway in a Parallel Process.
    The issue is that the process always stays In Progress at the Parallel Join: the process stops at the Parallel Join, yet there are no errors in the process. If I delete the Exclusive Gateway in the parallel branch, the process moves on to the next-level human task through the Parallel Join, i.e. it works fine.
    I have designed my process so that the first task is a Human Task, followed by a Parallel Split into two Human Tasks; one of them is performed through an Exclusive Gateway and the other is just a simple approval. Finally I merge the two Human Tasks with a Parallel Join and then trigger an Approval Human Task, closing the process.
    I would appreciate your quick suggestions for fixing this issue.
    Thanks in advance,
    Dev...

    Hi Unni,
    Thanks for your reply.
    I have checked all the parallel tasks and they are all in Completed state. No errors.
    If I delete the Exclusive Gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
    Please let me know if I am missing anything.
    Thanks in advance,
    Dev

  • Parallel process with reentrant VI have same value in both threads

    Has it been so long since I programmed LabVIEW that I have forgotten some basic stuff?
    I have a main VI which originally called dynamic processes in parallel.
    I then called the sub-VIs directly and still ran them in parallel, but they now have separate names.
    I use a QSM. Each parallel thread now has its own QSM because, although I was using a queue name for each dynamic queue, the extracted data was being shared or split between the two threads. If that statement is confusing, I shall explain.
    Two parallel processes, each calling a (re-entrant) QSM. They have the same number of elements and matching sets.
    EX:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task2
    Task3                 Task3
    Task4                 Task4
    Task5                 Task5
    Task6                 Task6
    I was expecting each thread (each process) to extract from its queue the list of tasks as entered (Task1 through Task6). What the processes actually got was the following:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task3
    Task4                 Task5
    Task6                 default
    Each process was sending a different queue name to the QSM, and each queue should have had its own name.
    I need to get this running by tomorrow, no excuses! So I settled on a lame workaround: also having two QSMs. That fixed it.
    In each parallel process (they are copies of each other with different names) there is a call to open a telnet session. I probed and placed breakpoints in the code. Although each process has a different name and the function that opens the telnet session is re-entrant, the very same telnet reference number is assigned to both processes.
    Why? Why would they get the same reference number? I made all VIs down to (and including) Telnet Open Connection re-entrant (although that shouldn't be needed) and the same reference number is still assigned to each telnet session. What am I not seeing? What am I missing?
    Unfortunately, I cannot post the code, but it is not complicated: just two sessions with different IP addresses. I would expect different telnet session references...
    As a matter of fact, I need to try something silly...

    I should provide more details with the solution...
    I just have to stop saying "D'oh!!"
    Okay... here goes...
    In the LVOOP code I am using Notifiers and Semaphores to ensure that a race condition cannot occur. That works well with previously written code.
    In this particular implementation I have the same (or a similar) object being created more than once (twice at this time).
    Where I went wrong (d'oh!!) was using a static name for a given object when creating the Notifier and Semaphore references. Since the same name was given, so was the reference; and since the references were the same, so was the data, and so on.
    Now I know why a particular bird was called the dodo bird...
    Such a silly mistake...
    Hope it makes a few people laugh... or helps another bird...

  • How to do parallel processing with dynamic internal table

    Hi All,
    I need to implement parallel processing that involves dynamically created internal tables. I tried doing so with RFC function modules (using STARTING NEW TASK and similar techniques) but had no success: this approach requires RFC-enabled function modules, and RFC-enabled function modules do not allow the generic data type (STANDARD TABLE) needed for passing dynamic internal tables. My exact requirement is as follows:
    1. I have a large amount of data in two internal tables. One of them is built dynamically, so its structure is not known at the time of coding.
    2. This data has to be processed together to generate another internal table whose structure is pre-defined. The processing is taking a very long time because there are close to a million records.
    3. I need to divide the dynamic internal table into chunks of (say) 1000 records each and pass each chunk to a function module submitted to run in another task. Many such tasks would then execute in parallel.
    4. The function module running in parallel can insert the processed data into a database table, and the main program can access it from there.
    Unfortunately, because RFC does not allow generic data types, I am unable to do this. Does anyone have any idea how to implement parallel processing with dynamic internal tables under these conditions?
    Any help will be highly appreciated.
    Thanks and regards,
    Ashin

    Try the code below; it creates the internal table dynamically at runtime from the infotype number, so the structure does not have to be known when coding:
      DATA: w_subrc TYPE sy-subrc.
      DATA: w_infty(5) TYPE c.
      FIELD-SYMBOLS: <f1> TYPE table.
      FIELD-SYMBOLS: <f1_wa> TYPE any.
      DATA: ref_tab TYPE REF TO data.
    * Build the structure name (e.g. P0002) from the infotype number.
      CONCATENATE 'P' infty INTO w_infty.
    * Create the dynamic internal table and assign it to a field symbol.
      CREATE DATA ref_tab TYPE STANDARD TABLE OF (w_infty).
      ASSIGN ref_tab->* TO <f1>.
    * Create the dynamic work area.
      CREATE DATA ref_tab TYPE (w_infty).
      ASSIGN ref_tab->* TO <f1_wa>.
      IF begda IS INITIAL.
        begda = '18000101'.
      ENDIF.
      IF endda IS INITIAL.
        endda = '99991231'.
      ENDIF.
      CALL FUNCTION 'HR_READ_INFOTYPE'
        EXPORTING
          pernr           = pernr
          infty           = infty
          begda           = begda
          endda           = endda
        IMPORTING
          subrc           = w_subrc
        TABLES
          infty_tab       = <f1>
        EXCEPTIONS
          infty_not_found = 1
          OTHERS          = 2.
      IF sy-subrc <> 0.
        subrc = w_subrc.
      ENDIF.

  • Parallel processing in workflow with fork

    Hello,
    I have a case in the production system. The workflow uses parallel processing with a fork, and the fork has two branches as inputs.
    Both branches are mandatory, with no other condition.
    Does anyone know of any scenario where the workflow proceeds with only one branch executed even though both branches are mandatory?
    Thanks.

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel processing issue within the same server

    Hi,
    I need to perform parallel processing within the same server, using the work processes available on that server.
    Please suggest whether this can be accomplished, and explain the design if possible.

    Hello Venkata,
    You can achieve parallel processing by using CALL FUNCTION ... STARTING NEW TASK <task name>.
    In this case the function module runs in asynchronous mode without stopping the calling program.
    For more details, refer to the following link:
    https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing
    Thanks,
    Augustin.

  • Parallel Processing in ABAP

    Hi,
    I have an internal table that holds object references. Each item in the table is independent of the others. I want to extract info from each object and convert it into an internal table so that I can pass it to an RFC function module.
    How can I do this extraction of info from the objects in the internal table in parallel?
    To use STARTING NEW TASK, I created an RFC-enabled function module, but then I can't pass an object reference to it. So how can I do this?
    I also read that each such function module call creates a main (external) session, of which there is a limit of 6 per user session. Is this correct?
    If the above can be done, I would also like to restrict the number of parallel processes executing at any point in time to 5 or so.
    thanks in advance
    Murugesh

    Hi Murugesh,
    Parallel processing can be implemented in application reports that run in the background. You can implement parallel processing in your own background applications by using the standard function modules and ABAP keywords.
    Refer following docs.
    <b>Parallel Processing in ABAP</b>
    /people/naresh.pai/blog/2005/06/16/parallel-processing-in-abap
    <b>Parallel Processing with Asynchronous RFC</b>
    http://help.sap.com/saphelp_webas610/helpdata/en/22/0425c6488911d189490000e829fbbd/frameset.htm
    <b>Parallel-Processing Function Modules</b>
    http://help.sap.com/saphelp_nw04s/helpdata/en/fa/096ff6543b11d1898e0000e8322d00/frameset.htm
    Regards,
    Rakesh.

  • Explain Plan - Parallel Processing Degree of 2 and CPU_Cost

    When I use a hint to enable parallel processing with a degree of 2, the I/O cost seems to be consistently divided by 1.8, but the CPU cost adjustment is inconsistent (between 2.17 and 2.62).
    Any ideas why the CPU cost ratio varies from table to table?
    Is there a formula for adjusting the CPU_COST?
    Thanks,
    Summary:
    The i/o cost reduction is consistent (divide by 1.8)
    Table 1: 763/424 = 1.8
    Table 2: 18774/10430 = 1.8
    Table 3 (not shown): 5/1.8 ≈ 3
    But the cpu cost reduction varies:(between 2.17 and 2.62)
    Table 1: 275812018/122353500 = 2.25
    Table 2: 7924072407/3640755000 = 2.17
    Table 3(not shown): 791890/301467 = 2.62
    Example:
    Oracle Database 10.2.0.4.0
    Table 1:
    1.) Full table scan on Table 1 without parallel processing:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1, 1) */ * FROM table_1;
    SQL> SELECT io_cost, cpu_cost FROM plan_table;
    IO_COST    CPU_COST
    763        275812018
    2.) Process Table 1 in parallel with a degree of 2:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_1, 2) */ * FROM table_1;
    IO_COST    CPU_COST
    424        122353500
    Table 2:
    3.) Full table scan on Table 2 without parallel processing:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2, 1) */ * FROM table_2;
    IO_COST    CPU_COST
    18774      7924072407
    4.) Process Table 2 in parallel with a degree of 2:
    EXPLAIN PLAN FOR
      SELECT /*+ CPU_COSTING PARALLEL(table_2, 2) */ * FROM table_2;
    IO_COST    CPU_COST
    10430      3640755000

    The COST value is for the benefit of the CBO, not you.
    What should matter more to you is the elapsed run time of the SQL.

  • Parallel processing using aRFC

    Hi experts,
    Does anyone have a presentation on how to proceed with parallel processing using aRFC?
    Thanks in advance.

    Take a look at this piece of code:
    FORM start_onhand_extract_task.
    * Throttle: wait until a work process is free before starting a new task.
      DO.
        IF g_num_running < g_avail_wps.
          EXIT.
        ENDIF.
        WAIT UP TO 5 SECONDS.
      ENDDO.
    * Create the file name with the task number.
      ADD 1 TO g_task_num.
      CONCATENATE p_file1 g_task_num INTO g_task_name.
      CONCATENATE g_filename g_task_name INTO task_tab-filename1.
      CONDENSE task_tab-filename1 NO-GAPS.
      task_tab-task_name = g_task_name.
      APPEND task_tab.
      CLEAR g_msg_text.
    * Start the extract asynchronously; DECREASE_WP runs on completion of
    * the task and decrements g_num_running.
      CALL FUNCTION 'ZMIO_GET_MARD_DATA'
        STARTING NEW TASK g_task_name
        DESTINATION IN GROUP p_grp
        PERFORMING decrease_wp ON END OF TASK
        EXPORTING
          i_filename = task_tab-filename1
        TABLES
          i_matnr    = r_matnr
          i_werks    = r_werks
        EXCEPTIONS
          resource_failure      = 1
          communication_failure = 2 MESSAGE g_msg_text
          system_failure        = 3 MESSAGE g_msg_text.
      CASE sy-subrc.
        WHEN 0.
          ADD 1 TO g_num_running.
          g_num_submitted = g_num_submitted + 1.
        WHEN 1.
    * No free resources: log the failure and wait for a running task to end.
          error_rec-task_name = g_task_name.
          error_rec-filename1 = task_tab-filename1.
          APPEND error_rec.
          APPEND it_exp_t001w TO it_err_t001w.
          ADD 1 TO g_num_err.
          g_hold_num = g_num_running.
          WAIT UNTIL g_num_running < g_hold_num OR
                     g_hold_num = 0
                     UP TO 5 SECONDS.
        WHEN OTHERS.
          error_rec-task_name = g_task_name.
          error_rec-filename1 = task_tab-filename1.
          error_rec-msg_text  = g_msg_text.
          APPEND error_rec.
          APPEND it_exp_t001w TO it_err_t001w.
          ADD 1 TO g_num_err.
      ENDCASE.
    ENDFORM.                    " start_onhand_extract_task

  • Parallel processing using a BLOCK step

    Hi,
    I have used parallel processing with a BLOCK step and put a multiline container element on it. Inside the BLOCK step I have visibility of another container element generated by the block step (multiline container _LINE). The right number of parallel processes get created, as required, but the problem is that the value in the multiline container _LINE element is not being passed to the send-mail step. I have checked the binding and everything looks OK. Please advise.
    Sukumar.

    Hi
    When I am sure that I am doing a binding properly but it still doesn't work:
    1. I activate the workflow template definition (no joke).
    2. I type the magic word
    /$sync
    in the command line, to refresh the buffers.
    3. I delete the strange binding defined with drag & drop and define it once more using the old method from the former binding editors of R/3 4.6c systems: I take the container elements from the lists of possible entries or type their names directly. I don't use drag & drop.
    Regards
    Mikolaj

  • Parallel processing for program RBDAPP01

    Hi All,
    I am running program RBDAPP01 every 30 minutes to clear IDocs in error (status 51, "Application document not posted") that fail with the message "Object requested is currently locked by user ADMINJOBS". When I run this job it only clears a few IDocs because of that lock: while one IDoc is being updated, a second one for the same order, customer, material, and plant (but a different ship-to party) tries to post at the same time, finds the object locked, and cannot be posted.
    Can anyone tell me what parallel processing is and whether it will help in my case?
    Thanks

    You didn't specify which release you're using, so I can only give some general suggestions:
    Note 547253 - ALE: Wait for end of parallel processing with RBDAPP01
    Note 715851 - IDoc: RBDAPP01 with parallel processing
    Markus

  • Parallel Processing: Error message 00-250: "No CUA area available"

    Dear colleagues,
    I have implemented parallel processing with asynchronous RFC for a large data analysis in CO-PC. I believe I implemented everything as described in the SAP help: http://help.sap.com/saphelp_nw04/helpdata/en/22/0425c6488911d189490000e829fbbd/content.htm
    But now I face one problem: my jobs are often cancelled with error message 250 of class 00, "No CUA area available".
    I don't have a clue what this error message means, and I couldn't find any help in SDN, OSS, or on the internet. Does anybody have an idea how to handle this problem?
    Thx very much for your help!
    Marius

    In your program, when you do the CALL FUNCTION ... STARTING NEW TASK, did you check how many work processes are available?
    Suppose you have 40K records in your internal table and you pass 4K records per task: then 10 work processes are required. But if only 9 are defined in your RFC group settings, the last task will fail. So before any 'submit job in new task' or CALL FUNCTION ... STARTING NEW TASK statement, you have to check whether any work processes are available; if so, submit, otherwise wait.
    Thanks
    Subhankar

  • Planning Area datasource Parallel processing

    Hello Experts,
    Has anyone worked with a parallel processing profile on a datasource?
    We extract data from a planning area both for backup and to BW for reporting. To speed up the process, we have the option of implementing parallel processing for the datasource related to the planning area.
    Does this really work without any discrepancies in the extracted data?
    Please provide some pointers on the concept.
    Thanks,
    Rag

    Hi Rag,
    In our project we use 8 parallel processes with a block size of 1000.
    Regarding question 2: ideally, yes, both iterations should result in the same number of records, but the data packets will differ depending on the block size. Please check whether you are referring to data packets or records.
    Also, if you have the checkbox marked in one of the iterations, the number of records may vary.
    Regards,
    JB

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We run an APO DP process chain with parallel processing in our company, and we are experiencing some issues with its run time. I need your help on the points below:
    - What are the ways to optimize the process chain run time?
    - What special points should we take care of when parallel processing profiles are used in the process chain?
    - Is there a specific sequence to follow for the different processes in the chain, if there is an established best practice?
    - Are there any notes suggesting ways to improve system performance for APO version 7 with enhancement packs 1 and 2?
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    First, identify the pain areas: the steps that finish with the longest run times. Each kind of step has its own approaches for reducing run time.
    You will probably find steps such as InfoPackage executions, DTPs, or DP mass-processing jobs running with long run times; target each of them separately and find ways to optimize. At the same time, whatever approach you follow should be feasible from a Basis perspective (system load and utilization).
    As for parallel processing, you can use it for many different jobs: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the link below for more info:
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj
