Problem in Parallel Processing

Hi All,
I have performed all the steps involved in parallel processing, such as creating the multiline container elements
and creating the task.
When I run my workflow, all the work items appear in the workflow log in Completed status,
but in SBWP I can't find the work items in my inbox.
Please let me know why this is happening.
Thanks
Ravi Aswani

Hi Ravi,
As per your inputs, when you check the workflow log, all the work items are in Completed status.
1. Have you created all the tasks as background tasks? A background task runs without user interaction, so no work item is sent to any inbox.
2. If you have created a foreground (dialog) task and it is still not reaching the inbox, check in the workflow log whether the flow actually passes through the activity step that is supposed to create the work item.
3. Since you say the status is Completed, I don't think there is an agent-assignment problem at this point.
Please revert with more inputs if you need further clarification, so that we can help you better.
Regards,
Gautham Paspala

Similar Messages

  • Parallel processing of Jobs in Java

    Hi Sun forum Experts,
    we are facing a problem with parallel processing of jobs in Java. The JDK version used is 1.4.
    Find the issue analysis as below
    Assume we have TABLE1 as below:
    1st job has finished and has set its status to OK via an update query (so STATUS in TABLE1 is 'OK') ---> 1st row
    2nd job is still running and its status is 'IN' ---> 2nd row
    3rd job is still running and its status is 'IN' ---> 3rd row
    4th job is still running and its status is 'IN' ---> 4th row
    Meanwhile, since the first job has completed and its status is now 'OK', we call another update query, which updates a column in another table (TABLE2):
    Update TABLE2 set STATUS='OK' where INIT_NO='123456789' AND NOT EXISTS (SELECT COUNT(*) FROM TABLE1 WHERE STATUS IN ('IN','RE'))
    The subquery still finds the 3 rows that are running, so the update of TABLE2 is skipped. (Note, as an aside, that SELECT COUNT(*) always returns exactly one row, so NOT EXISTS around it can never be true; the guard would have to be written as NOT EXISTS (SELECT 1 FROM TABLE1 WHERE STATUS IN ('IN','RE')).)
    *The problem is that we want all 4 jobs to complete first with respect to TABLE1 (each running its update query), and only once that is done should the column in the other table (TABLE2) be updated,* i.e. as shown below:
    {code}
    public void execute() {
        // one call per job: marks this job's row in TABLE1 as OK.
        // All n jobs must have finished before we may go on.
        dao.updateTable1();
        // only once updateTable1() has completed for every job:
        dao.updateTable2(); // marks TABLE2 as OK
    }
    {code}
    How do we handle this scenario in Java?
    Please note: the Java version used is 1.4 only.
    Deepak

    Yes sir, you are absolutely correct.
    We run 4 jobs to update TABLE1, wait for them all to complete, then update TABLE2. Can you please tell me how we achieve this in Java?
    Please note: the JDK version we are using is 1.4.
    Thanks
    Deepak
    Edited by: Deepak_A_L on Jun 25, 2009 12:58 AM
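    One way to do this on JDK 1.4 (which has no java.util.concurrent - that package only arrived in Java 5) is to run the four jobs as plain threads and join() them all before touching TABLE2. A minimal sketch of that idea; Dao and its methods here are hypothetical stand-ins for the poster's real data-access class:
    {code}
    public class JobCoordinator {
        // hypothetical stand-in for the poster's real DAO
        static class Dao {
            void updateTable1(int jobNo) { /* UPDATE TABLE1 SET STATUS='OK' WHERE JOB_NO=... */ }
            void updateTable2()          { /* UPDATE TABLE2 SET STATUS='OK' WHERE INIT_NO=... */ }
        }

        public void execute() throws InterruptedException {
            final Dao dao = new Dao();
            final int jobCount = 4;
            Thread[] workers = new Thread[jobCount];

            for (int i = 0; i < jobCount; i++) {
                final int jobNo = i;
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        dao.updateTable1(jobNo); // each job flags its own TABLE1 row as OK
                    }
                });
                workers[i].start();
            }

            // join() blocks until the given thread has died, so after this
            // loop every TABLE1 update is guaranteed to have finished
            for (int i = 0; i < jobCount; i++) {
                workers[i].join();
            }

            dao.updateTable2(); // safe now: no job is still 'IN'
        }
    }
    {code}
    If the four jobs are separate processes rather than threads in one JVM, the database itself has to be the rendezvous point instead, e.g. polling TABLE1 until no row is 'IN' or 'RE' before issuing the TABLE2 update.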

  • Parallel Processing problem

    Hi All,
    I have implemented parallel processing based on the number of jobs given on the selection screen. If the table holds 1000 records and the number of jobs is 10, then 100 records are passed to each run of the submitted program. The problem is that the records are accounts, and each account can be associated with more than one contract. Because the data is divided purely by the number of jobs, an account with, say, 4 contracts might get 2 of them in the first batch while the other 2 go in the next batch.
    I would like to ensure that all the contracts associated with an account go to the submitted program in a single batch. Can anyone give me an idea how this can be achieved?
    gv_job_no is the number of records which will be sent in one batch and lt_ever is being sent to the submit program.
      LOOP AT gt_ever
        INTO gwa_ever.
        IF gv_lines EQ gv_job_no.
          CLEAR gv_lines.
          EXIT.
        ELSE.
          APPEND gwa_ever TO lt_ever.
          gv_lines = gv_lines + 1.
          DELETE gt_ever
            INDEX sy-tabix.
        ENDIF.
      ENDLOOP.
    Thanks,
    Shreeraj
    Edited by: shreeraj pawar on Jan 18, 2011 4:29 PM

    Hi Shreeraj,
    you can do it like this, with a sorted internal table whose first two fields are account and contract (symbolic code):
    loop at gt_account_contract assigning <any>.
      at new account.
        lv_start = sy-tabix.
      endat.
      at end of account.
        append lines of gt_account_contract
          from lv_start to sy-tabix
          to lt_account_contract.
        if lines( lt_account_contract ) >= lv_lines_per_job.
    *     <submit lt_account_contract for processing>
          clear:
            lt_account_contract,
            lv_start.
        endif.
      endat.
    endloop.
    Accounts with many contracts may make a single batch exceed the lv_lines_per_job limit, but you will still get a much smoother distribution.
    Regards,
    Clemens
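    For readers who want the same batching logic outside ABAP, here is the idea once more as a small Java sketch (Record, its account field, and splitIntoBatches are invented for the illustration; the input list is assumed to be sorted by account, just like the sorted table above):
    import java.util.ArrayList;
    import java.util.List;

    public class AccountBatcher {
        // hypothetical row type: one contract line belonging to an account
        static class Record {
            final String account;
            Record(String account) { this.account = account; }
        }

        // Cuts the sorted record list into batches of at least batchSize rows,
        // but only ever cuts at an account boundary, so all contracts of one
        // account always travel in the same batch.
        static List<List<Record>> splitIntoBatches(List<Record> records, int batchSize) {
            List<List<Record>> batches = new ArrayList<List<Record>>();
            List<Record> current = new ArrayList<Record>();
            String prevAccount = null;
            for (Record r : records) {
                boolean boundary = prevAccount != null && !prevAccount.equals(r.account);
                if (boundary && current.size() >= batchSize) {
                    batches.add(current);            // flush the full batch
                    current = new ArrayList<Record>();
                }
                current.add(r);
                prevAccount = r.account;
            }
            if (!current.isEmpty()) {
                batches.add(current);                // remainder
            }
            return batches;
        }
    }
    Exactly as with the ABAP version, a batch can exceed batchSize when a single account has many contracts; the cut is simply deferred to the next account boundary.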

  • Parallel processing end condition problem

    Hi,
    I use a block with dynamic parallel processing for each row (ParForEach). In every branch a work item is created in which a user has to enter something; the input is stored in my container element 'status'.
    Now I want all branches to be closed as soon as status = 'X', just like the end condition of a fork.
    I entered the end condition &STATUS& = X, but it does not take effect: when I test the condition on its own it evaluates correctly, yet in the workflow the branches are never ended.
    Does someone have a solution, or can you explain the problem to me?
    Thx

    Hi,
    Dynamic parallelism using ParForEach is not the same as a fork with multiple branches, where you can specify the number of branches required to end the fork. With dynamic parallelism, a separate instance of the sub-workflow (or of the task with dynamic parallelism) is started in parallel for each index of the multiline container element. By default these branches are independent of each other, and the workflow will not continue to the next step until all of them have completed.
    To solve your problem, I suggest the following:
    1. How are you handling the process in each branch - through a sub-workflow? If yes, create a fork in your sub-workflow parallel to your normal process. In that fork, create a 'Wait for Event' step that waits for a new custom event (you have to define this event on your BO). Set the number of required branches to 1 and join this branch to the end of the sub-workflow.
    2. Whenever your requirement to end all the branches is fulfilled (in your case, status = 'X'), raise this new custom event using a Create Event step. It will be caught by the 'Wait for Event' step in the fork of each sub-workflow and will end that sub-workflow (meaning your branch is now ended). Make sure that you pass the BO object instance to your sub-workflow through the binding from your main workflow.
    Hope this helps you!
    Regards
    Krishna Mohan

  • Problem in Dynamic Parallel processing using Blocks

    Hi All,
    My requirement is to have parallel approvals, so I am trying to use dynamic parallel processing through blocks. I can't use forks, since the agents are only determined at runtime.
    I am using a ParForEach block and passing &AGENTS& as the multiline element on the 'Parallel Processing' tab, with a user decision inside the block. For now I am hard-coding 2 agents in the AGENTS multiline element. It works, but I get 2 instances of the block, which I understand is because I pass &AGENTS& on the Parallel Processing tab. What I actually need is a single instance of the block whose work item goes to all the users in the AGENTS multiline element.
    Please let me know how to achieve this. I have already searched the forum, but I couldn't find anything that suits my requirement.
    Pls help!
    Regards,
    Soumya

    Yes, that's true: whenever you use a ParForEach block, a separate work item (a separate instance) is created for each entry in the table, so that kind of single-instance parallel processing is not possible with it.
    What you can do instead is create a fork with 3 branches and define an end condition such that the fork only ends once all 3 branches are executed.
    Before the fork step, determine all the agents and store them in an internal table; you can access a single entry via its index value - check this [wiki|https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/accessingSingleEntryfromMulti-line+Element] on accessing a single entry of a multiline element.
    Then assign one agent to each task in the fork.
    Since the condition says the fork is only left when all three branches are executed, the workflow will wait until all the steps are completed.

  • Parallel Processing Problems!

    Hello!
    I have a WF with parallel processing that sends a work item to each user I choose in a Z program; that part is OK.
    What I need is to start the next step ONLY if all the approvers approve; if one of them rejects, I need a different action.
    Right now, as soon as just one of the approvers approves the step, my WF goes on to the next one... How can I wait for all approvals before starting the next step?
    Am I clear?
    Tks
    Marcos Munhos

    I think a parallel approval step always waits for the remaining approvers involved in the step. Check whether the parallel processing is properly defined on the Miscellaneous (Others) tab.
    If the parallel approval runs in a sub-workflow, it is harder to evaluate whether all approvals have been given; you may have to make use of a container operation.
    Thanks
    Arghadip

  • FORK is not processing in parallel - it's working sequentially

    Hi,
    we are on PI 7.0, SP 13.
    I am trying to test parallel processing using a fork step (with two branches).
    My problem is that in SXMB_MONI the two branches are not executed simultaneously; they execute one after the other.
    Has anybody done parallel processing in XI using BPM? Both calls have to finish in the same window: if the first call takes 10 minutes, the second also has to finish within those first 10 minutes, not in the following 10.
    I have heard of this problem in XI 3.0 and PI 7.0; has anybody tested parallel processing with a fork step in PI 7.1?
    Please help me: will this issue be resolved if I go to PI 7.1?
    Regards,
    Venu.

    Hi Henrique,
    'They would not necessarily start at the same time, but they shouldn't be queued either' - The customer expects the response within 17 to 20 seconds; a response coming after 34 seconds will not be OK for the customer, and tomorrow we need to add more targets that again have to fit into 17 seconds. They are checking how PI can handle the multi-threading. I am not sure whether this problem is fixed in PI 7.1 or not.
    'There are a number of connection restrictions in your system? Check that' - Where can I check the connection restrictions? If you know, please throw some light on this.
    'Also, how is your BPM transactional behaviour (did you flag the Create New Transaction steps)?' - I did not check the flag for the Create New Transaction step; once my server is up I can check the flag and test.
    Regards,
    Venu.

  • BI 7.0 parallel processing of queries in a web application

    Hi,
    I'm currently having problems with a web application / web template with 10 data providers (different queries). When the web application is executed, the 10 queries run sequentially. Since each query takes about 30 seconds, the complete execution time exceeds 300 seconds, which is not satisfactory.
    Is there any way to enable parallel processing?
    Thanx in advance,
    Patrick

    Hello Patrick
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/41c97a30-0901-0010-61a5-d7abc01410ee
    /thread/351419 [original link is broken]
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/media/uuid/ff5186ad-0701-0010-1aa1-e11f4f3f2f68
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2b79ba90-0201-0010-1b9a-fa13a8f38127
    Thanks
    Chandran

  • How to define a "leading" random number in an InfoSet for parallel processing

    Hello,
    in Bank Analyzer we use an InfoSet consisting of a join across 4 ODS tables to gather data.
    No matter which PACKNO fields we check or uncheck in the InfoSet definition screen (transaction RSISET), the parallel framework always selects the same PACKNO field from one particular ODS table.
    Unfortunately, the table selected by the framework is not suitable, because our "leading" ODS table, which holds most of our selection criteria, is a different one.
    How do we "convince" the parallel framework to use our leading table for the specification of the PACKNO as well? (This would be about 20 times faster, thanks to better selection options.)
    We even tried to assign "alternate characteristics" to the PACKNOs we do not want to use, but it seems that note 999101 only fixes this for non-system fields, while for the random number a different form routine is used in /BA1/LF3_OBJ_INDEX_READF01 (fill_range_random instead of fill_range).
    Has anyone managed to assign the PACKNO of his choice to the InfoSet selection? How?
    Thanks in advance
    Volker

    Well, it is a bit more complicated.
    ODS one, which the parallel framework selects as the one to deliver the PACKNO, is about equal in size (~120 GB each) to ODS two, which has two very selective fields that cut down the amount of data to be retrieved.
    Currently we get the generated SQL executed in the best possible manner (by faking some stats). The problem is that I'd like a statement that has the PACKNO in that very same selective table.
    PACKNO is a generated random number created especially for parallel processing. The job starts about 100 slaves, and each slave gets a packet to process from the framework, internally represented by a BETWEEN clause on this PACKNO. This is joined against ODS2, and only then can the selective fields be compared, with the result that 90% of the already fetched rows are discarded.
    Basically it goes like:
    select ...
    from
      ods1 T_00,
      ods2 T_01,
      ods3 T_02,
      ods4 T_03
    where
    ... some key-equivalence join conditions ...
    AND  T_00.PACKNO BETWEEN '000000' AND '000050' -- very selective on T_00
    AND  T_01.TYPE = '202'  -- selective value, 10%, on the second table
    I am trying to change this to:
    AND  T_01.PACKNO BETWEEN '000000' AND '000050'
    AND  T_01.TYPE = '202'  -- selective value, 10%
    so that I can use a combined index on T_01 (TYPE, PACKNO).
    This would be about 10 times more selective on the driving table, and since T_00 would then be joined only for the rows I actually need, an estimated 20 to 30 times faster overall. It really gets that boost when I try it in SQL*Plus.
    I hope this clarifies things a bit. The problem is that I cannot change the code, neither the part that builds the packets nor the part that executes the application. I need to change the InfoSet so that the framework decides to build the proper SQL with T_01.PACKNO instead of T_00.PACKNO.
    Thanks a lot
    Volker

  • Parallel processing using ABAP objects

    Hello friends,
    I posted this in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
    Link to the previous posting: [Independent processing of elements inside internal table]
    Here is the scenario:
    I have an internal table with 10 independent records that need to be processed, and the processing of one record has no influence on any other. With a plain LOOP, the 10th record has to wait until the first 9 have been processed, even though there is no dependency between the outputs.
    Could someone suggest a way to improve the performance?
    If I am not clear with the question, let me explain it more clearly: an internal table holds 5 numbers, say (1, 3, 4, 6, 7), and we want to find the square of each number. In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
    This is related to parallel processing; I have referred to the parallel processing documents, but I want to approach it conceptually. I am not using the conventional procedural paradigm but object orientation: I have a method that performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
    As exemplified by Edward's program, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a function module, which you can then call with the addition STARTING NEW TASK. Optionally, you can define a method that will process the results of the function module that is executed asynchronously, as demonstrated in Edward's program as well.
    You do need some additional code to avoid the situation where your program takes all the available resources on the Application Server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks that the system will allow. However, in a productive environment, it would be a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
    Function Group SPBT contains a set of Function Modules to manage parallel processing. In particular, FM SPBT_INITIALIZE will "initialize" a Server Group and return the maximum number of Parallel Tasks, as well as the number of free ones at the time of the initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, that can be called after the Server Group has been initialized, whenever you want to "fork" a new asynchronous task. This FM will give you the number of free tasks available for Parallel Processing at the time of calling the Function Module.
    Below is a code snippet showing how these Function Modules could be used, so that your program always leaves a minimum of 2 tasks for Parallel Processing, that will be available for other programs in the system.
    IF md_parallel IS NOT INITIAL.
      IF md_parallel_init IS INITIAL.
    *----- Server Group not initialized yet => initialize it, and get the number of available tasks
        CALL FUNCTION 'SPBT_INITIALIZE'
          EXPORTING
            group_name                     = ' '
          IMPORTING
            max_pbt_wps                    = ld_max_tasks
            free_pbt_wps                   = ld_free_tasks
          EXCEPTIONS
            invalid_group_name             = 1
            internal_error                 = 2
            pbt_env_already_initialized    = 3
            currently_no_resources_avail   = 4
            no_pbt_resources_found         = 5
            cant_init_different_pbt_groups = 6
            OTHERS                         = 7.
        md_parallel_init = 'X'.
      ELSE.
    *----- Server Group initialized => check how many tasks are currently free for parallel processing
        CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
          IMPORTING
            max_pbt_wps                 = ld_max_tasks
            free_pbt_wps                = ld_free_tasks
          EXCEPTIONS
            internal_error              = 1
            pbt_env_not_initialized_yet = 2
            OTHERS                      = 3.
      ENDIF.
      IF ld_free_tasks GE 2.
    *----- We have at least 2 remaining available tasks => reserve one
        ld_taskid = ld_taskid + 1.
      ENDIF.
    ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program has completed before the asynchronous tasks for entries 9 and 10 in your internal table.
    Given the complexity of Parallel Processing, you would only consider it in a customer program for situations where you have many (ie, thousands, if not tens of thousands) records to process, that the processing for each record tends to take a long time (like creating a Sales Order or Material via BAPI calls), and that you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • ABAP OO and parallel processing

    Hello ABAP community,
    I am trying to implement an ABAP OO scenario where I have to take into account parallel processing and processing logic in the sense of update function modules (type V1).
    The scenario is defined as follows:
    Frame class X creates an instance of class Y and an instance of class Z.
    Classes Y and Z should be processed in parallel, so class X calls classes Y and Z.
    Classes Y and Z call BAPIs and make different database changes.
    When class Y or Z has finished, caller class X writes the processing status into a status table.
    The processing logic within class Y and within class Z should each form one SAP LUW, in the sense of an update function module (type V1).
    Can I use events?
    (How) should I use CALL FUNCTION ... IN UPDATE TASK?
    (How) should I use CALL FUNCTION ... STARTING NEW TASK?
    What is the best method to realise this behaviour?
    Many thanks for your suggestions.

    Hello Christian,
    I will describe in detail how I solved this problem. Maybe there is a newer way... but it works.
    Steps (I assume you have already split your data into packages):
    1.) Create an RFC-enabled FM, e.g. Z_WAIT, that returns OK or NOT OK. This FM does the following:
    DO.
    * call function TH_WPINFO until it reports more than a certain
    * number of free work processes (==> free tasks)
    ENDDO.
    If the result is OK (==> free tasks are available), call your FM (RFC-enabled!) like this:
    CALL FUNCTION <fm>
      STARTING NEW TASK ls_tasknam          " unique identifier!
      DESTINATION IN GROUP p_group
      PERFORMING return_info ON END OF TASK
      EXPORTING
        ...
      TABLES
        ...
      EXCEPTIONS
    *   --- take care of the order of the exceptions!
        communication_failure = 3
        system_failure        = 2
        unforced_error        = 4
        resource_failure      = 5
        OTHERS                = 1.
    2.) Then you must track the difference between the number of started calls and the number of received calls. If the difference reaches a certain value limit_tasks:
    WAIT UNTIL called_task < limit_tasks UP TO '600' SECONDS.
    The value should not be greater than 20!
    Data description:
    PARAMETERS: p_group LIKE bdfields-rfcgr DEFAULT 'Server_alle'. " for example; use the F4 help
    (if you have defined the report parameter as above).
    ls_tasknam ==> simply the increasing number of RFC calls, as a character value.
    3.) RETURN_INFO is a FORM routine in which you can check the results. Within this form you must call:
    RECEIVE RESULTS FROM FUNCTION <fm>
      TABLES
        ...                               " the tables of your <fm>, in exactly the same order!
      EXCEPTIONS
        communication_failure     = 3
        system_failure            = 2
        unforced_error            = 4
        no_activate_infostructure = 1.
    Here you must count the received calls, and you can save the results into an internal table for checking.
    I hope I could help you a little bit.
    Good luck,
    Michael

  • Parallel process runs independently, but does not stop when the VI completes

    So I posted a problem yesterday about getting the 'Elapsed Time' Express VI to provide updates from a sub-VI to the calling VI. It was suggested that I create a parallel process in the sub-VI that runs the elapsed-time function at the same time as the other processes. I tried to implement this idea but ran into a problem. The elapsed-time process is a while loop in parallel with the main while loop in the sub-VI. It updates every second, based on my time delay, and when I view the running sub-VI, the elapsed-time indicator updates as it should. The calling VI, however, does not see these updates: I have wired an indicator to the sub-VI icon, but it does not change until the sub-VI finishes.
    The other problem with the parallel process is that it runs forever, regardless of the other loop finishing. I have tried wiring an OR'd boolean to the stop terminal inside the while loop, but when I do that, the elapsed-time process does not start at all.
    I have also tried data-binding a shared variable in the project and dragging that into my calling VI, but again I get no updates on the elapsed time.
    Any ideas????

    A VI must finish looping before its output is available at the terminal on the sub-VI icon. Research how to communicate between loops: what you are doing can be accomplished with Notifiers (that's what I'd use), Queues, or Global/Shared Variables.
    Your issues appear to be due to a lack of familiarity with the LabVIEW data-flow paradigm. Check out the Producer/Consumer example, and post your code here so one of us can give more guidance.
    Richard
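    The Producer/Consumer pattern Richard mentions is not LabVIEW-specific, so a sketch in Java may make the data flow easier to see (all names are invented for the illustration): one loop posts elapsed-time updates into a queue, and a second loop consumes them as they arrive instead of waiting for the first loop to finish - the same role a LabVIEW Queue or Notifier plays between two while loops:
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ElapsedTimeDemo {
        public static void main(String[] args) throws InterruptedException {
            // plays the role of the LabVIEW queue/notifier between the two loops
            final BlockingQueue<Long> updates = new ArrayBlockingQueue<Long>(16);
            final long SENTINEL = -1L;

            // the "sub-VI" loop: posts the elapsed time once per second
            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        long start = System.currentTimeMillis();
                        for (int i = 0; i < 5; i++) {
                            Thread.sleep(1000);
                            updates.put(System.currentTimeMillis() - start);
                        }
                        updates.put(SENTINEL); // tell the consumer we are done
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            producer.start();

            // the "calling-VI" loop: sees every update as it happens,
            // not only after the producer has finished
            while (true) {
                long elapsed = updates.take(); // blocks until an update arrives
                if (elapsed == SENTINEL) break;
                System.out.println("elapsed ms: " + elapsed);
            }
            producer.join();
        }
    }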

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English.
    I have been working for days on my project, in which I have to demonstrate parallel processes while transferring information in different ways, together with their problems (like timing and so on).
    I chose to transmit the information to a parallel process (1) through a queue and (2) through a file (.txt). (Other ways are welcome - do you have one or two other ideas?)
    To set this up I made three while loops. The first is the original one, where the original information (a signal) is created and sent via queue and file to the other two while loops, where the information is evaluated to recreate the same signal.
    At the end you can compare all the signals to see whether they are the same, and thereby answer the question about the parallelism of the processes.
    But my VI has some problems: the version with the queue works pretty well - it is almost parallel - but the version with the file does not run in parallel, and I have no idea how to solve it.
    I'm a newbie. Can someone correct my file so that both versions (file and queue) run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi ‏23 KB

    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines, Notifiers (and, why not, web services).
    Due to limitations of the disk system, you can only read/write one file at a time from one process, so I wouldn't recommend the file approach. If you use a RAM disk it might work.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Parallel Processing

    Hi,
    I am trying to implement parallel processing and am facing a problem: the function module being called contains the statement:
    submit (iv_repid) to sap-spool
                        with selection-table it_rspar_tmp
                        spool parameters iv_print_parameters
                        without spool dynpro
                        via job iv_name
                        number iv_number
                        and return.
    I call the function module like this:
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    But I keep getting the error: Output device "" unknown.
    Kindly advise.
    Thanks.

    I need the output of a report to be generated in the spool; I then retrieve it from the spool later on and display it along with another ALV in my current program.
    I have called the JOB_OPEN and JOB_CLOSE function modules, and the code for the parallel processing sits between these two FM calls.
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    After this, I retrieve the data using function module RSPO_RETURN_SPOOLJOB.
    All the above steps work while I am in debugging mode: at the RFC call a new session opens, I execute that session completely, return to the main program execution, execute it to the end, and I get the desired output.
    But if, still in debug mode, I reach the RFC so that the new session opens, and then, instead of executing the FM there, I go back to the main program and execute it directly, I can reproduce the error: Output device "" unknown.
    So I guess it has got something to do with the SUBMIT statement in the RFC.
    Any assistance would be great!
    Thanks!!

  • Parallel processing in quotation approval

    In the approval process:
    1) The number of approvers is determined at runtime, i.e. with respect to the discount value (2 or 3).
    2) Using an FM, I got the approvers into my workflow container as app1, app2, app3 (3 container elements).
    3) I need to send the work item, with the information in a send-mail step, to all approvers at the same time; if all of them approve, the status of the quotation should change to Released.
    4) If anyone among them rejects, no status change is required.
    The problem is how to send the work item to the 2 or 3 approvers at the same time, and how to know whether all of them have approved or rejected.

    Hi Saumya,
    I got the data into a multiline container element using a method. I created a method which shows a pop-up with Approve and Reject buttons, and I put the multiline container element in the Miscellaneous tab of the approval step: when the user presses Approve, '0' is passed to the result element; when he presses Reject, '4' is passed. My requirement is that all approvers get the work item at the same time via dynamic parallel processing, but how do I know whether all of them have approved or rejected? The workflow should execute the next step only once all of them have either approved or rejected, and if any one of them rejects, the workflow should be completed at that point.
