Job schedule: How to process a huge amount of IDocs?

Hi,
we get a huge amount (1,000-5,000 IDocs) of one IDoc type.
I need a job that processes a maximum of 40 IDocs per run.
The job looks like this:
1. Step: RBDAPP01 with 40 IDocs
2. Step: own report
Is there any way to avoid creating a new variant for each group of 40 IDocs?
At the moment I have to create a new variant for every 40-IDoc group, which is very time-consuming.
Is there a better way?
Thanks,
Gordon

Gordon,
have you tried using parallel processing?
We process more than 50,000 IDocs per night using parallel processing. We also copied program RBDAPP01 and made some company-specific changes.
In addition, we adjusted the work process configuration (SM51) and opened more batch work processes at night to process faster. We decided to use dynamic work processes that open and close automatically according to our needs.
Be careful to avoid the involved transactions locking each other's data. I think you should try parallel processing or increase the packet size beyond 40.
Regards,
Luis
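
For illustration, a minimal sketch of a wrapper report that builds the 40-IDoc packets at runtime and hands each packet to RBDAPP01 through a DOCNUM range, so no per-packet variant is needed. The report name, the ZOWN_REPORT placeholder for the second job step, and the DOCNUM select-option of RBDAPP01 are assumptions to check against your release:

REPORT zidoc_packet_dispatch.

" Sketch only: collect inbound IDocs of one message type that are waiting in
" status 64 and pass them to RBDAPP01 in packets of 40.
PARAMETERS: p_mestyp TYPE edidc-mestyp OBLIGATORY,
            p_pack   TYPE i DEFAULT 40.

DATA: lt_docnum TYPE STANDARD TABLE OF edidc-docnum,
      lv_docnum TYPE edidc-docnum,
      lr_docnum TYPE RANGE OF edidc-docnum,
      ls_docnum LIKE LINE OF lr_docnum,
      lv_count  TYPE i.

SELECT docnum FROM edidc INTO TABLE lt_docnum
  WHERE mestyp = p_mestyp
    AND direct = '2'                    "inbound
    AND status = '64'.                  "ready to be passed to the application

LOOP AT lt_docnum INTO lv_docnum.
  ls_docnum-sign   = 'I'.
  ls_docnum-option = 'EQ'.
  ls_docnum-low    = lv_docnum.
  APPEND ls_docnum TO lr_docnum.
  lv_count = lv_count + 1.

  IF lv_count = p_pack.
    " DOCNUM as the select-option name of RBDAPP01 is an assumption
    SUBMIT rbdapp01 WITH docnum IN lr_docnum AND RETURN.
    " SUBMIT zown_report AND RETURN.    "second step of the original job
    CLEAR: lr_docnum, lv_count.
  ENDIF.
ENDLOOP.

IF lr_docnum IS NOT INITIAL.            "remaining partial packet
  SUBMIT rbdapp01 WITH docnum IN lr_docnum AND RETURN.
ENDIF.

In many releases RBDAPP01 itself already offers a packet size and parallel-processing settings on its selection screen, which may make such a wrapper unnecessary.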

Similar Messages

  • NWA Job scheduler, how to change parameters?

    Dear guys,
    after moving to AEX (Java-only), the ABAP stack is gone.
    For job scheduling there is no transaction SM37 anymore; instead, the Job Scheduler within NWA is used to schedule jobs.
    I configured a job for the predefined job definition AlertConsumerJob to receive emails in case of alerts.
    This raises the question of how to change the parameters of a job once it has been scheduled, e.g. to add an additional email receiver.
    So far it seems you have to stop the configured job and define a new one from scratch, providing all job parameters again. It does not seem possible to adjust an existing job.
    This is not very convenient.
    Is there a workaround or solution for this?
    Best regards
    Jochen

    Hello Gaurav,
    thanks for your reply.
    It's a pity, but it shows that there is still some room for improvement in the AEX (Java-only) functionality.
    Best regards
    Jochen

  • How to process huge files using File Adapter without ESR and no FCC?

    Hi all,
    I have a scenario where I need to pick up a file of around 500 MB from the R/3 system and transport it to a third-party system using PI 7.1. It is a simple pass-through, i.e. no mapping is required (no ESR), and I don't want to use FCC or BPM.
    My question is: can the File Adapter pick up a file of 500 MB?
    If not, what would be the best solution?
    Do I need to split the file in the R/3 system into smaller files of, say, 100 MB each and then pick up those files?
    Can I do it without splitting the files in the R/3 system?
    Thanks in advance.
    Sai

    Hi Bhaskar,
    If you zip the file, the size will probably come down, which may improve performance a little. But I don't think SAP recommends this much load through PI either.
    If I were in his shoes I would do a simple FTP transfer without using PI at all, or else look into other options. That's my two cents.
    I hope Alessandro's blog gives Sai an idea:
    /people/alessandro.guarneri/blog/2007/02/21/sap-xi-acting-as-a-huge-file-mover
    Regards,
    ---Satish
    Edited by: Satish Reddy on Jul 20, 2011 3:02 PM

  • Job Schedule to send IDocs

    Hi experts,
    I have created variants for RBDMIDOC and RSEOUT00.
    I used them in SM36 and scheduled the job periodically every 5 minutes. Once the job scheduling is complete, R/3 (via the distribution model) sends all IDocs for the changed master data. The problem is that the job does not run again.
    My goal is to send master data IDocs without using BD10, BD12, and VK13.
    I welcome all suggestions.
    Regards,
    Venu V

    Thanks for the reply,
    I have activated change pointers and completed the other related steps to send IDocs.
    My job sends IDocs when it is first created, but it never starts again.
    Regards,
    Venu V
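
    A minimal sketch of a single wrapper report that could replace the two separate steps and be scheduled as one periodic job in SM36; the parameter name MESTYP on RBDMIDOC and submitting RSEOUT00 without further restrictions are assumptions to check against the selection screens in your release:

    REPORT zsend_master_idocs.

    " Sketch of one wrapper step for a periodic SM36 job: first evaluate the
    " change pointers, then dispatch the waiting outbound IDocs.
    PARAMETERS p_mestyp TYPE edidc-mestyp DEFAULT 'MATMAS'.

    " Create IDocs from change pointers for the given message type
    " (the parameter name MESTYP is an assumption)
    SUBMIT rbdmidoc WITH mestyp = p_mestyp AND RETURN.

    " Send IDocs waiting in status 30 out through the port
    SUBMIT rseout00 AND RETURN.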

  • How to deactivate Filter and Job in Job scheduling?

    Hi,
    My scenario is:
    XI polls a file from a third-party system and sends it to R/3 via the IDoc adapter.
    Every Sunday, R/3 is down for 3-4 hours for maintenance.
    I need to set up the job scheduling in such a way that XI collects all messages arriving on Sunday during that period, and once the end time has been reached it processes those messages and sends them to the R/3 system.
    I have created the sender/receiver ID and also created a filter and a job ID.
    While creating the job ID I specified the following:
    1. Start Date  = 18.08.2009
    2. Start Time  = 11.00.00
    3. End Date    = 18.08.2009
    4. End Time    = 15.00.00
    5. Period      = 7
    6. Period Unit = D (days)
    The job should start at 11.00.00 and end at 15.00.00; the filter and the job should both be deactivated, and after 15.00.00 the collected messages should be processed.
    The problem is that it does not deactivate the filter.
    I tried using SXMS_JOB_DEACTIVATE and SXMS_START_JOB_AT_ONCE.
    Both deactivate the job but not the filter.
    Can anybody tell me how to go about this?
    Also, if I have to write a report, what would be required?
    Regards,
    Mayank

    Hi Volker,
    the downtime for R/3 is quite long, so I am not sure whether XI will keep retrying for that whole time.
    If XI cannot deliver them, all the incoming messages in XI during that period will end up with a red flag.
    Can we write a report in XI so that all the queued messages are processed through job scheduling?

  • How to schedule InfoSpokes in process chains?

    Hi All,
    I have a few InfoSpokes to be scheduled in a process chain. In each InfoSpoke I have up to 10 jobs that need to be scheduled one after the other. At the end of these jobs I have to run an FTP script. I am new to creating process chains, so can anybody help me by letting me know:
    1. How to create a start process where I can select a given job by using its name
    2. How to run an FTP script as a follow-up step on completion of a previous step
    Thanks in advance
    John

    Hi John,
    it is a big task to explain everything, so I will try to give you the high-level picture.
    1. A process chain is easy to implement and learn. It is usually triggered either by a batch job or by a batch job that fires an event.
    2. To call the operating system (Windows or Unix) from the process chain, you can trigger an event at the end of the chain that starts an ABAP program which initiates the OS call. Use transaction SM69 (External Operating System Commands) to set this call up; see the sketch after this reply.
    3. To implement the FTP transfer you can use a batch file with OS commands or one of the many free utilities.
    I hope it helps.
    Edan
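
    A minimal sketch of the ABAP program mentioned in step 2, assuming an external command Z_FTP_UPLOAD has been defined in SM69; the command name and the script path are placeholders, not from the original reply:

    REPORT zrun_ftp_script.

    " Run an external OS command defined in SM69 (here a hypothetical
    " Z_FTP_UPLOAD that calls the FTP script).
    DATA lt_protocol TYPE STANDARD TABLE OF btcxpm.

    CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
      EXPORTING
        commandname                = 'Z_FTP_UPLOAD'
        additional_parameters      = '/scripts/upload_extract.sh'
      TABLES
        exec_protocol              = lt_protocol
      EXCEPTIONS
        no_permission              = 1
        command_not_found          = 2
        parameters_too_long        = 3
        security_risk              = 4
        wrong_check_call_interface = 5
        program_start_error        = 6
        program_termination_error  = 7
        x_error                    = 8
        parameter_expected         = 9
        too_many_parameters        = 10
        illegal_command            = 11
        OTHERS                     = 12.

    IF sy-subrc <> 0.
      " An 'E' message in a background job cancels the job and shows up in SM37
      MESSAGE 'External FTP command failed - check SM69 and the job log' TYPE 'E'.
    ENDIF.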

  • Parallel processing in background using Job scheduling...

    (Note: please read my question completely before redirecting me to the parallel processing links on SDN. I have gone through most of them.)
    Hi ABAP gurus,
    I have read a bit about parallel processing so far, but I have a doubt.
    I am working on a data transfer of around 5 million accounting records from a legacy system to R/3 using batch input recording.
    If all these records reside in one flat file and I process that flat file with my batch input program, I guess it will take days. So my boss suggested using parallel processing in SAP.
    From the SDN threads it seems that you have to create a remote-enabled function module for this, and so on.
    But I have a different idea: divide the 5 million records into 10 flat files instead of one, and then run the custom BDC program as 10 instances that process the 10 flat files in the background using job scheduling.
    Can this also be called parallel processing?
    Please let me know if this sounds sensible to you.
    Regards,
    Tushar.

    Thanks for your reply.
    So what do you suggest: how can I use parallel processing to transfer the 5 million records, which are currently in one flat file, using a custom BDC?
    I am posting my custom BDC code for the transfer below (this code creates material masters using BDC).
    report ZMMI_MATERIAL_MASTER_TEST
          no standard page heading line-size 255.
    include bdcrecx1.
    parameters: dataset(132) lower case default
                                 '/tmp/testmatfile.txt'.
     ***  DO NOT CHANGE - the generated data section - DO NOT CHANGE    ***
     *
     *   If it is nessesary to change the data section use the rules:
     *   1.) Each definition of a field exists of two lines
     *   2.) The first line shows exactly the comment
     *       '* data element: ' followed with the data element
     *       which describes the field.
     *       If you don't have a data element use the
     *       comment without a data element name
     *   3.) The second line shows the fieldname of the
     *       structure, the fieldname must consist of
     *       a fieldname and optional the character '_' and
     *       three numbers and the field length in brackets
     *   4.) Each field must be type C.
     ***  Generated data section with specific formatting - DO NOT CHANGE  ***
     data: begin of record,
     * data element: MATNR
             MATNR_001(018),
     * data element: MBRSH
             MBRSH_002(001),
     * data element: MTART
             MTART_003(004),
     * data element: XFELD
             KZSEL_01_004(001),
     * data element: MAKTX
             MAKTX_005(040),
     * data element: MEINS
             MEINS_006(003),
     * data element: MATKL
             MATKL_007(009),
     * data element: BISMT
             BISMT_008(018),
     * data element: EXTWG
             EXTWG_009(018),
     * data element: SPART
             SPART_010(002),
     * data element: PRODH_D
             PRDHA_011(018),
     * data element: MTPOS_MARA
             MTPOS_MARA_012(004),
           end of record.
     data: lw_record(200).
     *** End generated data section ***
    data: begin of t_data occurs 0,
          matnr(18),
          mbrsh(1),
          mtart(4),
          maktx(40),
          meins(3),
          matkl(9),
          bismt(18),
          extwg(18),
          spart(2),
          prdha(18),
          MTPOS_MARA(4),
        end of t_data.
    start-of-selection.
    perform open_dataset using dataset.
    perform open_group.
    do.
    *read dataset dataset into record.
    read dataset dataset into lw_record.
    if sy-subrc eq 0.
    clear t_data.
    split lw_record
       at ','
    into t_data-matnr
          t_data-mbrsh
          t_data-mtart
          t_data-maktx
          t_data-meins
          t_data-matkl
          t_data-bismt
          t_data-extwg
          t_data-spart
          t_data-prdha
          t_data-MTPOS_MARA.
    append t_data.
    else.
    exit.
    endif.
    enddo.
    loop at t_data.
    *if sy-subrc <> 0. exit. endif.
    perform bdc_dynpro      using 'SAPLMGMM' '0060'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'RMMG1-MATNR'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=AUSW'.
    perform bdc_field       using 'RMMG1-MATNR'
                                 t_data-MATNR.
    perform bdc_field       using 'RMMG1-MBRSH'
                                 t_data-MBRSH.
    perform bdc_field       using 'RMMG1-MTART'
                                 t_data-MTART.
    perform bdc_dynpro      using 'SAPLMGMM' '0070'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MSICHTAUSW-DYTXT(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=ENTR'.
    perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                                 'X'.
    perform bdc_dynpro      using 'SAPLMGMM' '4004'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '/00'.
    perform bdc_field       using 'MAKT-MAKTX'
                                 t_data-MAKTX.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MARA-PRDHA'.
    perform bdc_field       using 'MARA-MEINS'
                                 t_data-MEINS.
    perform bdc_field       using 'MARA-MATKL'
                                 t_data-MATKL.
    perform bdc_field       using 'MARA-BISMT'
                                 t_data-BISMT.
    perform bdc_field       using 'MARA-EXTWG'
                                 t_data-EXTWG.
    perform bdc_field       using 'MARA-SPART'
                                 t_data-SPART.
    perform bdc_field       using 'MARA-PRDHA'
                                 t_data-PRDHA.
    perform bdc_field       using 'MARA-MTPOS_MARA'
                                 t_data-MTPOS_MARA.
    perform bdc_dynpro      using 'SAPLSPO1' '0300'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=YES'.
    perform bdc_transaction using 'MM01'.
    endloop.
    *enddo.
    perform close_group.
    perform close_dataset using dataset.
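
    A minimal sketch of the "10 flat files, 10 background jobs" idea from the question, using the standard function modules JOB_OPEN and JOB_CLOSE to start one background instance of the report above per file; the launcher name and the /tmp/matfile_NN.txt naming pattern are assumptions:

    REPORT zbdc_parallel_launcher.

    " Schedule one background instance of the BDC report per input file.
    CONSTANTS c_jobs TYPE i VALUE 10.

    DATA: lv_jobname   TYPE tbtcjob-jobname,
          lv_jobcount  TYPE tbtcjob-jobcount,
          lv_file(132) TYPE c,
          lv_index(2)  TYPE n.

    DO c_jobs TIMES.
      lv_index = sy-index.
      CONCATENATE 'ZBDC_MATERIAL_' lv_index INTO lv_jobname.
      CONCATENATE '/tmp/matfile_' lv_index '.txt' INTO lv_file.   "assumed file names

      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          jobname          = lv_jobname
        IMPORTING
          jobcount         = lv_jobcount
        EXCEPTIONS
          cant_create_job  = 1
          invalid_job_data = 2
          jobname_missing  = 3
          OTHERS           = 4.
      CHECK sy-subrc = 0.

      " Each instance processes its own slice of the 5 million records
      SUBMIT zmmi_material_master_test
        WITH dataset = lv_file
        USER sy-uname
        VIA JOB lv_jobname NUMBER lv_jobcount
        AND RETURN.

      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobcount             = lv_jobcount
          jobname              = lv_jobname
          strtimmed            = 'X'       "start immediately
        EXCEPTIONS
          cant_start_immediate = 1
          invalid_startdate    = 2
          jobname_missing      = 3
          job_close_failed     = 4
          job_nosteps          = 5
          job_notex            = 6
          lock_failed          = 7
          OTHERS               = 8.
    ENDDO.

    Whether or not this counts as parallel processing in the RFC sense, it does spread the load across several batch work processes, which is what the question is after.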

  • How to know how many process chains were scheduled?

    Hi guys,
    Is there any table or transaction to find out how many process chains were scheduled per client?
    I need to know, per client, how many process chains were scheduled and the technical name of each one.
    Thanks and regards,
    EV

    Hello,
    SM37 - Search for BI_PROCESS_TRIGGER scheduled jobs.
    Regards,
    Jorge Diogo
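
    For illustration, a minimal sketch that reads similar information directly from the batch tables instead of SM37; treating status 'S' as "released" and taking the chain's start variant from the job step in TBTCP are assumptions worth verifying in your system:

    REPORT zlist_scheduled_chains.

    " List released BI_PROCESS_TRIGGER jobs and the variant of their job step.
    TYPES: BEGIN OF ty_job,
             jobname  TYPE tbtco-jobname,
             jobcount TYPE tbtco-jobcount,
           END OF ty_job.

    DATA: lt_jobs    TYPE STANDARD TABLE OF ty_job,
          ls_job     TYPE ty_job,
          lv_variant TYPE tbtcp-variant.

    SELECT jobname jobcount FROM tbtco INTO TABLE lt_jobs
      WHERE jobname = 'BI_PROCESS_TRIGGER'
        AND status  = 'S'.               "released, waiting for its start time

    LOOP AT lt_jobs INTO ls_job.
      SELECT SINGLE variant FROM tbtcp INTO lv_variant
        WHERE jobname  = ls_job-jobname
          AND jobcount = ls_job-jobcount.
      WRITE: / ls_job-jobname, ls_job-jobcount, lv_variant.
    ENDLOOP.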

  • How to delete the old schedule time of a process chain after changing it

    I am trying to change the scheduled time of a process chain in production, but after the change the old schedule is still active (it runs every day at 7 am and I want it to run at 6 am only), so the process chain is now scheduled twice.
    I only want the last change to apply. How can I remove the older schedule?
    Thanks

    Hmm, weird. One more thing: check whether this process chain is triggered by another process chain, by an ABAP program, or by anything else.
    Check table RSPCCHAIN to see whether it belongs to a meta chain. If not, follow the steps below to make your process chain trigger based on an event:
    1. SM64 -> create an event. Then go to your process chain, maintain the start variant, choose start after event, and enter that event name there. Now we need to create the job that will raise this event.
    2. Let us take the following process chain as an example: ZL_TD_HCM_004. This chain runs after the event 'START_ZL_TD_HCM_004'.
    3. Go to transaction SM36. Here we define the background job, which will be available in SM37 after saving.
    4. It will ask for an ABAP program to be entered. Enter Z_START_BI_EVENT and select a variant from the list (you can select it based on the process chain); a sketch of such a program follows below.
    5. Then select the start conditions, enter the start time of the process chain, and select the periodicity.
    6. Save the newly created job in SM36. It will now be available in SM37.
    This should solve your problem; fingers crossed.
    Thanks,
    Deepak
    Edited by: Deepak Machal on Dec 14, 2011 8:20 AM
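
    A minimal sketch of the event-raising report referred to in step 4, built on the standard function module BP_EVENT_RAISE; the program name Z_START_BI_EVENT and the default event name are taken from the example above and will differ in your system:

    REPORT z_start_bi_event.

    " Raise the background event that the process chain's start variant waits for.
    PARAMETERS p_event TYPE tbtcjob-eventid DEFAULT 'START_ZL_TD_HCM_004'.

    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid                = p_event
      EXCEPTIONS
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        OTHERS                 = 5.

    IF sy-subrc <> 0.
      MESSAGE 'Could not raise the background event' TYPE 'E'.
    ELSE.
      MESSAGE 'Event raised - the process chain will start' TYPE 'S'.
    ENDIF.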

  • How to display custom error message in Job log for batch processing

    Hi all,
    I am executing an R/3 report in batch mode and I want all the custom errors I handle to appear in the job log when the report is run via SM36/SM37. The custom errors are things like 'Delivery/shipment does not exist', which in online mode we could display with MESSAGE e100(ZFI) (or in some other way) and then control the program flow accordingly, e.g. exit the program or leave to transaction 'Zxxx'. But I want my program to run through completely and accumulate all the errors in the job log of the batch processing.
    Can anyone tell me how I can do this?
    Thanks,
    Amrita

    Hi,
    that's what I have done from the beginning. I have written the message like this:
    MESSAGE i100(ZFI).
    I was hoping to see this message in the log, but I can't see it. Can you help me please?
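
    For illustration, a minimal sketch of one way to do this, assuming that messages of type 'S' or 'I' issued in a background job are written to the job log without stopping the program (worth verifying in your release); the message class ZFI and the sample text are placeholders:

    REPORT zfi_batch_error_log.

    " Collect problems during the run instead of terminating, then write them
    " all to the job log at the end.
    DATA: lt_errors TYPE STANDARD TABLE OF string,
          lv_error  TYPE string.

    " ... during processing, e.g. when a delivery/shipment is missing:
    lv_error = 'Delivery/shipment does not exist for document 4711'.   "placeholder text
    APPEND lv_error TO lt_errors.

    " ... at the end of the run:
    LOOP AT lt_errors INTO lv_error.
      MESSAGE lv_error TYPE 'S' DISPLAY LIKE 'E'.   "appears in the job log in background
    ENDLOOP.

    " Optionally finish with an 'E' message if the job should end as cancelled:
    " IF lt_errors IS NOT INITIAL. MESSAGE e100(zfi). ENDIF.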

  • How does the GR processing time affect the scheduling of the process order & the latest start date in the operation.

    Hi,
    can anyone explain how the GR processing time affects the scheduling of the process order and the latest start date in the operation overview?

    Hi,
    the GR processing time is the number of workdays required to place the material into storage after it has been received.
    Check this link: GR Processing time
    Regards,
    Anupam Sharma

  • How to lock targetserver execution in job scheduling

    Hi,
    I have an SAP system with a central instance and two application servers.
    I would like to set up authorizations so that when users schedule their jobs (for example in SM37), the target server field is locked and set to a particular application server.
    Any ideas?
    Thanks
      Maurizio Manera

    Hi,
    Here is an overview of the tables involved in batch processing. The job details are stored in the following tables:
    TBTCO      - the overall state and information of a background job
    TBTCS      - the view of the jobs held by the time-based job scheduler
    TBTCP      - the steps a background job consists of
    BTCEVTJOB  - the view of the jobs held by the event-based scheduler
    BTCCTL     - controls which batch scheduler runs in the system
    Fields include:
    JOBNAME, JOBCOUNT: unique identifiers (i.e. key fields)
    SDLDATE, SDLTIME: date and time when the job was created
    SDLSTRTDT, SDLSTRTIM: scheduled start date and time
    EXECSERVER and REAXSERVER are the two fields you should be concerned with:
    EXECSERVER: executing server; filled if you specify a target server
    REAXSERVER: filled at run time; gets the same server as EXECSERVER if a target server was specified
    Regards,
    Snow
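
    For illustration, a minimal sketch that uses the tables above to check which released jobs are bound to a particular target server; it does not lock the target server field (that would need an authorization or screen-level change), and the field names and status value should be verified against TBTCO in your release:

    REPORT zcheck_job_targets.

    " Show released jobs whose target server (TBTCO-EXECSERVER) matches P_SERVER.
    PARAMETERS p_server TYPE tbtco-execserver OBLIGATORY.

    TYPES: BEGIN OF ty_job,
             jobname  TYPE tbtco-jobname,
             jobcount TYPE tbtco-jobcount,
             sdluname TYPE tbtco-sdluname,
           END OF ty_job.

    DATA: lt_jobs TYPE STANDARD TABLE OF ty_job,
          ls_job  TYPE ty_job.

    SELECT jobname jobcount sdluname FROM tbtco INTO TABLE lt_jobs
      WHERE execserver = p_server
        AND status     = 'S'.            "released, not yet running

    LOOP AT lt_jobs INTO ls_job.
      WRITE: / ls_job-jobname, ls_job-jobcount, ls_job-sdluname.
    ENDLOOP.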

  • When & how to use default settings option in job scheduling with BO 4.1

    Hi,
    With BO 4.1 we got some new features, and one of them is the "Default Settings" option in job scheduling. While scheduling a report in the CMC we get the screen attached below. I want to know when and how to use this option when scheduling jobs on queries.
    Please guide me; thanks in advance.
    Regards,
    Mithun Pati.

    Hi Mithun,
    The thread below may give you a clearer idea.
    The purpose of the default settings is to have customized defaults for the enterprise. These settings apply to any user who has access to InfoView and wants to schedule a report. For example, you can set up default values for the "From" section of the email so that any scheduled reports use those settings. Similarly, another setting we have defaulted is the number of retries and the seconds between each retry. BusinessObjects will not send out a failure email unless the last retry has failed. The default settings should only be modified by an administrator.
    Purpose of Default Settings when scheduling a WebI report?

  • How to do the job scheduling in BDC Call transaction

    Hi Experts,
    I have a query: how do I schedule a BDC call-transaction program as a background job?
    If anybody knows the answer, please reply.
    Thanks.
       Regards,
        Rekha

    Hi,
    any program can be scheduled via transaction SM36, whether it is a BDC or a report.
    But remember that if your BDC uses the GUI_UPLOAD function module it will not work, because GUI_UPLOAD and GUI_DOWNLOAD do not work in the background.
    If you use OPEN DATASET and READ DATASET it can be scheduled; i.e. the BDC will work if your program retrieves the data from the application server.
    Revert back if there are any issues.
    Reward with points if helpful.
    Regards,
    Naveen
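
    A minimal sketch of the application-server approach described above, using OPEN DATASET / READ DATASET so that the report can run in the background; the file path is a placeholder:

    REPORT zbdc_file_from_appserver.

    " Read the BDC input file from the application server instead of GUI_UPLOAD.
    PARAMETERS p_file(128) TYPE c LOWER CASE
               DEFAULT '/usr/sap/trans/data/bdc_input.txt'.

    DATA: lv_line  TYPE string,
          lt_lines TYPE STANDARD TABLE OF string.

    OPEN DATASET p_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc <> 0.
      MESSAGE 'Could not open the file on the application server' TYPE 'E'.
    ENDIF.

    DO.
      READ DATASET p_file INTO lv_line.
      IF sy-subrc <> 0.
        EXIT.                            "end of file
      ENDIF.
      APPEND lv_line TO lt_lines.
    ENDDO.

    CLOSE DATASET p_file.

    " lt_lines can now be split into BDC data and passed to CALL TRANSACTION,
    " which makes the report safe to schedule in SM36.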

  • How to edit job scheduled in SM36

    Dear all,
    I have defined some background jobs (running weekly) via SM36. Everything runs fine, except that later I found I had forgotten to change the output device to the user's output device.
    I can't find an edit button in SM36 when I go back to my scheduled job. Is it possible to edit the background job? How do I do that?
    Thank you.

    Once your job has finished, it is no longer possible.
    But while your job is only scheduled you can still change the print parameters. Follow these steps:
    SM37 > enter the job name > Execute > select the job > menu Job > Change (Ctrl+F11) > Steps > menu Step > Change > in the pop-up press Print specifications (F5) > change the printer there.
    PS: Again, this is only possible while the job is merely scheduled, not after it has finished.
