Background job for automatic MRP run

Hi PP gurus,
My client wants an automatic MRP run every day at 8:00 PM for one particular material, no. XXXXXX.
Kindly suggest how I can map this as a background job in SAP.
regards
Aqueel

Dear Aqueel,
1. Either make a copy of the MRP type (say PD) and assign the new MRP type to that particular material,
2. or else assign one MRP controller specifically to this material.
3. Activate user exit M61X0001, which allows the background MRP run to be restricted by MRP type or MRP controller (a hedged sketch of such an exit check follows below).
4. Create the variant for running MRP in MDBT and schedule it from there, or use SM36: select program RMMRP000, select the variant, go to scheduling, set the daily 8:00 PM start time, and save.
Regards
Mangalraj.S
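
For illustration only: a minimal, hypothetical sketch of the kind of check a user-exit include for M61X0001 might carry. The include name ZXM61U01, the user key value, and the structure and field names below are assumptions; verify the real interface of EXIT_SAPMM61X_001 in SMOD/CMOD before coding.

" Hypothetical sketch - names and fields are assumptions, not the real exit interface.
" Idea: the MDBT variant passes a user key; when that key is active, skip every
" planning file entry whose MRP controller is not the one reserved for material
" XXXXXX, so the daily 8:00 PM job plans only that material.
IF user_key = 'Z01'.                  "user key maintained in the MRP run variant
  IF planning_entry-dispo <> 'Z01'.   "DISPO = MRP controller (assumed field name)
    no_planning = 'X'.                "exclude this entry from the planning run
  ENDIF.
ENDIF.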

Similar Messages

  • Define background job for Maintenance Order Settlement through KO8G

    Hi Experts,
    I need to define a background job for Maintenance Order Settlement through transaction KO8G.
    Here are the requirements:
    1. The job should run automatically on a weekly basis.
    2. The settlement period and posting period should be determined dynamically by the job, i.e. if the job runs in July, it should pick up settlement period and posting period 7 automatically.
    Kindly let me know the steps to define the background job as per the above requirements.
    Sanjeev

    To run KO8G each week, a RECURRING job has to be created. At my company, the Basis/Ops department would create and schedule the job.
    To have the posting periods, dates, etc. determined dynamically at the time the job runs:
    Create the variant and save it.
    In release 5.0, KO8G does not save a variant, so go to SE38 and create a variant for program RKO7KO8G.
    When you go to the ATTRIBUTES screen to add a name and description, you will see a list of all the fields that are part of the selection criteria.
    For example, I frequently use the Posting Date. To the right of the screen is a field called Selection Variable; click on the drop-down box and select D - Dynamic Date Calculation.
    Go to the field Name of Variable, click on the drop-down box, and select the variable you want.
    Each field has its own list of selection variables and variable names, depending on how it works in the program.
    If you look at Posting Period you will see:
         T: Table Variable from TVARVC
         B: User-defined variables
    I do not know about user-defined variables, but for TVARVC there is a thread that leads to a "how to" blog:
    [How create a variable in the TVARVC that calculate the last week]
    Do some more searching and you should find more answers. A hedged sketch of filling such a TVARVC variable is shown after this reply.
    Good luck
    Althea
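
    For illustration, a minimal sketch (under assumptions) of a small program that keeps a TVARVC selection variable pointed at "last week", so that a variant using selection variable type T always gets current dates. The variable name Z_LAST_WEEK and the plain "last 7 days" logic are placeholders; create the variable in table TVARVC first and adapt the date calculation to your own week definition.
    REPORT zupd_tvarvc_last_week.
    " Sketch only: maintain a date-range selection variable in TVARVC.
    " Schedule this report shortly before the RKO7KO8G settlement job.
    DATA: ls_tvarvc TYPE tvarvc,
          lv_from   TYPE d,
          lv_to     TYPE d.
    lv_to   = sy-datum - 1.           "yesterday
    lv_from = sy-datum - 7.           "seven days back (placeholder week logic)
    ls_tvarvc-name = 'Z_LAST_WEEK'.   "assumed variable name
    ls_tvarvc-type = 'S'.             "S = selection (range) variable
    ls_tvarvc-numb = '0000'.
    ls_tvarvc-sign = 'I'.
    ls_tvarvc-opti = 'BT'.
    ls_tvarvc-low  = lv_from.
    ls_tvarvc-high = lv_to.
    MODIFY tvarvc FROM ls_tvarvc.     "insert or update the entry
    COMMIT WORK.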

  • Background job for DTP

    Hi,
    Is there any possibility to run a background job for a DTP?
    If possible, can you tell me the necessary settings?

    Hi Venkat,
    The way to schedule a DTP in BI in NW 2004s is by putting it inside a process chain. The process chain can be scheduled.
    See the following link on process chains.
    http://help.sap.com/saphelp_nw04s/helpdata/en/d5/e80d3dbd82f72ce10000000a114084/frameset.htm
    Hope this helps,
    Regards
    Karthik

  • Error while using background job for planning function in BPS

    I have created a function module and a program for scheduling a background job for a planning function.
    I have created the planning function with the exit option and passed the global sequence name as a parameter.
    The error is that a lot of jobs are created while executing in BPS0.
    Kindly help me with the same.
    Regards
    GR

    Hi Rama,
    It seems there are two different function modules (UPC_BUNDLE_EXECUTE and UPC_BUNDLE_EXECUTE_STEP). The second one splits the planning sequences on the basis of something you specify (e.g. company code). Just make sure that you are using the correct FM.
    Just a thought...
    Regards,
    SK

  • Background jobs for ESS/MSS

    Hi,
    I would like to know all the background jobs that need to be scheduled for ESS/MSS,
    so that all data updated in the portal is reflected in the R/3 system and all data updated in the R/3 system is reflected in the portal.
    Regards
    Vish

    You need to set up the background jobs depending on the functionality that is being implemented and on what the data is.
    For example, if it is Leave Request, these are the background jobs that need to be scheduled:
    RPTARQEMAIL (Leave Requests: Send E-Mails)
    RPTARQPOST (Leave Requests: Post)
    RPTARQSTOPWF (Leave Requests: Complete Current Workflows)
    Hope this helps you.

  • Create background job for data received from a legacy system

    Hi All,
    I have a requirement where I need to schedule a background job once a proxy is triggered for the address data sent from a legacy system. The proxy is triggered from SAP PI; then I need a job in which the address data is created as BPs (business partners) and assigned to an already created target group. Once the target group is complete, all BPs need to be assigned to a campaign ID. As the data is large, it takes a lot of time to execute. Please advise.
    Regards,
    Nagesh Thanneeru

    Hi,
    If you can divide the data into two internal tables and later collect them into one internal table, this could solve your problem (a hedged sketch of package-wise processing is shown after this reply).
    You also need to make sure that enough memory is available in the system; check the DB6 Cockpit for that.
    There are also a lot of SAP correction notes available for the error "TSV_TNEW_BLOCKS_NO_ROLL_MEMORY".
    Check what the job exactly does and search for the relevant notes, as there are different notes for different application areas.
    If it is a loading job (InfoPackage), you can reduce the data packet size of the IP, but that might hamper the loading performance.
    Hope this helps
    Regards
    Shilpa
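
    For illustration, a minimal sketch of package-wise processing, assuming the inbound address data has been staged in a custom table (ZADDR_STAGE is a placeholder name). Processing the data in packets keeps the internal table small and helps avoid the TSV_TNEW_BLOCKS_NO_ROLL_MEMORY dump that one huge SELECT can cause.
    " Sketch only - ZADDR_STAGE is an assumed staging table for the address data.
    DATA lt_addr TYPE STANDARD TABLE OF zaddr_stage.
    SELECT * FROM zaddr_stage
             INTO TABLE lt_addr
             PACKAGE SIZE 5000.
      " Create the business partners / target group assignments for this packet only.
      " ...
    ENDSELECT.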

  • How to schedule a background job every 30 minutes

    Hi friends,
    Please tell me the steps:
    how do I schedule a background job to run every 30 minutes?
    Please help me.
    Thanks & Regards,
    Vasu.

    Hi Vasu,
       Go to SM36; there you can create your own jobs and schedule them with the period you need (for this requirement, define the job as periodic with a period of 30 minutes). A programmatic sketch is shown after this reply.
    Hope this is helpful to you. If you need further information, revert back.
    Regards
    Nagaraj T
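
    For illustration, a minimal sketch of the same scheduling done programmatically with the standard batch-job function modules JOB_OPEN and JOB_CLOSE; ZMY_REPORT is a placeholder for the report to be executed.
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_EVERY_30_MIN',
          lv_jobcount TYPE tbtcjob-jobcount.
    " Open the job definition.
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.
    " Attach the report as a job step.
    SUBMIT zmy_report VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
    " Close the job: start immediately, then repeat every 30 minutes.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'
        prdmins   = 30.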

  • Background job for OLIX copying

    Hi,
    I am using OLIX to copy key figures between info structures.
    How do I define a mass background job for this transaction so that all key figures and all product groups are copied at once?
    Thanks,


  • Background job for deadline monitoring

    Hi all,
    In production, the job SWWDHEX is scheduled for deadline monitoring.
    I want to know the periodic scheduling condition of this background job;
    I am not able to find it in SM37. It executes three times on some days, twice on others, and then for two or three days it does not execute at all.
    Can you please suggest what period is needed for this job?
    Thanks,
    Surya.

    Hi Kjetil,
    You are absolutely correct and I firmly believe scheduled deadlines is the way to go for the following reason:
    A few years ago I was driving into the mountains to go fishing. It was a Friday, my project was finished, and I was looking forward to catching some trout. Unfortunately I received a call from a customer with whom I had had no contact, who had gone live with HR/ESS that morning. Basically, their system had ground to a halt at about 10 am. They were desperate, so I turned around and went back.
    The problem was that they had set deadlines to 5 minutes in their testing system and this had gone to production. And their 20 thousand staff had been sitting on their leave and overtime requests for a month. All processes in their system were either running SWWDHEX or scheduling SWWDHEX to run.
    Anyway, it was not a big deal to fix, although they had to stop their EP and kill the outstanding processes before transporting the correct deadline timings and the settings for scheduled deadlines. But I was too late to join my compatriots up in the mountains.
    So:
    A happy customer - 1 day;
    Reputation in workflow - 10% better;
    Not being at the best fishing in living memory (the party caught 20) - priceless.
    Cheers
    Gareth

  • VF06 - Background job for credit note generation

    Dear SD gurus and experts,
    My user wants credit notes (sales return invoices) to be generated automatically. On SDN I found that this can be achieved with VF06 by scheduling the job in the background every night. Is that correct?
    If yes, please tell me the detailed step-by-step procedure.
    Thanks in advance.
    DSC

    Hi DSC,
    There should be no issue in meeting this need.
    When you create your returns delivery and carry out the post goods receipt in LE, the system writes an entry into the billing due index table VKDFS with the returns billing type that is assigned for the process.
    You only have to schedule a background job with program SDBILLDL in SM36, using a variant that selects only RE documents, to run when required.
    This will bill all due returns documents as required.
    Hope this helps
    Kind regards
    Brian

  • Background job for risk analysis

    Dear experts,
    We have scheduled a background job for risk analysis at role level in a DEV box, and it has been in the running state for 7 days now.
    I have checked the logs, but there is no error.
    Is this normal behaviour? I am confused because it is a DEV box, which also contains test roles.
    We are also using a logical system as well as a physical system for the ruleset.
    Kindly share your experience.
    Thanks & Regards
    Ashesh

    Hello All,
    We are getting the error mentioned below:
    WARNING:  Job ID:235 : Failed to run Risk Analysis
    java.io.IOException: No space left on device (errno:28)
         at java.io.FileOutputStream.writeBytes(Native Method)
         at java.io.FileOutputStream.write(FileOutputStream.java:260)
         at sun.nio.cs.StreamEncoder$CharsetSE.writeBytes(StreamEncoder.java:336)
         at sun.nio.cs.StreamEncoder$CharsetSE.implWrite(StreamEncoder.java:395)
         at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:136)
         at java.io.OutputStreamWriter.write(OutputStreamWriter.java:191)
         at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:111)
         at java.io.BufferedWriter.write(BufferedWriter.java:206)
         at java.io.Writer.write(Writer.java:126)
         at com.virsa.cc.xsys.riskanalysis.dao.dto.RAReportDTO.printToSpool(RAReportDTO.java:454)
         at com.virsa.cc.xApr 1, 2011 2:08:45 AM com.virsa.cc.xsys.meng.ObjAuthMatcher <init>
    Thanks,
    Jagat

  • Background job for cleaning the transport buffer

    Hi Folks,
    We are going to use a tool for transports in the future, and before implementing that tool we need to clean the transport buffer by scheduling a background job.
    Can you please let me know whether there is such a background job, or how I should clean the transport buffer?
    SAP version: 4.6
    OS: UNIX
    DB: Oracle
    Thanks in advance for the help.
    Regards,
    Raj

    Hi,
    You can clean the transport buffer by renaming the following file. Once this file is renamed, go to transaction STMS_IMPORT and refresh the screen; you will see that the whole transport list is gone from there.
    /usr/sap/trans/buffer/<SID of your system>
    If you want your old list back, just rename the file back and refresh the screen again.
    With Regards,
    Saurabh

  • Background job for a report, scheduled 49 hrs ago, still ACTIVE??

    Hi Experts,
    Please clarify a simple doubt for me: could a report (which has only one SELECT statement, with the key in the WHERE clause) take 49 hours to execute? I scheduled a background job for my_report 49 hours ago and it is still ACTIVE. Or has it gone into an infinite loop?
    Thank you

    Hi
    No single SELECT query should take that much time;
    maybe the program has gone into an infinite loop.
    Please check that.
    Below is the best way to write a select query, along with general performance-tuning guidelines.
    Ways of Performance Tuning
    1. Selection Criteria
    2. Select Statements
       - Select Queries
       - SQL Interface
       - Aggregate Functions
       - For All Entries
       - Select over more than one internal table
    Selection Criteria
    1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2. Select with a selection list (field list).
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized much further by the code below, which avoids CHECK and selects with a field list:
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements - Select Queries
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements contd. - SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements contd. - Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
            Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements contd. - For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES:
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
      Select single * from zfligh into int_fligh
        where cntry = int_cntry-cntry.
      Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements contd. - Select over more than one internal table
    1.     Its better to use a views instead of nested Select statements.
    2.     To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary keys are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be optimized further by extracting all the data from view DD01V:
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying lines directly via a field symbol (LOOP ... ASSIGNING) is faster than reading each line into a work area and updating it with MODIFY.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables using "ITAB2[] = ITAB1[]" is much faster than using "LOOP-APPEND-ENDLOOP".
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    "SORT ITAB BY K." makes the program run faster as compared to "SORT ITAB."
    Internal Tables contd.
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Error in background job scheduled for CALL TRANSACTION

    Hi Experts,
    I have a program with three BDCs in it: 1 - create contact person, 2 - create customer, 3 - create sales order.
    Using CALL TRANSACTION, if I run it with all screens or with no screens, it runs fine.
    But if I schedule it as a background job,
    the contact person and customer are created fine, but the sales order is not.
    Can anyone give me the solution?

    Hi Phani and Pavan,
    It works fine in foreground and no-screen modes, i.e. 'A' and 'N'; the sales order is created there.
    But the sales order is not created when I run it via Program - Execute in Background,
    or when I schedule it as a background job using SM36. Why does it fail only in background?
    Edited by: Pradeep Annaiah on Jan 13, 2009 5:29 AM
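
    When a BDC that works in mode 'A' fails in background mode 'N', the usual first step is to capture the messages of the failing call, so that the real error (often a screen-sequence, date-format, or user-default difference in background) shows up in the job log or spool. A minimal sketch, assuming the BDC data for the sales order has already been built into lt_bdcdata:
    " Sketch only: run the sales order BDC without screens and log its messages.
    DATA: lt_bdcdata  TYPE TABLE OF bdcdata,
          lt_messages TYPE TABLE OF bdcmsgcoll,
          ls_message  TYPE bdcmsgcoll.
    " ... fill lt_bdcdata for VA01 exactly as in the existing program ...
    CALL TRANSACTION 'VA01' USING lt_bdcdata
         MODE   'N'
         UPDATE 'S'
         MESSAGES INTO lt_messages.
    " Write the collected messages so they appear in the background job's spool.
    LOOP AT lt_messages INTO ls_message.
      WRITE: / ls_message-msgtyp, ls_message-msgid, ls_message-msgnr.
    ENDLOOP.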

  • What is the background job when we run an attribute change run?

    Dear all,
    What is the background job when we run an attribute change run?
    Thanks in advance,
    Raj

    Hi Raj,
    1. If the attribute change run is triggered through a process chain, the job name is 'BI_PROCESS_ATTRIBCHAN'.
    2. If the change run is triggered from RSA1 > Tools > Apply Hierarchy/Attribute Change, then the job name starts with BI_STRU*.
    3. If you trigger the program RSDDS_AGGREGATES_MAINTAIN from SE38, then the job name will be RSDDS_AGGREGATES_MAINTAIN.
    Hope this helps...
