Performance during data load

Hello Friends,
I have a question regarding performance in BW 3.5.
When loading data from R/3 to BW we have four options:
Only PSA
Only Data Target
PSA and later data target
PSA and Data target in parallel.
From a system performance point of view, which is the best option, i.e. the one that uses the fewest system resources, and why?
Your help is appreciated.
Thanks
Tony

Hi,
From a performance point of view, PSA and later into data target is the better option.
For more information check this link:
http://help.sap.com/saphelp_nw04/Helpdata/EN/80/1a6567e07211d2acb80000e829fbfe/frameset.htm
Regards,
shikha

Similar Messages

  • Error while data loading

    Hi Gurus,
    I am getting an error while loading data. On the BI side the error says the background job was cancelled, and when I check the R/3 side I see the following job log:
    Job started
    Step 001 started (program SBIE0001, variant &0000000065503, user ID R3REMOTE)
    Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    DATASOURCE = 2LIS_11_V_ITM
             Current Values for Selected Profile Parameters               *
    abap/heap_area_nondia......... 2000683008                              *
    abap/heap_area_total.......... 4000317440                              *
    abap/heaplimit................ 40894464                                *
    zcsa/installed_languages...... ED                                      *
    zcsa/system_language.......... E                                       *
    ztta/max_memreq_MB............ 2047                                    *
    ztta/roll_area................ 6500352                                 *
    ztta/roll_extension........... 3001024512                              *
    4 LUWs confirmed and 4 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
    ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    Job cancelled
    Please help me out; what should I do?
    Regards,
    Mayank

    Hi Mayank,
    The log says the job went to a short dump due to a temp space issue. As it is a source system job, check the temp tablespace on the source system side, and also check on the BI side.
    Check with your Basis team regarding the PSA temp tablespace; if it is out of space, ask them to increase the tablespace and try to repeat the load.
    Check the below note
    Note 796422 - DBIF_RSQL_SQL_ERROR during deletion of table BWFI_AEDAT
    Regards
    KP
    Edited by: prashanthk on Jul 19, 2010 10:42 AM

  • Error in Unit conversion while data loading

    Hi,
    I have maintained a DSO (ZDSO) in the Material (0MATERIAL) InfoObject under BEx tab > Base unit of measure. I then loaded this ZDSO from the standard DataSource 0MAT_UNIT_ATTR, so that all the conversion factors between the units maintained in the material master data in ECC are loaded into this DSO.
    Then I created a conversion type (ZCON) that reads the source unit from the record and converts it to the fixed unit "ST", with 0MATERIAL as the reference InfoObject. ST is a custom UoM here.
    I am using the ZCON conversion type under the conversion tab of the key figure in BEx reports to convert quantities in the base UoM to quantities in ST. As this is standard functionality, the conversion automatically uses the ZDSO mentioned above, because the source and target UoM are not from the same dimension (the base UoM is EA and the target UoM is ST).
    If no conversion factor to ST is found in the ZDSO, the conversion should automatically fall back to the base UoM. This works perfectly in BEx, but it gives an error if I use the same conversion type ZCON during data loads: the load fails for those materials for which the ST conversion is not maintained. When it is not maintained, it should by default convert to the base UoM, but instead it gives an error in the data loads.
    Hope I am able to explain the issue.
    Please help me with this issue or suggest a way around it.
    Thanks in advance.
    Prafulla

    Ganesh,
    Can you please check the alpha conversion routine and also the node ID for that InfoObject?
    There might be some inconsistencies in the code.
    Hope it helps
    Gattu

  • Query performance and data loading performance issues

    Which query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Which data loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (a short ABAP sketch follows this list). When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
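    A minimal ABAP sketch of the buffered lookup from tip 11, assuming a 3.x style routine where DATA_PACKAGE is available; the master data table /BI0/PMATERIAL, the attribute MATL_GROUP and the target field in DATA_PACKAGE are illustrative assumptions, so adapt the names and keys to your own objects:
    * Sketch only: one array fetch per data package instead of one
    * SELECT SINGLE per record.
      TYPES: BEGIN OF ty_mat,
               material   TYPE /bi0/oimaterial,
               matl_group TYPE /bi0/oimatl_group,
             END OF ty_mat.
      DATA: t_mat TYPE HASHED TABLE OF ty_mat WITH UNIQUE KEY material,
            s_mat TYPE ty_mat.
    
      IF NOT DATA_PACKAGE[] IS INITIAL.
    *   Read all required master data in a single database access
        SELECT material matl_group
          FROM /bi0/pmaterial
          INTO TABLE t_mat
          FOR ALL ENTRIES IN DATA_PACKAGE
          WHERE material = DATA_PACKAGE-material
            AND objvers  = 'A'.
      ENDIF.
    
      LOOP AT DATA_PACKAGE.
    *   In-memory read replaces the per-record SELECT SINGLE
        READ TABLE t_mat INTO s_mat
             WITH TABLE KEY material = DATA_PACKAGE-material.
        IF sy-subrc = 0.
          DATA_PACKAGE-matl_group = s_mat-matl_group.
          MODIFY DATA_PACKAGE.
        ENDIF.
      ENDLOOP.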
    Hope it Helps
    Chetan
    @CP..

  • Error while data loading in real time cube

    Hi experts,
    I have a problem. I am loading data from a flat file. The data loads correctly up to the DSO, but when I try to load it into the cube it gives an error.
    The cube is a real-time cube for planning. I have changed the status to allow data loading, but the DTP still gives an error.
    It shows the error "error while extracting from DataStore", along with messages RSBK 224 and RSAR 051.

    What was the resolution to this issue? We are having the same issue, only with an external system (not a flat file). We get the RSAR 051 message with a return code of 238, as if it is not even getting to the RFC connection (DI_SOURCE). We have been facing this issue for a while and have even opened a message with SAP.

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I entered oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode would provide a much faster alternative to the non-streaming mode. What I would like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry I no longer remember my system number here)
    Before attempting any optimisation in the Essbase (or Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your DB source schema from your SQL statement (the one behind your data load rule).
    2. Query this view with any GUI (SQL Developer, TOAD etc.) to fetch all rows and measure the time it takes to complete. Also count the returned number of rows for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or as fast as your DBA says you can achieve) should you move your effort over to Essbase.
    One hint in addition:
    We have often had problems when using views for data loads (not only performance, but also other strange behaviour). That is the reason why I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Increase number of background processes during data load, and how to bypass a DTP in a process chain

    Hello All,
    We want to improve the performance of the loads. Currently we are loading the data from an external database through a DB link; just to mention, we are on a BI 7 system. We are bypassing the PSA to load the data as quickly as possible. Unfortunately we cannot use the PSA, because load times are longer when we use it, so we are directly accessing views on the external database. The external database is also indexed as per our requirements.
    Currently our DTP is set to run with 10 parallel processes (in the DTP settings for the batch manager, with job class A). Even though we set it to 10, we can see the loads are running on only 3 or 4 background processes in parallel, and we are not sure why. Does anyone know why it behaves like that and how to increase the number of processes?
    We also want to split the load into three DTPs with different selections, all loading data into the same InfoProvider in parallel. We have a routine in the selection that looks at a table to get the respective selection conditions, and all three DTPs kick off in parallel as part of the process chain.
    But in some cases we only get data for one or two of the DTPs (depending on the selection conditions). In that case, is there any way in the routine or the process chain to say that if there is no selection for a DTP, that DTP should be ignored or set to success, so that the process chain continues?
    Really appreciate your help.
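    A rough sketch of one way to express the "no selection for this DTP, still finish green" idea: the control table ZBW_DTP_SEL, its fields and the literal values are assumptions, and in a real DTP filter routine you would fill the range table generated by the system (normally with the usual SIGN/OPTION/LOW/HIGH columns) rather than the local one declared here:
    * Sketch only: if the control table has no entries for this DTP,
    * append a value that can never occur, so the DTP selects zero
    * records, ends green, and the process chain simply continues.
      TYPES: BEGIN OF ty_range,
               sign   TYPE c LENGTH 1,
               option TYPE c LENGTH 2,
               low    TYPE c LENGTH 45,
               high   TYPE c LENGTH 45,
             END OF ty_range.
      DATA: t_range TYPE STANDARD TABLE OF ty_range,
            s_range TYPE ty_range,
            t_sel   TYPE STANDARD TABLE OF zbw_dtp_sel,  "assumed control table
            s_sel   TYPE zbw_dtp_sel.
    
      SELECT * FROM zbw_dtp_sel INTO TABLE t_sel
        WHERE dtp_group = 'GROUP_A'.                     "assumed key field
    
      IF t_sel IS INITIAL.
        s_range-sign   = 'I'.
        s_range-option = 'EQ'.
        s_range-low    = '##NO_DATA##'.                  "value that never occurs
        APPEND s_range TO t_range.
      ELSE.
        LOOP AT t_sel INTO s_sel.
          s_range-sign   = 'I'.
          s_range-option = 'EQ'.
          s_range-low    = s_sel-sel_value.              "assumed field
          APPEND s_range TO t_range.
        ENDLOOP.
      ENDIF.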

    Hi
    Sounds like a nice problem...
    Here is a response to your questions:
    Before I start, I just want to mention that I do not understand how you are bypassing the PSA if you are using a DTP? Be that as it may, I will respond regardless.
    When looking at performance, you need to identify where your problem is.
    First, execute your view directly on the database. Ask the DBA if you do not have access. If possible, perform a database explain on the view (this can also be done from within SAP, I think). This step is required to ensure that the view is not the cause of your performance problem. If it is, we need to implement steps to resolve that.
    If the view performs well, consider the following SAP BI ETL design changes:
    1. Are you loading deltas or full loads? When you have performance problems, the first thing to consider is making use of the delta queue (or changing the extraction to send only deltas to BI).
    2. Drop indexes before the load and re-create them after the load (see the sketch after this list).
    3. Make use of the BI 7.0 write-optimized DSO. This allows for much faster loads.
    4. Check whether you do ABAP lookups during the load. If you do, consider loading the DSO that you are selecting on into memory and change the lookup to refer to the in-memory table instead. This will save tremendous time in terms of DB I/O.
    5. This has cost implications, but the BI Accelerator will allow for much faster loads.
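    A small hedged sketch of point 2, for example as an ABAP program step in the process chain; the function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR are the ones commonly used for this (verify them in SE37 on your release), and the cube name ZSALES_C01 is made up:
    REPORT z_drop_rebuild_indexes.
    * Sketch: drop the InfoCube's secondary indexes before a large load
    * and rebuild them afterwards; the load itself runs in between.
    DATA g_cube TYPE rsinfocube VALUE 'ZSALES_C01'.      "made-up cube name
    
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = g_cube.
    
    * ... trigger the DTP / InfoPackage load here ...
    
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = g_cube.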
    Good luck!

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load for an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records each.
    On average, it takes 130 seconds to load 4 million records.
    The data files have 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve the data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those that count remember where the words come from.
    To the original poster and to 960127 (come on, create a profile already, will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent x records that fit in the ASO cache are retained there, so if the record is still in the cache it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would still be available without rereading it.
    But wait before you go raising your ASO cache: all operating systems use memory-mapped I/O, so even if a record is not in the cache it will likely still be on disk in "Standby" memory (the dark blue memory as seen in Resource Monitor); this holds until the system runs out of "Free" memory (light blue in Resource Monitor).
    So, in conclusion, if your system still has Free memory there is no need (for a data load) to increase your ASO cache. And if you are out of Free memory, then all you will do by increasing the ASO cache during a data load is slow down the other applications running on your system - so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, then don't bother to sort it first. But if you do not have enough, then sort it.
    Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say whether you were using parallel load threads. If you need to have 20 files, read up on using parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. These will help even if you do go parallel, and really help if you don't but still keep 20 separate files.

  • Short Dump Error While Data load

    Hi Experts,
    A data load to an ODS on the production server has failed because of a short dump. The error message shows "OBJECTS_OBJREF_NOT_ASSIGNED".
    Please help me out with a solution.
    Regards,
    Vijay

    Hi Vijay,
    follow the steps below (they may help you):
    Go to the Monitor screen > Status tab, then use the wizard or the menu path Environment -> Short dump -> In the warehouse.
    Select the error and double-click it.
    Analyse the error from the message.
    1. Go to Monitor --> Transactional RFC --> In the Warehouse --> Execute --> Edit --> Execute LUW.
    Refresh the Transactional RFC screen, go to the data target(s) which have failed and check the status of the request; it should now be green.
    2. In some cases the above process will not work (the bad requests still exist after the above procedure). In that case we need to go to the data targets, delete the bad requests and do the update again.
    Regards,
    BH

  • Number range considerations during data load

    Hello team
    We are upgrading from 4.6 to ECC 6.0, and the ABAPer is going to try a test load. I have already configured the number ranges in the new system with respect to what is in 4.6. Should the number ranges be deleted before the data load testing? How will the system react if number ranges are already there?
    Thanks

    If your number range series are not changed, there will be no issues.
    You will maintain the number range series in PA04 and set that series number as the default value in the NUMKR feature.
    If you miss any of the above in customising, you may get a mismatch of numbers.
    Say you have a series from 100 to 200, but in the number ranges you entered 200 to 300; that may lead to trouble.
    I have never come across this situation, but this is just for your information.

  • Message while data load is in progress

    Hi,
    We are loading the data into the InfoCube twice every day, i.e. in the morning and in the afternoon.
    The loading methodology is always delete and load; at any given point we have only one request in the cube. The data load takes around 20-30 minutes.
    When a user runs the query during the data load, he gets the message 'No Applicable Data Found'. Can anyone please advise how we can show a more appropriate message, such as 'Data is being updated in the system, please try again after some time'?
    We are using BEx Browser with a template and a query attached to the template.
    Please advise.
    Regards
    Ramesh Ganji

    Hi,
    Tell the users the times of the data loads so that they are aware the loads are in progress and data will not be available for reporting, and so that they refrain from running the report during that window; give a buffer of around 15-20 minutes, as there might be an issue somewhere down the line. Ask them to run the report outside the times the data loads are happening.
    You could also reschedule the timings of the process chain to finish before the users come in.
    As far as the functionality you are referring to, I am not sure we are able to achieve this.
    Regards
    Puneet

  • Disable a portlet while data loads

    Hi,
    Is there a way to disable a portlet programmatically? We load data at certain times of the day and would like to temporarily disable the portlets. Is there a call in the SOAP API that I could use to do this?
    Thanks
    Rick

    Thanks for responding
    We are looking for a way to have the process that makes the changes disable the appropriate gadgets using the SOAP API. In 4.5 we did it with a stored procedure. This lets the folks modifying the data decide when they need to shut off access. I agree that this is an option, but it would mean modifying lots of gadget code versus one data load service.
    Thanks
    Rick

  • How to skip an entire data packet while data loading

    Hi All,
    We want to skip some records based on a condition while loading from the PSA to the cube, for which we have written ABAP code in the start routine.
    This is working fine.
    But there is a data packet where all the records are supposed to be skipped, and there it gives a dump with the exception CX_RSROUT_SKIP_RECORD.
    The ABAP code written is
    DELETE SOURCE_PACKAGE WHERE FIELD = 'ABC'.
    and for that particular data packet all the records satisfy the condition and get deleted.
    Please advise how to skip the entire data packet if all the records satisfy the deletion condition, and how to handle the exception CX_RSROUT_SKIP_RECORD.
    Edited by: Rahir on Mar 26, 2009 3:26 PM
    Edited by: Rahir on Mar 26, 2009 3:40 PM

    Hi All,
    The dump I am getting is:
    The exception 'CX_RSROUT_SKIP_RECORD' was raised, but it was not caught anywhere along the call hierarchy.
    Since exceptions represent error situations and this error was not adequately responded to, the running ABAP program 'GPD4PXLIP83MFQ273A2M8HU4ULN' has to be terminated.
    But this comes only when all the records in a particular data packet get skipped.
    For the rest of the data packets it works fine.
    I think if the data packet (with 0 records) itself can be skipped, this will be resolved, or the exception will be taken care of.
    Please advise how to resolve this and avoid 'CX_RSROUT_SKIP_RECORD' as soon as possible.
    Edited by: Rahir on Mar 27, 2009 6:25 AM
    Edited by: Rahir on Mar 27, 2009 7:34 AM
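    For reference, a minimal sketch of the pattern being discussed, assuming a BI 7 transformation; the method names and the field /BIC/ZFIELD are illustrative. A DELETE in the start routine does not by itself raise the exception, and CX_RSROUT_SKIP_RECORD is intended for skipping a single record inside a field or end routine, so there is no need to raise it from the start routine:
    METHOD start_routine.
    * Sketch: remove unwanted rows; an empty SOURCE_PACKAGE afterwards
    * is left as-is, without raising CX_RSROUT_SKIP_RECORD here.
      DELETE source_package WHERE /bic/zfield = 'ABC'.   "placeholder field
    ENDMETHOD.
    
    METHOD compute_some_keyfigure.
    * Sketch: skip single records only in a field or end routine.
      IF source_fields-/bic/zfield = 'ABC'.
        RAISE EXCEPTION TYPE cx_rsrout_skip_record.
      ENDIF.
    ENDMETHOD.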

  • Error while data loading in BI

    Hi gurus,
    Our BI team is unable to load data from BI staging to the BI targets. There is no data in the BI targets, and because of this users cannot test the BI reports; when they run them, the header file status shows yellow instead of green.
    Please help.
    Regards,
    Priyanshu Srivastava

    The problem is that job logs cannot be written, for example for job BIDTPR_6018_1.
    SM51 shows:
    M *** ERROR => ThCallHooks: event handler rsts_before_commit for event
    and SM21 shows:
    F6 1 TemSe input/output to unopened file.
    Can anybody tell me how to resolve that?
    Regards,
    Priyanshu Srivastava

  • Update rule problem - while data load

    Hi friends,
    I got the following error while doing the initialisation for 2LIS_02_SGR:
    "ABORT was set in the customer routine 9998
    Error 1 in the update "
    I searched the forum for this error, and it seems to be related to the start routine in my update rule.
    But I don't know what is wrong with my routine.
    I am giving the start routine below; please go through it and give me your suggestions.
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  *-*
    * TABLES: ...
    TABLES /bic/AZMM_PUR100.
    DATA: T_PUR1 LIKE /bic/AZMM_PUR100 OCCURS 0 WITH HEADER LINE.
    *$*$ end of global - insert your declaration only before this line   *-*
    * The follow definition is new in the BW3.x
    TYPES:
      BEGIN OF DATA_PACKAGE_STRUCTURE.
         INCLUDE STRUCTURE /BIC/CS2LIS_02_SGR.
    TYPES:
         RECNO   LIKE sy-tabix,
      END OF DATA_PACKAGE_STRUCTURE.
    DATA:
      DATA_PACKAGE TYPE STANDARD TABLE OF DATA_PACKAGE_STRUCTURE
           WITH HEADER LINE
           WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.
    FORM startup
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               MONITOR_RECNO STRUCTURE RSMONITORS " monitoring with record n
               DATA_PACKAGE STRUCTURE DATA_PACKAGE
      USING    RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        *-*
    * fill the internal tables "MONITOR" and/or "MONITOR_RECNO",
    * to make monitor entries
    * if abort is not equal zero, the update process will be canceled
      CLEAR: T_PUR1[],
             T_PUR1,
             ABORT.
      SELECT * INTO TABLE T_PUR1 FROM /bic/AZMM_PUR100.
      IF SY-SUBRC EQ 0.
        SORT T_PUR1 BY DOC_DATE
                       DOC_ITEM
                       DOC_NUM.
      ELSE.
        MONITOR-msgid = sy-msgid.
        MONITOR-msgty = sy-msgty.
        MONITOR-msgno = sy-msgno.
        MONITOR-msgv1 = sy-msgv1.
        MONITOR-msgv2 = sy-msgv2.
        MONITOR-msgv3 = sy-msgv3.
        MONITOR-msgv4 = sy-msgv4.
        append MONITOR.
    *   if abort is not equal zero, the update process will be canceled
        ABORT = 1.
      ENDIF.
      ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    Thanks & Regards
    Ragu

    Thanks gimmo and a.h.p,
    I have made the corrections as you said; please verify them.
    Also, could you kindly explain the reason for this start routine - what exactly does it do?
    CLEAR: T_PUR1[],
           T_PUR1,
           ABORT.
    SELECT * INTO TABLE T_PUR1 FROM /bic/AZMM_PUR100.
    IF SY-SUBRC EQ 0.
      SORT T_PUR1 BY DOC_DATE
                     DOC_ITEM
                     DOC_NUM.
      ABORT = 0.                       " added ABORT = 0 as per your suggestion
    ELSE.
      MONITOR-msgid = sy-msgid.
      MONITOR-msgty = sy-msgty.
      MONITOR-msgno = sy-msgno.
      MONITOR-msgv1 = sy-msgv1.
      MONITOR-msgv2 = sy-msgv2.
      MONITOR-msgv3 = sy-msgv3.
      MONITOR-msgv4 = sy-msgv4.
      append MONITOR.
    * if abort is not equal zero, the update process will be canceled
      ABORT = 1.
      EXIT.                            " added EXIT as per your suggestion
    ENDIF.
    ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    Thanks & Regards
    ragu
