Data Load Performance Tuning

Hi All,
    My requirement is to extract transaction data from an external MS SQL Server into BI 7.0. The UDC connection is successfully established and the DataSources are available, but I have to wait a long time for the preview data to appear in the preview tab of the DataSource. If the DataSource takes this much time just to display the data in preview mode, then we are worried about the data load to the DSO or cube. Has anyone faced this problem? Are there any performance settings I am missing? We are on BI 7.0 on a Unix OS; answers will be appreciated.
Thanks,
Eric.

Hi Sriee,
    How can we restrict the data in the BI DataSource preview? I cannot see any filter button or any option to restrict the data in the preview tab. I also tried restricting the number of records in the preview to 5. Can you please mail me ([email protected]) the steps to improve data load performance in UDC?
Thanks,
Eric.

Similar Messages

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, which reduces extraction time. If your selection fields are not key fields on the table, the primary index is not much help when accessing the data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines (see the sketch after this list). Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
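    As a minimal sketch of point 11 (ZMAT_ATTR and all field names below are invented placeholders, not standard content), a start routine can buffer the lookup data with one array SELECT and a sorted read instead of issuing a SELECT SINGLE per record:
      * Buffer the lookup data once, then read it in memory per record.
      * ZMAT_ATTR and the field names are placeholders for illustration.
      TYPES: BEGIN OF ty_src,
               material  TYPE c LENGTH 18,
               prod_hier TYPE c LENGTH 18,
             END OF ty_src,
             BEGIN OF ty_attr,
               matnr TYPE c LENGTH 18,
               prdha TYPE c LENGTH 18,
             END OF ty_attr.
      DATA: lt_source TYPE STANDARD TABLE OF ty_src,
            lt_attr   TYPE STANDARD TABLE OF ty_attr,
            ls_attr   TYPE ty_attr.
      FIELD-SYMBOLS: <ls_src> TYPE ty_src.
      IF NOT lt_source[] IS INITIAL.
      * One database access for the whole data package
        SELECT matnr prdha FROM zmat_attr
          INTO TABLE lt_attr
          FOR ALL ENTRIES IN lt_source
          WHERE matnr = lt_source-material.
        SORT lt_attr BY matnr.
      ENDIF.
      LOOP AT lt_source ASSIGNING <ls_src>.
      * Buffered read in memory instead of SELECT SINGLE per record
        READ TABLE lt_attr INTO ls_attr
             WITH KEY matnr = <ls_src>-material BINARY SEARCH.
        IF sy-subrc = 0.
          <ls_src>-prod_hier = ls_attr-prdha.
        ENDIF.
      ENDLOOP.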
    Hope it Helps
    Chetan
    @CP..

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. There are no routines in the update rules or transfer rules; direct mapping is done to the InfoObjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 77 lakh (7.7 million) records it is taking more than 10 hours to load. Are there any possible solutions for improving the data load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    *+1 month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    *+3 months
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue...
    Please implement performance improvement methods.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI & Business Objects Roadmap

  • Data Load performance in BI7.0

    Hi,
    I have a generic question regarding BI7.0.
    From the perspective of data load performance, what features does BI 7.0 have compared to earlier versions?
    Thanks in advance,,
    Rama Murthy

    Hi,
    In BI, the entry layer is the PSA, and it is mandatory to maintain the PSA when bringing data into BI from any source. In BI it is possible to define the PSA as typed or untyped.
    The InfoPackage functionality is reduced: it loads data only up to the PSA.
    The DTP uploads data between BI objects within BI. Transformations replace the update rules and transfer rules.
    DTP and transformations remove the data mart interface between BI objects.
    If no transformation of the data is required, you can load data directly to the target without maintaining an InfoSource.
    All these properties are available because of the new object type, the BI DataSource (object type RSDS).
    Depending on the situation the InfoSource is optional, but the PSA is mandatory in BI; the rest is the same as in 3.x.
    Hope this helps to solve your problem.
    Regards
    Ramakrishna Kamurthy

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on that topics:
    1. Transfer rules activation. I have just finished transporting my cubes, ETL etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that transfer rules need to be activated, even though I transported them active and (I think) did not change anything after the transport. Then I again create transfer rule transports on dev, transport the changes to prod and have to execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I have checked how long it takes to extract data from the source system: it was about 0.5 h for 50,000 records. But when I executed the load on production it was 2 h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance and how can I predict it?
    Regards,
    Andrzej

    Aksik
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (but in general, as you know, activation of the transfer structure should be done automatically after transport of the object).
    2. One reason for the difference in time is environmental: in a production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case both systems are actually performing at the same rate: in dev 50,000 records took half an hour and in production 200,000 records took 2 hours, i.e. roughly 100,000 records per hour in both. There are simply more records in production, so the load took longer. If it is really causing a problem then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how we can identify a master data load performance problem and what can be done to improve master data load performance.
    Thanks in Advance.
    Nitya

    Hi,
    -Alpha conversion is defined at infoobject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine like the conversion routine called ALPHA. A conversion routine converts data that a user enters (in so called external format) to an internal format before it is stored on the data base.
    The most important conversion routine - due to its common use - is the ALPHA routine that converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric like '4711A' it is left unchanged.
    We have found that in customers' systems there are quite often characteristics using a conversion routine like ALPHA that have values on the database which are not in internal format, e.g. one might find '4711' instead of '004711' on the database. It could even happen that there is also a value '04711', or ' 4711' (with a leading space).
    This possibly results in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected (a short conversion sketch follows this list).
    - The check for referential integrity occurs for transaction data and master data if they are flexibly updated; you determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, so that the master data can be read at BEx query level.
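    As an illustration of the ALPHA behaviour described above (a plain standalone snippet, not part of any load routine):
      DATA: lv_ext TYPE c LENGTH 6 VALUE '4711',
            lv_int TYPE c LENGTH 6.
      * Convert the external value into the internal ALPHA format
      CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
        EXPORTING
          input  = lv_ext
        IMPORTING
          output = lv_int.
      * lv_int now contains '004711'; a non-numeric value such as '4711A'
      * would come back unchanged.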
    Regards,
    rvc

  • Check data load performance for DSO

    Hi,
    Please can anyone provide the details on how to check the data load performance for a particular DSO.
    For example, how much time it took to load a particular number of records (e.g. 200,000) into the DSO from the R/3 system. The DSO data flow is in the BW 3.x version.
    Thanks,
    Manjunatha.

    Hi Manju,
    You can take help of BW statistics and its standard content.
    Regards,
    Rambabu

  • Data load performance using infoset Vs View.

    Hi Guru,
    I am doing a generic extraction and loading the data to a cube, but my DataSource is based on an InfoSet in R/3.
    It is taking 30 minutes to load 10,00,000 (ten lakh, i.e. one million) records; ideally it should take 10 to 15 minutes, right?
    Can anybody suggest where I need to check to increase performance, or shall I create the DataSource over a view and try to load the data; will that help data load performance?
    thanks,
    ganesh.

    hi Ganesh,
    Primary Index ->
    When you create a database table in the ABAP Dictionary, you must specify the combination of fields that enables an entry within the table to be clearly identified. These key fields must be specified at the top of the table field list and flagged as key fields. A minimum of 1 and a maximum of 16 key fields can be defined.
    When the table is activated, an index formed from all key fields is created on the database (with Oracle, Informix, DB2), in addition to the table itself. This index is called the primary index. The primary index is unique by definition.
    In addition to the primary index you can define one or more secondary indexes for a table in the ABAP Dictionary, and create them on the database. Secondary indexes can be unique or non-unique.
    If you dispatch an SQL statement from an ABAP program to the database, the program searches for the data records requested either in the database table itself (full table scan) or by using an index ( index unique scan or index range scan). If all fields requested are found in the index using an index scan, the table records do not need to be accessed.
    The index records are stored in the index tree and sorted according to the index fields. This enables accelerated access using the index. The table records in the table blocks are not sorted.
    An index should not consist of too many fields. Having a few very selective fields increases the chance of reusability, and reduces the chance of the database optimizer selecting an unsuitable access path.
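    To make this concrete (ZSALESDOC and its fields below are invented placeholders): a selection on non-key fields such as the following only benefits from an index if a secondary index exists on those fields; otherwise the database typically falls back to a full table scan.
      * Candidate for a secondary index on BUKRS and BUDAT of ZSALESDOC
      DATA: lt_docs TYPE STANDARD TABLE OF zsalesdoc.
      SELECT * FROM zsalesdoc
        INTO TABLE lt_docs
        WHERE bukrs = '1000'
          AND budat >= '20230101'.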
    To create Index ->
    You have to use transaction SE11 in the Dev system.
    Enter the database table name and choose
    Display -> Indexes -> Create.
    Enter the index name.
    Maintain it in the logon language.
    Enter a short description and the index fields.
    Then save, create the request to transport the index to QA and PRD, and activate.
    Hope this helps,
    VA

  • Poor Data Load Performance Issue - BP Default Addr (0BP_DEF_ADDR_ATTR)

    Hello Experts:
    We are having a significant performance issue with the Business Partner Default Address extractor (0BP_DEF_ADDRESS_ATTR).  Our extract is exceeding 20 hours for about 2 million BP records.  This was loading the data from R/3 to BI -- Full Load to PSA only. 
    We are currently on BI 3.5 with a PI_BASIS level of SAPKIPYJ7E on the R/3 system. 
    We have applied the following notes from later support packs in hopes of resolving the problem, as well as doubling our data packet MAXSIZE. Both changes did have a positive effect on the data load, but not enough to bring the extract down to an acceptable time.
    These are the notes we have applied:
    From Support Pack SAPKIPYJ7F
    Note 1107061     0BP_DEF_ADDRESS_ATTR delivers incorrect Address validities
    Note 1121137     0BP_DEF_ADDRESS_ATTR Returns less records - Extraction RSA3
    From Support Pack SAPKIPYJ7H
    Note 1129755     0BP_DEF_ADDRESS_ATTR Performance Problems
    Note 1156467     BUPTDTRANSMIT not Updating Delta queue for Address Changes
    And the correction noted in:
    SAP Note 1146037 - 0BP_DEF_ADDRESS_ATTR Performance Problems
    We have also executed re-orgs on the ADRC and BUT0* tables and verified the appropriate indexes are in place.  However, the data load is still taking many hours.  My expectations were that the 2M BP address records would load in an hour or less; seems reasonable to me.
    If anyone has additional ideas, I would much appreciate it. 
    Thanks.
    Brian

  • Need suggestions for improving data load performance via SQL Loader

    Hi,
    Our requirement is to load 512 (1 GB each) files in Oracle database.
    We are using SQL*Loader to load the files into the DB (a partitioned table) and have tried almost all the possible options that come with SQL*Loader (direct path load, parallel=true, multithreading=true, unrecoverable).
    As the table grows bigger in size, each file load time is increasing (it started at 5 minutes per file and has now reached 2 hours per 3 files, and is increasing with every batch; note we are loading 3 files concurrently into the target table using the parallel=true option of SQL*Loader).
    Questions 1:
    My problem is that somehow multithreading is not working for us (we have multi CPU server and have enabled multithreading=true). Could it be something to do with DB setting which might be hindering the data load to be done in multiple threads?
    Question 2:
    Would gathering stats on the target table and its partitions help improve load performance? I'm not sure if stats improve DMLs; they would definitely improve SQL queries. Any thoughts?
    Question 3:
    What would be the best strategy to gather stats on this table (which would end up having 512 GB data) ?
    Question 4:
    Do you think insertions into a partitioned table (with growing sizes) would have poorer performance compared to a non-partitioned table?
    Any other suggestions to improve performance are most welcome!!
    Thanks,
    Sachin

    2 hours to load just 3 GB of data seems unreasonable regardless of the SQL Loader settings. It seems likely to me that the problem is not with SQL Loader but somewhere else.
    Have you generated a Statspack/ AWR/ ASH report to see where all that time is being spent? Are there triggers on the table? Are there bitmap indexes?
    Is your table partitioned in a way that is designed to improve the efficiency of loads, so that all the data from one file goes into one partition? Or is data from each file getting inserted into many different partitions?
    Justin

  • Improve data load performance using ABAP code

    Hi all,
             I want to improve my load performance using ABAP code; how do I do this? If I write ABAP code in SE38, how can I call
    it on the BW side? If you could give some sample code to improve load performance, that would be useful. Please guide me.

    There are several points that can improve the performance of your ABAP code (a combined sketch follows the list):
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    4. Avoid using nested SELECTs and SELECT statements within LOOPs.
    5. Avoid using INTO CORRESPONDING FIELDS OF. Instead use INTO TABLE.
    6. Avoid using SELECT * and select only the required fields from the table.
    7. Avoid Executing a SELECT multiple times in the program.
    8. Avoid nested loops when working with large internal tables.
    9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search.
    10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table where some fields are being manipulated.
    11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
    12. Use CASE instead of IF/ENDIF whenever possible.
    13. Runtime transaction code se30 can be used to measure the application performance.
    14. Transaction code st05 can be used to analyse the SQL trace and measure the performance of the select statements of the program.
    15. Start routines can be used when transformation is needed in the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
    16. Always use a WHERE clause for DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
    17. Always use IS INITIAL instead of comparing with '', because the initial value of a character field is '' but that of an integer is 0.
    Hope it helps.
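    Putting several of the points above together (1, 2, 6, 9 and 10), a minimal sketch could look like this; ZTRANSACT and its fields are invented placeholders:
      TYPES: BEGIN OF ty_doc,
               docno  TYPE c LENGTH 10,
               amount TYPE p LENGTH 8 DECIMALS 2,
             END OF ty_doc.
      DATA: lt_doc TYPE STANDARD TABLE OF ty_doc,
            ls_doc TYPE ty_doc.
      FIELD-SYMBOLS: <ls_doc> TYPE ty_doc.
      * Points 1/2/6: one array fetch of only the needed fields with a WHERE clause,
      * instead of SELECT * ... ENDSELECT record by record.
      SELECT docno amount FROM ztransact
        INTO TABLE lt_doc
        WHERE budat BETWEEN '20230101' AND '20231231'.
      SORT lt_doc BY docno.
      * Point 9: binary search on the sorted table for single lookups.
      READ TABLE lt_doc INTO ls_doc
           WITH KEY docno = '0000000001' BINARY SEARCH.
      * Point 10: manipulate rows via field symbols instead of copying to a work area.
      LOOP AT lt_doc ASSIGNING <ls_doc>.
        <ls_doc>-amount = <ls_doc>-amount / 100.
      ENDLOOP.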

  • Increase the number of background work processes for data load performance

    Hi all,
    There are 10 available background work processes in the BW system. We're doing some mass loads to multiple ODS objects, but the system uses only 3 background processes. How can I
    increase the number of background work processes used for the new data load?
    I tried to change the number of processes with RSODSO_SETTINGS, but with no success. Are there any other settings that need to be changed?
    thanks,
    Yigit

    Hi sankar,
    I entered the max. number of processes into ROIDOCPRMS, but it doesn't make a difference; the system still uses only 3 background processes. RSCUSTA2 is replaced by
    RSODSO_SETTINGS in BI 7.0, and that transaction can only change the number of processes for data activation, SID generation and rollback. I need to change the number of processes for data extraction.

  • Data load performance statistics on package level

    Hello,
    table RSMONIPTAB shows me the data packages which belong to a specific request.
    Unfortunately in this table there is only timestamp information. Is there a possibility
    to calculate how many seconds each data package required for processing?
    Or can I join some other table, e.g. via column MESS_ID, which provides information
    about how long the processing took?
    Regards,
    Mark

    You can check the job log of that particular background job for the number of data packets it processed and the time taken to complete the job.
    Data packet processing time may vary for each data load, depending on the ABAP code in the transfer/update routines and start routines.
    -DU
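    If a start and an end UTC timestamp per data packet are available (for example copied from RSMONIPTAB; the literal values below are only placeholders), the processing time in seconds can be computed with CL_ABAP_TSTMP, e.g.:
      DATA: lv_start TYPE timestamp VALUE '20240101120000',
            lv_end   TYPE timestamp VALUE '20240101120137',
            lv_secs  TYPE tzntstmpl.
      * Difference of the two UTC timestamps in seconds (97 in this example)
      lv_secs = cl_abap_tstmp=>subtract( tstmp1 = lv_end
                                         tstmp2 = lv_start ).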

  • Data load performance calculation

    Hi All,
    We have 4 R/3 CO-PA DataSources (A1, A2, A3 and A4) which get loaded into an ODS in BW.
    Now in our project the client is consolidating all company codes into 1 company code, hence the data volume for a particular DataSource increases.
    My question is how to estimate the load performance of the ODS. Before the change it may have taken 1 hour to load, but now, due to the increased volume of data from one DataSource, the loading time will increase. Is there any way to anticipate how much more loading time it will take?
    For eg:-
    Before                                                   
    A1  takes 30mins to load
    A2  takes 20mins to load
    A3  takes 35mins to load
    A4  takes 25mins to load
    Now suppose that after the changes the data volume in A1 increases because the company codes are being centralised. How do I calculate the load timings?
    I want to do a performance test.

    http://wiki.sdn.sap.com/wiki/display/ERPFI/COPAPerformanceImprovementusingSummarization+Levels
    How to Improve COPA Performance
    http://wiki.sdn.sap.com/wiki/display/BI/ConsultingNotesfor+CO-PA
    Check the load timing through ST13
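    As a rough first estimate (assuming packet size, routines and hardware stay the same), load time for a single DataSource scales roughly linearly with record volume: if A1 currently takes 30 minutes and the company-code consolidation doubles its record count, a first approximation is about 60 minutes. Treat this only as a starting point and verify it with a test load, ST13 and the BW statistics content, since index maintenance and DSO activation do not always scale linearly.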

  • Data Load Performance Issue

    Hi Gurus,
    We are loading data from one source into two BW systems (A and B). The loads into system A take only 2 hours, while the same loads into system B usually take more than 6 hours. System B is the upgraded version, which is supposed to be faster than system A.
    One finding is that in system B, BW picks only 5,000 records at a time from all source tables. In system A, it picks 50,000 records at a time from all source tables. I checked the system setting in RSCUSTV6. It says the data package size is 50,000. Isn't that strange?
    Any idea why this is happening? Please advise! Points will be generously awarded.
    Thanks!
    Linda

    That is 50,000 records for 1 info-idoc...
    For more info...
    Maximum size of a data packet in kilo bytes:
    The individual records are sent in packages of varying sizes in the data transfer to the Business Information Warehouse. Using these parameters you determine the maximum size of such a package and therefore how much of the main memory may be used for the creation of the data package.
    SAP recommends a data package size between 10 and 50 MB.
    Frequency with which status IDocs are sent:
    With this frequency you establish how many data IDocs should be sent in an Info IDoc.
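    For example (illustrative figures only): with a maximum packet size of 20,000 KB and an average record length of about 1 KB, each data packet carries roughly 20,000 records; with a status IDoc frequency of 10, one Info IDoc is sent for every 10 data IDocs.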
    Hope this helps.
