Performance in datamart extractions

We have been facing performance issues loading data using data mart technology. The normal R/3 to BW extraction seems to work fine; the problem occurs only when extracting data from ODS to cube or, in some cases, from cube to cube.
Has anybody faced this scenario or been able to solve this issue?
Thanks

As a part of a sanity check, look at table ROIDOCPRMS.
If no entry is maintained, the data is transferred with a standard setting of 10.000 kByte and 1 Info IDoc for each data packet.
The general formula for data transfer is:
packet size = MAXSIZE * 1000 / transfer structure size
but not more than MAXLINES.
I.e., if MAXLINES is smaller than the result of the formula, only MAXLINES rows are transferred into BW.
Here are 3 examples of generic data extraction:
My transfer structure size = 250 Byte
Control parameters:
case 1.
             BW        OLTP
MAXSIZE      -         20.000
MAXLINES     -         60.000
calculated packet size = MAXSIZE(OLTP) * 1000 / 250 = 80.000
MAXLINES = 60.000 < 80.000
=> I get packets with the size 60.000.
case 2.
             BW        OLTP
MAXSIZE      -         20.000
MAXLINES     -         -
calculated packet size = MAXSIZE(OLTP) * 1000 / 250 = 80.000
MAXLINES default value = 100.000 > 80.000
=> I get packets with the size 80.000.
case 3.
             BW        OLTP
MAXSIZE      10.000    20.000
MAXLINES     -         60.000
calculated packet size = MAXSIZE(BW) * 1000 / 250 = 40.000
MAXLINES = 60.000 > 40.000
=> I get packets with the size 40.000.
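A minimal ABAP-style sketch of the rule above (variable names are illustrative; the real values come from ROIDOCPRMS):

DATA: lv_maxsize  TYPE i VALUE 20000,   " MAXSIZE from ROIDOCPRMS, in kByte
      lv_maxlines TYPE i VALUE 60000,   " MAXLINES from ROIDOCPRMS, in rows
      lv_ts_size  TYPE i VALUE 250,     " transfer structure size in byte
      lv_packet   TYPE i.

" packet size = MAXSIZE * 1000 / transfer structure size, capped at MAXLINES
lv_packet = lv_maxsize * 1000 / lv_ts_size.   " 20.000 * 1000 / 250 = 80.000
IF lv_packet > lv_maxlines.
  lv_packet = lv_maxlines.                    " => 60.000 rows per packet (case 1)
ENDIF.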
PS - There are application-specific extractor settings available; however, they do not apply here since the extraction is not from R/3 but from BI to BI.
Note 409641 - Examples of packet size dependency on ROIDOCPRMS
Hope it Helps
Chetan
@CP..

Similar Messages

  • ODI Error: While running interface to perform Essbase outline extraction

    Hi All,
    I have a requirement to extract all dimensions from the Essbase outline and put them into flat files.
    I created Essbase and Flatfile model and reversed successfully.
    I created interfaces for each dimension for extracting its metadata to flat file.
    All the interfaces ran successfully except for Entity dimension and Cube_Data.
    When I ran the interface to extract the Entity dimension, I got the following error in the "4 - Loading - SS_0 - Extract Metadata" step:
    com.hyperion.odi.essbase.ODIEssbaseException: Extract operation failed : Cannot perform cube view operation. Essbase Error(1012703): Unknown calculation type [0] during the dynamic calculation. Only default agg/formula/time balance operations are handled.
    Please help me resolve this.
    Thanks,
    Srinivas
    Edited by: SRINIVAS_N on Apr 28, 2010 12:15 AM

    Hi,
    This is a problem on the Essbase side of things and not an ODI issue; it usually means you have a problem with a formula in the outline of your database.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Performance issue while extracting data from non-APPS schema tables.

    Hi,
    I have developed an ODBC application to extract data from Oracle E-Business Suite 12 tables.
    e.g. I am trying to extract data from the HZ_IMP_PARTIES_INT table of the Receivables application (the table is in the "AR" database schema) using this ODBC application.
    The performance of extraction (i.e. number of rows extracted per second) is very low if the table belongs to non-APPS schema. (e.g. HZ_IMP_PARTIES_INT belongs to "AR" database schema)
    Now if I create same table (HZ_IMP_PARTIES_INT) with same data in "APPS" schema, the performance of extraction improves a lot. (i.e. the number of rows extracted per second increases a lot) with the same ODBC application.
    The "APPS" user is used to connect to the database in both scenarios.
    Also note that my ODBC application creates multiple threads and each thread creates its own connection to the database. Each thread extracts different data using a SQL filter condition.
    So, my question is:
    Is APPS schema optimized for any data extraction?
    I will really appreciate any pointer on this.
    Thanks,
    Rohit.

    Hi,
    Is the APPS schema optimized for any data extraction? I would say NO, as data extraction performance should be the same for all schemas. Do you run the "Gather Schema Statistics" concurrent program for ALL schemas on a regular basis?
    Regards,
    Hussein

  • Performance tuning for extraction of data from MSEG table

    Hello experts,
    I'm trying to extract data via a select query from the MSEG table based on non-primary-key fields, which affects my performance.
    Below is my select query:
    SELECT SINGLE menge
    FROM mseg
    INTO w_rejqty
    WHERE ebeln = it_mseg-ebeln AND
          ebelp = it_mseg-ebelp AND
          bwart = '122'.   
    Kindly suggest an alternative approach apart from creating a secondary index on table MSEG, which would be my last option because four secondary indexes already exist. Also, is it advisable to create a fifth secondary index for my problem? Would it affect my database performance?
    Thanks in advance
    Raj

    Hi Raj,
    Is it possible to use the query below? You might ask the functional team whether it is possible or not.
    SELECT SINGLE belnr gjahr buzei INTO w_ekbe FROM ekbe
        WHERE ebeln = it_mseg-ebeln
        AND   ebelp = it_mseg-ebelp.
    " MBLNR/MJAHR/ZEILE form the primary key of MSEG, so the second read uses the primary index
    IF sy-subrc = 0.
      SELECT SINGLE menge FROM mseg INTO w_rejqty
         WHERE mblnr EQ w_ekbe-belnr
         AND   mjahr EQ w_ekbe-gjahr
         AND   zeile EQ w_ekbe-buzei.
    ENDIF.
    Best Regards
    Fernand

  • Performance tuning - data extraction from FMGLFLEXA table

    Hi,
      Need to fetch data from the FMGLFLEXA table based on the below condition.
           WHERE rfund IN s_rfund
               AND rgrant_nbr IN s_rgnbr
               AND rbusa IN s_rbusa
               AND budat LE v_hbudat.
    Please tell me how I can optimize this extraction, because in the production system there are lakhs of records.
    Regards,
    Shweta.

    Create an index on these fields; data extraction from the table will then be fast.
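    A hedged sketch of a more selective read (the internal table and field list are illustrative; a secondary index on RFUND, RGRANT_NBR, RBUSA and BUDAT supports this WHERE clause):

    " Sketch only: fetch just the fields you need in one array operation
    " instead of reading row by row; adjust the field list to your requirement.
    DATA lt_fmglflexa TYPE STANDARD TABLE OF fmglflexa.

    SELECT rfund rgrant_nbr rbusa budat            " add the value fields you actually need
      FROM fmglflexa
      INTO CORRESPONDING FIELDS OF TABLE lt_fmglflexa
      WHERE rfund      IN s_rfund
        AND rgrant_nbr IN s_rgnbr
        AND rbusa      IN s_rbusa
        AND budat      LE v_hbudat.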

  • Need to perform robust feature extraction and tracking to determine the motion of the camera with respect to a stationary background ?

    I have tried to use intensity thresholding, but the variations make this unreliable. I need to determine a way to efficiently extract reliable features within an ROI (which can be determined by the last known speed of the platform) and correspond these between successive images to determine speed. Are there any image analysis/image processing gurus out there? I have enclosed a typical image from one of my sequences (please note the faint stripe of light moves with the platform; it is not a stationary feature).
    Attachments:
    2108_.jpg ‏23 KB

    I tried equalizing your image (Lookup Table: Equalize) and searched for a horizontal edge (Find Straight Edge: Horizontal Edge) with a very high contrast value (100) inside a very big ROI, and that seems to work fairly robustly. I'm attaching the Vision Builder script.
    Let me know if you have any questions.
    Regards,
    Yusuf C.
    Applications Engineer
    National Instruments
    Attachments:
    script.scr ‏1 KB

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields; this optimizes the tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (a sketch follows this list). When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
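    As a generic sketch of the buffering pattern from tip 11 (the lookup table, its fields and the DATA_PACKAGE field names below are placeholders, not taken from your system):

    " Sketch only: one array fetch into an internal table replaces a
    " SELECT SINGLE per record inside a transfer/update routine.
    TYPES: BEGIN OF ty_lookup,
             matnr TYPE matnr,
             matkl TYPE matkl,
           END OF ty_lookup.
    DATA: lt_lookup TYPE SORTED TABLE OF ty_lookup WITH UNIQUE KEY matnr,
          ls_lookup TYPE ty_lookup.

    IF data_package[] IS NOT INITIAL.          " FOR ALL ENTRIES needs a filled driver table
      SELECT matnr matkl FROM mara
        INTO TABLE lt_lookup
        FOR ALL ENTRIES IN data_package
        WHERE matnr = data_package-material.
    ENDIF.

    LOOP AT data_package.
      READ TABLE lt_lookup INTO ls_lookup
           WITH TABLE KEY matnr = data_package-material.
      IF sy-subrc = 0.
        data_package-matl_group = ls_lookup-matkl.
        MODIFY data_package.
      ENDIF.
    ENDLOOP.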
    Hope it Helps
    Chetan
    @CP..

  • Data Extraction to Cube is very slow

    Hi Experts,
    I am working on an SAP APO project. I have created a cube to extract data (a backup of a planning area) from liveCache, but the data load is taking a lot of time. We analysed the job log and found that storing data in the PSA and in the data target takes less than a minute, while reading data from liveCache takes a long time; it is also creating and deleting the views used for reading data from liveCache. I have created the cube in the BI instance present in the APO system itself, so I believe it should be faster. Could anyone help me in this regard?
    Thanks,
    Vijay.

    If you have 2 separate systems, APO and BW, the preference for performance should be to extract data from liveCache into the APO cube and then read the data in BW from the APO cube into the BW cube.
    SAP recommends 2 different instances, especially for APO DP.
    Hope it helps; otherwise you can find a lot of documentation in the marketplace on this subject.
    Good luck.

  • Performance problems with 0EC_PCA_3 datasource

    Hi experts,
    We have recently upgraded the Business Content in our BW system, as well as the plug-in on R/3 side. Now we have BC3.3 and PI2004.1. Given the opportunity, we decided to apply the new 0EC_PCA_3 and 0EC_PCA_4 datasources that provide more detailed data from table GLPCA.
    The new datasources have been activated and transported to the QA system, where we experience serious performance problems while extracting data from R/3. All other data extractions work as before so there should not be any problem with the hardware.
    Do you use 0EC_PCA_3? Have you experienced any problem with the speed of data extraction/transfer? We already have applied the changes suggested in note 597909 (Performance of FI-SL line item extractors: Creating indexes) and created secondary indexes on GLPCA table but it did not help.
    thanks and regards,
    Csaba

    It seems the problem was caused by a custom development: a quantity conversion in a user exit. However, when we tried loading earlier after removing the exit, it did not help then (loading took even longer...). Now it did.

  • Slow extraction in big XML-Files with PL/SQL

    Hello,
    I have a performance problem with the extraction of attributes from big XML files. I tested with a size of ~30 MB.
    The XML file is the response of a web service. This response includes some metadata of a document and the document itself. The document is embedded inline with a Base64 conversion. Here is an example of an XML file I want to analyse:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
       <soap:Body>
          <ns2:GetDocumentByIDResponse xmlns:ns2="***">
             <ArchivedDocument>
                <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
                   <Metadata archiveDate="2013-08-01+02:00" documentID="123">
                      <Descriptor type="Integer" name="fachlicheId">
                          <Value>123</Value>
                      </Descriptor>
                      <Descriptor type="String" name="user">
                         <Value>***</Value>
                      </Descriptor>
                      <InternalDescriptor type="Date" ID="DocumentDate">
                         <Value>2013-08-01+02:00</Value>
                      </InternalDescriptor>
                      <!-- Here some more InternalDescriptor Nodes -->
                   </Metadata>
                   <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
                      <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
                   </RepresentationDescription>
                </ArchivedDocumentDescription>
                <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
                   <Data fileName="20mb.test">
                      <BinaryData>
                        <!-- Here is the BASE64 converted document -->
                      </BinaryData>
                   </Data>
                </DocumentPart>
             </ArchivedDocument>
          </ns2:GetDocumentByIDResponse>
       </soap:Body>
    </soap:Envelope>
    Now I want to extract the filename and the Base64-converted document from this XML response.
    For the extraction of the filename I use the following command:
    v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    For the extraction of the binary data I use the following command:
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    My problem is the performance of this extraction. Here is a summary of the start and end times for the commands:
    Command: v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    Start time: 10.09.13 - 15:46:11,402668000 / End time: 10.09.13 - 15:47:21,407895000 / Difference: 00:01:10,005227
    Command: v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Start time: 10.09.13 - 15:47:21,407895000 / End time: 10.09.13 - 15:47:22,336786000 / Difference: 00:00:00,928891
    As you can see, the extraction of the filename is slower than the document extraction. For the extraction of the filename I need about 1 minute 10 seconds.
    I wondered about this and started some tests.
    I tried to use an exact, non-dynamic filename. So I have these commands:
    v_filename := '20mb_1.test';
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Under these conditions the time for the document extraction soars. You can see this in the following table:
    Command: v_filename_bcm := '20mb_1.test';
    Start time: 10.09.13 - 16:02:33,212035000 / End time: 10.09.13 - 16:02:33,212542000 / Difference: 00:00:00,000507
    Command: v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Start time: 10.09.13 - 16:02:33,212542000 / End time: 10.09.13 - 16:03:40,342396000 / Difference: 00:01:07,129854
    So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask me.
    Thank you,
    Matthias
    PS: I use Oracle 11.2.0.2.0.

    Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
    Here are two approaches you can test:
    Using the DOM interface over your XMLType variable, for example:
    DECLARE
      v_xml    xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> 
           <soap:Body> 
              <ns2:GetDocumentByIDResponse xmlns:ns2="***"> 
                 <ArchivedDocument> 
                    <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***"> 
                       <Metadata archiveDate="2013-08-01+02:00" documentID="123"> 
                          <Descriptor type="Integer" name="fachlicheId"> 
                             <Value>123</Value> 
                          </Descriptor> 
                          <Descriptor type="String" name="user"> 
                             <Value>***</Value> 
                          </Descriptor> 
                          <InternalDescriptor type="Date" ID="DocumentDate"> 
                             <Value>2013-08-01+02:00</Value> 
                          </InternalDescriptor> 
                          <!-- Here some more InternalDescriptor Nodes --> 
                       </Metadata> 
                       <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream"> 
                          <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/> 
                       </RepresentationDescription> 
                    </ArchivedDocumentDescription> 
                    <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0"> 
                       <Data fileName="20mb.test"> 
                          <BinaryData> 
                            ABC123 
                          </BinaryData> 
                       </Data> 
                    </DocumentPart> 
                 </ArchivedDocument> 
              </ns2:GetDocumentByIDResponse> 
           </soap:Body> 
        </soap:Envelope>');
      domDoc    dbms_xmldom.DOMDocument;
      docNode   dbms_xmldom.DOMNode;
      node      dbms_xmldom.DOMNode;
      nsmap     varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
      xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
      istream   sys.utl_characterinputstream;
      buf       varchar2(32767);
      numRead   pls_integer := 1;
      filename       varchar2(30);
      base64clob     clob;
    BEGIN
      domDoc := dbms_xmldom.newDOMDocument(v_xml);
      docNode := dbms_xmldom.makeNode(domdoc);
      filename := dbms_xslprocessor.valueOf(
                    docNode
                  , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
                  , nsmap
                  );
      node := dbms_xslprocessor.selectSingleNode(
                docNode
              , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
              , nsmap
              );
      --create an input stream to read the node content :
      istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
      dbms_lob.createtemporary(base64clob, false);
      -- read the content in 32k chunk and append data to the CLOB :
      loop
        istream.read(buf, numRead);
        exit when numRead = 0;
        dbms_lob.writeappend(base64clob, numRead, buf);
      end loop;
      -- free resources :
      istream.close();
      dbms_xmldom.freeDocument(domDoc);
    END;
    Using a temporary XMLType storage (binary XML):
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    insert into tmp_xml values( v_xml );
    select x.*
    from tmp_xml t
       , xmltable(
           xmlnamespaces(
             'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
            , '***' as "ns2"
            )
          , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
           passing t.object_value
           columns filename    varchar2(30) path '@fileName'
                 , base64clob  clob         path 'BinaryData'
          ) x;

  • Error in extracting data from SAP using mySQL server database

    Hello Experts.
    We are now using the MySQL Server 2005 database for Nakisa OrgChart. We have already configured the SAPExtractor settings. The SAP source has been provided and the test connection to the destination database is successful. We manually created the ExtractedData and AnalyticData databases in the management studio.
    We are getting the error upon starting the extraction
    Processing Function Read Table Function/BAPI Downloading Tables Organizational Assignment
    No Tables were downloaded for Read Table Function/BAPI . Function is flagged as critical.Terminating Extraction.
    Processing Stopped ! ! !
    Downloading from SAP completed.
    Processing Completed. 
    From CDS.log
    ERROR: Sap Authentication : Source {SAP.Connector}: Message {An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)}
    We would appreciate some assistance on this configuration
    Thanks,
    Angelo
    Accenture inc.
    SAP Basis

    Hello Luke,
    We have not made any modification to the downloadschema file. When we open the file, we receive the message: "The XML page cannot be displayed.
    Cannot view XML input using style sheet. Please correct the error and then click the Refresh button, or try again later."
    The SAP account we used has the SAP_ALL and SAP_NEW authorizations. Are these authorizations sufficient to perform the data extraction?
    Thanks,
    Angelo

  • Enhancing the extract structure for 0FI_GL_4,  0FI_AP_4,  0FI_AR_4

    Hi all,
          Does anyone know how to enhance the extraction structures with additional fields for datasources 0FI_GL_4 (General ledger: line item), 0FI_AP_4 (vendors: line item) and 0FI_AR_4 (customers: line item). 
    Thanks,
    Sabrina.

    Hi
        Here are the two scenarios described in the note:
    1. All the fields of the customer enhancement in the customer include are contained in the read structure (see the table above).  Then no additional action is required. The fields of the customer enhancement are filled automatically by the datasource from the assigned read structure via "move-corresponding".
    Example:   The extraction structure DTFIGL_4 for datasource 0FI_GL_4 (General ledger: line item) should be enhanced by the VALUT (value date) field.
               To do this, create structure CI_BSIS in the data dictionary of the R/3 source system and include the VALUT field in it. The data dictionary in the R/3 source system shows that the VALUT field is contained in the read structure of the datasource (table BSIS). For that reason the VALUT field is automatically filled by datasource 0FI_GL_4.
    2. Fields of the customer enhancement in the customer include are not contained in the read structure (see the table above).  In this case you have to program a function module to fill the field of the customer enhancement. To do this, there is a Business Transaction Event available (open FI interface for process 00005021). Create any function module you like and use function module SAMPLE_PROCESS_00005021 as a template for the interface (input parameter, changing parameter).
               Interface of function module SAMPLE_PROCESS_00005021:
                Input-parameter I_OLTPSOURCE: datasource that is currently extracting data from the R/3 source system.
                Changing-Parameter C_STRUCTURE: Extraction structure of the data source currently extracting including fields from the assigned customer include. When you call this customer defined function module, all the fields of the extract structure are transferred filled.
               Check whether the type pool SBIWA is declared in the TOP include of the function group.
    If not, add it with the statement TYPE-POOLS: SBIWA.
               Then maintain table TPS31 with Transaction SM30. Create the following entry:
               PROCS LAND APPLK FUNCT
               00005021 <fname>
               <fname> stands for the customer defined function module.
                By doing so, the function module you defined is called for each extracted record. Note that in this case the performance of the extraction may be reduced significantly regardless of the table read and the complexity of the programmed logic.
               Example:
               You want to enhance the extraction structure DTFIAR_3 for datasource 0FI_AR_4 (customers: line item) by the ORT01 (city) field from the customer master record.
                To do this, create structure CI_BSID in data dictionary of the R/3 source system and include the ORT01 field in that. The data dictionary in the R/3 source system displays that the ORT01 field is NOT contained in the read structure of datasource 0FI_AR_4 (Table BSID).
                 Therefore you have to program a function module that reads the customer master in the R/3 system (table KNA1) and fills the ORT01 field of the customer enhancement. In the changing parameter C_STRUCTURE the filled fields of the extraction structure DTFIAR_3 are available to you. With the KUNNR field of extraction structure DTFIAR_3, you can select the respective master data record from table KNA1 and determine field ORT01 of the customer enhancement.
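               A hedged ABAP sketch of such a function module for this 0FI_AR_4 example (the function name and the DataSource check are illustrative; the interface is the one copied from SAMPLE_PROCESS_00005021 as described above, and its exact typing comes from that template):
               FUNCTION z_bte_00005021_fiar_city.
                 " Hypothetical function module created as a copy of SAMPLE_PROCESS_00005021
                 " (importing I_OLTPSOURCE, changing C_STRUCTURE). It fills the customer-include
                 " field ORT01 of extraction structure DTFIAR_3 from the customer master KNA1,
                 " keyed by KUNNR, as described in the note text above.
                 DATA lv_ort01 TYPE kna1-ort01.

                 CHECK i_oltpsource = '0FI_AR_4'.      " act only for this DataSource (illustrative check)

                 SELECT SINGLE ort01 FROM kna1
                   INTO lv_ort01
                   WHERE kunnr = c_structure-kunnr.
                 IF sy-subrc = 0.
                   c_structure-ort01 = lv_ort01.
                 ENDIF.
               ENDFUNCTION.
               Since this module is called once per extracted record, the SELECT SINGLE here is exactly the kind of per-record read the note warns about, so keep the lookup as lean as possible (or buffer the KNA1 reads) for high volumes.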
               ATTENTION:
               By using Note 430303 you can enhance DataSource 0FI_GL_4 by all fields from table BSEG (instead of BSIS); then the fields are filled automatically in the extractor.
    After you create the customer include in the R/3 source system you have to post process the accompanying datasource with Transaction RSA6. Select the application component according to the above table and change the datasource that fits the customer include. The "Hide fields" flag should be removed in the field list for the fields of the customer include. Then save the field list.
    If the fields of the customer include are not displayed in Transaction RSA6, refer to note 415530.
    Please let me know.
    Thanks,
    Sabrina.

  • Performance problem in loading the master data attributes 0EQUIPMENT_ATTR

    Hi Experts,
    We have a performance problem loading the master data attributes 0EQUIPMENT_ATTR. It runs as a pseudo delta (full update) and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, but in the US night it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
    When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second Info IDocs arriving quickly, but the 3rd and 4th IDocs arrive in BW only after a gap of 5-7 hours, are saved into the PSA and then go to the InfoObject.
    We have user exits for the DataSource and ABAP routines, but they run fine in little time and the code is not very complex either.
    Can you please explain and suggest the steps on the R/3 side and the BW side? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas

  • Performance tuning in extraction

    Hi friends,
       Can anyone please tell me about performance tuning in extraction, in reports and in data loading? Please reply pointwise.
    Thanks
    Rosy
    Please search for available information before posting.
    Edited by: kishan P on Jan 24, 2012 10:57 AM

    Hi Rosy,
    Please check the below docs,
    http://www.tli-usa.com/download/Expert_Tips_and_New_Techniques_for_Optimizing_Data_Load_and_Query_Performance__Part_One_.pdf
    http://www.tli-usa.com/download/Expert_Tips_and_New_Techniques_for_Optimizing_Data_Load_and_Query_Performance__Part_Two_.pdf
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/08f1b622-0c01-0010-618c-cb41e12c72be?QuickLink=index&overridelayout=true
    Thanks,
    Vinod

  • Datamart using stock cube

    Hi experts,
    I have a stock cube which is compressed and has the marker updated up to today. Now I am trying to do a datamart extraction using this cube.
    Is it possible to do a datamart extraction from a cube which is compressed and has the marker update?
    Please suggest the procedure if it is possible.
    regards,
    rajesh.

    Hi Rajesh,
    Check note 375098 (Datamart extraction from non-cumulative InfoCubes)
