What is DTP in BI 7.0 ?


Hi,
DTP is the data transfer process, which is used to transfer data from the PSA to a data target, or from one data target to another.
In BI 7.0 the InfoPackage is used to transfer data from the source system to the DataSource/PSA, and the data transfer process (DTP) is used to transfer data from the DataSource/PSA to any data target/InfoProvider.
Types of DTP:
1. Standard DTP.
2. Error stack DTP: this DTP is used for error correction.
Example: if 10 records are coming into a DSO and 5 of them contain a special character, the 5 valid records are moved to the DSO automatically and the erroneous records are moved to the error stack.
From the monitor tab, click on the error stack; there you can correct the records and then run the error DTP, which updates the corrected records as well.
3. Direct update DTP.
4. Real-time data acquisition (RDA) DTP.
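The error-stack behaviour above can be sketched conceptually. This is only an illustration in Python, not SAP code; the record layout and the special-character check are assumptions:

```python
# Conceptual sketch: a DTP splits a data package so that valid records
# go to the data target (DSO) and records containing disallowed
# characters go to the error stack, to be corrected and reloaded later
# by an error DTP. The field name "material" is made up.

ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 _-")

def split_package(records):
    """Return (records for the DSO, records for the error stack)."""
    target, error_stack = [], []
    for rec in records:
        if all(ch in ALLOWED for ch in rec["material"].upper()):
            target.append(rec)
        else:
            error_stack.append(rec)
    return target, error_stack

records = [{"material": "MAT-001", "qty": 10},
           {"material": "MAT#002", "qty": 5}]   # '#' is not allowed
target, stack = split_package(records)
print(len(target), len(stack))  # prints: 1 1
```

After correcting the bad value in the error stack, the error DTP would move the corrected record to the same target.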
Regards,
Saveen kumar

Similar Messages

  • What is DTP in BI 7

    Hi Experts,
    Can anyone tell me what is DTP and what is the purpose of DTP in BI 7 ?
    Regards
    Venkata Devaraj

    DTP:
    The dataflow in 7.0 is so, first you need to get data to PSA with a infopack and from there you need to use DTP to load from PSA to the dataprovider(ODS). and other DTP to load to further infoprovider(cube).
    DTP is it like transferring data either from InfoObject or Infocube to another infoobject or infocube?
    NO, DTP is scheduling data from PSA to infoprovider/s. you would need to use infopack to load data from Sourcesystem to PSA.
    Transformations are just like transfer/update rules in 3.x. its just the mapping you do between the source fields and target fields. And DTP to load the data according to the mapping into the infoprovider.
    Purpose:
    You use the data transfer process (DTP) to transfer data within BI from one persistent object to another object, in accordance with certain transformations and filters. In this respect, it replaces the data mart interface and the InfoPackage. As of SAP NetWeaver 7.0, the InfoPackage only loads data to the entry layer of BI (PSA).
    The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process (the data transfer process determines the processing mode). You can use the data transfer process to separate delta processes for different targets and you can use filter options between the persistent objects on various levels. For example, you can use filters between a DataStore object and an InfoCube.
    Data transfer processes are used for standard data transfer, for real-time data acquisition, and for accessing data directly.

  • What is DTP in BI 7.0, why is it used, and what are its advantages?

    Hi all,
    I am new to BI 7.0. Why should we create a DTP, and what is its significance? Can anyone let me know how to create a DTP?
    thanks
    haritha

    In BI 7.0, when you load data from a source system it first goes to the PSA. This is still done by an InfoPackage; the InfoPackage cannot push data beyond the PSA.
    So the loading path is:
    Source System -> DataSource -> InfoPackage -> PSA.
    Once the data is in the PSA, it can be updated further to a cube, a DSO or an InfoObject using a DTP plus a transformation.
    Here DTP and transformation go together: each transformation has its own dedicated DTP. Whether your request is init or delta matters only while you are loading up to the PSA, so that is the concern of your InfoPackage alone; the DTP does not care much about it. Its main job is to push requests from the PSA to a target, or from one target to another. So the data flow is:
    PSA -> Transformation+DTP -> Target -> Transformation+DTP -> Target1
    Now the important question: what do delta and full mean in a DTP? If the DTP is set to delta, it pulls only the requests that have not yet been loaded from the PSA and ignores the requests that were already loaded successfully into the target. This is a very important point to take care of. For loading between data targets (data mart scenario), the delta option brings only those records from Target to Target1 that are not already present there.
    If you have a full-load DTP, it pulls all requests from the PSA irrespective of whether they have already been loaded, which can sometimes cause duplicate-record issues.
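    The full vs. delta behaviour described above can be sketched roughly as follows. This is a conceptual Python illustration, not SAP code; the request names are made up:

```python
# Conceptual sketch of DTP delta vs. full extraction from the PSA.
# A delta DTP remembers which source requests it has already
# transferred and pulls only the new ones; a full DTP pulls every
# request every time, which can duplicate data in the target.

class DTP:
    def __init__(self, mode):
        self.mode = mode          # "delta" or "full"
        self.transferred = set()  # requests already loaded to the target

    def extract(self, source_requests):
        if self.mode == "delta":
            new = [r for r in source_requests if r not in self.transferred]
        else:  # full: every request, regardless of earlier loads
            new = list(source_requests)
        self.transferred.update(new)
        return new

psa_requests = ["REQ1", "REQ2", "REQ3"]
delta_dtp = DTP("delta")
print(delta_dtp.extract(psa_requests))  # all three requests on the first run
print(delta_dtp.extract(psa_requests))  # nothing new on the second run
```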
    Please let me know if you have any doubts .
    Regards,
    Rohan

  • What is DTP

    Hello Frends,
    I am new to this BI 7.0 and really need some help from you gurus.
    I did not understand the concept of DTP. Is it like transferring data from an InfoObject or InfoCube to another InfoObject or InfoCube?
    If I am using a DTP, how do I upload data into an InfoObject from an Excel sheet? (Not sure if this question is right, because I don't understand the purpose of DTP.)
    And what is the difference between DTP and transformation?
    Any help is really appreciated.


  • Types of DTP, difference between standard and error DTP

    Hi experts,
    What is DTP, what are the types of DTP, and what is the difference between a standard and an error DTP? How does it work in BI 7.0?
    I will assign points for your valuable answers.

    Hello ,
    You use the data transfer process (DTP) to transfer data within BI from one persistent object to another object, in accordance with certain transformations and filters. In this respect, it replaces the data mart interface and the InfoPackage. As of SAP NetWeaver 7.0, the InfoPackage only loads data to the entry layer of BI (PSA).
    The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process (the data transfer process determines the processing mode). You can use the data transfer process to separate delta processes for different targets and you can use filter options between the persistent objects on various levels. For example, you can use filters between a DataStore object and an InfoCube.
    Data transfer processes are used for standard data transfer, for real-time data acquisition, and for accessing data directly.
    1. Benefit: data "distribution" within BI capabilities (from PSA or InfoProviders to InfoProviders)
    2. Improved transparency of staging processes across data warehouse layers (PSA, DWH layer, ODS layer, architected data marts)
    3. Improved performance and high scalability
    4. Separation of the delta mechanism for different data targets: delta capability is controlled by the DTP
    5. Enhanced filtering in the data flow
    6. Repair mode based on temporary buffers (the buffers keep the complete set of data)
    See these docs for more info
    [Data Transfer Process |http://help.sap.com/saphelp_nw04s/helpdata/en/42/f98e07cc483255e10000000a1553f7/content.htm]
    [Enterprise Data Warehouse (EDW)|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a]
    [What's New with SAP NetWeaver 2004s - Detailed|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/910aa7a7-0b01-0010-97a5-f28be23697d3]
    [SAP NetWeaver 7.0 ETL and EII|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0b24053-654e-2a10-4180-b0e7c7b4c9f2]
    [How to Create Monitor Entries from a Transformation Routine (NW7.0)|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50fda171-e36e-2910-9290-e3dab26c50b5]
    FAQ on SAP NetWeaver 2004s
    [Modeling the Enterprise Data Warehousing for SAP NetWeaver 2004s FAQ|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7c2a7c65-0901-0010-5e8c-be0ad9c05a31]
    [Enterprise Data Warehousing for SAP NetWeaver 2004s FAQ|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c9f5fb91-0c01-0010-67a8-fd35946e9403]
    Thanks
    Chandran

  • Do Not Check Uniqueness of Data in Write Optimised DSO

    Hello,
    I am working with a write-optimised DSO that already has a billion records in it. The flag 'Do Not Check Uniqueness of Data' is checked in the settings (meaning it will definitely not check for uniqueness of data). I am thinking of removing this flag and activating the DSO again, because removing it will provide the option of Request ID input in the list cube on the DSO; without that, the list cube never comes back with results (and I have to analyze aggregations).
    I tried removing this flag and then activating the DSO in the production system with 17 million records, which took 5 minutes for activation (index creation). So the maths says that for a billion records the activation transport will take around 6 hours when moving to production.
    Questions:
    How does this flag check the uniqueness of a record? Will it check the active table or the index?
    To what extent will the DTP slow down subsequent data loads?
    Any other factors/risks/precautions to be taken?
    Let me know if the questions are not clear or further inputs are required from my side.
    Thanks.
    Agasti

    Hi,
    Please go through the site :
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    As far as your questions are concerned, I hope the above blog will answer most of them; if it doesn't, please read the thread mentioned below.
    Use of setting "Do Not Check Uniqueness of Data" for Write Optimized DSO
    Regards
    Raj

  • What is the difference between full load and delta load in DTP

    Hi,
    I am trying to load data into a cube from another cube using DTP.
    There are 2 DTPs:
    1: DTP with full load
    2: DTP with delta load
    What is the difference between these two in DTP?
    Can somebody please help me?

    1: DTP with full load - will update all the requests in the PSA/source to the target.
    2: DTP with delta load - will update only the new requests to the data target.
    The system doesn't distinguish new records on the basis of changed records, but rather by the request. That is why you have the data mart status to indicate whether a request has already been loaded to further data targets.
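    The request-based (rather than record-based) delta above can be illustrated with the data mart status flag. This is a rough Python sketch; the field names are made up, not the actual table fields:

```python
# Sketch of the point above: the delta is determined per *request*,
# not per changed record. A request that has already been sent to a
# further data target carries a "data mart status" flag, and a delta
# DTP simply skips flagged requests. A full DTP takes everything.

requests = [
    {"id": "REQ1", "datamart_status": True},   # already sent downstream
    {"id": "REQ2", "datamart_status": False},  # new since the last delta
]

delta_load = [r["id"] for r in requests if not r["datamart_status"]]
full_load = [r["id"] for r in requests]        # everything, every time

print(delta_load)  # prints: ['REQ2']
print(full_load)   # prints: ['REQ1', 'REQ2']
```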

  • What is the difference between DSO and DTP in BI 7.0

    Hi Gurus,
    What is the difference between DSO and DTP in BI 7.0, and how does the data come from R/3 to BI 7.0? Can you describe it?
    Points will be assigned.
    Thanks & Regards,
    Reddy.

    Hi,
    The data will be replicated in the same way as we do in 3.5:
    activating and transporting the same DataSource in BW, and replicating it in BW from R/3.
    First you need to know the difference between 3.5 and 7.0; for that, check the docs below:
    http://help.sap.com/saphelp_nw04s/helpdata/en/a4/1be541f321c717e10000000a155106/content.htm
    blogs:
    /people/sap.user72/blog/2004/11/01/sap-bi-versus-sap-bw-what146s-in-a-name
    Re: How to identify Header, Item and Schedule item level data sources?
    For Transformations in BI:
    http://help.sap.com/saphelp_nw70/helpdata/en/33/045741c0c28447e10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/f8/7913426e48db2ce10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/a9/497f42d540d665e10000000a155106/frameset.htm
    For DTP:
    DTP:
    http://help.sap.com/saphelp_nw70/helpdata/en/20/a894ed07e75648ba5cf7c876430589/frameset.htm
    For DSO:
    Data Store Objects:
    http://help.sap.com/saphelp_nw70/helpdata/en/f9/45503c242b4a67e10000000a114084/frameset.htm
    Reg
    Pra

  • What are the major differences b/w DTP & Infopackage

    HI Experts,
    What are the major benefits of DTP over InfoPackages? Also, what are the major differences between them?
    Cheers
    Swaps

    Since you are talking of DTP's, I assume you are working on BI 7
    In BI 7:
    InfoPackage - used to pull data from the source system to the PSA. The PSA is compulsory in BI 7. Beyond this, the InfoPackage doesn't play any role.
    DTP - DTPs are used to transfer data from one data provider to another.
    It can be:
    PSA -> DSO/Cube
    Cube -> Cube
    DSO -> Cube
    DSO -> DSO, and so on.
    Hope this helps
    Godhuli

  • What is the difference in create/delete indexes for InfoCube and DTP?

    Hi experts,
    Can anyone tell me the difference between the create/delete index process for an InfoCube and create/delete index for a DTP or InfoPackage? If I add an InfoPackage, then a DTP, then a Delete Request process into a chain, the system generates delete/create index processes for all of them by default. Is it necessary to leave them all, or is it enough to leave only the ones for the InfoCube?
    Thank you,
    Tigr_Z

    Hi Tigr,
    This delete/create index step is there to make the load faster. It can be done on an InfoCube. Before loading data into an InfoCube, SAP suggests deleting the index, and after loading, creating the index again.
    If you want to execute these steps through a process chain, there are process types for delete/create index. They relate to the InfoCube, not to any DTP/InfoPackage. Before the load to the cube, add one DROPINDEX step (deleting) in the chain; after the load completes, add one INDEXING step (creating) in the chain.
    Thanks,
    Indrashis

  • DTP Extraction tab: what does the checkbox 'use aggregates' mean?

    Hello BW experts
    Can someone explain in simple words what the checkbox 'use aggregates' on the Extraction tab of the data transfer process means?
    The F1 help gives the following explanation - however, I don't understand it:
    'Should aggregates be used? This also means that an additional selection condition is automatically inserted that restricts to rolled up data requests.'
    Best regards and thank you to help to understand
    Christian
    PS: We are loading data from cube to cube

    Hi Christian,
    I have never tried it myself, but I guess it means that if aggregates are built on the cube from which the DTP is extracting data, you can choose to extract the data from the aggregates instead.
    An aggregate is the same as a cube, but with fewer characteristics, and holds data at a summarized level.
    If this summarized level is sufficient for the load to downstream InfoProviders, you can choose to extract from the aggregates.
    This means less data is read during extraction, so extraction is faster.
    SAP excerpt
    With InfoCubes as the source: Use extraction from aggregates
    With InfoCube extraction, the data is read in the standard setting from the fact table (F table) and the table of compressed data (E table). To improve performance here, you can use aggregates for the extraction.
    Select data transfer process Use Aggregates on the Extraction tab in the DTP maintenance transaction. The system then compares the outgoing quantity from the transformation with the aggregates. If all InfoObjects from the outgoing quantity are used in aggregates, the data is read from the aggregates during extraction instead of from the InfoCube tables.
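    The containment check described in that excerpt can be sketched roughly as follows. This is conceptual Python, not SAP code; the aggregate and InfoObject names are made up:

```python
# Sketch of the "Use Aggregates" check: if every InfoObject required by
# the transformation's outgoing set is contained in some aggregate, the
# (smaller) aggregate is read instead of the InfoCube's E/F fact tables.

aggregates = {
    "AGGR1": {"0MATERIAL", "0CALMONTH"},
    "AGGR2": {"0MATERIAL", "0PLANT", "0CALMONTH"},
}

def pick_source(required_infoobjects):
    """Return the first aggregate covering the required set, else the cube."""
    for name, infoobjects in aggregates.items():
        if required_infoobjects <= infoobjects:   # subset test
            return name
    return "CUBE (E/F tables)"

print(pick_source({"0MATERIAL", "0CALMONTH"}))   # prints: AGGR1
print(pick_source({"0MATERIAL", "0CUSTOMER"}))   # prints: CUBE (E/F tables)
```

Because an aggregate holds summarized data over fewer characteristics, reading it touches far fewer rows than reading the full fact tables.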
    Regards,
    Sunmit.

  • What is Error code "@43@####" in DTP, Temporary Data?

    I am getting "@43@####" in the Status field for some records in the DTP's temporary data storage.
    Please let me know the solution. Thanks.
    Rammohan

    Hi,
    I guess you have some disallowed signs/characters in one field. Check the PSA for that and try to correct the values there.
    regards
    Siggi

  • What are the error handling options in DTP for a DSO?

    hi friends,
    What is the use of each error handling option in DTP?
    1) Deactivated
    2) No update, no reporting
    3) Valid records update, no reporting (request red)
    4) Valid records update, reporting (request green)
    What is the individual effect on my DSO? Please explain a scenario:
    for my DSO, what should I select, and why?
    regards
    suneel.

    Check this help text - it is self-explanatory. Try pressing F1; it is very informative and self-explanatory most of the time.
    Switched off
    If an error occurs, the error is reported as a package error in the DTP monitor. The error is not assigned to the data record. The cross-reference tables for determining the data record numbers are not built; this results in faster processing.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
    No update, no reporting (default)
    If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
    Update valid records, no reporting (request red)
    This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that are not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor).
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.
    Update valid records, reporting possible
    Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out.
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.
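    The four options above can be summarised as a small decision table. This is a conceptual Python sketch of the help text, not SAP behaviour code:

```python
# Decision table for the DTP error-handling options described above:
# what happens to valid records, where bad records go, and whether the
# request is released for reporting when a package contains errors.

OPTIONS = {
    "deactivated":                  dict(valid="rolled back", bad="not tracked",
                                         error_stack=False, reporting=False),
    "no update, no reporting":      dict(valid="rolled back", bad="highlighted",
                                         error_stack=False, reporting=False),
    "valid records, request red":   dict(valid="updated", bad="to error stack",
                                         error_stack=True, reporting=False),
    "valid records, request green": dict(valid="updated", bad="to error stack",
                                         error_stack=True, reporting=True),
}

choice = OPTIONS["valid records, request red"]
print(choice["error_stack"], choice["reporting"])  # prints: True False
```

The table makes the trade-off visible: the first two options terminate the whole request, while the last two keep the valid data and park the bad records for an error DTP.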

  • What are the tables that Contains DTP Filter Entries

    Hello,
    For the filter entries on a DTP process I found the table RSBKSELECT ("Selections for DTP Request (Summary)"), but that only holds values that are generated generically (by formula etc.), not the ones that are entered manually.
    Does anybody know the table for the rest of the entries?
    I need this information to report these filters as selection criteria.
    Thank for any help
    Henning

    Hi,
    Please note that I was able to link the tables with this kind of code:
    "Get the request from the cube, to identify them.
        CALL FUNCTION 'RSSM_ICUBE_REQUESTS_GET'
          EXPORTING
            i_infocube     = LV_IC
          IMPORTING
            e_t_request    = LT_REQUEST
          EXCEPTIONS
            wrong_infocube = 1
            internal_error = 2
            others         = 3.
    "Check each request
        LOOP AT LT_REQUEST INTO LS_REQUEST.
          "Populate LV_REQUID_CHAR to be able to read the table RSBKSELECT
          "The key is : Request id, without leading 0, right justified, and shifted one time to the left.
          LV_REQUID_CHAR = LS_REQUEST-PARTNR.
          SHIFT LV_REQUID_CHAR LEFT DELETING LEADING '0'.
          SHIFT LV_REQUID_CHAR RIGHT DELETING TRAILING SPACE.
          SHIFT LV_REQUID_CHAR LEFT.
          "Get the filters
          SELECT  *
            INTO TABLE LT_RSBKSELECT
            FROM RSBKSELECT
            WHERE REQUID = LV_REQUID_CHAR.
    ENDLOOP.
    The internal table LT_RSBKSELECT will now contain all the filters used by the DTP.
    This is much better than using the field that contains the concatenation of all filters, since that field is limited to 255 characters.
    Hope it helps.

  • What is the use of parallelization in loading an InfoCube with the help of DTP

    What is the use of parallelization in loading an InfoCube with the help of DTP?

    An index can improve reading performance when data is searched by values of fields contained in the index.
    Whether the index is actually used depends on the database optimizer, which decides after taking the following into consideration:
    - the size of the table
    - the fields in the index compared to the fields in the statement
    - the quality of the index (its clustering factor)
    One drawback of indexes is that they decrease writing performance, as they have to be maintained during every write to the table.
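    The read/write trade-off can be illustrated with a toy secondary index. This is conceptual Python, nothing SAP-specific:

```python
# Toy illustration of the index trade-off: a secondary index (here a
# plain dict) makes key lookups O(1) instead of a full scan, but it
# must be maintained on every write, slowing inserts down slightly.

table = []   # the "fact table": a list of (key, value) rows
index = {}   # secondary index: key -> row position

def insert(key, value):
    table.append((key, value))
    index[key] = len(table) - 1   # extra maintenance work on every write

def lookup_with_index(key):
    return table[index[key]][1]   # direct access instead of scanning

insert("A", 10)
insert("B", 20)
print(lookup_with_index("B"))  # prints: 20
```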
    Hope it helps you,
    Gilad
