HCM Best Practices version 1.500 (HCM Data Transfer Toolbox)

Hi,
Does anyone have the HCM Data Transfer Toolbox (BPHRDTT, HR-DTT, HRDTT)? It seems to have been discontinued and replaced by LSMW, which does not provide anything even close to this tool!
I wish it had been posted in the Downloads section on SDN.
Thanks
Check out the following link to see what I mean:
http://help.sap.com/bp_hcmv1500/HRBP_DTT_V1_500/index.htm

Hi Uwe,
Thanks for giving it a try, but my question is very specific, for the reason that SAP has discontinued a perfectly good tool and replaced it with a do-over.
LSMW - is not always suitable, and for large loads the performance is out of whack. You always have to start from scratch creating everything.
PU12 - The Interface Toolbox is nice, but it could offer something more, like XML or MHTML formatting, so you can actually reuse the data without pain. It is also not useful for payroll extraction of only the deltas from retro results.
HR-DTT - was a tool that had all you really needed and was working just fine. It covered everything, not just infotypes but also the T558B/C/E loads (although I prefer T558D over T558C). Here SAP is totally lacking support: the two BAPIs for BUS1023 do not cover it all, as one is only for outsourcing and the other only for the B/D tables. In the code, the E-table update is commented out, since they forgot to add the corresponding structure for the load as an importing parameter (one programmatic fallback is sketched below)...
SXDA - a rather unknown Data Transfer Workbench which also uses projects to organize the data transfer and hooks up to LSMW as well, but again, that is where the tools end. There are no example projects for any area that are of any use. SCM/SRM has some minor examples, but nothing that could be called exhaustive.
The Link: Thanks to SAP support this link has been discontinued, so that no one can ask questions on this issue any more. You may still find the documentation on the Service Marketplace in 50076489.ZIP for HRBP_USA_V1500.
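
For anyone hitting the same wall: when the DTT is gone and its BAPIs are incomplete, one programmatic fallback is to call a remote-enabled function module of your own from outside the system, e.g. via SAP JCo. Below is a minimal JCo 3 sketch; the destination name LOAD_DEST, the function module Z_HR_T558B_LOAD, and its parameter names are hypothetical placeholders for whatever RFC wrapper you build, since the standard BUS1023 BAPIs do not expose the full T558B/C/E load.

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

public class T558bLoadSketch {
    public static void main(String[] args) throws JCoException {
        // Connection details (host, client, user) come from a
        // LOAD_DEST.jcoDestination properties file on the classpath.
        JCoDestination dest = JCoDestinationManager.getDestination("LOAD_DEST");

        // Z_HR_T558B_LOAD is a hypothetical custom RFC wrapper around the
        // T558B insert; it is NOT a standard SAP function module.
        JCoFunction fn = dest.getRepository().getFunction("Z_HR_T558B_LOAD");
        if (fn == null) {
            throw new IllegalStateException("RFC not found in repository");
        }

        // Hypothetical table parameter mirroring T558B key fields.
        JCoTable rows = fn.getTableParameterList().getTable("T558B_ROWS");
        rows.appendRow();
        rows.setValue("PERNR", "00001234"); // personnel number
        rows.setValue("BEGDA", "20090101"); // period begin date
        rows.setValue("ENDDA", "20090131"); // period end date

        fn.execute(dest); // synchronous RFC call
    }
}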

Similar Messages

  • Best Practices for converting SAP HR data (4.7 to ECC)

    Hello Experts ...
    We are going from 4.6 to ECC. It is not an upgrade; it will be a new implementation.
    I am looking for best practices to convert SAP HR data from one SAP instance (4.6) to another (ECC).
    I am not sure if direct input, LSMW, or some other method/tool is the best way.
    I will really appreciate (and award points for) any good advice or documentation.
    Let me know if my question is not clear enough.
    Thanks,

    Hi
    You can check the SAP Service Marketplace and follow the download link there. You will need a Service Marketplace login to download the Best Practices. Fortunately, Best Practices for ECC 5.00 and 6.00 are available there, but they are country-specific versions. I know HCM for the US is available.
    Reward points, if helpful.
    Regards
    Waz

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting as a given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Best Practices to separate voice and Data vlans

    Hello All .
    I am coming to the community to get some advices on a specific subject .
    One of my customers is currently using VLAN access lists to isolate its data traffic from its voice VLAN traffic.
    As most of us know, VLAN ACLs are very difficult to deploy and manage at an access-port level that is highly mobile. Because of these management issues, they have been looking for a replacement solution based on firewalls, but apparently the price of that solution was far too high.
    Can someone guide me towards security best practices for data and voice VLAN traffic isolation, please?
    thanks
    Regards
    T.

    thomas.fayet wrote: Hi again Collin, may I ask what type of FW / switches / IOS version you are using for this topology? Also, is the media traffic going through your FW if one voice VLAN wants to talk to another voice VLAN? rgds
    Access Switches: 3560
    Distro: 4500 or 6500
    FW: ASA5510 or Juniper SSG 140 (phasing out the Junipers)
    It depends. In the drawing above, no voice traffic would leave the voice enclave until it talks to a remote site. If we add other sites to the drawing, at a minimum call-sig would traverse the firewall and depending on the location of the callers, all voice traffic may cross the firewall. All of that depends on how you have your call managers/vm/voice gateways designed and where the callers are.

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the category dimension, so you can use that dimension to separate the actual and plan data.
    Hope this helps.

  • Data Transfer Toolbox for ECC 6.0

    I am on ECC 6.0 and I am trying to install the Data Transfer Toolbox (DTT) for HCM. I installed the ERP05V4.SAR file (which is in the 600-ERP-DATA folder), but there are no DTT tcodes. A colleague of mine told me he has used this in an earlier version of SAP (ECC 5.0): he installed BPERPVD.SAR (which is in the 500-ERP-DATA folder) and the DTT was installed. So it seems that this does not exist for ECC 6.0, but we need it, so I am looking for help!
    I downloaded all of these files from OSS: Downloads -> SAP installations and upgrades -> Enter by application group -> SAP Best Practices -> Best Practices for mySAP All-in-One -> SAP BP for HCM -> HCM V1.500 -> Installation.
    Thanks
    Mike

    Hi Michael,
    Were you able to get DTT installed for ECC 6.0? We are in a similar situation, trying to get DTT loaded in ECC 6.0.
    Thanks.

  • HR Data Transfer Toolbox - Initial Conversion

    Our project is trying to use the HR Data Transfer Toolbox to initially load all employees using the best-practice configured action 'X'. However, we have turned on concurrent employment in our configuration settings, and this is causing a BDC error: the BDC recording uses screen SAPMP50A 2000, but the screen called when you run the BDC session is SAPMP50A 2200. Has anyone come across this issue, and what is the proposed resolution?

    I am looking for the same thing. From what I can tell, you need to purchase this separately; it's not free. Try the following links and let me know if you find something different.
    https://websmp104.sap-ag.de/bestpractices
    then click ‘cross-industry packages’ and then HCM

  • Data transfer toolbox dump

    Hi,
    I am using the SAP standard Data Transfer Toolbox (transaction /n/hrbp/zdtt) to upload my OM-related data. Initially I am trying to upload infotypes 1000 and 1001, but I am getting a short dump with the description below.
    "The ABAP program lines are wider than the internal table. The current ABAP program "/HRBP/ZHDX1000" had to be terminated because it has come across a statement that unfortunately cannot be executed."
    Could anyone help if they have encountered such a problem in DTT?
    Regards
    Shyam.

    Hi,
    We encountered the same problem while using the DTT toolbox. We had to copy the ZHDX1000 program to another Z program and change the required INCLUDE statement, and now it runs, though from that custom report rather than from the DTT tool itself.
    Also, some settings need to be altered in the user profile of whoever runs the tool: the UGR parameter is set to 99 and MOL is set to the country for which the program is run.
    Regards,
    Amit
    When we executed the DTT program, we followed the dump and found that one of the includes had a missing PERFORM; the developer took care of that one, and now it runs through that custom program.

  • HR Data Transfer Toolbox

    Hi,
    I am trying to create an employee record in the HR module using the HR Data Transfer Toolbox. But when I typed in transaction code ZBPHR_ZDTT, an error came up saying the transaction doesn't exist.
    Can anyone tell me how I can get into this toolbox?
    Thank you in advance,
    Sunny

    I am looking for the same thing. From what I can tell, you need to purchase this separately; it's not free. Try the following links and let me know if you find something different.
    https://websmp104.sap-ag.de/bestpractices
    then click ‘cross-industry packages’ and then HCM

  • Best practice for having separate clone data for development purposes?

    Hi
    I am on a hosted Apex environment
    I have a workspace containing two instances/ copies of the application: DEV and PROD
    I would like to be able to develop functionality and data in/with the DEV instance and then move it into PROD.
    I gather that I can insert pages from DEV to PROD via Create -> New page as copy -> Page in another application
    But I don't know how I can mimic this process with database objects, eg. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing in a PROD environment.
    Ideally this would be done in such a way that minimises changing table names etc when elevating pages from DEV to PROD.
    Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
    Any tips, best practices appreciated :)
    Thanks

    Hi,
    ideally you should have a little more separation between your dev and prod environments. At a minimum you should have separate workspaces, each addressing separate schemas. Apex can be a little difficult if you want to move individual Apex application objects, such as pages, between applications (a much-requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup for your export files.
    As far as database objects go (tables etc.), if you have TNS access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize your development and production schemas; objects in the different environments should have identical names. If you don't have that access, then you can use the Apex SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, scripts for creating and upgrading your database schemas should be kept under some sort of version control (a small JDBC sketch after this reply shows one way to dump schema metadata for comparison).
    All of this supposes your hosting solution allows more than one workspace and schema; if not, you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, ensuring you don't have any version conflicts between the different database object features and the Apex version.
    I hope this helps.
    Regards
    Andre
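
    As a footnote to Andre's point about synchronizing schemas: if you have JDBC access to both environments, dumping column metadata per schema gives you something you can diff even without SQL Developer. This is a minimal sketch; the connection URL, credentials, and schema name are placeholders for your hosted environment.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SchemaDump {
        // Prints one "TABLE.COLUMN TYPE" line per column in the given schema.
        // Run it once against DEV and once against PROD, then diff the outputs.
        public static void dump(String url, String user, String pass, String schema)
                throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass)) {
                DatabaseMetaData md = con.getMetaData();
                try (ResultSet rs = md.getColumns(null, schema, "%", "%")) {
                    while (rs.next()) {
                        System.out.printf("%s.%s %s%n",
                                rs.getString("TABLE_NAME"),
                                rs.getString("COLUMN_NAME"),
                                rs.getString("TYPE_NAME"));
                    }
                }
            }
        }

        public static void main(String[] args) throws SQLException {
            // Placeholder URL, credentials and schema name.
            dump("jdbc:oracle:thin:@host:1521:XE", "dev_user", "secret", "DEV_SCHEMA");
        }
    }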

  • In the Begining it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code, I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables, but certainly in my experience it is rare that there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used).
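
    To make the question above concrete: here is a minimal sketch of the kind of row class being described, with attributes for the columns, accessor methods, and a loader that maps a JDBC result set onto instances. The CUSTOMER table and its columns are invented for illustration; they stand in for whatever the third-party ODBC driver exposes from the COBOL ERP.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    // One row of a hypothetical CUSTOMER "table"; the fields mirror its columns.
    public class Customer {
        private final String id;
        private final String name;

        public Customer(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId()   { return id; }
        public String getName() { return name; }

        // Maps each row of the result set onto a Customer instance.
        public static List<Customer> loadAll(Connection con) throws SQLException {
            List<Customer> out = new ArrayList<>();
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT CUST_ID, CUST_NAME FROM CUSTOMER")) {
                while (rs.next()) {
                    out.add(new Customer(rs.getString("CUST_ID"),
                                         rs.getString("CUST_NAME")));
                }
            }
            return out;
        }
    }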

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew


  • TDMS HCM data transfer - no data transferred to receiver and no errors

    Hi,
    I am experiencing problems with a "Package for HCM Personnel Dev. PD & PA Receiver" for my client, who has approximately 800 employees.
    I have no errors in any of the phases, and I can see the TDMS SEND and RECEIVER jobs executing during the data transfer. However, the data transfer phase executes for approximately 1-2 minutes, and when I check the tables in the RECEIVER, tables such as HRP1001 are empty.
    There are 42 tables selected, and CNVHCM_TR_TAB contains 1533 entries.
    Below is a sample from the 'Transfer Selection Criteria' phase:
    Transfer Target - System ECR / Client 210
    The Transfer Program ran in PA Delete Mode
    The Transfer Program ran in PD Delete Mode
    Object Type: C // Total number: 5
    Object Type: O // Total number: 5
    Object Type: P // Total number: 28
    Object Type: S // Total number: 31
    Table HRITABNR - Time(ms) 10 - Entries 3
    Table HRP1000 - Time(ms) 57 - Entries 65
    Table HRP1001 - Time(ms) 39 - Entries 158
    Table HRP1007 - Time(ms) 38 - Entries 5
    Table HRP1018 - Time(ms) 39 - Entries 3
    Table HRT1018 - Time(ms) 17 - Entries 3
    Table PA0000 - Time(ms) 53 - Entries 47
    Table PA0001 - Time(ms) 83 - Entries 76
    So I would reasonably assume that if there are no errors anywhere in the Sender, Receiver or the Central system, table HRP1001 would contain an extra 158 entries and PA0001 would contain 76 entries. This is not the case: PA0001 is empty and HRP1001 contains the same number of rows as before the data transfer executed.
    Has anyone come across this situation before, or got any suggestions on troubleshooting the cause of no data transfer when all the phases are green?
    Any assistance is appreciated. Regards, Sheryl.

    Hi Poonam,
    The reason there are no extra PA* tables above is that this was just a short snapshot of the table list; those tables are included.
    I have selected the following for the Data Transfer:
    Plan Version = 01
    Object Type = O
    Object ID = 500383 (Contracts)
    Objects Status = All Existing
    Evaluation Path = BSSORG
    Status Vector = 1
    PD Selection tab:
    Use Current PD Selections - ticked
    Transfer PD Infotypes - 0000 to 9402 - ticked
    All other PD values unticked
    PD Delete and Target options:
    Delete target area - ticked
    Target plan version = 01, and transfer 1:1 without change chosen
    Root options:
    Without new root - chosen
    PA Selections
    Use current PA selections - ticked
    Transfer Master Data - Infotypes 0000 to 9402 - ticked
    Transfer Cluster Data - ticked
    Transfer Central Person - ticked
    Partial cut-off date - 01.07.2011
    PA Delete and Target Options
    Delete Target Area - ticked
    Target PERNR Options - Target range - 700000 to 99999999, maximum number range= 99999, transfer 1:1 without change (chosen)
    When I choose CONFIRM ONLINE, the only warning messages I receive in the log file are the following (there are no error messages):
    No CP Object Type records will be transferred from the sender system
    Table PA3xxxx / PERNI is not registered as being released
    Table HRPxxxx / PERNI is not registered as being released
    Table PA09xx / PERNI is not registered as being released
    Hi Toribio,
    When I run the test with PD and PA authorisation with granularity 04, I get no messages on the next screen. All users have SAP_ALL, SAP_NEW, SAP_TDMS_MASTER and SAP_TDMS_HCM_MASTER, plus extra specific HR authorisations for PA and PY.
    Thank you both for your assistance; hopefully we can get to the bottom of this problem.
    Regards,
    Sheryl.

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet, but have NOT even created the chart yet, and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas (but NO CHART yet) and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1) Am I doing something wrong, and did I miss something that would prevent this problem?
    2) If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
       a. Do I have to create 50 different Data Ranges in order to improve performance (i.e. each Country-Category would have a separate range)?
       b. Would it even work if it had that many data ranges in it?
       c. Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel?
       d. Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David
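
    A note on the concatenated-key technique described above, restated outside Excel: the SUMIF over a helper key column is equivalent to grouping rows on a composite Country|Category|Base|Month key and summing, as in this small Java sketch (the sample rows are invented).

    import java.util.LinkedHashMap;
    import java.util.Map;

    // SUMIF over a "Country|Category|Base|Month" helper column is the same
    // as grouping on that composite key and summing the counts.
    public class ConcatKeySum {
        public static void main(String[] args) {
            // Invented sample rows: country, category, base, month, txnCount.
            String[][] rows = {
                {"DE", "Fuel", "BaseA", "2009-01", "120"},
                {"DE", "Fuel", "BaseA", "2009-01", "80"},
                {"DE", "Food", "BaseA", "2009-01", "40"},
                {"JP", "Fuel", "BaseB", "2009-01", "55"},
            };

            Map<String, Integer> totals = new LinkedHashMap<>();
            for (String[] r : rows) {
                String key = r[0] + "|" + r[1] + "|" + r[2] + "|" + r[3];
                totals.merge(key, Integer.parseInt(r[4]), Integer::sum);
            }

            // Prints e.g. "DE|Fuel|BaseA|2009-01 = 200".
            totals.forEach((k, v) -> System.out.println(k + " = " + v));
        }
    }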

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any dblinks from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections
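
    A rough sketch of that recipe using java.util.Timer and plain JDBC. The connection URLs, credentials, and table/column names are placeholders; a real job would also want logging, batching, and a high-water-mark column so that only new rows are moved each run.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Timer;
    import java.util.TimerTask;

    public class HalfHourlyTransfer extends TimerTask {
        @Override
        public void run() {
            // Placeholder URLs and credentials.
            try (Connection src = DriverManager.getConnection(
                         "jdbc:sqlserver://mssql-host;databaseName=erp", "user", "pass");
                 Connection dst1 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@ora-host-1:1521/DB1", "user", "pass");
                 Connection dst2 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@ora-host-2:1521/DB2", "user", "pass")) {

                dst1.setAutoCommit(false);
                dst2.setAutoCommit(false);

                // For each row read from SQL Server, insert into both Oracle DBs.
                try (Statement st = src.createStatement();
                     ResultSet rs = st.executeQuery("SELECT ID, PAYLOAD FROM SOURCE_TABLE");
                     PreparedStatement ins1 = dst1.prepareStatement(
                             "INSERT INTO TARGET_TABLE (ID, PAYLOAD) VALUES (?, ?)");
                     PreparedStatement ins2 = dst2.prepareStatement(
                             "INSERT INTO TARGET_TABLE (ID, PAYLOAD) VALUES (?, ?)")) {
                    while (rs.next()) {
                        ins1.setInt(1, rs.getInt("ID"));
                        ins1.setString(2, rs.getString("PAYLOAD"));
                        ins1.executeUpdate();
                        ins2.setInt(1, rs.getInt("ID"));
                        ins2.setString(2, rs.getString("PAYLOAD"));
                        ins2.executeUpdate();
                    }
                }
                dst1.commit();
                dst2.commit();
            } catch (Exception e) {
                e.printStackTrace(); // a real job would log and alert
            }
        }

        public static void main(String[] args) {
            // Non-daemon timer keeps the JVM alive; run now, then every 30 minutes.
            new Timer("transfer").scheduleAtFixedRate(
                    new HalfHourlyTransfer(), 0, 30L * 60 * 1000);
        }
    }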
