Best Practice: Delimiting Infotypes

Greetings, SAP Experts!
I'm relatively new to ABAP/SAP so the way I'm doing this may not be ideal.  Here's how I'm doing it but I'm curious as to whether there is a standard function module or better practice for delimiting a plan.
1) I'm reading the active 167 record
2) I'm getting the new begda from my input file and I'm changing the endda of the current record (soon to be the "old" record)  by making it:  endda = {effective-date-of-new-plan} - 1 day.
3) I'm doing a "MOD" operation via HR_INFOTYPE_OPERATION to update current 167 (changing the ENDDA)
4) I'm adding the new 167 record with begda = date in my file, endda = 99991231.
This is working fine but I just want to make sure there's not a better way.
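In code, steps 2-4 look roughly like this (a trimmed-down sketch - the read in step 1 and the file handling are omitted, and lv_pernr / lv_new_begda are placeholders for values from my input file):

* ls_p0167 holds the active 0167 record read in step 1
DATA: ls_p0167     TYPE p0167,
      ls_ret       TYPE bapireturn1,
      lv_pernr     TYPE pernr_d,
      lv_new_begda TYPE begda.

* lock the employee before touching infotypes
CALL FUNCTION 'BAPI_EMPLOYEE_ENQUEUE'
  EXPORTING
    number = lv_pernr.

* steps 2+3: pull the end date back to the day before the new plan starts
ls_p0167-endda = lv_new_begda - 1.
CALL FUNCTION 'HR_INFOTYPE_OPERATION'
  EXPORTING
    infty         = '0167'
    number        = lv_pernr
    subtype       = ls_p0167-subty
    validitybegin = ls_p0167-begda
    validityend   = '99991231'     " key of the record as it exists today
    record        = ls_p0167
    operation     = 'MOD'
  IMPORTING
    return        = ls_ret.

* step 4: create the new record, open-ended to 99991231
ls_p0167-begda = lv_new_begda.
ls_p0167-endda = '99991231'.
CALL FUNCTION 'HR_INFOTYPE_OPERATION'
  EXPORTING
    infty         = '0167'
    number        = lv_pernr
    subtype       = ls_p0167-subty
    validitybegin = ls_p0167-begda
    validityend   = ls_p0167-endda
    record        = ls_p0167
    operation     = 'INS'
  IMPORTING
    return        = ls_ret.

CALL FUNCTION 'BAPI_EMPLOYEE_DEQUEUE'
  EXPORTING
    number = lv_pernr.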
Thanks!

Hi Steve,
For benefit plan delimiting there are standard function modules; I would prefer these standard FMs over HR_INFOTYPE_OPERATION for delimiting benefit plans, since they perform some extra validations and checks. The FMs are:
HR_BEN_TERMINATE_HEALTH_PLAN - 0167
HR_BEN_TERMINATE_INSURE_PLAN - 0168
HR_BEN_TERMINATE_SAVING_PLAN - 0169, etc.
Regards,
Shrinivas

Similar Messages

  • Infotype header - best practice

    Hi group
    Today our infotype headers are created in a way that makes us have to break a lot of infotypes just to refresh the header.
    There are 2 reasons in my opinion:
    - We have chosen to read header on infotype begda
    - We have eg. position information in the header
    We want to change this so we don't have to break eg. address just to read correct data.
    I don't feel reading the header at sy-datum is the right choice. This can give an incorrect picture if you read an infotype and the header is inconsistent with the actual situation in the infotype's date range.
    So I am leaning towards letting the header contain much less information.
    Does anyone have some "best practice" thoughts?
    Any documentation I can read up on regarding best practice on this matter?
    Thanks in advance
    Kirsten

    Hi,
    Headers, in general, are part of the whole philosophical debate. I've seen both ways: current dated and infotype date driven. I'm an advocate of them being infotype date driven for the very same reason you mentioned: display what is on the record.
    The other school of thought is to always display current information. It also gets some leverage depending on what is being displayed. If you have OM-related data like position, job, org units etc., clients may want to always display the current information. Maybe they are trained to only pay attention to headers for ad-hoc type information; they can get historical data through other reports.
    I don't think there is any "best practice" with headers; it's a purely philosophical choice.
    Hope this helps.
    Donnie

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way of doing this using classes/methods rather than RH_INSERT_INFTY?
    If so, please supply an example.
    Thanks...
    ....Mike
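    For reference, the RH_INSERT_INFTY route I'm trying to move away from looks roughly like this (a sketch built from the values above - the plan version, status, and the rsign/relat split of 'AZCR' are my assumptions, and error handling is omitted):

    DATA: lt_p1001 TYPE TABLE OF p1001,
          ls_p1001 TYPE p1001.

    ls_p1001-plvar = '01'.          " active plan version (assumption)
    ls_p1001-otype = 'S'.           " source object: position 50007200
    ls_p1001-objid = '50007200'.
    ls_p1001-infty = '1001'.
    ls_p1001-istat = '1'.           " active status (assumption)
    ls_p1001-begda = sy-datum.      " effective today...
    ls_p1001-endda = '99991231'.    " ...through 99991231
    ls_p1001-rsign = 'A'.           " subtype 'AZCR' = rsign 'A' + relat 'ZCR'
    ls_p1001-relat = 'ZCR'.
    ls_p1001-sclas = 'S'.           " related object: position 50007202
    ls_p1001-sobid = '50007202'.
    APPEND ls_p1001 TO lt_p1001.

    CALL FUNCTION 'RH_INSERT_INFTY'
      EXPORTING
        vtask = 'D'                 " direct database update
      TABLES
        innnn = lt_p1001.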

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • SAP Human Capital Best Practice Reports

    Hi friends!!!
    I'm currently working with a client that is looking for a list of SAP Human Capital Best Practice Reports. I'm having difficulty finding a list of SAP HCM best practice reports. Does anyone have a list? Or can you point me in the right direction?
    Thanks in Advance
    Thanks & Regards

    Hi
    Program - Description
    H99CWTR0 - Wage Type Reporter. Returns pay for particular wage types. To submit from a new report you will need to create a copy and export the value to memory.
    RHGRENZ0 - Delimit IT1000 and related 1001s. The program will delete any 1001 infotypes whose start date is after the delimit date.
    RHGRENZ1 - Extend the end date on delimited records. Very useful when you delimit a bunch of records incorrectly and need to change the end date.
    RHGRENZ2 - Delimit infotypes (IT1001).
    RPCMPYG0 - Statutory Maternity Pay (SMP).
    RPCSSPG0_HIST - Statutory Sick Pay (SSP) history.
    RPDTRA00 - List all HR transactions and their uses.
    RPTPSH10 - Personal work schedule, also accessed via PA20/PA30 infotype 2001.
    RPUAUD00 - HR report to list all logged changes in infotype data for an employee. Uses the PCL4 audit cluster.
    RPUAUDDL - HR report to delete audit data from the PCL4 audit cluster.
    RPUDELPN - Delete all information for an employee number, including cluster data and infotypes.
    RPUP1D00 - View/delete records from the PCL1 cluster.
    RPUP2D00 - View/delete records from the PCL2 cluster.
    RPUP3D00 - View/delete records from the PCL3 cluster.
    RPUP4D00 - View/delete records from the PCL4 cluster.
    You can find more at this link:
    http://www.sapdevelopment.co.uk/programs/programshr.htm
    Regards
    Sri

  • Best practice for sqlldr -- direct to core or to stage first?

    We want to begin using SQL*Loader to load simple (but big) tables that have, up to this point, been loaded via Perl and its DBI connection to Oracle. The target tables typically receive 10-20 million rows per day (parsed log data from many thousands of machines) and at any one time can hold more than a billion total records PER TABLE. These tables are pretty simple (typically 5-10 columns, 2 or 3 part primary keys). They are partitioned BY MONTH (DAY is always one of the primary key columns) and set up on very large SAN disk arrays, striped, etc. I can use sqlldr to load the core tables directly, OR I could use sqlldr to load a staging table on a daily basis, then PL/SQL and SQL*Plus to move data from the staging table to the core. My instinct tells me that the second route is SAFER, that is, there is less chance that something catastrophic could corrupt the core table, but obviously this would (a) take more time to develop and (b) reduce our overall throughput.
    If I go the first route, loading the core directly with sqlldr, what is the worst thing that could possibly happen? That is, in anyone's experience, can a sqlldr problem corrupt a very large table? Does the likelihood of a catastrophic problem increase in proportion to the number of rows already in the target table? Are there strategies that will mitigate potential catastrophes besides going to staging and then to core via PL/SQL? For example, if my core is partitioned by month, might I limit potential damage to only the current month? Are there any known potential pitfalls to using sqlldr directly in this fashion?
    Thanks
    matthew rapaport
    [email protected]

    Wow, thanks everyone!
    1. External tables... I'd thought of this, but in our development group we have no direct access to the DBMS server, so we'd have to do some workflow to move the data files to the DBMS server and then write the merge. If SQL*Loader will do the job directly (to the core) without risk, then that seems to be the most straightforward way to go.
    2. The data in the raw files is very clean, this being done in the step that parses the raw logs (100-500 MB each) into the "insert files" (~20 MB each), and there would be no transformations in moving data from staging to core, so again that appears to argue for direct-to-core loading.
    3. The data is collected by DAY, but reported on mostly by MONTH (e.g., select day, sum(col), count(col), from TABLE where day between A and B, group by day, order by day, etc where A and B are usually the first and last day of the month) and that is why the tables are partitioned by month, but perhaps this is not the best practice (???). I'm not the DBA, but I can make suggestions... What do you think?
    4. Time to review my sqlldr docs! I haven't used it in a couple of years, and I'm keeping my fingers crossed that it can handle the particular delimiter used in these files (pipe-tab-pipe, expressed in Perl as "|\t|"). If I recall it can, but I'm not sure how to express the tab...
    Meanwhile, thank you very much, you have all been a BIG help... Strange no one asked me how it was that a Microsoft company was using Oracle :-) ... I work for DANGER INC (was www.danger.com if anyone interested) which is now owned (about 9 months now) by Microsoft, and this is the legacy reporting system... :-)
    matthew rapaport
    [email protected]
    [email protected]

  • Best practice: parameters, reports and control flow

    I am developing an application that has a number of different reports, each of which has a combination of similar parameter LOVs.
    I defined the LOVs on page 0, with a corresponding _DISPLAY hidden field for each one, each set to display conditionally when its _DISPLAY item is Y. I have a page process on each page with a standard block that sets the appropriate _DISPLAY items to Y or N depending on whether they are needed on that page.
    It is becoming difficult to maintain, and I would prefer to have a single block of code that is called when entering each page for the first time, where a CASE statement can switch the various LOVs on and off for each page by setting their corresponding _DISPLAY hiddens.
    I cannot find a clear answer for this in the forums, and I am not very clear on whether it is possible, or whether it is best practice.
    If anyone has any advice, please let me know!!
    Thanks
    Mark

    Hi Mark,
    One of the first points of best practice in Apex is that any non-trivial chunks of PL/SQL coding should be centralised in the database as stored code.
    In your case, your generic code would check the page that is being loaded and through a case statement, selectively set values to display the required fields for that page. One problem with this is that you still need to modify this procedure every time you add a new page.
    An alternative to this would be to do away with the _DISPLAY items and have each LOV item's Condition type set to
    Current Page is Contained Within Expression 1 (Comma delimited list of pages)
    You then only need to list the pages the item is available for as a comma separated list in Expression 1.
    You could go even further by storing the display logic for each LOV item in tables in the database and make this completely dynamic, but this may be seen as overkill.
    Regards
    Andre

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer that is constantly growing its base of centers. This growth has turned a workload that used to be manageable with but two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily, and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say: I cannot tell Data Merge to start at Cell 1 in one column and, in another column, select say... Cell 42 as the starting point.
    3) Data Merge only accepts data organized in a very specific, and generally inflexible, pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA, aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository/delivery system that helps us quickly get data from our SQL database into a usable form in InDesign.
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic data sets, like a list of 200 stores per center over 40 centers, based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go, rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part. Perhaps setting up a master page would allow for easy drag-and-drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution. A tall order, I know. Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
    <BRANDS>
         <BRAND BrandID="xxxx">
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY xmlns="URL" WebMoniker="category_type">
              <STORES>
                   <STORE StoreID="ID#" CenterID="ID#">
    I don't think this is currently usable, because if I wanted to create a list of stores for a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator/designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Best Practice - Changing description for Org Unit/Position or creating new?

    Hello Freinds,
    I just want to know from your experience what's normally practiced in your implementations for OM :-
    For scenarios where there is a need to change the description of a particular org unit or position, do you:
    1. just change the description effective a particular date (to maintain history) or
    2. put an end date to those objects and create new ones ?
    Solution 1 is quick and easy, but in IT0001 the description displayed is as of the start date of that infotype, which is normally a date prior to the change in description of those objects. As a result this infotype keeps displaying the old description.
    Is there any way to change this display to show the current description instead of the description as of the start date of this infotype?
    Solution 2 calls for a lot of related activities: say I create a new org unit and delimit the old one - then I have to move all the sub org units and positions into the new one... which is quite time consuming and doesn't really seem practical.
    How do you manage such scenarios?
    Thanks
    Allen

    We use option #1, although I am not sure this is best practice.  Using option 1 for positions makes it challenging when it comes to reporting on length of time in position.  We frequently have the scenario where a person is reorged not because they applied for another position, but just because the big wigs want to move around the chess pieces.  In these cases we simply modify the position attributes and then run a PA action.  Then if you run a query and use the standard delivered Length of time in position field, it appears as if the person has been in the same position for years (which they have), but their position has been retitled, re-graded, and re-orged numerous times.  This makes it very difficult to get to an employee's length of time in their role. 
    This is a great discussion question I hope more people respond with what they do and why.

  • Best approach-several infotypes requested to be extracted/modeled in SAP BI

    Hello Gurus!
    The business wants to pull the data from about 150 HR infotypes (including 9-series) into SAP BI. The requirement has been blessed and it is a go (maybe the total number might increase over 150 but not decrease)!! The main premise behind such a requirement is to be able to create ad-hoc reports from BI (relatively quickly) on these tables.
    Now, has anyone of you heard of such a requirement - and if so, what is 'best practice' here? Do we just blindly keep modeling/creating 150 DSOs in BI, or is there a better way to do this? Can we at least reduce the workload of creating models for all these infotypes somehow (maybe create views, etc.)?
    Any kind of response is appreciated. Thank you in advance. Points await sensible responses.
    Regards,
    Pranav.

    Personally, I'd say the best approach for this would be not to extract the Infotypes at all and use Crystal Reports to generate ad-hoc queries directly against the Infotypes in your R3/ECC environment. This would eliminate the need to extract and maintain all of that data in BW and unless you have SAP Business Objects Enterprise installed on top of SAP BW, "relatively quick" ad-hoc queries in BW is not what I would call a common theme (BEx Analyzer, which is what I'm assuming is your reporting UI solution for this, isn't exactly user-friendly).
    If you must bring all of these Infotypes into SAP BW, creating views of Infotype data may be your best bet. It would definitely reduce the number of repositories in your BW environment, thereby reducing your maintenance time and costs.

  • Best Practice to copy IT0758 subtype ABC to subtype DEF

    What is the best practice for creating a record in the same infotype but with a different subtype - say, copying IT0758 subtype ABC to subtype DEF? Thanks!

    Call FM HR_INFOTYPE_OPERATION to create the new infotype record.
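    A minimal sketch of that approach (the read step and names are placeholders; locking and error handling are omitted):

    DATA: ls_p0758 TYPE p0758,
          ls_ret   TYPE bapireturn1.

    * ...fill ls_p0758 from the existing subtype ABC record
    * (e.g. via FM HR_READ_INFOTYPE), then switch the subtype...
    ls_p0758-subty = 'DEF'.

    CALL FUNCTION 'HR_INFOTYPE_OPERATION'
      EXPORTING
        infty         = '0758'
        number        = ls_p0758-pernr
        subtype       = ls_p0758-subty
        validitybegin = ls_p0758-begda
        validityend   = ls_p0758-endda
        record        = ls_p0758
        operation     = 'INS'
      IMPORTING
        return        = ls_ret.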

  • SAP HCM Implementation: Best Practice for configuring

    Hi,
    This is my first independent project of HCM implementation. I have just started the system configuration. I am done with setting up the PA, PSA, EG and ESG, and assigning them to the CC. At this stage I have a very basic question: what is the best practice for the next steps of configuration? What do I go to next - setting up OM in the SAP Easy Access menu? How should I go from here?
    Would really appreciate some explanatory assistance.
    Thanks in advance.
    Papri
    Edited by: papri_rc on Jul 8, 2011 6:40 AM

    It all depends on the business requirement.
    To start, as I advised, review your BBP and make sure you configure everything.
    For your reference, here is the configuration data for OM and PA.
    As part of OM:
    1. Depict the client org structure using Simple Maintenance; with this you can create large structures in less time (while doing the org structure, be careful and refer to the BBP)
    2. Maintain integration switches
    3. Maintain plan version
    4. Maintain number ranges
    Configuration for PA
    HR Enterprise/Personnel Structure
    • Personnel Areas
    • Personnel Subareas
    • Employee Group
    • Employee Subgroup
    • Assignment of Personnel Area to Company Code
    • Assignment of Employee Subgroup to Employee Group
    Basic Settings
    1. Maintain Number Range Intervals for Personnel Numbers
    2. Determine defaults for number ranges
    Personal Data
    1. Create Forms of Address
    2. Create Marital Status
    Family
    1. Define Possible Family Members
    Addresses
    1. Create Address Type
    Communication
    1. Create Communication Types
    Contractual and Corporate Agreements
    1. Define Contract Types
    2. Determine periods of notice
    Employee Qualifications
    1. Create education establishment types
    2. Define Education Training
    3. Create educational Certificates
    4. Create branches of study
    5. Determine permissible certificates for education type
    Infotype Menus
    1. User Group Dependency on Menus and Infogroups
    2. Infotype Menu
    3. Determine choice of Infotype menus
    4. Infotype Menus
    Actions
    1. User Group Dependency on Menus and Infogroups
    2. Info Group
    3. Personnel Action Types
    4. Create reasons for personnel actions
    5. Change Action Menu
    Developments (ABAP consultant will do): Field Enhancements (any field enhancements in infotypes)
    Customer Infotypes: develop any customer infotypes, if required for the business, from the 9000 series
    Edited by: Piscian . on Jul 8, 2011 9:08 AM

  • Best practice in HCM

    Hi,
    1) Can you please advise which processes come under SAP Best Practices for HCM?
    2) Which infotypes come under payroll implementations?
    Thanks!
    Manish

    Hi,
    You can read every thread in this forum.
    1. Personnel Administration - deals with the HR master data.
    2. Organizational Management - deals with personnel planning: business units, cost centers etc. This gives the picture of how your organization looks virtually.
    Keep these two as the base, must-know modules.
    3. Time Management - manages employee times: leaves, quotas, shifts, work schedules, working patterns etc.
    CATS (Cross-Application Time Sheets), Shift Planning etc. are part of Time Management but are individually separate; knowledge of these is not necessary when you start your career, though going forward it would be helpful for growth.
    4. Payroll - manages the employee payment side, including off-cycle payments.
    These four can be considered the basic modules of HCM.
    5. Recruitment - deals with the recruitment process, from manpower planning to onboarding.
    Apart from these we have:
    6. Benefits - deals with fringe benefits (applicable to the USA and some other countries): insurance, medical etc.
    7. ESS/MSS - the self-service modules, Employee Self-Service and Manager Self-Service; these are portal-based.
    We also have the new-generation modules, such as:
    8. E-Recruitment - portal-based recruiting.
    Learning Solution
    Succession Planning
    Personnel Development
    Talent Visualization by Nakisa
    Enterprise Compensation Management (again a separate module)
    These are all part of the Talent Management suite and are portal-based modules; knowing them would give you an edge. This area is very upcoming and hot.
    To become an effective HCM consultant you need good business process knowledge combined with in-depth knowledge of any of the four base modules.
    Outside HCM, knowledge of HR ABAP, MS Excel and MS Word is a great advantage.
    To start, choose Personnel Administration and Org Management combined with Time Management and Payroll, or ESS/MSS with Talent Management.
    Search in transaction PA30- Hr Master data and press F4 , you will find entire details 0000 Actions 0001 Organizational Assignment 0002 Personal Data 0003 Payroll Status 0004 Challenge 0005 Leave Entitlement 0006 Addresses 0007 Planned Working Time 0008 Basic Pay 0009 Bank Details 0011 External Transfers 0014 Recurring Payments/Deductions 0015 Additional Payments 0016 Contract Elements 0017 Travel Privileges 0019 Monitoring of Tasks 0021 Family Member/Dependents 0022 Education 0023 Other/Previous Employers 0024 Qualifications 0025 Appraisals 0027 Cost Distribution 0028 Internal Medical Service 0030 Powers of Attorney 0031 Reference Personnel Numbers 0032 Internal Data 0033 Statistics 0034 Corporate Function 0035 Company Instructions 0037 Insurance 0040 Objects on Loan 0041 Date Specifications 0045 Loans 0048 Residence Status 0050 Time Recording Info 0054 Works Councils 0057 Membership Fees 0077 Additional Personal Data 0078 Loan Payments 0080 Maternity Protection/Parental Leave 0081 Military Service 0083 Leave Entitlement Compensation 0105 Communication 0121 RefPerNo Priority 0123 Germany only 0124 Disruptive Factor D 0128 Notifications 0130 Test Procedures 0139 EE's Applicant No. 0165 Deduction Limits 0167 Health Plans 0168 Insurance Plans 0169 Savings Plans 0171 General Benefits Information 0185 Personal IDs 0219 External Organizations 0236 Credit Plans 0262 Retroactive accounting 0267 Add. Off-Cycle Payments w/Acc.***. 0267 Additional Off-Cycle Payments 0283 Archived Objects 0290 Documents and Certificates (RU) 0292 Add. Social Insurance Data (RU) 0293 Other and Previous Employers (RU) 0294 Employment Book (RU) 0295 Garnishment Orders (RU) 0296 Garnishment Documents (RU) 0297 Working Conditions (RU) 0298 Personnel Orders (RU) 0299 Tax Privileges (RU) 0302 Additional Actions 0315 Time Sheet Defaults 0330 Non-Monetary Remuneration 0334 Suppl. it0016 (PT) 0376 Benefits Medical Information 0377 Miscellaneous Plans 0378 Adjustment Reasons 0379 Stock Purchase Plans 0380 Compensation Adjustment 0381 Compensation Eligibility 0382 Award 0383 Compensation Component 0384 Compensation Package 0395 External Organizational Assignment 0396 Expatriation 0402 Payroll Results 0403 Payroll Results 2 0415 Export Status 0416 Time Quota Compensation 0429 Position in PS 0439 Data Transfer Information 0458 Monthly Cumulations 0459 Quarterly Cumulations 0460 Annual Cumulations 0468 Travel Profile (not specified) 0469 Travel Profile (not specified) 0470 Travel Profile 0471 Flight Preference 0472 Hotel Preference 0473 Rental Car Preference 0474 Train Preference 0475 Customer Program 0476 Garnishments: Order 0477 Garnishments: Debt 0478 Garnishments: Adjustment 0483 CAAF data clearing (IT) 0484 Taxation (Enhancement) 0485 Stage 0491 Payroll Outsourcing 0503 Pensioner Definition 0504 Pension Advantage 0529 Additional Personal Data for (CN) 0552 Time Specification/Employ. 
Period 0553 Calculation of Service 0559 Commuting allowance Info JP 0560 Overseas pay JP 0565 Retirement Plan Valuation Results 0567 Data Container 0569 Additional Pension Payments 0573 Absence for Australia PS 0576 Seniority for Promotion 0579 External Wage Components 0580 Previous Employment Tax Details 0581 Housing(HRA / CLA / COA) 0582 Exemptions 0583 Car & Conveyance 0584 Income From Other Sources 0585 Section 80 Deductions 0586 Section 80 C Deductions 0587 Provident Fund Contribution 0588 Other Statutory Deductions 0589 Individual Reimbursements 0590 Long term reimbursements 0591 Nominations 0592 Public Sector - Foreign Service 0593 Rehabilitants 0597 Part Time Work During ParentalLeave 0601 Absence History 0602 Retirement Plan Cumulations 0611 Garnishments: Management Data 0612 Garnishments: Interest 0614 HESA Master Data 0615 HE Contract Data 0616 HESA Submitted Data 0617 Clinical Details 0618 Academic Qualification 0624 HE Professional Qualifications 0648 Bar Point Information 0650 BA Statements 0651 SI Carrier Certificates 0652 Certificates of Training 0653 Certificates to Local Authorities 0655 ESS Settings Remuneration Statement 0659 INAIL Management 0666 Planning of Pers. Costs 0671 COBRA Flexible Spending Accounts 0672 FMLA Event 0696 Absence Pools 0702 Documents 0703 Documents on Dependants 0704 Information on Dependants 0705 Information on Checklists 0706 Compensation Package Offer 0707 Activation Information 0708 Details on Global Commuting 0709 Person ID 0710 Details on Global Assignment 0712 Main Personnel Assignment 0713 Termination 0715 Status of Global Assignment 0722 Payroll for Global Employees 0723 Payroll for GE: Retro. Accounting 0724 Financing Status 0725 Taxes SA 0742 HDB Concession 0745 HDB Messages in Public Sector 0746 De Only 0747 DE Only 0748 Command and Delegation 0758 Compensation Program 0759 Compensation Process 0760 Compensation Eligibility Override 0761 LTI Granting 0762 LTI Exercising 0763 LTI Participant Data 0783 Job Index 0784 Inquiry Family Court 0785 Pension Equalization Payment 0787 Germany Only 0788 Germany Only 0789 Germany Only 0790 Germany Only 0792 Organizational Additional Data 0794 Pensioner Message A 0795 Certification and Licensing 0796 Duty Assignments 0800 Material Assignment 0802 Sanctions / Offense 0803 Seniority Ranked List 0804 Personal Features 0805 Honors 0806 Course Data 0813 Historical Additional Fees A 0815 Multiple Checks in One Cycle 0845 Work Relationships 0846 Reimbursements 0851 Shukko Cost Charging 0852 Shukko Cost Charging Adjustment 0853 Shukko External Org. Assignment 0860 Sanctions / Offense 0861 Award/Decorations 0863 Verdict 0865 Mobility 0873 Additional Amount - Garnishment FR 0875 Events - My Simplification 0881 Expense Information 0882 Insurability Basic Data 0883 Entitlement Periods 0884 Insurability Calculation 0887 Garnishments (ES) 0900 Sales Data 0901 Purchasing Data 0904 Override Garnishable Amount D 0908 Info. about Annual Income Check 0942 Capital Payment 0976 Municipal Tax per Person 0978 Pension Contribution A 0979 Pension A 2001 Absences 2002 Activity Allocation (Attendances) 2002 Attendances 2002 Cost Assignment (Attendances) 2002 External Services (Attendances) 2002 Order Confs.(Att) 2003 Substitutions 2003 Substitutions: Indiv. Working Times 2004 Availability 2005 Overtime 2006 Absence Quotas 2007 Attendance Quotas 2010 Cost Allocation (EE Rem. Info) 2010 Cost Assignment (EE Rem. 
Info) 2010 Employee Remuneration Info 2011 Completed Time Events 2011 Time Events 2011 Time Events (CO) 2011 Time Events (PM) 2011 Time Events (PP) 2012 Time Transfer Specifications 2013 Quota Corrections 2050 Annual Calendar 2051 Monthly Calendar 2052 Absence Recording 2052 List Entry for Attendances/Absences 2052 Weekly Calendar w/Cost Assignment 2052 Weekly Entry w/Activity Allocation 3003 Materials Management 3202 Addresses 3215 SWF Staff Details 3216 SWF Contract Details 3217 SWF Qualifications 3893 Time Account Status 3894 Factoring Information BPO 
    Thanks and Regards,
    Revathi.

  • SAP Best Practice HR: DX Toolbox

    Hi Suresh, Saquib,
    This post is in continuation of my previous post. Can you provide some advantages/disadvantages of using the ZBPHR_ZDTT - SAP Best Practice HR: DX Toolbox tool to upload PA infotypes?

    Vijay,
    I haven't used it, but I have heard that SAP provides best practices for the common infotypes; most of the time you won't find the country-specific infotypes. I also came to know in a discussion that you cannot upload the data from the application server.
    Hope this gives you some clue.
    Thanks
    Saquib
    Message was edited by: Saquib Khan

  • Data Migration Best Practice

    Is there a clear-cut best-practice procedure for conducting data migration from one company to a new one?

    I don't think there is a clear cut for that. Best practice is always relative; it varies dramatically depending on many factors. There is no magic bullet here.
    One exception to the above: you should always use tab-delimited text format. It is a DTW-friendly format.
    Thanks,
    Gordon

  • Best Practices on Routine Data Load.

    Can someone please tell me what the best practices are for routine data loads from one database to another?
    We have a PeopleSoft system where new employees' records are created; however, these new employees are required to take new-employee tests that are tracked by an application outside PeopleSoft on an Oracle db. Therefore, we need to populate the Oracle db with the new employees' information - on a daily basis or as needed. The data we will need to track are new employees or rehires, changes to existing employees - position, title, etc. - and terminated employees - date of termination, etc.
    What is the best practice to get the employee's information to the Oracle db?
    Any suggestions are appreciated.
    -andy

    Depends on your source and your database version which you didn't mention. What is the easiest way to get them out of your source database?
    Perhaps a database link though that might be a security violation.
    Perhaps as a delimited ASCII file loaded using SQL*Loader or an external table.
    Can you provide more information and database version numbers?
