Data Migration Template for Singapore Payroll.

Hi
I need help with a data migration template for Singapore Payroll.
I would also need guidance on what is required if I want to migrate last year's payroll cluster data from the legacy system to ECC 6.0.
I would appreciate it if you could send me the migration template to my mail ID.
Thanks and regards

Hi,
Check some of the fields mentioned below; they may be of some use to you.
Client
Personnel Number
Sequential number for payroll period
Payroll type
Payroll Identifier
Pay date for payroll result
Period Parameters
Payroll Year
Payroll Period
Start date of payroll period (for-period)
End date of payroll period (for-period)
Reason for Off-Cycle Payroll
Sequence Number
Client
Personnel Number
Sequence Number
Country grouping
Wage type
Key date
Rate
Number
Amount
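
If it helps, a flat upload structure along the lines of these fields might look like the sketch below. This is only an illustration: the field names and lengths are assumptions, not an SAP-delivered template, and the list above really covers two record types (the payroll result directory entry and the wage type result line), so you would typically split it accordingly.

" Sketch only - align names, types and lengths with your own upload program or LSMW object.
TYPES: BEGIN OF ty_payroll_directory,
         mandt TYPE mandt,                 " Client
         pernr TYPE n LENGTH 8,            " Personnel number
         seqnr TYPE n LENGTH 5,            " Sequential number for payroll period
         payty TYPE c LENGTH 1,            " Payroll type
         payid TYPE c LENGTH 1,            " Payroll identifier
         paydt TYPE d,                     " Pay date for payroll result
         permo TYPE c LENGTH 2,            " Period parameters
         pabrj TYPE n LENGTH 4,            " Payroll year
         pabrp TYPE n LENGTH 2,            " Payroll period
         fpbeg TYPE d,                     " Start date of payroll period (for-period)
         fpend TYPE d,                     " End date of payroll period (for-period)
         ocrsn TYPE c LENGTH 4,            " Reason for off-cycle payroll
       END OF ty_payroll_directory,

       BEGIN OF ty_wage_type_result,
         mandt TYPE mandt,                 " Client
         pernr TYPE n LENGTH 8,            " Personnel number
         seqnr TYPE n LENGTH 5,            " Sequence number
         molga TYPE c LENGTH 2,            " Country grouping (25 = Singapore)
         lgart TYPE c LENGTH 4,            " Wage type
         keydt TYPE d,                     " Key date
         betpe TYPE p LENGTH 9 DECIMALS 2, " Rate
         anzhl TYPE p LENGTH 9 DECIMALS 2, " Number
         betrg TYPE p LENGTH 9 DECIMALS 2, " Amount
       END OF ty_wage_type_result.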

Similar Messages

  • Validation rules applied to data migration templates at import

    Hi everyone!
    First post here for me, so please bear with me if I missed something.
    My company has just started the initial implementation of ByDesign. We come from a set of disparate and partially home-grown systems that we outgrew a few years ago.
    As part of this initial phase, we are basically re-creating the data on customers, suppliers, etc. since none of our existing systems makes a good source, unfortunately. We will be using the XML templates provided by ByDesign itself to import the relevant data.
    It has become clear that ByDesign applies validation rules to fields like postal codes (ZIP codes), states (for some countries), and others.
    It would be really helpful if we could get access to the rules that are applied at import time, so that we can format the data correctly in advance, rather than having to play "trial and error" at import time. For example, if you import address data and it finds a postal code in the Netherlands formatted as "1234AB", it will tell you that there needs to be a space in the 5th position, because it expects the format "1234 AB". At that point, you stop the import, go back to the template to fix all the Dutch postal codes, and try the import again, only to run into the next validation issue.
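    Just to illustrate the kind of pre-check we would like to run over our own data before import, here is a minimal sketch. It assumes nothing beyond the "1234 AB" format mentioned above and is not based on any documented ByDesign rule; any scripting tool over the spreadsheet would do just as well:
    DATA(lv_postal_code) = `1234AB`.

    " Assumed Dutch format: four digits, one space, two capital letters.
    FIND REGEX '^[0-9]{4} [A-Z]{2}$' IN lv_postal_code.
    IF sy-subrc <> 0.
      " Flag the value for correction before it goes into the XML template.
      DATA(lv_message) = |Postal code { lv_postal_code } does not match the expected format 9999 AA|.
      WRITE / lv_message.
    ENDIF.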
    We work with a couple of very experienced German consultants to help us implement ByDesign, and I have put this question to them, but they are unaware of a documented set of validation rules for ByDesign. Which is why I ask the question here.
    So just to be very clear on what we are looking for: the data validation/formatting rules that ByDesign enforces at the time the XML data migration templates are imported.
    Any help would be appreciated!
    Best regards,
    Eelco

    Hello Eelco,
    welcome to the SAP ByDesign Community Network!
    The checks performed on postal codes are country-specific and represent pretty much the information that you would find in places like the "Postal codes" page on Wikipedia.
    I recommend starting with small files of 50-100 records, assembled from a representative set of different record types, so that you can efficiently collect the validation rules that your data will trigger. Only once you have caught these generic data issues would I proceed to larger files.
    Personally, I prefer to capture such generic work items on a list, fix the small sample file immediately by editing it, and resimulate the entire file right away, so that I can drill deeper and collect further generic issues from my data sample. Only once I have harvested all the learnings from the sample file do I apply them to my actual data and create a new file - still not too large, in order to use my time efficiently.
    Best regards
    Michael  

  • Data Mapping Templates for NON-RTA Systems

    Hi all,
    I'm designing an extractor for my NON-SAP system.
    I've been reading the Appendix B: Data Mapping Templates for Non-RTA Systems from Configuration Guide - SAP GRC AC 5.3.
    I have a couple of questions regarding the sizes of some fields in the ACTION PERMISSION OBJECTS TEMPLATE table.
    Are the sizes that the document gives for these fields correct?
    ACTION (the document says 20, but in other tables it is 50)
    PERMISSION (the document says 10, but in other tables it is 100)
    ACTVT (the document says 10, but in other tables it is 50)
    Thank you in advance.

    Hi Luis,
    Yes, I will follow the recommendations, but I'm updating some files to RAR and I've seen another kind of inconsistency, for example in the User File Template table. The recommendation says that both the FNAME field and the LNAME field are 50 characters each, but in my case I can update an FNAME of 51 characters and an LNAME of 49; in other words, FNAME and LNAME together allow 100 characters.
    Cheers.

  • Data Migration program for opportunity

    Hi SAP Experts,
    Can someone please help with the data migration program for opportunities?
    I want to know the approach.
    These are a few of the mandatory fields:
    1. Opportunity Name
    2. Account
    3. Contact
    4. Opportunity Owner
    5. Total Estimate/Curr* value_currency
    6. Project start date
    7. Stage
    8. Probability
    9. Status
    10. Product with item category ZSOL
    11. Service
    12. Service Offering
    Regards,
    Jaya

    Hi,
    As suggested by Kai, you can use LSMW. This is the better option, as it is very easy to use and you can also write routines while mapping the fields. You can also use IDocs for importing data from the legacy system.
    If you want to write a report then you can use BAPI_BUSPROCESSND_CREATEMULTI. Please refer to the documentation of this BAPI. This BAPI internally calls CRM_ORDER_MAINTAIN. Do not forget to call BAPI_BUSPROCESSND_SAVE to commit your data.
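    A bare skeleton of that call sequence might look like this. Only the RETURN table (the usual BAPI convention) is shown; the other table parameters of the two CRM BAPIs (header, partners, input fields, and so on) are omitted here, so check their documentation in SE37 and fill them as required:
    " Skeleton only - the productive call needs the document data in the other table parameters.
    DATA lt_return TYPE STANDARD TABLE OF bapiret2.

    CALL FUNCTION 'BAPI_BUSPROCESSND_CREATEMULTI'
      TABLES
        return = lt_return.

    CALL FUNCTION 'BAPI_BUSPROCESSND_SAVE'
      TABLES
        return = lt_return.

    " A BAPI does not commit on its own.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = abap_true.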
    Regards,
    Sandeep

  • Data migration approach for Scheduling Agreements

    Gurus,
    Can anyone provide guidance on the data migration approach for Scheduling Agreements? How can we migrate the open delivery schedules and the respective cumulative quantities? The document type being used is "LZ".
    Can correction deliveries (document type LFKO) be used to update the initial cumulative quantities?
    Any help in this regard is highly appreciated.
    Regards,
    Gajendra

    Hi Zenith
    You might find useful information here: IS-U data export (extraction) for EMIGALL.
    I've done numerous IS-U migrations in the last 10+ years, but never one from IS-U to IS-U.
    One main question to start with: are the two systems on the same release level?
    If they are not and it would be too much effort to get them onto the same release level I would think creating your own extract programs and using the Migration workbench (EMIGALL) is the way to go.
    If they are on the same level, there might be other possible ways to do it, depending on the differences between the two systems (mainly around customising). Assuming there are significant differences - otherwise why would you bother migrating? - there's a high probability that using EMIGALL is the way to go as well.
    Yep
    Jürgen

  • Do we need to create CPF Deduction Wage type for Singapore Payroll

    Do we need to create CPF Deduction Wage type for Singapore Payroll?

    Hi Vijay Kumar,
    CPF-related technical wage types for Singapore Payroll are already delivered in the SAP standard.
    Check in table V_52D7_B.
    You can also check the path in SPRO:
    Payroll Singapore > Wage Types > Processing Classes, Evaluation Classes, Cumulations > Check technical wage types for CPF

  • CRM Masterdata and Data Migration Templates

    Could someone please provide me with the templates below, if you have any?
    - CRM Master Data Mapping Template
    - CRM Data Migration Design Template
    - SAP CRM Data Design Template
    I really appreciate your help.
    Thanks in advance.
    Sr

    This is a duplicate, so I am just marking it as answered because I could not find the option to delete it.

  • Offline data migration fails for BLOB field from MySQL 5.0 to 11g

    I tried to use the standalone Data Migration tool several years ago to move a database from MySQL to Oracle. At that time it was unable to migrate BLOB fields. I am trying again, hoping this issue might have been fixed in the meantime; that does not appear to be the case.
    The rows in question have a single BLOB field (a binary encoding of a serialized Java object, on the order of 1-2K bytes, a mixture of plain text and a small amount of non-ASCII data which is presumably part of the structure of the Java object). The mysqldump appears to store the data correctly, surrounded by the expected <EOFD> and <EORD> separators.
    The data as imported consists of a small number (roughly 100-200) of ASCII characters, apparently hex encoded, because if I do a hex dump of the mysqldump I can recognize some of the character pairs that appear in the BLOB field after import. However, they are apparently flipped within the word or otherwise displaced from each other (although both source and destination machines are x86 family), and the imported record stops long before all the data is encoded.
    For example, here is a portion of the record as imported:
    ACED0005737200136A6
    and here is a hex dump of the input
    0000000 3633 3838 3037 3c39 4f45 4446 303e 3131
    0000020 3036 3830 3836 453c 464f 3e44 312d 453c
    0000040 464f 3e44 6e49 7473 7469 7475 6f69 446e
    0000060 7461 3c61 4f45 4446 ac3e 00ed 7305 0072
    0000100 6a13 7661 2e61 7475 6c69 482e 7361 7468
    0000120 6261 656c bb13 250f 4a21 b8e4 0003 4602
    0000140 0a00 6f6c 6461 6146 7463 726f 0049 7409
    0000160 7268 7365 6f68 646c 7078 403f 0000 0000
    AC ED appears in the 5th and 6th word of the 4th line, 00 05 in the 6th and 7th words, etc.
    I see explicit references to using hex encoding for MS SQL and other source DBs, but not for MySQL.
    I suspect the encoder is hitting some character within the binary data that is aborting the encoding process, because so far the records I've looked at contain the same data (roughly 150 characters) for every record, and when I look at the binary input, it appears to be part of the Java object structure which may repeat for every record.
    Here is the ctl code:
    load data
    infile 'user_data_ext.txt' "str '<EORD>'"
    into table userinfo.user_data_ext
    fields terminated by '<EOFD>'
    trailing nullcols
    (
    internal_id NULLIF internal_id = 'NULL',
    rt_number "DECODE(:rt_number, 'NULL', NULL, NULL, ' ', :rt_number)",
    member_number "DECODE(:member_number, 'NULL', NULL, NULL, ' ', :member_number)",
    object_type "DECODE(:object_type, 'NULL', NULL, NULL, ' ', :object_type)",
    object_data CHAR(2000000) NULLIF object_data = 'NULL'
    )

    It looks like the data is actually being converted correctly. What threw me off was the fact that the mysql client displays the actual blob bytes, while sqlplus automatically converts them to hex for display, but only shows about 2 lines of the hex data. When I check field lengths they are correct.

  • Check InDesign data merge template for errors without creating document?

    I've got a big InDesign template with hundreds of rows. Producing a full batch takes over an hour, and sometimes crashes. The deadline is fast approaching, and I'm waiting for the final minor amendments to the content spreadsheet, which could virtually land any minute.
    I want to check for overset text errors with the current sheet without creating a merged document or PDF, so I can get on with fixing any likely major problems before the final sheet arrives - but I don't want to create a full merged doc, because that will take a very long time and that doc's content will be obsolete, whereas the overall length of the text will change very little.
    When you create a merged document, you have the option to turn "Generate overset text report" off. What I want to do is ONLY generate the overset text report, without actually storing the merged document in memory.
    Is this possible?
    The closest I can see is to click through the "Next" button in the Data Merge palette with Preview turned on, while watching the "No errors" preflight panel notification at the bottom of the screen. This is probably faster than creating a full merge, but still very slow.

    That is what InDesign does when it runs a data merge: it creates copies of the document, and there's no way to change this. Have you tried exporting to PDF (in CS4 you can do this in one step from the original) and printing? I've never had anyone confirm it definitively, but I'm pretty sure the PDF created will be an 'optimized' one, meaning that when it is sent to a compatible printer the images are sent once and cached, with only the text that changes being sent separately.
    Definitely worth comparing it to Publisher for speed anyway; Office programs can merge straight to the printer, but in effect they're doing the same thing. There are programs out there for doing far fancier stuff than data merge - probably the best is XM Pie - but unless you've got a printer that recognises their language, they're a waste of time in this circumstance anyway.

  • Customized DATA DEFINITION, TEMPLATE FOR R12 WIRE PAYMENT

    I have a requirement to use a lot of custom values in my R12 wire payment report.
    I have done the following:
    1. I have created a procedure and registered it as a concurrent program.
    2. Created a data definition and mapped it with the concurrent program name.
    3. Created an RTF template and mapped it with the data definition.
    4. When I submit the concurrent program with the payment batch ID as a parameter, the report works fine and returns the necessary results.
    5. When I submit using the standard payment process, I am getting only the hard-coded values in my template; it does not receive the proper values from the procedure.
    My queries are:
    1) Is my approach correct?
    2) Is there any other way to get custom values into the standard wire payment method?
    Please reply at the earliest.
    REgards,
    siva

    Hi Siva
    So when you run the program as part of the payment batch, is your new program being called? If so, is it generating the data as expected?
    If the above is working, is your template being used?
    I'm guessing that the format program needs to be set up - I'm not so familiar with R12 payments, but it ought to be covered in the AP user guide.
    Regards
    Tim
    http://blogs.oracle.com/xmlpublisher

  • PA data migration Strategy for global roll-out

    Hi Guys,
    What is the best practice for loading PA data into SAP HCM for a global roll-out? We plan to go live with 50-odd countries. For country-specific infotypes like Address, it does not make sense to create a country-specific LSMW for each country; we may end up with hundreds of LSMWs, which does not seem right to me. Nearly 20 of these countries have fewer than 100 employees each. Any ideas what the best practices around this are?
    regards
    Sam

    Create batch input sessions with LSMW as usual.
    Write a report that reads the batch input session (tables APQI, APQD), determines the country-specific dynpro for every PERNR (tables T582A, T588M, feature Pxxxx, ...) and changes the dynpro in APQD. Use the debugger to find the offset and length of the field values in the record.
    Then you can process the batch input.
    I did it that way many times.
    Alternatively, you could update the PA tables directly - but then it is difficult to detect errors.

  • Data migration procedure for upgrade

    Hi,
    I am working on upgrading 3.0B to BI 7.0. The BI 7.0 production system is already stabilized and in use.
    My query is: I need to push data from 3.0B into the existing BI 7.0 system.
    Please clarify.
    Thanks in advance,
    Rama Murthy..

    Export the data from 3.0B to a flat file (through the open hub) and then load it into BI 7.0.
    Basically, the upgrade is supposed to be a system copy of 3.0B to a new box (the BI 7.0 box), which is then upgraded.
    Hope this helps..

  • DATA MIGRATION - Fails in Cloud for Customer

    Hello Community,
    I was doing a test load for marketing leads in Cloud for Customer and I am getting the following error:
    The error somehow doesn't have much of an explanation.
    Can anyone please advise how to resolve the issue and what the probable cause could be?
    Awaiting quick response.
    Regards
    Kumar

    Recording the incident is absolutely the right path.  Sometimes I get this error when:
    There is an updated version of the data migration template.
    When I copied and pasted, I overwrote the format of one or more cells in the XML template.
    I pasted data into the wrong column within the XML template.
    To resolve these issues, I download the newest XML file template from within the system, repopulate the data, and try again. The important thing is to use Copy > Paste Special > Values Only so that the cell formatting isn't overwritten. Also check that the data is in the right columns before uploading.
    If these troubleshooting tips don't work, then I have to depend on support's assistance to resolve the error.

  • Data migration for Securities

    Hello
    I have been going through the suggestions on the thread below:
    Data Migration Strategy for Securities
    I would like to clarify at which price I should book the securities on the cut-off date. Here is an example:
    My cut-off date is 31/12/2009 and I have the following deals:
    1. Purchase of R207 bond (04I), nominal 10 million, on 06/06/2009 at a price of 98%
    2. Sale of R207 bond, nominal 5 million, on 11/11/2009 at a price of 97%
    My balance nominal on the R207 bond as at cut-off is 5 million, so on 31/12/2009 I will book 5 million in TS01 - at what price? Currently in QA we booked all deals at their original prices, including all sales. Note that we do MTM on 04I bonds and amortisation on 04X bonds.
    Thank you in advance
    Regards
    Victor

    Hi Victor,
    looking into your example, it seems you have an outstanding balance of 5 mio of a certain bond, and the result (P&L) from the difference between the purchase (10 mio) and the sale (5 mio) should be booked in 2009 and is already included in the financial figures in the legacy system. If you use plain mark-to-market valuation for this particular bond type, then the balance value depends on the market price on the last day of 2009.
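    For illustration only (the closing price here is an assumed number): if the R207 market price at 31/12/2009 were 96%, the remaining 5 mio nominal would be booked at that price, i.e. 5,000,000 x 96% = 4,800,000, while the realised P&L from the 10 mio purchase at 98% and the 5 mio sale at 97% stays in the legacy 2009 figures.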
    Rgds,
    Renatas

  • Data Extraction template

    Hi,
    In my current project there is a requirement for data migration from a legacy system, so can anyone please help me by providing the data extraction template for vendor and customer open line items, G/L balances, and the bank directory?
    Your help in this regard is highly appreciated.
    Thanks
    Rajesh. R

    Hi Rajesh,
    When you extract the data from the legacy system, the points you should keep in mind are:
    1. How the organisation structure in the legacy system is mapped in the SAP system, because the data upload into SAP should also happen in the way the reports are expected from SAP.
    There is no standard layout used for extraction; you need to make sure that you extract all the information from the legacy system that needs to be uploaded into SAP.
    As an example, fields you may include in the extraction layout are:
    Vendor account, document date, posting date, document type, company code, amount in document currency and in local currency, etc. (a sketch of such a record layout follows after point 3 below).
    2. Please keep in mind that some G/L accounts will be maintained in SAP on an open item basis. Therefore your extraction should, where possible, also happen transaction by transaction.
    3. Certain G/L accounts will be maintained in foreign currency, like bank G/L accounts held in foreign currency. In such cases you need to extract the balance in the foreign currency.
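    As a sketch only (the field names and lengths below are illustrative, not a standard SAP layout), such an extraction record for vendor open items could be defined like this:
    " Illustrative record layout for a vendor open item extract file.
    TYPES: BEGIN OF ty_vendor_open_item,
             lifnr TYPE c LENGTH 10,             " Vendor account
             bldat TYPE d,                       " Document date
             budat TYPE d,                       " Posting date
             blart TYPE c LENGTH 2,              " Document type
             bukrs TYPE c LENGTH 4,              " Company code
             waers TYPE c LENGTH 5,              " Document currency
             wrbtr TYPE p LENGTH 13 DECIMALS 2,  " Amount in document currency
             dmbtr TYPE p LENGTH 13 DECIMALS 2,  " Amount in local currency
             xref  TYPE c LENGTH 20,             " Legacy document reference
           END OF ty_vendor_open_item.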
    My suggestion to you would be to think through the process first and then go ahead with the extraction.
    Hope this helps
    Regards
    Paul
