MDMP - Unicode 4.6C - ECC 6

Hi, I need some hints on the Unicode conversion using SPUM4.
In SMLT we have the languages EN, DE and TH.
After scanning the tables without language information, the tables with ambiguous language information, and running the INDX analysis, words were imported into the vocabulary that still need to be assigned.
Some of these words are unreadable (they contain characters like @#$ |). How can I determine which language they are, so that I can add the language assignment to those words?
I have tried logging on in TH and I was able to recognize and assign some of the Thai words, but for the rest I am lost.
Could anybody help me with this?
Thanks
Regards

Hi, in our system we have these code pages:
8600     Thai Codepage ISO988/2533
1100     SAP internal, like ISO 8859-1        (00697/00819)
Now my SPUM4 status is "Preparation on start release finished", but with warnings:
@5D@     Vocabulary     31087 words have not been assigned a language
@5D@     Reprocess Log     346 reprocess logs are not maintained completely
@5D@     INDX Log     4 INDX logs are not maintained completely
Should I reprocess all the warnings? The words in the vocabulary themselves are very ambiguous to me.
For example, this entry in the INDX repair:
AQQU/SAPQUERY/H0CM_05     "Lohnartenbetrไg | Lohnartenbetrไg | Vergtungsbereich | Vergtungsanpassungsgrund | Vergtungsanpassungsart | Vergtungsgrundgeh"     3
The words in quotes are detected as text, but I think that is the header of the query.
What should I do with the warnings?
Regards.

Similar Messages

  • Technical upgrade from 4.6C MDMP system to ECC 6.0 using CU & UC approach

    Hi SAP Experts,
    We are embarking on a project for a technical upgrade from an SAP R/3 4.6C MDMP system to ECC 6.0 using the CU & UC approach for our Asian operations (comprising India, Japan, Taiwan and Singapore).
    Please share with me the Best Practices you followed in your projects, checklists, DOs, DONTs, typical problems you encountered and the possible safeguards, any other useful information, etc.
    Since very little information is available from Organizations who have used the CU & UC upgrade approach successfully, I am raising this in SDN to share your experiences during the Technical upgrade.
    All useful information will be surely awarded with suitable points.
    Looking for your invaluable inputs which will be a BIG help for our project.
    Best regards,
    Rajaram

    Hello Rajaram:
    It is a very big project that you have to do in many steps.
    If you want, send me an email with the problems and questions you have.
    Regards,
    Alfredo.

  • MDMP Unicode conversion - Global Fallback Code Page

    Hello ,
    I am doing an MDMP Unicode conversion. The SPUMG settings contain a Global Fallback Code Page.
    What exactly is the Global Fallback Code Page and what is it used for?
    I am using 1160 as the Global Fallback Code Page in SPUMG; will there be any effect?
    Thanks and Regards
    Vinay

    Hello Vinay,
    Please have a look at the Unicode conversion Guide:
    Excerpt:
    "Whenever there is no information to determine the correct code page, R3load uses the Global Fallback Code Page (GFBCP) for the conversion. Its default value is the code page which corresponds to the logon language EN, or else 1100. This is a global setting for all tables."
    This basically means you should select the code page used by the majority of the users / data .
    If you use 1160, then the majority of users should be based on Latin-1 (Western European code page).
    Best regards,
    Nils Buerckel
    SAP AG

  • MDMP Unicode Conversion problem during ECC upgrade

    hi Experts,
    I met a problem when upgrading from 4.6C to ECC 6.
    In 4.6C, some key users entered Chinese text in table fields while they were logged on in English, so the language field of these entries is "EN" although the text is actually Chinese. After the upgrade to ECC 6, we found that the Chinese words became garbled, probably because the Chinese text was converted with the English code page. My question is: how can I avoid this? I really do not want this to happen in our PRD upgrade.
    The second question is: if a field contains both Chinese and English, which language should it be assigned, English or Chinese? I am afraid that if we assign such a word to "EN" in the vocabulary, then after the upgrade the Chinese part will be garbled.
    Thank you for your kind suggestions.
    Freshman

    Hi,
    You can either use transaction SUMG (manually) to repair the entries (you have to add the corresponding table - please check the Unicode conversion guide for details), or you need to change the language key in the table to Chinese if you want the Unicode conversion to convert the data correctly.
    To your second question: if the English texts are restricted to US7ASCII characters ("normal" English without special characters), those texts containing Chinese and English words can (and must!) be assigned to ZH, as US7ASCII characters are included in the Chinese code page.
    Best regards,
    Nils Buerckel
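    A minimal sketch of the second option mentioned above (re-keying the language field); the report name, the table ZTEXT_TABLE and its field SPRAS are hypothetical placeholders, not the actual objects from this thread:
    REPORT z_fix_language_key.
    * Hypothetical repair sketch: re-key rows that were saved with the
    * English language key although the text is actually Chinese, so that
    * the Unicode conversion later uses the Chinese code page for them.
    DATA: lv_en TYPE spras,
          lv_zh TYPE spras.
    * Convert the ISO codes EN / ZH into the internal 1-character keys.
    CALL FUNCTION 'CONVERSION_EXIT_ISOLA_INPUT'
      EXPORTING
        input  = 'EN'
      IMPORTING
        output = lv_en.
    CALL FUNCTION 'CONVERSION_EXIT_ISOLA_INPUT'
      EXPORTING
        input  = 'ZH'
      IMPORTING
        output = lv_zh.
    UPDATE ztext_table SET spras = lv_zh
      WHERE spras = lv_en.
    IF sy-subrc = 0.
      COMMIT WORK.
    ENDIF.
    In practice the WHERE clause would be restricted to the rows that really contain Chinese text, and such direct table updates bypass application logic, so they should only be done after checking the affected application.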

  • MDMP Unicode Conversion: maintenance of system vocabulary in Excel possible?

    We are currently converting from R/3 4.7 MDMP to ECC 6.0 EHP5 Unicode using CU&UC. We have native speakers to do the manual maintenance of the system vocabulary (assignment of words without language key to a language). However, these native speakers are SAP "illiterates" and have no access to SAP. We would like to extract the system vocabulary to Excel, give it to the native speakers, who maintain it in Excel, and then we upload the Excel file into SAP. Extracting the vocabulary is easy, since there is an ALV grid. Is there a way to upload the Excel file to SAP? Thank you very much.

    Hi,
    the download and upload scenario via spreadsheet is not foreseen in SPUMG.
    I know customers who maintain the vocabulary with automatic and semi-automatic procedures and then verify the assigned languages via spreadsheet download sent to native speakers. In this scenario, there should be only a few words, which are assigned wrongly and these can be corrected manually.
    However, even in this scenario customers have to take care that the code page used by e.g. Excel fits the assigned language, e.g. you need to do this on a local Windows PC whose default code page matches that language.
    But maintaining the language in the spreadsheet and then uploading the result to SPUMG is not possible.
    Best regards,
    Nils Buerckel

  • Open Dataset - Unicode system ECC 6.0, special characters

    Hi,
    We have upgraded our system from 4.6B to ECC 6.0 Unicode. We face the following problem when downloading a file to the application server (Solaris) using OPEN DATASET:
    In the output file the hyphen (-) character is replaced by the special character â.
    E.g. if the data in the table is 'BANGALORE - 560038', it is displayed as 'Bangalore â 560034' in the application server file. If we download the same file to Windows it shows the proper data.
    Only when we open the file in the vi editor on the application server does the system show this special character.
    Kindly help.
    Regards,
    Sidhesh S

    * In 4.6:
    OPEN DATASET G_APFILE FOR INPUT IN TEXT MODE.
    * In ECC 6.0 (Unicode), use legacy text mode instead:
    OPEN DATASET G_APFILE FOR INPUT IN LEGACY TEXT MODE.
    regards
    Giridhar
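    For context, here is a minimal sketch of the two write variants in a Unicode system; the report name, file path and code page are placeholders, not values from this thread. The â shown in vi is typically the first byte of a UTF-8 multi-byte character rendered in a single-byte locale.
    REPORT z_open_dataset_demo.
    DATA g_apfile TYPE string VALUE `/tmp/demo_out.txt`.   "placeholder path
    * Variant A (as suggested above): write the file in a non-Unicode
    * code page, e.g. 1100, so that legacy tools read it as before.
    OPEN DATASET g_apfile FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '1100'.
    TRANSFER 'BANGALORE - 560038' TO g_apfile.
    CLOSE DATASET g_apfile.
    * Variant B: write UTF-8 explicitly and view the file with a
    * UTF-8 capable editor on the application server.
    OPEN DATASET g_apfile FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    TRANSFER 'BANGALORE - 560038' TO g_apfile.
    CLOSE DATASET g_apfile.
    Which variant is appropriate depends on what reads the file afterwards.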

  • MDMP Unicode conversion - Vocabulary maintenance

    Hello,
    We have upgraded a 4.5B MDMP system to ECC 6.0 EHP4 and now we are in the process of the Unicode conversion.
    Please advise on the points specified below.
    In SPUMG Scan process, we have completed the following scans:
    a) Tables without language Info
    b) Tables with Ambiguous language Info
    c) Indx Analysis
    For maintaining the vocabulary we have implemented notes 756534, 756535 and 871541, and the tables with language information have also been scanned. While handling the vocabulary, the system displays 539158 words including duplicates and 189314 words when duplicates are discarded, none of which have a language assigned. We have 9 active languages.
    As per the Unicode conversion document I am left with only two more options: a) hint management, b) manual assignment.
    How should we handle this? Is this a normal situation, or can any correction be done?
    Thanks and Regards
    Vinay

    Hi Vinay,
    This Thread belongs to the Unicode forum:
    Internationalization and Unicode
    In my opinion, one of the best and most efficient possibilities to reduce the number of words
    is to make use of hints. Please follow the Unicode Conversion Guide and SAP note 1034188.
    Application colleagues should be contacted in order to find proper hints.
    Please also note that report UMG_VOCABULARY_STATISTIC or UM4_VOCABULARY_STATISTIC is the best tool to evaluate tables for hint processing.
    And the rest of the vocabulary, which cannot be assigned by the hint method or by the SAP notes you mentioned, needs to be assigned by native speakers (manual effort!).
    As you can see, SPUMG is NOT just a tool to be executed by Basis staff.
    It indeed requires collaboration between Basis, application teams and native speakers.
    And it should be clear that the duration of the scans can be quite long - therefore a trial & error approach will take time (especially for large systems).
    Best regards,
    Nils Buerckel
    SAP AG

  • MDMP & Unicode: splitting data load according to the source codepage

    Hi,
    Has anybody of you implemented the solution from SAP described in the How-To Guide "SAP BW Unicode with an MDMP Source System"? We have an MDMP source system (4.6C) and a Unicode BW (NetWeaver 2004s). I have maintained the RFC connection in BW (no logon language, MDMP set to active, values entered in the MDMP language list), changed the language in the job step and tried to load data, but it does not work. SAP tells me that they are not willing to support this issue. Any ideas from you?
    Thanks a lot for your answers!!!
    Regards, Carina

    Hi,
    One option for this: you can enable the calendar day field for selection in "Edit extract structure" (RSA6) for the DataSource.
    Hope this helps you.
    Assign points if useful.
    regards
    SSMS.

  • CRM unicode and ECC 6.0 Non unicode.

    Hi All,
    We have upgraded our system from 4.6C to ECC 6.0
    Our ECC 6.0 is a non-Unicode system. We are currently implementing CRM 5.0.
    I want to know: if we implement a Unicode CRM system, will there be any inconsistency, as our ECC 6.0 is non-Unicode?
    Regards,
    Imran

    Hi Imran:
    I had a problem with middleware when going through a similar scenario to yours. The middleware functions were returning garbage when calling from CRM to ERP systems. See note 651497 which addresses this problem.
    I implemented the fix by adding the following line to the beginning of function module BAPI_CRM_SAVE:
    call function 'Z_MW_RFC_PING'.
    Create a function group containing function module Z_MW_RFC_PING (code below):
    FUNCTION Z_MW_RFC_PING .
    *"*"Local Interface:
    data:
      lv_rfcdest type rfcdest.
    * Read middleware parameter table to determine R/3 backend system
    * name.
      select single parval1
        into (lv_rfcdest)
        from smofparsfa
        where parsfakey eq 'CRMCFSOLTP' and
              parname eq 'CRMCFSOLTP'.
    * If we get an RFC destination returned, PING it.
      if sy-subrc eq 0.
        call function 'RFC_PING'
          destination lv_rfcdest.
      endif.
    ENDFUNCTION.
    Note 777994 addresses some configuration on CRM system. In transaction R3AC6 on CRM system, make the following entries:
    Key SMOF, parameter CODEPAGE_CHECK_OFF, value 'X';
    Key R3A_COMMON, parameter CRM_SEND_XML, value 'X';
    Key R3A_COMMON, parameter DATA_FORMAT, value 'XML'.
    You should also do the following configuration:
    In table CRMPAROLTP on ERP system set parameter CRM_SEND_XML_FOR_DEFAULT_DESTINATION, user CRM, value 'X'.
    In table CRMRFCPAR on ERP system set the "Use XML" flag to 'X' for the CRM consumer. This also avoids code-page issues.
    The above took care of conflicts in code-pages for my systems.
    Regards,
    D.

  • 46C - ECC 6.0 || Problem in PREPARE

    Hello All!
    I'm initializing PREPARE, and when running option 1 in the SAPup menu, this message is displayed:
    Message . . . . :   Function check. CEE9901 unmonitored by PREPAREOS4 at
      statement 0000046300, instruction X'0000'.
    Viewing joblog, the first error is:
    Message ID . . . . . . :   CPDB9C9       Severity . . . . . . . :   30       
    Message type . . . . . :   Diagnostic                                        
    Date sent  . . . . . . :   08/15/07      Time sent  . . . . . . :   13:16:00                                                                               
    Message . . . . :   Internal system error.  Error code is -72.               
    Cause . . . . . :   An internal error occurred loading i5/OS PASE program    
      /usr/sap/put/bin/SAPup.                                                    
    Recovery  . . . :   Try the request again. If the problem persists, report the
      problem using the Analyze Problem (ANZPRB) command.
    SAPup is up to date. Can anyone help me understand what kind of "internal error" this is?
    Thanks for all your help!
    Best Regards,
    André Koji Honma
    FUJIFILM da Amazônia Ltda.

    Hi Andre,
    We got a requirement similar to the upgrade you have executed.
    Operating system : OS/400
    Machine Type      :  825
    Platform ID          :  592
    Database            : DB400 V5R4
    Source
    R/3 Version         :  4.6C
    Target Version     :  ECC 6.0
    In Service Marketplace it shows the upgrade guide selection as below
    1)Unix DB2 UDB for UNIX and Windows
    2)Unix DB2 UDB for z/OS
    3)Windows DB2 UDB for UNIX and Windows
    4)Windows DB2 UDB for z/OS
    5)IBM eServer iSeries DB2 UDB for iSeries
    I am absolutely new to the OS/400 environment. Kindly suggest the relevant guide to be followed for executing the upgrade.
    Regards
    Kamal

  • Character conversion problems when calling FM via RFC from Unicode ECC 6.0?

    Hi all,
    I faced a Cyrillic character conversion problem while calling an RFC function from R/3 ECC 6.0 (initialized as a Unicode system - c.p. 4103). My target system is R/3 4.6C with default c.p. 1500.
    The parameter I used in my FM interface in the target system is of type CHAR10 (single-byte, obviously).
    I have defined the RFC connection (SM59) as an ABAP connection, and client, logon language, user and password are supplied.
    The problem I faced is that Cyrillic symbols are transferred as '#' to the target system ('#' is set as the default symbol in the RFC destination definition in case a character conversion error occurs).
    Checking conversions between c.p. 4103 and the target c.p. 1500 in my source system using the tools of transaction I18N shows no errors - meaning the conversion passes OK. It seems the default character conversion executed by the source system within the scope of the RFC destination definition is doing something wrong.
    Further, I played with the MDMP & Unicode settings within the RFC destination definition with no successful result - perhaps due to the lack of documentation on how to set and manage these parameters.
    The question is: does anyone have experience with conversion between Unicode and non-Unicode systems via RFC call (non-English target obligatory!), or can anyone share valuable information regarding this issue - what should be maintained in the RFC destination in order to get the character conversion working? Is it acceptable to use character parameters in the target function module interface at all?
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev
    Senior SAP ABAP Consultant

    hey,
    I had a similar experience. I was interfacing between 4.6 (RFC), PI and ECC 6.0 (ABAP proxy). When data was passed from ECC to 4.6, the RFC side received it incorrectly. So I had to send trimmed strings from ECC and receive them as strings on the RFC side (especially for CURR and QUAN fields). Also, the receiver communication channel in PI (between PI and RFC) had to be set to non-Unicode. This helped a bit, but I am still getting two issues: truncation of values and some additional digits. However, the above changes resolved the unwanted-character problems like "<" and "#". You can find a related post under my ID. Hope this info helps.
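    A minimal sketch of the "trimmed strings" idea mentioned above; the report, data declarations, function module and destination names are hypothetical, not taken from this thread:
    REPORT z_rfc_string_demo.
    DATA: lv_amount     TYPE p LENGTH 13 DECIMALS 2 VALUE '1234.56',
          lv_amount_str TYPE string.
    * Convert the packed CURR-like value to a character string and trim
    * it, so only plain characters cross the Unicode/non-Unicode boundary.
    lv_amount_str = lv_amount.
    CONDENSE lv_amount_str NO-GAPS.
    * Hypothetical RFC-enabled receiver that accepts the value as a string.
    CALL FUNCTION 'Z_RECEIVE_AMOUNT' DESTINATION 'R46C_TARGET'
      EXPORTING
        iv_amount = lv_amount_str.
    On the receiving side the string can then be moved back into the typed CURR/QUAN field.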

  • Integrating MDMP and Unicode systems with IDoc interfaces

    Hi,
    We are working on integrating SAP R/3 6.20 (MDMP) with an SAP PI 7.0 SP10 (Unicode) system. The source will send MATMAS or CLFMAS IDocs with Thai/Japanese characters, and PI should transform and post them to the SAP ECC 5.0 target system (IDoc-to-IDoc scenario).
    ( SAP R/3 Legacy 620 - Non-Unicode / MDMP ) |---->  ( XI  - Unicode) ->  (ECC - Unicode)
    I had a look at a few SAP notes (745030, 656350 and 613389) and it looks like there is no standard way / best practice to handle this scenario.
    References:
    1.PDF of TechEd Session ID IM101 "Dealing with Multi-Language Garbage Data – Lessons Learned"
    2.SAP Note 745030 - MDMP - Unicode Interface_Solution Overview.pdf
    3.MDMP_Unicode_Transfer_final.doc from SAP Note 745030
    4.SAP Note 656350 - Master Data Transfer UNICODE to MDMP Systems with ALE.pdf
    5.SAP Note 613389 - ALE SAP system group with Unicode systems (Solution-2)
    My understanding per SAP Notes: (Please correct me if I'm wrong)
    a. For MDMP integration we cannot use the standard ALE configuration; instead we have to use a custom configuration (IDoc collection setting in the partner profile, scheduling of RSEOUT002/RSEOUT00_MDMP, and use of function module IDOC_INBOUND_ASYNCHRONOUS_2).
    b. The RFC destinations should use the proper logon language for correct MDMP IDoc transfer (for sending IDocs with Japanese characters, the logon language should be JP).
    c. If we want to transfer IDocs in more than one language, we need to create multiple partner profiles/RFC destinations, each with a specific logon language.
    Please guide us in integrating these systems if you have done similar integrations; the following are my questions:
    1] Is there any configuration change required at the PI layer?
    2] Do we need to install code pages in the PI Unicode system for all languages used, or is a Unicode system capable of handling all the languages?
    3] Is it necessary to install any SAP add-on package in the R/3 MDMP system in order to support MDMP to Unicode data transfer?
    4] If we want to send MATMAS/CLFMAS IDocs with Thai/Japanese characters from the same system, what are the changes required in the source system?
        (The source may send MATMAS/CLFMAS IDocs with either Thai or Japanese characters, but not both in a single IDoc.)
    5] Can we use regular ALE & partner profile settings for handling multi-byte characters, or do we need to use IDoc collection and the RSEOUT002/RSEOUT00_MDMP report for the transfer?
    6] Are there any restrictions on the IDoc types (MATMAS, CLFMAS etc.) supported in the MDMP-Unicode integration solution?
    7] Is there any best practice document available for this scenario?
    8] Do we need to involve SAP AG for MDMP to Unicode system integrations (as per SAP Note 656350)?
    Thanks and Regards,
    Ananth

    Hi Ananth,
    as you have already mentioned, you need different RFC destinations for each language. So you have to make sure that the IDocs use the right destination according to their content.
    If you have messages from PI to MDMP it is the same: you need different channels with different logon languages as well. You need an identifier in the message that can be used for selecting the correct channel.
    There should not be a restriction on any IDoc type, but it is not possible to post a message with different languages (which require different code pages) in one IDoc.
    For correct conversion from a non-Unicode system to Unicode, the code pages have to be installed in the OS of the PI server.
    Regards
    Stefan

  • Technical upgrade from 4.6C MDMP to ECC 6.0 using CU & UC approach

    Hi SAP Experts,
    We are embarking on a project for a technical upgrade from an SAP R/3 4.6C MDMP system to ECC 6.0 using the CU & UC approach for our Asian operations (comprising India, Japan, Taiwan and Singapore).
    Please share with me the Best Practices you followed in your projects, checklists, DOs, DONTs, typical problems you encountered and the possible safeguards, any other useful information, etc.
    Since very little information is available from Organizations who have used this upgrade successfully, I am raising this in SDN to share your experiences during the Technical upgrade.
    All useful information will be surely awarded with suitable points.
    Looking for your invaluable inputs which will be a BIG help for our project.
    Best regards,
    R.Rajaraman

    Please go through the below-mentioned notes and their related notes:
    Note 928729 - Combined Upgrade & Unicode Conversion (CU&UC) FAQ
    Note 73606 - Supported Languages and Code Pages
    Note 79991 - Multi-Language and Unicode support of SAP applications
    Note 959698 - Twin Upgrade & Unicode Conversion FAQ

  • MDMP to Unicode - What's the big difference?

    Hello All,
    What are the major differences between an MDMP Unicode conversion and a single code page Unicode conversion? We have already had a successful single code page Unicode conversion. We have all the documents and the links to all the documents, so we ask for your professional insight and original thoughts.
    Kindest Regards
    Lawrence

    Hi Lawrence,
    the big difference is building the vocabulary. This is not necessary for a single code page system, because all characters can be converted with the standard code page. In a system with multiple code pages you have to build a vocabulary which is used during migration for tables with character data but without a language or code page field. Because the conversion is done table by table, the code page to be used, which could normally be determined by application logic, is not available to the migration tool. The migration has to provide a workaround, which is implemented by the vocabulary. This is created with transaction SPUMG and with the help of your users. The users have to choose the correct code page for all entries which could not be related to a code page by the transaction. The amount of work depends on the number of code pages you have in your system and the amount of data in it. Check the Unicode migration guides for details.
    Maybe the description is not correct in every detail, but I think it covers your question.
    Regards
    Ralph

  • RFC connection problem in ECC 4.7 and ECC 6.0

    Hello friends,
    I created an RFC connection in SAP 4.7 and used it in a Java program; STFC_CONNECTION was the function module used.
    It all worked fine. Later, when I tried the same thing in ECC 6.0, it failed and throws a COMMUNICATION_FAILURE exception.
    Please guide me on this issue.

    Hi,
    Check the Unicode/non-Unicode status in the "MDMP & Unicode" tab in transaction SM59 while checking the connection.
    If your ECC 6.0 is a Unicode system, then you need to select the "Unicode" radio button in the "MDMP & Unicode" tab.
    Regards,
    Lokeswari.
