Unicode conversion for Czech language
Hi all,
my system is not Unicode and I have to convert it into a Unicode one, because we are planning a roll-out project for our Czech branch. We have an ECC 5.0 system with the English, German, Italian and Spanish languages.
I understand, more or less, what we have to do technically, but I would like to know more about the growth in hardware space requirements.
How much will my system reasonably grow after the Unicode conversion and the Czech language installation? 10%? 20%?
Thanks a lot.
Hi,
Have a look at the links below, which should help answer the question.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10317ed9-1c11-2a10-9693-ec0d9a3bc537
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/589d18d9-0b01-0010-ac8a-8a22852061a2
If the Unicode encoding form is UTF-8, database size growth will be around 10% of the original size.
If it is UTF-16, the database may grow by 60 to 70% of the original size.
Rgds
Radhakrishna D S
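As a rough, editorial sanity check of those growth figures (the word chosen here is an illustration, not from the thread): Czech letters with diacritics take two bytes in UTF-8 while plain ASCII letters take one, whereas UTF-16 takes two bytes for every BMP character.

```java
import java.nio.charset.StandardCharsets;

public class CzechEncodingSize {
    public static void main(String[] args) {
        // A Czech word with four diacritic letters (ž, ť, č, ý) and five ASCII letters.
        String word = "žluťoučký"; // 9 characters
        int utf8  = word.getBytes(StandardCharsets.UTF_8).length;
        int utf16 = word.getBytes(StandardCharsets.UTF_16BE).length;
        // Diacritic letters cost 2 bytes in UTF-8, ASCII letters 1 byte;
        // UTF-16 costs a flat 2 bytes per BMP character.
        System.out.println("UTF-8:  " + utf8 + " bytes");  // 13
        System.out.println("UTF-16: " + utf16 + " bytes"); // 18
    }
}
```

For mostly-ASCII business data the UTF-8 overhead is small (hence the ~10% figure), while UTF-16 doubles every single-byte character (hence the much larger growth).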
Similar Messages
-
Bug in T9 dialer for Czech language
Hi here,
I have had a Z3 Compact for some time, and there is still an issue I reported about two years ago: when using the T9 dialer, the number 8 does not include the character "ů". All the other characters are correct, just not the one I mentioned... I do not believe this is such a big issue to fix.
Firmware: 23.0.A.2.93
I would also love to contact someone who can answer me, or give me contact details for some devs I can speak with about this. I reported this bug about two years ago, back when I used an Xperia S; here is the thread:
https://talk.sonymobile.com/t5/Xperia-S-SL-acro-S/Bug-in-T9-dailer-for-czech-language/m-p/153144#M35...
Regards
Radovan Kepák
23.0.1.A..5.77 <- BUG is still here...
-
Special characters being read from the Unicode file for Greek language
Hi All,
I have a report that uploads a Unicode file and then updates the vendor master data accordingly.
The file contains Greek characters too.
When the file is read in the code, some special characters are added to the vendor number in the first entry. The special characters are not in the file, but they are added only to the first vendor number and not to any other vendor numbers.
The logic used is as follows:
TRY.
    IF unicode IS INITIAL.
      IF codepage IS INITIAL.
*-->    For backward compatibility where this FM might be called from
*       other dependent objects (FMs/dynamic subroutines)
*       which do not have access to user's input w.r.t Unicode parameters
        OPEN DATASET filename FOR INPUT
             IN LEGACY TEXT MODE
             MESSAGE msg
             REPLACEMENT CHARACTER repl_char
             IGNORING CONVERSION ERRORS
             FILTER filter.
      ELSE.
*-->    System in non-Unicode and Unicode environment (Phases I and II)
        OPEN DATASET filename FOR INPUT
             IN LEGACY TEXT MODE CODE PAGE codepage MESSAGE msg
             REPLACEMENT CHARACTER repl_char
             IGNORING CONVERSION ERRORS
             FILTER filter.
      ENDIF.
    ELSE.
*-->  Extract file in Unicode format - Phase III
      OPEN DATASET filename FOR INPUT IN TEXT MODE ENCODING UTF-8
           MESSAGE msg
           FILTER filter.
    ENDIF.
    IF sy-subrc NE 0.
      MESSAGE e001(zuni) WITH filename sy-subrc
              RAISING file_open_error.
    ENDIF.
The Unicode parameters used are: codepage = 8000.
An early reply is most appreciated.
Regards,
Manu.
Please check the SAP notes for Eastern European characters in Unicode systems. Maybe the code below helps you:
DATA:
  ltp_bom      TYPE sychar01,
  ltp_encoding TYPE sychar01,
  ltp_codepage TYPE cpcodepage.
* Processing --------------------------------------------------------- *
TRY.
CALL METHOD cl_abap_file_utilities=>check_utf8
EXPORTING
file_name = itp_filename
max_kb = -1
all_if_7bit_ascii = abap_true
IMPORTING
bom = ltp_bom
encoding = ltp_encoding.
CATCH cx_sy_file_open .
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4
RAISING file_open_error.
CATCH cx_sy_file_authority .
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4
RAISING file_authority_error.
CATCH cx_sy_file_io .
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4
RAISING file_io_error.
ENDTRY.
CASE ltp_encoding.
WHEN cl_abap_file_utilities=>encoding_utf8
OR cl_abap_file_utilities=>encoding_7bit_ascii.
CASE ltp_bom.
WHEN cl_abap_file_utilities=>no_bom.
OPEN DATASET itp_filename FOR INPUT IN TEXT MODE
ENCODING UTF-8.
WHEN cl_abap_file_utilities=>bom_utf8.
OPEN DATASET itp_filename FOR INPUT IN TEXT MODE
ENCODING UTF-8
SKIPPING BYTE-ORDER MARK.
WHEN cl_abap_file_utilities=>bom_utf16_be.
ltp_codepage = '4102'.
OPEN DATASET itp_filename FOR INPUT IN LEGACY BINARY MODE
BIG ENDIAN CODE PAGE ltp_codepage.
WHEN cl_abap_file_utilities=>bom_utf16_le.
ltp_codepage = '4103'.
OPEN DATASET itp_filename FOR INPUT IN LEGACY BINARY MODE
LITTLE ENDIAN CODE PAGE ltp_codepage.
WHEN OTHERS.
OPEN DATASET itp_filename FOR INPUT IN TEXT MODE
ENCODING UTF-8.
ENDCASE.
WHEN OTHERS.
OPEN DATASET itp_filename FOR INPUT IN LEGACY TEXT MODE.
ENDCASE.
Edited by: Nilesh Shete on May 7, 2010 5:29 PM
-
Voice Control in Czech language
Hi. Does anyone know whether any future iOS version for the iPhone will include Voice Control support for the Czech language?
Nobody here can tell you anything about what future iOS versions will include, and Apple almost never gives out such info in advance.
If you want it, tell Apple via the channel they have set up for that:
http://www.apple.com/feedback/iphone.html -
Internationalization for Hebrew language
hi,
I made an application, internationalized to be used with several different languages.
Until now I have internationalized it only for languages that use Latin letters (ISO/IEC 10646-1),
and in this way it is working fine...
Recently I had a request to add the Hebrew language, which does not use Latin characters.
Reading the documentation, I implemented the localization for the Hebrew language in the
following way:
Someone wrote the words in Hebrew characters inside a property file (file_iw_IL.properties);
afterwards I transformed the Hebrew characters into the "\uXXXX" escape form using the
native2ascii tool that comes with the JDK...
But when the application runs for the Hebrew language, it doesn't work...
I would like to know whether the procedure I followed for Hebrew
internationalization is right and, if it is not, what the right procedure is
for internationalizing languages that don't use Latin characters.
thank you in advance for a kind help
regards
tonyMrsangelo
No one answered me, so I am trying again; let me explain my problem better.
I use this class to get strings in different languages:
import java.util.Locale;
import java.util.ResourceBundle;

public class SupplierOfInternationalizedStrings {
    Locale localUsedIt;
    Locale localUsedEn;
    Locale localUsedHe;
    Locale localizedCurrencyFormat;
    Locale localUsedHere;

    public SupplierOfInternationalizedStrings() { // constructor
        localUsedIt = new Locale("it", "IT"); // selects the file belonging to the family
        localUsedEn = new Locale("en", "US"); // selects the file belonging to the family
        localUsedHe = new Locale("iw", "IL"); // legacy Java code for Hebrew is "iw"; must match file_iw_IL.properties
    } // constructor

    void setInternationalizationCountry(String langToUse) {
        if (langToUse.compareToIgnoreCase("Italiano") == 0) {
            localUsedHere = localUsedIt;
            localizedCurrencyFormat = new Locale("it", "IT");
        } else if (langToUse.compareToIgnoreCase("English") == 0) {
            localUsedHere = localUsedEn;
            localizedCurrencyFormat = new Locale("en", "US");
        } else if (langToUse.compareToIgnoreCase("Hebrew") == 0) {
            localUsedHere = localUsedHe;
            localizedCurrencyFormat = new Locale("iw", "IL");
            System.out.println("language set = " + localUsedHere);
        }
    } // setInternationalizationCountry()

    public String getInternationalString(String keyForTheWord) {
        ResourceBundle resourceBund = ResourceBundle.getBundle("properiesFile", // the base name of the .properties file family
                localUsedHere);
        String word = resourceBund.getString(keyForTheWord);
        return word;
    } // getInternationalString()
} // class SupplierOfInternationalizedStrings

Of course I have a property file for iw_IL - Hebrew (Israel), where for each key there is a value written with Hebrew characters.
The Hebrew property file looks like this:
keyForLabel1= ה עברית מילה שתימ \n
keyForLabel2=ספה עברית מילה שתימ

The class SupplierOfInternationalizedStrings works for translation into English and the other languages that use Latin letters,
but when it is set to be used with the Hebrew language (localUsedHere = localUsedHe),
the method getInternationalString() doesn't return Hebrew words.
This didn't worry me, because I read in the documentation that properties files cannot
contain characters other than Latin ones... and I also learned that in this case the letters need to be converted
into Unicode escape format...
I did this by putting the Hebrew words in an .rtf file and, using the native2ascii utility
with the command native2ascii -encoding UTF8 file.rtf textdoc.txt, I got the escaped Unicode format.
Now the property file for the Hebrew language looks like this:
keyForLabel1 = \u00d4 \u00e2\u00d1\u00e8\u00d9\u00ea...
keyForLabel2 = \u00e1\u00e4\u00d4 \u00e2\u00d1.....

At this point I expected the Hebrew translation to come out right... but instead I still cannot get the Hebrew words.
I would like to know why the processing still doesn't work,
and I would also like some help in order to make the program work properly.
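One editorial observation on the escapes shown above: \u00d4, \u00e1 and friends are Latin-1 range code points, not Hebrew ones (Hebrew letters live at U+05D0-U+05EA), which suggests native2ascii read the file's bytes with a Latin-1-like encoding instead of a Hebrew code page. A sketch of that class of mix-up, assuming hypothetically that the source bytes were ISO-8859-8:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String he = "ה"; // HEBREW LETTER HE, U+05D4
        // In the ISO-8859-8 code page this letter is the single byte 0xE4.
        byte[] raw = he.getBytes(Charset.forName("ISO-8859-8"));
        // Decoding those bytes as Latin-1 (the wrong charset) yields U+00E4 ("ä"),
        // i.e. a \u00xx escape of the same kind as in the broken properties file.
        String garbled = new String(raw, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // ä
        // Decoding with the correct charset recovers the Hebrew letter.
        String fixed = new String(raw, Charset.forName("ISO-8859-8"));
        System.out.println(fixed.equals(he)); // true
    }
}
```

The fix along these lines would be to tell native2ascii the real source encoding (for example -encoding ISO-8859-8 or UTF-8, matching whatever the file actually is) and to run it on a plain text file, not an .rtf file, whose markup would be escaped along with the text.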
thank you
regards
tonyMrsangelo
-
UTF8 character set conversion for Chinese language
Hi friends,
I would like some basic explanation of the UTF8 feature: how does it help when converting data from the Chinese language?
I would also like to know which characters UTF8 will not support when converting from Chinese.
Thanks & Regards
Ramya Nomula
Not exactly sure what you are looking for, but on MetaLink there are numerous detailed papers on NLS character sets, conversions, etc.
Bottom line is that some traditional Chinese characters (the more complicated ones, which lie outside the basic plane) require 4 bytes to store in character sets such as UTF-8 and AL32UTF8. Some Middle Eastern character sets also fall into this category.
Do a google search on "utf8 al32utf8 difference", and you will get some good explanations.
e.g., http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/
Recently, one of our clients had a question on the differences between these two character sets since they were in the process of making their application global. In an upcoming whitepaper, we will discuss in detail what it takes (from a RDBMS perspective) to address localization and globalization issues. As far as these two character sets go in Oracle, the only difference between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in AL32UTF8 character set.
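The AL32UTF8-vs-UTF8 storage difference described above can be illustrated with a supplementary character. This Java sketch (an editorial illustration) shows the 4-byte real UTF-8 form and the surrogate pair that Oracle's older "UTF8" charset would store as two 3-byte sequences:

```java
import java.nio.charset.StandardCharsets;

public class SupplementaryDemo {
    public static void main(String[] args) {
        // U+20000 is a CJK ideograph outside the Basic Multilingual Plane.
        String cjk = new String(Character.toChars(0x20000));
        // Real UTF-8 (Oracle AL32UTF8) encodes it as one 4-byte sequence.
        System.out.println(cjk.getBytes(StandardCharsets.UTF_8).length); // 4
        // In UTF-16 (and in Java strings) it is a surrogate pair: 2 chars.
        System.out.println(cjk.length()); // 2
        // Oracle's older "UTF8" charset (CESU-8 style) encodes each of the
        // two surrogates separately as 3 bytes, i.e. 6 bytes per character.
    }
}
```

So for documents heavy in supplementary CJK ideographs, AL32UTF8 is both more compact and standards-conformant, which matches the advice to prefer it for globalized applications.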
You may also consider posting your question on the Globalization Suport forum which pertains more to these types of questions.
Globalization Support -
Czech language support in Acrobat OCR
Hi, when I try to perform OCR after scanning a document written in the Czech language, OCR does not recognise it. Is there any Czech language add-on or plug-in for Acrobat? I am using Acrobat Pro 6.0. Thanks!
Olaf,
before I wrote this posting there was a whole lot of testing here. The situation in short: I created PDFs containing Japanese characters in Illustrator and in InDesign. Here on my PC I have not been able to copy/paste (e.g. to Word) any characters from the Illustrator PDF, which was created using OTF fonts converted to "Type 1 CID/Identity H" fonts. A similar file created in InDesign worked without problems.
A lot of strange things happened. I was, for example, able to copy the characters in question (Illustrator PDF, OTF fonts) from Acrobat Pro on my Mac and paste them into Word (Windows) running in a Parallels VM. I could NOT do this within Windows from A3D to Word.
After sending my test PDFs to the German FrameUser mailing list, I got a response from one person (as noted above). And: after installing the Japanese Reader fonts, I was suddenly able to copy the characters within Windows. Something must have changed. Copying the characters over to the Mac stopped working at that moment, but had worked before. So the Reader font installation seems to do more than just install fonts, although I don't know what else it could be.
You'll get my test files within the next minutes
Thanks for caring :-)
Bernd -
Problem sending the smartform as email for languages other than English
Hi Experts,
I cannot send the smartform as an attachment for other languages, whereas I can send it in English.
The program works fine with print preview, but not when sending the smartform as an attachment
(in languages other than English).
Please find below the code which I used to send the smartform as an attachment.
Please let me know if there is any mistake in the code.
wa_ctrlop-LANGU = nast-spras.
wa_ctrlop-getotf = 'X'.
wa_ctrlop-no_dialog = 'X'.
wa_compop-tdnoprev = 'X'.
CALL FUNCTION lf_fm_name "'/1BCDWB/SF00000197'
EXPORTING
control_parameters = wa_ctrlop
output_options = wa_compop
user_settings = 'X'
is_ekko = l_doc-xekko
is_pekko = l_doc-xpekko
is_nast = l_nast
iv_from_mem = l_from_memory
iv_druvo = iv_druvo
iv_xfz = iv_xfz
IMPORTING
job_output_info = wa_return
TABLES
it_ekpo = l_doc-xekpo[]
it_ekpa = l_doc-xekpa[]
it_pekpo = l_doc-xpekpo[]
it_eket = l_doc-xeket[]
it_tkomv = l_doc-xtkomv[]
it_ekkn = l_doc-xekkn[]
it_ekek = l_doc-xekek[]
it_komk = l_xkomk[]
EXCEPTIONS
formatting_error = 1
internal_error = 2
send_error = 3
user_canceled = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
*Convert the data from OTF to PDF format
it_otf[] = wa_return-otfdata[].
CALL FUNCTION 'CONVERT_OTF'
EXPORTING
format = 'PDF'
max_linewidth = 132
IMPORTING
bin_filesize = l_len_in
bin_file = lp_xcontent
TABLES
otf = it_otf
lines = it_tline
EXCEPTIONS
err_max_linewidth = 1
err_format = 2
err_conv_not_possible = 3
OTHERS = 4.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
TRY.
* ---------- create persistent send request ----------------------
    send_request = cl_bcs=>create_persistent( ).
    len = XSTRLEN( lp_xcontent ).
* Transform to SOLIX table
    lt_solix =
      cl_document_bcs=>xstring_to_solix(
        ip_xstring = lp_xcontent ).
* Create body for the e-mail
    APPEND Text-005 TO l_text.
* Attachment name
    l_ponumber = text-004.
    CONCATENATE l_ponumber l_doc-xekko-ebeln INTO l_ponumber.
* Subject for the e-mail
    l_subject = text-001.
    CONCATENATE l_subject l_doc-xekko-ebeln INTO l_subject.
* Create document for the e-mail
*TRY.
*    CALL METHOD cl_document_bcs=>create_document
*      EXPORTING
*        i_type        = 'RAW'
*        i_subject     = l_subject
*        i_length      = '13'
*        i_language    = nast-spras
*        i_importance  =
*        i_sensitivity =
*        i_text        = l_text
*        i_hex         =
*        i_header      =
*        i_sender      =
*      RECEIVING
*        result        = l_email_object.
*  CATCH cx_document_bcs.
*ENDTRY.
    CALL METHOD cl_document_bcs=>create_document
      EXPORTING
        i_type    = 'RAW'
        i_subject = l_subject
        i_length  = '13'
        i_text    = l_text
      RECEIVING
        result    = l_email_object.
* Create PDF document
    bcs_doc = cl_document_bcs=>create_document(
                i_type     = 'PDF'
                i_subject  = l_ponumber
                i_length   = len
                i_language = nast-spras
                i_hex      = lt_solix ).
* Type casting
    obj_pdf_file ?= bcs_doc.
* Add PDF document as an attachment
    CALL METHOD l_email_object->add_document_as_attachment
      EXPORTING
        im_document = obj_pdf_file.

Hi,
I tried your problem, but I am able to send mail in other languages too. I actually wrote a message saying whether the mail had been sent or not, and I got the success message. I am placing my code here; please go through it and make the relevant modifications.
*& Report ZPPS_SMARTFORM_TO_PDF
REPORT ZPPS_SMARTFORM_TO_PDF.
PARAMETER: p_date LIKE sy-datum.
PARAMETER: p_rea TYPE char255.
DATA: t_otfdata TYPE ssfcrescl,
t_lines LIKE tline OCCURS 0 WITH HEADER LINE,
t_otf TYPE itcoo OCCURS 0 WITH HEADER LINE,
t_RECORD LIKE SOLISTI1 OCCURS 0 WITH HEADER LINE.
* Objects to send mail
DATA:T_OBJPACK LIKE SOPCKLSTI1 OCCURS 0 WITH HEADER LINE,
T_OBJTXT LIKE SOLISTI1 OCCURS 0 WITH HEADER LINE,
T_OBJBIN LIKE SOLISTI1 OCCURS 0 WITH HEADER LINE,
T_RECLIST LIKE SOMLRECI1 OCCURS 0 WITH HEADER LINE.
DATA: w_filesize TYPE i,
w_bin_filesize TYPE i,
wa_ctrlop TYPE ssfctrlop,
wa_outopt TYPE ssfcompop,
WA_BUFFER TYPE STRING, "To convert from 132 to 255
WA_OBJHEAD TYPE SOLI_TAB,
WA_DOC_CHNG TYPE SODOCCHGI1,
W_DATA TYPE SODOCCHGI1.
DATA: form_name TYPE rs38l_fnam,
      V_LINES_TXT TYPE I,
      V_LINES_BIN TYPE I,
      gv_langu TYPE sy-langu VALUE 'DE'. "a DATA name cannot contain '-'
CALL FUNCTION 'SSF_FUNCTION_MODULE_NAME'
  EXPORTING
    FORMNAME = 'ZSR_DEMO1'
    VARIANT = ' '
    DIRECT_CALL = ' '
  IMPORTING
    FM_NAME = form_name
  EXCEPTIONS
    NO_FORM = 1
    NO_FUNCTION_MODULE = 2
    OTHERS = 3.
IF SY-SUBRC <> 0.
  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
wa_ctrlop-LANGU = gv_langu.
wa_ctrlop-getotf = 'X'.
wa_ctrlop-no_dialog = 'X'.
wa_outopt-tdnoprev = 'X'.
CALL FUNCTION form_name
  EXPORTING
*   ARCHIVE_INDEX =
*   ARCHIVE_INDEX_TAB =
*   ARCHIVE_PARAMETERS =
    CONTROL_PARAMETERS = wa_ctrlop
*   MAIL_APPL_OBJ =
*   MAIL_RECIPIENT =
*   MAIL_SENDER =
    OUTPUT_OPTIONS = wa_outopt
    USER_SETTINGS = 'X'
    MYDATE = p_date
    REASON = p_rea
  IMPORTING
*   DOCUMENT_OUTPUT_INFO =
    JOB_OUTPUT_INFO = t_otfdata
*   JOB_OUTPUT_OPTIONS =
  EXCEPTIONS
    FORMATTING_ERROR = 1
    INTERNAL_ERROR = 2
    SEND_ERROR = 3
    USER_CANCELED = 4
    OTHERS = 5.
IF SY-SUBRC <> 0.
  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
t_otf[] = t_otfdata-otfdata[].
CALL FUNCTION 'CONVERT_OTF'
  EXPORTING
    FORMAT = 'PDF'
    MAX_LINEWIDTH = 132
    ARCHIVE_INDEX = ' '
    COPYNUMBER = 0
    ASCII_BIDI_VIS2LOG = ' '
    PDF_DELETE_OTFTAB = ' '
  IMPORTING
    BIN_FILESIZE = w_bin_filesize
*   BIN_FILE =
  TABLES
    OTF = t_otf
    LINES = t_lines
  EXCEPTIONS
    ERR_MAX_LINEWIDTH = 1
    ERR_FORMAT = 2
    ERR_CONV_NOT_POSSIBLE = 3
    ERR_BAD_OTF = 4
    OTHERS = 5.
IF SY-SUBRC <> 0.
  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
loop at t_lines.
TRANSLATE t_lines USING '~'.
CONCATENATE WA_BUFFER T_LINES INTO WA_BUFFER.
ENDLOOP.
TRANSLATE WA_BUFFER USING '~'.
DO.
t_RECORD = WA_BUFFER.
APPEND t_RECORD.
SHIFT WA_BUFFER LEFT BY 255 PLACES.
IF WA_BUFFER IS INITIAL.
EXIT.
ENDIF.
ENDDO.
* Attachment
REFRESH: T_RECLIST,
T_OBJTXT,
T_OBJBIN,
T_OBJPACK.
CLEAR WA_OBJHEAD.
T_OBJBIN[] = T_RECORD[].
* Create message body title and description
T_OBJTXT = 'test with pdf-Attachment!'.
APPEND T_OBJTXT.
DESCRIBE TABLE T_OBJTXT LINES V_LINES_TXT.
READ TABLE T_OBJTXT INDEX V_LINES_TXT.
WA_DOC_CHNG-OBJ_NAME = 'smartform'.
WA_DOC_CHNG-EXPIRY_DAT = SY-DATUM + 10.
WA_DOC_CHNG-OBJ_DESCR = 'smartform'.
WA_DOC_CHNG-SENSITIVTY = 'F'.
WA_DOC_CHNG-DOC_SIZE = V_LINES_TXT * 255.
* Main text
CLEAR T_OBJPACK-TRANSF_BIN.
T_OBJPACK-HEAD_START = 1.
T_OBJPACK-HEAD_NUM = 0.
T_OBJPACK-BODY_START = 1.
T_OBJPACK-BODY_NUM = V_LINES_TXT.
T_OBJPACK-DOC_TYPE = 'RAW'.
APPEND T_OBJPACK.
* Attachment (PDF attachment)
T_OBJPACK-TRANSF_BIN = 'X'.
T_OBJPACK-HEAD_START = 1.
T_OBJPACK-HEAD_NUM = 0.
T_OBJPACK-BODY_START = 1.
DESCRIBE TABLE T_OBJBIN LINES V_LINES_BIN.
READ TABLE T_OBJBIN INDEX V_LINES_BIN.
T_OBJPACK-DOC_SIZE = V_LINES_BIN * 255 .
T_OBJPACK-BODY_NUM = V_LINES_BIN.
T_OBJPACK-DOC_TYPE = 'PDF'.
T_OBJPACK-OBJ_NAME = 'smart'.
T_OBJPACK-OBJ_DESCR = 'test'.
APPEND T_OBJPACK.
CLEAR T_RECLIST.
T_RECLIST-RECEIVER = 'MAIL-ID'.
T_RECLIST-REC_TYPE = 'U'.
APPEND T_RECLIST.
CALL FUNCTION 'SO_NEW_DOCUMENT_ATT_SEND_API1'
EXPORTING
DOCUMENT_DATA = WA_DOC_CHNG
PUT_IN_OUTBOX = 'X'
COMMIT_WORK = 'X'
TABLES
PACKING_LIST = T_OBJPACK
OBJECT_HEADER = WA_OBJHEAD
CONTENTS_BIN = T_OBJBIN
CONTENTS_TXT = T_OBJTXT
RECEIVERS = T_RECLIST
EXCEPTIONS
TOO_MANY_RECEIVERS = 1
DOCUMENT_NOT_SENT = 2
DOCUMENT_TYPE_NOT_EXIST = 3
OPERATION_NO_AUTHORIZATION = 4
PARAMETER_ERROR = 5
X_ERROR = 6
ENQUEUE_ERROR = 7
OTHERS = 8.
IF SY-SUBRC <> 0.
WRITE:/ 'Error When Sending the File', SY-SUBRC.
ELSE.
WRITE:/ 'Mail sent'.
ENDIF.
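For readers unfamiliar with ABAP, the DO loop in the report above (concatenate the 132-character CONVERT_OTF lines, then slice the result into 255-character records for SO_NEW_DOCUMENT_ATT_SEND_API1) can be sketched in Java; the names are made up:

```java
import java.util.ArrayList;
import java.util.List;

public class Repack {
    // Mirrors the ABAP DO loop: concatenate the fixed-width lines produced
    // by CONVERT_OTF, then slice the buffer into records of a new width.
    static List<String> repack(List<String> lines132, int width) {
        StringBuilder buffer = new StringBuilder();
        for (String line : lines132) {
            buffer.append(line);
        }
        List<String> records = new ArrayList<>();
        for (int i = 0; i < buffer.length(); i += width) {
            records.add(buffer.substring(i, Math.min(i + width, buffer.length())));
        }
        return records;
    }

    public static void main(String[] args) {
        // Two 132-character lines -> 264 characters -> one full 255-character
        // record plus one 9-character remainder.
        List<String> lines = List.of("a".repeat(132), "b".repeat(132));
        List<String> out = repack(lines, 255);
        System.out.println(out.size());          // 2
        System.out.println(out.get(1).length()); // 9
    }
}
```

The ABAP version does the slicing with SHIFT ... LEFT BY 255 PLACES on a string buffer; the effect is the same repacking of a 132-wide stream into 255-wide records.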
And I thought of one more solution: you can write
wa_ctrlop-langu = t002-spras. I think that will also help you.
Revert back if you have any questions.
Please reward if helpful.
gupta.pullipudi
-
Well, it looks like support for Asian languages in dynamic
text fields wasn't included in this version either. I have a simple
Flash file embedded in an HTML file with two text boxes. Here's the
code I'm using:
var strJapanese = "あらわせて";
var strChinese = "我想你";
japanese_txt.text = strJapanese;
chinese_txt.text = strChinese;
I currently have these fonts installed in the \Windows\Fonts\
directory:
mingliu.ttc
msgothic.ttc
but all I see is a bunch of squares in the text boxes. I have
verified that my Pocket PC can use these fonts in other programs.
Any ideas?
Thanks,
BackBayChef
I used both of the options, but neither helps. Check the images to understand the problem. I tried Arial Unicode MS as another font, but it does not display the text correctly either.
tell some another option. -
Hi.
This is for ebook conversion purposes. Currently I am checking text accuracy for English (EN) and Latin (EU) books using the Acrobat 11 Pro application, with the source PDF and the converted PDF (the final output of epub/mobi).
Similarly, I am trying to check text accuracy for Chinese-language books using the PDF compare process. When I attempt it, I can see the compare report in PDF format for CN books, but I cannot catch any text errors in it. Please advise whether I need a different version of Acrobat, or whether I need to add any plug-ins for the language or for Chinese fonts.
Could anyone please advise me on my question, so that I can purchase the correct application before going forward?
-
Junk symbols for Asian language characters in PO PDF
Hi,
We have a custom program to generate the PO output, and I have changed it to send the PO output to multiple emails/users using the steps given in the link below.
The issue I'm facing is that when the PDF is generated and emailed, if I open the PDF in the email I see weird symbols for the Asian language characters in the PO data.
Has anyone faced this issue?
This is happening even for the standard PO output/email that is sent out, which indicates that this is something to do with the Windows/Adobe settings.
Any pointers will be highly appreciated.
Thanks.
Hello,
You have to make sure that the spool was created by the correct device type, one which supports
the language. You can change to the cascading device type SWINCF in SPAD; all Unicode characters
can be handled by this device type (see note 812821).
Secondly, go to SCOT -> Settings -> Device Types for Format Conversion; here you must set the
correct device type too. You can also change the device type to SWINCF here for the form language used.
Best regards,
Wen Peng -
Support for Malayalam language
I wish to buy a Mac. Is there any support for the Malayalam language? If so, which Unicode version does it use?
Good news! While reading about the features of the upcoming OS X Lion, I found expanded language support, including Malayalam! The fonts are probably the same as those by nickshanks, but they might be different, since the original fonts were made for iOS devices.
Expanded language support
Twenty new font families for document and web display of text provide support for the most common languages in the Indian subcontinent, including Bengali, Kannada, Malayalam, Oriya, Sinhala, and Telugu. Devanagari, Gujarati, Gurmukhi, Urdu, and Tamil have been expanded. And three new font families support Lao, Khmer, and Myanmar.
http://www.apple.com/macosx/whats-new/features.html -
Specifying alphabetic sort order for other languages.
The alphabetic sort order differs slightly between other languages and English. I rewrote the alphabet in the special text flow of the reference page of the index file and regenerated the index file, but the result is wrong. The special group title characters for the Greek, Hungarian, Czech, Russian etc. languages end up mixed in among the Latin group title characters.
We work in FM 8 to create 3000 pages of technical documentation in 14 languages. The output will be PDF files and a help system. Our main problem now is the sort order of the index group titles. Could you help me solve this problem?
Thank you so much
Eva
Michael,
Thank you so much for your answer. It works great. I have a problem with only one letter of the Czech alphabet: "Ch". This letter should sort after "H". I have edited the sort order like this:
Symbols[\ ];Numerics[0];A;B;C;Č;D;E;F;G;H;Chch;I;J;K;L;M;N;O;P;Q;R;Ř;S;Š;T;U;Ú;V;W;X;Y;Z
<$symbols><$numerics>Aa Bb Cc Čč Dd Ee Ff Gg Hh Chch Ii Jj Kk Ll Mm Nn Oo Pp Qq Rr Řř Ss Šš Tt Uu Úú Vv Ww Xx Zz
The result in the generated index is that the index entries beginning with "Ch" come out below "C", and the group title "Ch" disappears. Take a look at the result in the generated index file to see what I am talking about:
-
UNICODE: more than six languages
Hello,
we have a worldwide HR system with a portal and an ABAP system using ESS/MSS. The system is running ECC 6.0. At the moment six languages are installed, and the system is running Unicode. We have a request to install several additional languages. What would be the impact if we install, for example, six or more languages? Is there a maximum number of languages that one SAP system supports? Does each additional language have a big impact on the speed of importing support packages, or on database growth?
Thanks
Alexander
Hi Alexander,
there is no limit on multiple languages, but SAP has only 36 (or 38 - I'm not sure) languages. We run almost 30 languages in our business systems, so I would say don't worry about your additional languages. You will import a bit more data during upgrades, but most of the time is normally spent in routines other than the main import. So no worry here either.
Your last question is a bit more complex to answer. Database growth depends on your database and on your Unicode encoding. If you run UTF-8 and you add Chinese code pages, then your text columns could be much bigger than before, but usually you should not see a big difference. In our system we did a Unicode migration with 26 languages and 15 code pages, which led to almost no growth.
Regards
Ralph -
Arial Unicode font for Tamil, etc.?
Looking for some style alternatives to the InaiMathi font included in OS X for the Tamil language, I noted at the Gallery of Unicode Fonts that the Arial Unicode font includes Tamil characters, and remembered that that font is now included with OS X (10.5). I checked with Character Palette and indeed the Tamil characters are there, but they aren't recognized as equivalent to those in InaiMathi. And when I try to change some Tamil words in a document from InaiMathi to Arial Unicode, nothing happens.
I don't understand a lot about Unicode, but I know the Arial Unicode font has been around for a while (it was more or less the original effort to create a single Unicode font including all the major alphabetic scripts), and don't understand why it would contain the Tamil character set but not work for that script. It doesn't seem to work for Tibetan either, though it contains that character set as well. In fact, according to Character Palette, it seems about 90% of Arial Unicode is "character variants" that "won't display correctly". What's it for, then? Anyone have any idea?
+I have always thought Windows fonts cannot be used for display. Could you describe how you use Character Palette to do that?+
Well, not "display" in the sense of opening a file using an OpenType font on the Mac and seeing it display properly. What I do in Character Palette with an OpenType font like Sanskrit2003 is scroll through the font to find the elements I want and double-click them to enter them in a document. Of course this workaround is feasible only for the occasional word, not for whole sentences or paragraphs. I'm not a big time user of any of this, so I can manage for the occasional word I want.
+The reason they do this is because a lot of documents and web pages could specify Arial in them and users would get the idea that OS X does not support Devanagari, Tamil, Tibetan, etc unless they switch the font, which is impossible in Safari.+
Yeah, I can see the reason; but wouldn't it be easier in the long run to just enable OpenType fonts so everyone can use the same fonts? And it is annoying that you can't specify fonts in Safari like in Camino.
+I agree, but have doubts this will happen soon. Perhaps more likely is an app like Mellel which does OpenType could be expanded to cover these scripts.+
Several years ago the developer of Mellel assured me that v.2 would support Indic scripts. But it never happened.
+In the meantime, OpenOffice/X11 does work for display I think.+
Yes, OpenType fonts can be used in OpenOffice/X11 (which is no longer being developed, BTW, now that there's a native OS X version). But not AAT fonts.
+This might interest you:+
http://discussions.apple.com/thread.jspa?threadID=1577715&tstart=0
Indeed, though much of it is over my head. I just want to be able to use the same fonts for Indic languages as everyone else, just as I can in Chinese, etc.