Single-byte languages

Hello,
I have a web application that is internationalized to a certain degree, in that it can be easily localized, as long as the language does not use a double-byte character set. For purposes of the product specification/documentation, how would I go about obtaining a complete list of the "single-byte" languages that such an application would support? (i.e., English, Spanish, French, German, etc.) I haven't been able to obtain this list via a simple Google search, as I had originally expected to.
Thanks!

Hello,
I have a web application that is internationalized to
a certain degree, in that it can be easily localized,
as long as the language does not use a double-byte
character set.
Huh? Why would that be a limitation? Neither Java nor browsers are limited in that way.
For purposes of the product
specification/documentation, how would I go about
obtaining a complete list of the "single-byte"
languages that such an application would support?
It doesn't have anything to do with "languages". It has to do with charsets.
It is possible that English alone has several hundred unique charsets. I would suspect that other languages have more than a couple each.
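
To make the charset point concrete, here is a small Java sketch (the charset names are just examples of single-byte charsets the JDK ships with):

import java.nio.charset.Charset;

public class CharsetDemo {
    public static void main(String[] args) {
        String text = "café"; // French text, representable in several single-byte charsets
        for (String name : new String[] {"ISO-8859-1", "windows-1252", "IBM850"}) {
            byte[] bytes = text.getBytes(Charset.forName(name));
            System.out.printf("%-12s -> %d bytes%n", name, bytes.length);
        }
        // All three charsets are single-byte, so each prints 4 bytes, but the byte
        // for 'é' differs: 0xE9 in ISO-8859-1 and windows-1252, 0x82 in IBM850.
    }
}

So for the product documentation, the supportable set is really "languages covered by at least one single-byte charset", not a fixed list of languages.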

Similar Messages

  • Unable to generate a single-byte character when using TO_SINGLE_BYTE

    Hi All,
    Can anyone help me get output from the single-byte query below? When tried, it fails with INVALID NUMBER.
    Step 1:
    select RAWTOHEX('2Z') from DUAL; -- 325A
    Step 2:
    SELECT TO_SINGLE_BYTE(CHR('325A')) FROM DUAL;
    When executed, the above query fails with "ORA-01722: invalid number".
    I tried using VARCHAR2 instead of CHR; it throws the exception below:
    "ORA-00936: missing expression".
    But the same query works fine when only digits are passed:
    SELECT TO_SINGLE_BYTE(CHR('3251')) FROM DUAL;
    Thanks,
    Ravi

    TO_SINGLE_BYTE is used to convert multi-byte characters to single-byte characters. '325A' is not a multi-byte character, so it can't be converted.
    Use HEXTORAW to convert the hex value back to a raw value.
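
    For comparison, a minimal Java sketch of the same round trip (string to hex and back) outside the database; the class name is made up for illustration:

    import java.nio.charset.StandardCharsets;

    public class HexRoundTrip {
        public static void main(String[] args) {
            String original = "2Z";
            // Equivalent of RAWTOHEX: encode to bytes, print as hex
            StringBuilder hex = new StringBuilder();
            for (byte b : original.getBytes(StandardCharsets.US_ASCII)) {
                hex.append(String.format("%02X", b));
            }
            System.out.println(hex); // 325A
            // Equivalent of HEXTORAW plus a charset decode: hex back to text
            byte[] raw = new byte[hex.length() / 2];
            for (int i = 0; i < raw.length; i++) {
                raw[i] = (byte) Integer.parseInt(hex.substring(i * 2, i * 2 + 2), 16);
            }
            System.out.println(new String(raw, StandardCharsets.US_ASCII)); // 2Z
        }
    }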

  • How do I convert a double-byte encoded file to single-byte ASCII?

    Hello,
    I am working with XML files (apparently coded in UTF-8) which are encoded in double-byte characters.
    The problem is the characters for end of line: 00 0D 00 0A.
    This double-byte end of line is causing a problem with a legacy conversion tool (which deals with 0D 0A). The file itself contains no
    accented/international characters, so in principle converting to single-byte should not cause any problems.
    I have tried to convert this file with tools like native2ascii and the conversion tools that are part of Notepad++, but without
    any luck: the "00 0D 00 0A" bytes are still present in the output.
    Can anyone point me to a tool or some code that can convert this file into single-byte?
    Thank you.

    Amiens wrote:
    native2ascii.exe -encoding UTF-16 -reverse INPUT.xml OUTPUT.xml
    gives 00 00 00 0D 00 00 00 0A
    so clearly that is not the required output.
    What you've got there is UTF-16 encoded text that's been converted to UTF-16 again. Get rid of the "-reverse" option and you should see the result you expect.
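
    A minimal Java sketch of that conversion, assuming the input really is UTF-16 (the 00 0D 00 0A line endings suggest big-endian UTF-16); the file names are placeholders from the thread:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Utf16ToAscii {
        public static void main(String[] args) throws IOException {
            byte[] input = Files.readAllBytes(Paths.get("INPUT.xml"));
            // Decode as UTF-16 (defaults to big-endian when there is no BOM) ...
            String content = new String(input, StandardCharsets.UTF_16);
            // ... and re-encode as single-byte ASCII, so \r\n becomes plain 0D 0A.
            Files.write(Paths.get("OUTPUT.xml"), content.getBytes(StandardCharsets.US_ASCII));
        }
    }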

  • To find out whether a character is single-byte or double-byte

    hi,
    Is there any built-in class to find out whether a character is single-byte or double-byte, or any method to do so? If possible, can someone provide a sample code snippet that checks for single-byte and double-byte characters?
    thanx in advance.....

    If you are asking what size the char primitive is, it's 16 bits.
    If you want to know the numerical value of a char, you can cast it to an int and compare it to 255 to see if it fits in 1 byte.
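
    A small sketch along those lines; the helper names are made up:

    import java.nio.charset.Charset;

    public class ByteWidth {
        // True if the char's numeric value fits in one byte.
        static boolean fitsInOneByte(char c) {
            return c <= 255;
        }

        // How many bytes the character occupies in a specific charset.
        static int encodedLength(char c, Charset cs) {
            return String.valueOf(c).getBytes(cs).length;
        }

        public static void main(String[] args) {
            System.out.println(fitsInOneByte('A'));   // true
            System.out.println(fitsInOneByte('あ'));  // false
            System.out.println(encodedLength('あ', Charset.forName("Shift_JIS"))); // 2
        }
    }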

  • Acrobat replaces single-byte character codes with octal strings in content stream, how to prevent?

    I have thousands of PDFs, mainly text; all fonts are subsets, with indexes starting from 1. In the page content stream I see character indexes represented as single bytes: binary 1, 2, 3, etc. But after optimizing the files with Acrobat (I use it to save to version 1.6 with compressed object streams etc.), those single bytes become octal strings like \001, \002, ... . The raw stream size increases by a factor of 1.5, and after a flate filter there is still a 1-2 kB increase for each page. As there are thousands of pages, the total (and totally useless) increase is considerable and doesn't go well with my goal of creating as small a package as possible.
    Can this behavior be prevented (with something like registry setting)?

    what are you calling getBytes() on?
    He's calling getBytes() on his String object.
    @ puneet_k,
    I don't have a solution but note that on Unix-like systems a newline is a single byte 0x0a and on DOS-like systems (Windows) it is a sequence of bytes 0x0d and 0x0a.
    Some text editors handle both.
    Actually, new String("\n").getBytes() equals new byte[] { 0x0a } on both systems; what differs is the platform line separator, available as System.getProperty("line.separator"): "\n" on Unix and "\r\n" on DOS/Windows.
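
    A quick check of that point (a sketch; the comments show typical output):

    public class NewlineBytes {
        public static void main(String[] args) {
            byte[] b = "\n".getBytes();
            System.out.println(b.length); // 1 on both platforms: just 0x0a
            String sep = System.getProperty("line.separator");
            System.out.println(sep.length()); // 2 on Windows ("\r\n"), 1 on Unix ("\n")
        }
    }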

  • From English char (single byte) to double byte (Japanese char)

    Hello EveryOne !!!!
    I need Help !!
    I am new to Java. I got an assignment where I need to check a String: if the string contains any non-Japanese characters (a~z, A~Z, 0~9), they should be replaced with double-byte (Japanese) characters...
    I am using Java 1.2 ..
    Please guide me ...
    thanks and regards
    Maruti Chavan

    hello ..
    As you all asked for the detailed requirement, here I am pasting C code where 'a' is passed as the input character; after processing it gives me the double-byte Japanese "a". Using this I am able to convert alphanumerics from single-byte to double-byte (Japanese).
    I want the same program in Java, so please guide me.
    #include <stdio.h>
    #include <string.h>

    int main( int argc, char *argv[] )
    {
        char c[2];
        char d[3];
        strcpy( c, "a" );      /* 'a' is the input char */
        d[0] = 0xa3;           /* EUC-JP lead byte for the full-width ASCII row */
        d[1] = c[0] + 0x80;
        d[2] = '\0';           /* terminate the converted string */
        printf( ":%s:\n", c ); /* original single-byte char */
        printf( ":%s:\n", d ); /* converted double-byte char */
        return 0;
    }
    please ..
    thanks and regards
    Maruti Chavan
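
    A rough Java take on that C snippet. The C code builds the EUC-JP byte sequence by hand (the 0xA3 lead byte selects the full-width ASCII row); in Java it is simpler to work in Unicode, where the full-width (zenkaku) forms of '!' through '~' sit at a fixed offset of 0xFEE0 from ASCII. This is only a sketch (the class name is made up), written with StringBuffer and an indexed loop so it also compiles on the old Java version mentioned above:

    public class ToFullWidth {
        static String toFullWidth(String s) {
            StringBuffer sb = new StringBuffer(s.length());
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c >= '!' && c <= '~') {
                    sb.append((char) (c + 0xFEE0)); // 'a' (U+0061) -> 'ａ' (U+FF41)
                } else if (c == ' ') {
                    sb.append('\u3000'); // full-width (ideographic) space
                } else {
                    sb.append(c);
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(toFullWidth("a1B")); // prints ａ１Ｂ
        }
    }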

  • Handling multi-byte languages in Web Service

    Hi,
    I am calling a web service and it's working fine with the English language, i.e. the web service returns correct parameters with English.
    But it returns junk characters when I call the web service with multi-byte languages like Japanese, Russian, etc.
    Generally, while configuring a web service or calling one through a proxy, it asks for a user ID/password but not a logon language (unlike when you log in to SAP, where it asks for a logon language). So I am thinking that since there is no option to enter a logon language, it takes a default language, i.e. English, and when I pass Japanese it returns junk values.
    Can anyone please help me with this? How do I handle a multi-byte web service call? I am using ECC 5.0.
    Thanks & Regards,
    Pavan.

    I appreciate your thought, but our web service must be able to handle multiple languages, not only Japanese. Users might call the web service in any language they prefer. If I change my default to Japanese, I will have a problem when users call the web service in Russian.
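
    One thing worth ruling out on the calling side: "junk characters" are very often correct bytes decoded with the wrong charset. A hedged Java sketch (the URL is a placeholder) of forcing UTF-8 when reading a raw response instead of relying on the platform default:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.nio.charset.StandardCharsets;

    public class Utf8Response {
        public static void main(String[] args) throws IOException {
            URLConnection conn = new URL("http://example.com/service").openConnection();
            // Decode the body explicitly as UTF-8, not the platform default charset.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }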

  • Does InDesign CS3's Data Merge list not support double-byte languages?

    Please help?
    Does InDesign CS3's Data Merge list not support double-byte languages? But InDesign CS2 works well with the Chinese language!
    Thanks
    David

    I just did a test, in IDCS3 with a mixed English/Traditional Chinese/Simplified Chinese/Korean tab-delimited text file, and after some coaxing, it worked like a charm.
    1) I did it with a Unicode-encoded tab-delimited text file. When I went to "Select Data Source," I checked the "import options" box so that I could specify that the text file was Unicode. (What encoding is your data merge source file?)
    2) When I hit Preview, I realized that I'd set the generic form letter in a font that had no Korean support, so I had to select an appropriate font with Korean glyphs. (Are you using a font in ID that has Chinese support? I assume so.)
    So, yes, Data Merge works with Chinese characters. I didn't test it with csv, but I will if you're stuck using it. If you're starting with an Excel file, it's not hard to get it to spit out a tab-delimited Unicode text file.

  • Oracle Multi-Byte vs Single-Byte

    Hi,
    We have to add Japanese to our application. I have successfully added Japanese data to our single-byte database,
    so why should we use a multi-byte DB?
    What is the gain of using a multi-byte DB vs a single-byte one?
    Does interMedia work with Japanese in single-byte?
    Is UTF8 the best way to have an international DB?
    We will have to add a lot of other charsets in the future.
    Thanks

    so why should we use a multi-byte DB?
    What is the gain of using a multi-byte DB vs a single-byte one?
    What you are doing is storing invalid multibyte characters in a single-byte database, so each double-byte Japanese character is being treated as 2 separate single-byte characters. You are using an unsupported but common garbage-in, garbage-out approach; in that sense you are using Oracle as a garbage container. :)
    Let's look at some of the issues that you are going to have:
    All SQL functions are based on the properties of the single-byte database character set WE8ISO8859P1. So LENGTH(), SUBSTR(), INSTR(), UPPER(), NLS_UPPER, etc. will yield incorrect results. For example, a column with one Japanese character and one ASCII character will return a length of 3 characters rather than 2 characters. And if you want to locate a specific character in a mixed ASCII and Japanese string using SUBSTR(), it will be very difficult, because to Oracle the string consists entirely of single-byte characters; it will not skip 2 bytes for a Japanese character. Even if you don't have mixed strings, you will need to write one routine for handling ASCII-only strings and another for Japanese strings.
    Invalid data conversion: if you need to talk to another DB, say over a dblink, all character conversion will be based on the mapping from the single-byte character set to the target database character set, so the receiver will lose all the source Japanese characters and will get 2 single-byte characters for each Japanese char instead.
    Export and import will have identical problems: character set conversion is performed during these operations, so all Japanese characters will be lost. This also means that you cannot load correctly encoded Japanese data into your current single-byte DB using IMPORT or SQL*Loader without data corruption...
    Does interMedia work with Japanese in single-byte? No.
    Is UTF8 the best way to have an international DB? Yes.
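
    The LENGTH() effect described above is easy to reproduce outside the database; a small Java sketch (EUC-JP chosen only for illustration) of why one Japanese character counts as two in a byte-oriented single-byte charset:

    import java.nio.charset.Charset;

    public class ByteVsChar {
        public static void main(String[] args) {
            String s = "あA"; // one Japanese character plus one ASCII character
            System.out.println(s.length());                                   // 2 characters
            System.out.println(s.getBytes(Charset.forName("EUC-JP")).length); // 3 bytes
            // A single-byte database charset sees those 3 bytes as 3 separate
            // "characters", which is exactly the LENGTH() = 3 result above.
        }
    }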

  • Faster way to migrate from Single byte to Multi byte

    Hello,
    We are in the process of migrating from a 9i single-byte DB to a 10g multi-byte DB. The size of our DB is roughly 125 GB. We have fixed everything in the source database (9i) in terms of seamlessly migrating from single-byte to multi-byte. The only issue is the migration window: currently we are doing an export/import, since there is a character set migration involved, and it's taking about 20+ hours to do the import into 10g. Management wants to cut this down to less than 10 hours, if that's possible. I know the duration of the import depends on many factors like the system/OS configuration, SAN, etc., but I wanted to know what, in theory, is considered the fastest method of migrating a database from single-byte to multi-byte.
    Has anybody here gone through this before?
    Thanks,
    Shaji

    If the percentage of user tables containing some convertible data (I am assuming you will not have any truncation or lossy data) is low, you can export only those tables, truncate them, and rescan the database. The scan should report no convertible data, except some CLOBs in the Data Dictionary. Such a database can be migrated to AL32UTF8 using csalter.plb. After the migration, you import only the previously exported subset of tables.
    Note: for this process to work, no convertible VARCHAR2, CHAR, or LONG data can be present in the Data Dictionary.
    The process should be refined by dropping and recreating indexes on the exported tables, as recreating an index is faster than updating it during import. You should also disable triggers so that they do not interfere with the migration (for example, they should not update any "last_updated" timestamp columns).
    If the number and size of affected tables is low compared to the overall size of the database, the time saved may be significant.
    There may also be tables that require an even more sophisticated approach. Let's say you have a multi-gigabyte table that stores pictures or documents in a BLOB column. The table also has a single text column that keeps some non-ASCII descriptions of the stored entities. Exporting/truncating/importing such a table may still be very expensive. A possible optimization is to offload the description column to an auxiliary table (together with ROWIDs), update the original column to NULL, export the auxiliary table, drop it, rescan the database, migrate with csalter.plb, re-import the auxiliary table, and restore the original column. If pictures alone occupy, for example, 30% of the whole database, such an approach should yield significant time savings.
    -- Sergiusz

  • How to get a decimal number from three single byte numbers

    I'm in a difficult situation. I have a motor which reports the number of steps moved as three numbers of one byte each. I'm attaching the program with this message. Here the outputs of the commands at index 2-4 provide the number of steps moved. But these are single-byte numbers (<256). So how can I get the corresponding decimal number from these three single-byte numbers? Please reply.
    Attachments:
    stop2.vi 11 KB

    Without knowing what motor you are using, I'm guessing that you probably need to use the Join Numbers function to combine the bytes into a U32. The other option is to use the Type Cast function.
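
    If the motor's byte order turns out to be simple big-endian, the same combination can be written by shifting and OR-ing; a sketch in Java (the byte order is an assumption, so check the motor's manual):

    public class JoinBytes {
        // Combine three single-byte values into one step count.
        // Assumes b2 is the most significant byte; swap the shifts for little-endian.
        static int join(int b2, int b1, int b0) {
            return ((b2 & 0xFF) << 16) | ((b1 & 0xFF) << 8) | (b0 & 0xFF);
        }

        public static void main(String[] args) {
            System.out.println(join(0x01, 0x02, 0x03)); // 66051, i.e. 0x010203
        }
    }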

  • UTF8 - Configuring two or more double-byte languages in the Same DB

    I would like to ask.
    a) Can the same DB be configured to support more than one double-byte language at the same time? For example, I want the same set of tables to accept Simplified Chinese, Traditional Chinese, Korean and Japanese.
    b) If yes, then what would the code pages of the WINNT machines look like? I find that changing the Windows registry entry for various code pages allows us to choose only the latest entry. Will the OS allow more than one code page to co-exist? I don't know if my question is correct or not. Pardon.
    Please help me in getting answers to these questions.
    Regards,
    Suganyaa

    Hi,
    Thanks for the reply. I just wanted to confirm the details. As for the answer to the second question, as far as I have gathered:
    "The OS can support only one code page at a time, not all at once. When I modify the code page in the Windows registry to support Simplified Chinese, I cannot use the Korean langpack, and so on. But I think this should not be a problem in developing the application, as the multilingual users will all have their OS code pages set in accordance with their language. The application (in Java) should be able to accept the double bytes and then store them in Oracle 8i. And that is also not a problem."
    I am a new learner...sorry if I confused you.
    Regards,
    Suganyaa
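
    On question a), the point of a Unicode character set like UTF8 is exactly that one column can hold all of these scripts at once, with no per-language code page switching; a small Java illustration:

    import java.nio.charset.StandardCharsets;

    public class MixedCjk {
        public static void main(String[] args) {
            // Simplified Chinese, Traditional Chinese, Korean, and Japanese in one string.
            String mixed = "简体 繁體 한국어 日本語";
            byte[] utf8 = mixed.getBytes(StandardCharsets.UTF_8);
            System.out.println(mixed.length() + " chars, " + utf8.length + " bytes in UTF-8");
        }
    }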

  • Messages in Double Byte Language

    I had created a message in English and then, by logging into the required double-byte languages, translated the text into each double-byte language. The message is a warning message. Now the user wanted a pop-up to be displayed for the warning message, so in the options settings I activated "Dialog Box at Warning Message". The "Activate Multibyte Functions" support setting is also enabled. Now the issue is that when the message is displayed, the pop-up shows the correct text for all languages except the double-byte ones; for double-byte it displays "???????????". If I disable "Activate Multibyte Functions", garbage values are displayed for the double-byte languages. Can someone tell me how to solve this?

    Hi!
    This does not sound like a programming issue. First check/update the SAP GUI to the newest version; if this does not help, ask OSS. I guess in general your double-byte characters are displayed correctly; otherwise I would check the language installation of SAP, and maybe also the font installation on the GUI computer.
    Regards,
    Christian

  • Is there a single, multi-language translation application available?

    I am looking for a single application that allows you to translate between the major European languages (English, French, German, Italian, Spanish, Portuguese).
    The best example I can give is the Franklin Translator I had on my Palm TREO. It was one app that allowed you to enter a word in English and translate it into French, German, Italian, Spanish, and Portuguese, or go between any of the languages (i.e. French to German, etc.). It was a very useful application for people who travel in Europe frequently. I supplemented it with SmallTalk, which contained useful phrases for travel, hotel, dining, etc., and I was set for survival in Europe.
    I have searched the App Store and the Franklin site, and paid for and downloaded numerous applications, but none seem to meet the criteria. After purchasing the Jourist Phrasebook, I saw that it allowed me to select a language to translate from, so I was hoping that as you added Jourist languages, they would be accessible from a single interface, but they are all individual applications. I also looked at the Ultralingua product.
    The web-based tools are not effective alternatives as you'll blow through your international data plan in one day when traveling.
    Any help, insights, or experience is appreciated!

    I just opened up Free Translator and it had English, French, German, Italian, Spanish and Portuguese.
    http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=293855167&mt=8
    Sorry if you already tried it

  • Double-byte language (i.e. Japanese or Chinese) text in a non-Unicode system

    Hi,
    I have translated text into Chinese and Japanese in a Unicode system and want to move it into a non-Unicode system. Would Chinese/Japanese characters display correctly in a non-Unicode system when moved from a Unicode system? I am doing the translation in an ECC 6.0 or SAP 4.7 Unicode system and moving it to a SAP 4.7 non-Unicode system.
    Thanks
    Balakrishna

    Hi Balakrishna,
    in general, transport between Unicode and non-Unicode systems is supported.
    However, there are restrictions, which are outlined in SAP note 638357.
    In your case it is a prerequisite that the objects to be transported are language-dependent (the text lang. flag is set on the language key; see SAP note 480671) and the languages are properly set up in the target systems.
    For double-byte data there is a specific issue when transferring data from Unicode to non-Unicode:
    In a non-Unicode system, one double-byte character needs two bytes; therefore, in a 10-char field, only 5 double-byte chars fit. In a Unicode system, you can insert 10 double-byte chars into a 10-char field. Hence there is a risk of truncating characters in Unicode --> non-Unicode communication.
    Please also have a look at SAP notes 1322715 and 745030.
    Best regards,
    Nils
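
    The 10-character example above is easy to see in bytes; a short Java sketch of how a 10-char Japanese text overflows a 10-byte non-Unicode field (the charset is chosen only for illustration):

    import java.nio.charset.Charset;

    public class TruncationRisk {
        public static void main(String[] args) {
            String tenChars = "日本語日本語日本語日"; // 10 characters
            byte[] encoded = tenChars.getBytes(Charset.forName("Shift_JIS"));
            System.out.println(tenChars.length()); // 10 chars: fits a 10-char Unicode field
            System.out.println(encoded.length);    // 20 bytes: overflows a 10-byte field
            // Only 5 such characters fit in the 10-byte field, hence the truncation risk.
        }
    }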
