Multilingual characters

Hi friends,
How can I make the text box in my XSL stylesheet accept multilingual characters such as German, Japanese, and Chinese, and then store them in an Oracle database?
Please point me to any site that gives details about this.
thanks
nadia

Presumably you mean, "How can the HTML text box that I'm generating with an XSLT stylesheet accept foreign characters?"
Just make sure you deliver the page with the right encoding. The browser should pick up the declared encoding and allow you to enter the characters you want; at least IE5 does, but I'm not sure about Netscape.
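For the Oracle side of the question, the database character set also has to be able to hold those scripts. A quick check you can run in SQL*Plus or any SQL client (a sketch; nothing here is specific to your schema):
-- a Unicode character set such as AL32UTF8 (or UTF8) is needed to store
-- German, Japanese and Chinese text in ordinary VARCHAR2 columns
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');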

Similar Messages

  • Displaying/reading multilingual characters

    (1) How do you display universal characters in their native character sets? I have a multilingual file containing
    Chinese, Japanese, Korean and English characters (there may be different/more languages). This file is encoded in Unicode for development purposes. I need to display these in their proper native glyphs. The problem I encountered was choosing the right font.properties: for example, when I use font.properties.ko, Korean fonts are displayed properly but Chinese/Japanese are not, and when I use font.properties.ja, Korean characters are displayed as black boxes.
    (2) Given an InputStream containing multilingual characters such as CJK and English, is there a way to find out the locale of the character set? I need this so that numeric information is displayed in the proper locale.
    I'm using JDK 1.3 on Windows 2000 SP2.
    Any help is greatly appreciated.
    thanks

    (1) Since the data is already in Unicode, just get a font that can display all Unicode characters. Arial Unicode MS is one such font.
    (2) Not without some processing.

  • Textfield which accepts multilingual characters with embedded fonts

    Hi all,
    I have a textfield where I can paste text in different languages using device fonts. Now I want to input multilingual text with embedded fonts.
    I have embedded a set of fonts, but with them I can input only English characters, not other languages.
    Is there a way to input characters from any language with embedded fonts?
    Thank you.

    Thanks for your reply.
    I have embedded font as follows,
    [Embed(source="myfont.ttf", fontName='myfont', mimeType='application/x-font',embedAsCFF="false")]
    With this, I was able to apply the font to English text.
    I tried to paste " मैं कौन हूँ? " into a textfield, but the text मैं कौन हूँ? is not shown.
    Setting the text programmatically did not work at all either:
    textField.text = "मैं कौन हूँ?";
    If I do this with device fonts everything works fine, but not with embedded fonts.
    Is there anything wrong with the way I am embedding the font?

  • How to encrypt multilingual characters?

    Hi,
    I used DBMS_OBFUSCATION_TOOLKIT.DESENCRYPT to encrypt characters without problems in PL/SQL. However, when I attempted to encrypt strings containing Chinese characters (combinations of ABCs and Chinese characters), it gives me the following errors:
    ORA-28232: invalid input length for obfuscation toolkit
    ORA-06512: at "SYS.DBMS_OBFUSCATION_TOOLKIT_FFI", line 21
    ORA-06512: at "SYS.DBMS_OBFUSCATION_TOOLKIT", line 99
    Please advise whether it is possible to encrypt multilingual characters and, if so, how, and whether it differs from the normal way of encrypting. My database is 10g with UTF8, and the data in question is retrieved from the database.
    Thank you in advance.
    Regards,
    wongly

    Hi,
    This is because the input data to DESENCRYPT must be a multiple of 8 bytes. If the input contains Chinese characters, then 8 characters will be longer than 8 bytes. You must use the LENGTHB and SUBSTRB functions to ensure that the input is exactly a multiple of 8 bytes; a sketch follows below.
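    A minimal sketch of that approach (the key value, the sample string, and the padding-with-spaces choice are all illustrative; only the 8-byte rule comes from the toolkit itself):
    declare
      l_key   varchar2(8)    := 'abcdefgh';                      -- illustrative 8-byte DES key
      l_input varchar2(2000) := 'ABC' || unistr('\4E2D\6587');   -- mixed ASCII/Chinese sample
      l_data  varchar2(2000) := l_input;
      l_enc   varchar2(2000);
    begin
      -- pad with single-byte spaces until LENGTHB is a multiple of 8 bytes
      while mod(lengthb(l_data), 8) != 0 loop
        l_data := l_data || ' ';
      end loop;
      dbms_obfuscation_toolkit.desencrypt(input_string     => l_data,
                                          key_string       => l_key,
                                          encrypted_string => l_enc);
    end;
    /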

  • JDOM and multilingual XML files

    The J2EE application we are developing has to be multilingual, and the user has to be able to edit the text. To do this, we created custom tags that use JDOM to read an XML file that is kept as a String in the database. Everything works fine except when I try inserting special characters (accented Western European characters) into the XML string for other languages and then try to parse the XML string into an org.jdom.Document. I did a Google search and found on http://www.jdom.org/pipermail/jdom-interest/2003-April/011870.html that JDOM "reject[s] all characters beyond the basic multilingual plane." I cannot find anywhere whether using DOM or SAX2 will allow me to use multilingual characters. Does anyone know how I can parse the XML string into an XML document while still being able to use special characters? Thanks in advance.

    Those characters are firmly near the beginning of the basic multilingual plane. Your problem is more ordinary than that: you have to create your XML file in the same encoding you declare it to be in. If your XML prologue looks like this: <?xml version="1.0" ?> then you have implicitly declared it to be encoded in UTF-8 (or perhaps UTF-16). In that case you must tell your text editor to save it in the UTF-8 encoding. But if your text editor is one of those that doesn't understand encodings, then you need to specify the encoding it does use. If it's ISO-8859-1, which is the Western European Latin script, then change the prologue to look like this: <?xml version="1.0" encoding="ISO-8859-1" ?> And read this: http://skew.org/xml/tutorial/

  • Converting Unicode to the UTF-8 character set through Oracle Forms (10g)

    Hi,
    I am working on Oracle Forms (10g), where I need to load files containing Unicode (multilingual) characters into the database,
    but while loading the file, junk characters are getting inserted into the database tables.
    While reading the file through Forms, I am using the utl_file.fopen_nchar and utl_file.get_line_nchar functions to read the Unicode characters.
    The application server and database server character sets are both set to American UTF8.
    In fact, when I change the text file's character set to UTF-8 through an editor (Notepad++, etc.), the data is inserted into the database properly (at least for English characters), but not with other Unicode characters.
    Any guidance in this regard is highly appreciated.
    Thank you in advance
    Sanu

    hi
    Please check out the following link; a sketch of the NCHAR file-reading approach is below.
    http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
    sarah
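    For the UTL_FILE approach described in the question, here is a minimal server-side sketch (the directory object, file name, and target table are illustrative assumptions; FOPEN_NCHAR expects the file itself to be UTF-8 encoded):
    declare
      l_file utl_file.file_type;
      l_line nvarchar2(2000);
    begin
      -- DATA_DIR is an Oracle directory object pointing at the folder holding the UTF-8 file
      l_file := utl_file.fopen_nchar('DATA_DIR', 'multilingual.txt', 'r', 32767);
      loop
        begin
          utl_file.get_line_nchar(l_file, l_line);
        exception
          when no_data_found then exit;   -- end of file
        end;
        -- MULTILANG_TAB.TEXT_COL is assumed to be NVARCHAR2 (or the database character set is AL32UTF8)
        insert into multilang_tab (text_col) values (l_line);
      end loop;
      utl_file.fclose(l_file);
      commit;
    end;
    /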

  • NLS_LENGTH_SEMANTICS parameter...

    Hi ,
    I want to insert multilingual characters into an Oracle XE 10g DB. To do that I connected as SYS and issued the command
    insert into dept(deptno , dname)
       values(90,'ΛΟΓΙΣΤΗΡΙΟ')
    ORA-12899: value too large for column "SCOTT"."DEPT"."DNAME" (actual: 20, maximum: 14)
    SQL> ALTER SYSTEM SET NLS_LENGTH_SEMANTICS='CHAR' SCOPE=BOTH;
    System altered
    SQL>
    SQL> insert into dept(deptno , dname)
      2     values(90,'ΛΟΓΙΣΤΗΡΙΟ')
      3  /
    insert into dept(deptno , dname)
       values(90,'ΛΟΓΙΣΤΗΡΙΟ')
    ORA-12899: value too large for column "SCOTT"."DEPT"."DNAME" (actual: 20, maximum: 14)
    SQL> SHOW PARAMETER NLS_LENGTH_SEMANTICS;
    NAME                                 TYPE        VALUE
    nls_length_semantics                 string      CHAR
    Why does this problem persist?
    Thanks...
    Sim

    Why does this problem persist? Because your table was created with BYTE semantics, i.e. the length of the column is 14 bytes, but the text you'd like to insert ("logistics"? ;)) is 10 characters yet 20 bytes, so it does not fit. As the previous poster suggested, modify the column to 14 CHAR (as sketched below). Your ALTER SYSTEM will take care of all your subsequent DDL, but it does not modify already existing tables, nor, by the way, compiled program units.
    Gints Plivna
    http://www.gplivna.eu
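    A small example of the column change described above (adjust the owner and length to your schema):
    -- with character-length semantics, 14 CHAR holds 14 Greek letters
    -- even though each of them takes 2 bytes in UTF8
    ALTER TABLE scott.dept MODIFY (dname VARCHAR2(14 CHAR));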

  • Which SQL editor supports multilanguage insertion?

    Hi,
    Which editor supports multilanguage insertion into an Oracle database? I have tried Toad; it supports neither insertion nor display.
    SQL Developer will display multilingual data but does not support insertion; on insertion it shows junk characters.
    Please let me know which editor would be useful for inserting multilingual characters.
    thanks
    Jino

    If you set NLS_LANG and the database character set correctly, almost all SQL utilities support multiple languages, including SQL*Plus.
    Check the
    [NLS_LANG FAQ|http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm]
    C:\Documents and Settings\>sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on Mon Sep 22 22:07:28 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL>
    C:\Documents and Settings\>set NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK
    C:\Documents and Settings\>sqlplus / as sysdba
    SQL*Plus: Release 10.2.0.1.0 - Production on 星期一 9月 22 22:08:21 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL>

  • UTF8 - Urgent

    Hi All,
    My database character set is WE8ISO8859P1.
    Language : American
    Territory : America.
    Now, I tried changing the client NLS_LANG in the registry (we use
    Windows 95); I set it to French_France.UTF8.
    Then I issued the query
    select userenv('language') from dual;
    FRENCH_FRANCE.WE8ISO8859P1
    I got the above output. How do I set my client character set to UTF8?
    One of the report modules needs to support multilingual data,
    so I want to store and retrieve data in other languages.
    How do I achieve this?
    Thanks in advance.
    Vijay.

    Hi Vijay,
    To support multilingual characters you need your data in UTF8
    (it depends on the languages you wish to support, of course).
    Check out the FAQ at
    http://technet.oracle.com/products/oracle8i/htdocs/faq_combined.htm
    Now, I tried changing the client NLS_LANG in the registry (we use
    Windows 95); I set it to French_France.UTF8.
    Then I issued the query
    select userenv('language') from dual;
    FRENCH_FRANCE.WE8ISO8859P1
    This may be because userenv('language') reports the database character set,
    which stays WE8ISO8859P1 regardless of the client NLS_LANG, so the query cannot
    confirm your client setting; there may also be more than one registry
    entry/environment variable to modify for NLS_LANG.
    Regards,
    Shirish

  • Group by languages in a multilingual table in SQL Server

    Hi ,
    I have a multilingual table in SQL Server 2008; the table has two columns, ID and Text.
    The Text column holds English, Chinese, and other-language texts.
    I need a result set grouped by language with the count of IDs, like:
    Language   count_of_id
    English          25
    Chinese         10
    other languages 3
    Is this possible? Can you please help me?

    Good day SqlServer_learn
    I have a saying I always use: anything is possible in development if you have the appropriate resources (changing the existing solution can be part of the way to solve it...).
    Regarding your question, there is a simple solution, but in most cases I highly recommend changing the table structure and adding a column for the culture of the text (like en-us for English,
    he-il for Hebrew, and so on).
    Since you are using a Unicode column such as nvarchar to store multilingual text,
    we can get the language from the text itself, as long as it includes characters from that language (text which includes only numbers, for example, we consider as the default language, since it is the same in all languages).
    Step 1: First you need an accessory table (named, say, UnicodeMapping) which includes all Unicode characters and the number of each character in Unicode (a Unicode mapping table). You could use ranges instead, but
    queries will be faster if you actually store all the characters and not just ranges.
    For example, this table (I added English and Hebrew; do the same with all the languages that you need):
    create table UnicodeMapping (Charecter nchar(1), UnicodeNum int, CultureN NVARCHAR(100), CollateN NVARCHAR(100))
    GO
    -- fill the table with main Hebrew characters, using a number table
    insert UnicodeMapping (Charecter, UnicodeNum, CultureN, CollateN)
    select NCHAR(n), n, 'He-IL', 'Hebrew_CI_AS'
    from _ArielyAccessoriesDatabase.dbo.ArielyNumbers
    where
    n between 1488 and 1514 -- Hebrew
    or n between 64304 and 64330 -- Hebrew
    GO
    -- fill the table with main English characters, using a number table
    insert UnicodeMapping (Charecter, UnicodeNum, CultureN, CollateN)
    select NCHAR(n), n, 'En-US', 'SQL_Latin1_General_CP1_CI_AS'
    from _ArielyAccessoriesDatabase.dbo.ArielyNumbers
    where
    n between 97 and 122 -- En
    or n between 65 and 90 -- En
    GO
    -- Do the same with all the languages that you need, and all the UNICODE ranges for those languages
    select * from UnicodeMapping
    GO
    Step 2: You can create a function which gets an NVARCHAR as input and returns the culture as output, or work directly on the data by joining your table to this one (a sketch of such a function follows at the end of this post).
    Assuming that each row is in a specific language, in order to recognize the language you just need to check one character from the original string (a text character, not, say, a number, which might appear in any language) and examine which language that
    single character belongs to, using our UnicodeMapping table.
    You can check this thread to see an implementation of this idea: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ccc1d16f-926f-46c8-8579-b2eecf661e7c/sort-miultiple-language-data-in-sql-serevr-by-collation?forum=transactsql
    * Don't forget to add to the table all the characters such as digits, and assign them to your default language.
    * In the link above I just select the first character using LEFT, but if the text starts with a number, for example, then you will get the default language. If you are sure that the text must start with a real language character, then that is the best solution; if not, then
    it is better to use a user-defined function which finds the first character that is not in the default language. If the function does not find any character in a non-default language, it returns the default language; otherwise it looks the language up
    in UnicodeMapping and returns it.
      Ronen Ariely
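    A rough sketch of the Step 2 function described above (names are illustrative; it reuses the numbers table from the script above and returns the culture of the first character found in UnicodeMapping with a non-default culture):
    CREATE FUNCTION dbo.GetTextCulture (@Txt NVARCHAR(4000))
    RETURNS NVARCHAR(100)
    AS
    BEGIN
        DECLARE @Culture NVARCHAR(100);
        -- walk the string position by position, look each character up in
        -- UnicodeMapping, and keep the first one whose culture is not the default
        SELECT TOP (1) @Culture = M.CultureN
        FROM _ArielyAccessoriesDatabase.dbo.ArielyNumbers AS Nums
        JOIN UnicodeMapping AS M
          ON M.Charecter = SUBSTRING(@Txt, Nums.n, 1)
        WHERE Nums.n BETWEEN 1 AND LEN(@Txt)
          AND M.CultureN <> N'En-US'        -- digits/English count as the default
        ORDER BY Nums.n;
        RETURN ISNULL(@Culture, N'En-US');  -- default culture when nothing else is found
    END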

  • Non-ASCII Characters in QuickLook

    Using QuickLook on plain UTF-8 text files (AKA .txt files) displays garbled interpretations of various extended characters, including fancy quotes and other nice formatting characters... not to mention the entire Russian alphabet. Surprising given OS X’s general UTF-8/Unicode friendliness.
    Anyone know of a hack / fix ?
    I'm willing to go outside the box a bit if necessary, considering most of the text files I work with are multilingual.
    thanks -K

    I have the same problem, and it appears to be related to the use of extended attributes, see this thread on the vim_mac mailing list:
    http://www.nabble.com/MacVim-file-encoding-and-Quicklook-td17289501.html
    Using the 'xattr' command, as explained in the thread, fixes the QuickLook display (for me). But it's still not clear 1) why you have to do this to make QuickLook recognize the encoding and 2) why more people aren't affected.

  • Issue in Inserting Greek characters in Oracle DB

    We are facing the following issue in our project and need urgent help.
    We have a multilingual string (a message with Greek characters) coming in from the KSD gateway. This is to be processed by MHL (the message handling layer, which validates and converts the incoming message into SQL statements and vice versa). Once the validation and conversion of the incoming message is completed and the SQL statement is prepared, it inserts this data into the Oracle database. We are using the OCI interface for database communication; the client program is written in C. While inserting data into the Oracle DB, we get the exception explained below.
    EventVwr Output:
    SCREEN 1:
    ODI Oracle error report has been issued
    ODI Error: DbConnId 0 - Oracle error msg:
    ORA-00917: missing comma
    SqlStatement='INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (17549,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')'
    SCREEN 2:
    Error executing OCI Statement
    Failed to execute PL/SQL command INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (17549,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','') on DbConnId 0 file ..\MHL - source\MhlOdi.c line 803
    SCREEN 3
    Inbound thread ODI update to table field failed
    SQL insert failed on DbConnId 0
    INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (17549,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')
    in file ..\MHL - source\MhlInBound.c line 2342
    We have looked into the trace file generated by MHL and found that the MHL is unable to process Greek characters and hence is unable to generate a valid SQL statement.
    Trace File Output:
    09/02/2012 10:22:54.437 T2228>> ODIExecNonQuerySqlStatement
    09/02/2012 10:22:54.437 T2228
    09/02/2012 10:22:54.437 T2228 SqlStatement=INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (18175,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')
    09/02/2012 10:22:54.437 T2228 In OCIStmtExecute
    09/02/2012 10:22:54.437 T2228 SQL STMTExecute=
    09/02/2012 10:22:54.437 T2228>> ODICheckError
    09/02/2012 10:22:54.437 T2228 Non-Fatal Database Error - msg: ORA-00917: missing comma
    , code: 917
    09/02/2012 10:22:54.437 T2228<< ODICheckError
    09/02/2012 10:22:54.437 T2228>> ODIOracleErrorReport
    09/02/2012 10:22:54.437 T2228<< ODIOracleErrorReport
    09/02/2012 10:22:54.437 T2228 SQL STMTExecute=INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (18175,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')
    09/02/2012 10:22:54.437 T2228<< ODIExecNonQuerySqlStatement
    09/02/2012 10:22:54.437 T2228 Error: SQL insert failed on DbConnId 0
    INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (18175,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')
    09/02/2012 10:22:54.437 T2228 SQLstatements---------->=INSERT INTO M_ENS_REJ_FUN_ERR (MSG_NUM,FUN_ERR_SEQ,ERR_TYPE,ERR_POINT,ERR_REASON,ORIG_ATTR_VAL) VALUES (18175,1,'1','Îάν Î¬Î½Î±Ï Î±ÏοÏÏÎ¿Î»Î¬Î±Ï ÎµÎ¯Î½Î±Î¹ δΕλÏÎ¼Î¬Î½Î¿Ï Î³Î¹Î± Ïλα Ïα είδΕ ÏÏÏε ÏÏάÏει να δΕλÏθεί Ïε εÏίÏεδο βαÏικÏν ÏÏοιÏείÏν ειδάλÏÏ Ïε εÏίÏεδο είδοÏÏ ÏÏÏÎ','1','')09/02/2012 10:22:54.437 T2228 Inbound thread - attempting rollback of 1 SQL insert statements on DbConnId 0
    09/02/2012 10:22:54.437 T2228>> ODIRollback
    However, when we tried to insert the same string through SQL Developer, it succeeded.

    This probably belongs in the Globalization Support forum.
    It works from SQL Developer, which does not depend on NLS_LANG, so I suspect a problem with your NLS settings; a few checks are sketched below.
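    A few checks that can narrow this down (run them from the same environment the C client uses; the Greek literal is only a sample):
    -- server-side character set; it must be able to hold Greek (e.g. AL32UTF8 or EL8ISO8859P7)
    SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
    -- the character-set part of this value always comes from the database,
    -- while the language/territory part reflects the client NLS_LANG of the session
    SELECT sys_context('USERENV', 'LANGUAGE') FROM dual;
    -- show, byte by byte, how the server received a sample Greek literal
    SELECT DUMP('Εάν', 1016) FROM dual;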

  • Thai language characters not displaying correctly in browser

    Hello all
    I am starting to develop a multilingual application which will include both English and Thai.
    The problem I'm having is that the browser displays only garbage when I'm fetching a report on a table that includes Thai characters in a column.
    What I'm seeing is something like this:
    Language Id     Short     Name
    0     ENG     English
    1     THA     à¸„นไทย
    Things that I have checked:
    - The database character set is AL32UTF8
    - The same is set for NLS_LANG when starting the database, listener, OHS
    - The same is configured in dads.conf
    - The browser is set to Unicode as well ( and can display normal thai websites without any problems)
    I have tested on a Windows 7 client, as well as locally to the database on the Linux box using Firefox; both show the same result.
    In xterm I have no problems inserting and querying Thai characters; it is all shown correctly.
    I tried with Chinese characters as well, and I see the same result.
    If you have any ideas, please let me know.
    Cheers
    Stefan

    Hi Stefan,
    Sorry to interrupt this thread, but I have the same requirement: displaying my APEX application in both Arabic and English.
    I have managed to display the page name, labels, and all the other page-related items in Arabic, except the data; I'm not sure how to display the data in Arabic as soon as the user selects the Arabic language.
    From your thread I can see that you are displaying your data in Thai. If you don't mind, can you share the steps you took to show the data in Thai, so that I can try the same for my Arabic?
    Also, sorry that my post doesn't answer your question.
    Thanks in advance.
    Brgds,
    Mini

  • URGENT HELP in multilingual data saving

    Hi,
    We are trying to create a multilingual application using Java and Oracle.
    In the browser my charset encoding is UTF-8, while my database character set is
    defined as WE8ISO8859P1. If I try to send data (Latvian language)
    and save it in the database, the data is saved as question
    marks (????????). What do I need to do in order to save the correct
    data?
    Another question: what character set/encoding, besides UTF8, can be used to
    save data in the database in Latvian, Slovenian, English and Russian?
    Is the WE8ISO8859P1 encoding also considered Unicode? One more thing: if I send the data and access it in the
    servlet, do I need to call getBytes() on each form textfield element in
    order to get the correct data to be saved? What is the fastest way to test
    that data is saved correctly in the database?
    Any help will be greatly appreciated. Thanks a lot in advance!
    ayen

    Hello
    Here goes,
    Server side:
    Create or change your database to UTF-8 (see the documentation on how to create, or change, an existing database with the UTF-8 character set; it is quite easy).
    Client side
    Change the NLS_LANG key in your registry (I use Windows 2000) to AMERICAN_AMERICA.UTF8.
    Your Oracle Forms, Oracle Reports, and HTML pages generated using mod_plsql or Java will then work with multiple languages. You will be able to input and output the correct characters (Russian, French, ...), except in SQL*Plus. A quick check is sketched below.
    Anyway, it works for me.
    Good luck.
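    To see why a WE8ISO8859P1 database turns Russian (or Latvian-specific) characters into question marks, you can try a quick check like this (a sketch; it assumes you run it in a database whose character set already supports Cyrillic, such as AL32UTF8, and the sample word is illustrative):
    -- characters that have no mapping in WE8ISO8859P1 come back as replacement characters,
    -- which is exactly what happens when such data is stored in a WE8ISO8859P1 database
    SELECT CONVERT('Привет', 'WE8ISO8859P1') FROM dual;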

  • If the website content is other than English, the page shows junk/unknown characters

    I intend to view sites whose page content is in a language other than English, for example hindi.moneycontrol.com, but Firefox displays junk/unknown characters on the page instead of showing the proper text in Hindi. I tried changing View -> Character Coding, but found no solution.
    == This happened ==
    Every time Firefox opened

    Do you have Indic font support and all needed fonts installed?
    See [http://en.wikipedia.org/wiki/Help:Multilingual_support_%28Indic%29 Wiki: Help:Multilingual support (Indic)]
