Lion OS and non-English languages

Hi There,
I have some issues with Lion OS and non-English languages like Arabic. For instance, when I go to some websites like مهاجرت به کانادا and کانادا مهاجرت and مهاجرت کانادا and اقامت کانادا, I run into messy (garbled) words, but when I check with Snow Leopard, everything works perfectly. How can I figure this out?
Thank you

Could you send me a screenshot of what you mean by "messy words"?
If you are using Safari, do you have the same problem with Firefox?

Similar Messages

  • How can the symbol and non-English diacritical markings, etc., accessed with combinations of letters and function keys prior to Snow Leopard, be achieved in Snow Leopard?

    How can the symbol and non-English diacritical marking/punctuation palette, available in pre-Snow Leopard OSes with various combinations of letter or number keys and function keys, be accessed in Snow Leopard?  Those pre-Snow Leopard versions worked on the fly as one was making text in any pedestrian application and its native font (Mail, TextEdit, for example).  One didn't need to dig around in font libraries, change font preferences, etc.

    > One didn't need to dig around in font libraries, change font
    > preferences, etc.
    It hasn't worked like that since the Early Chalcolithic (ie, System 7 or thereabouts).
    You've already got plenty of answers. Briefly (and grossly oversimplified),
    - Mac OS X conforms to a standard known as Unicode; in its current incarnation, it defines over 100k characters.
    - A keypress is translated into a character according to the current keyboard layout.
    - The graphic representation of a character (ie, glyph), is provided by the current font.
    - If a font lacks a glyph for the requested character, either another font will be automatically chosen (Mac OS X text engine), or some form of feedback (empty box, question mark, etc) will be used.
    - To inspect the actual key codes, use a utility such as Key Codes.
    - To inspect the current keyboard layout, invoke Keyboard Viewer.
    - To inspect the full complement of glyphs of a font, invoke Character Viewer (also accessed with the Special Characters command).
    (Remember that both these utilities are resizable and zoomable -- you can enlarge them to a comfortable viewing size, then zoom out to see more of the screen for your original task.)
    - For a more detailed look, use a utility such as UnicodeChecker.
    - The default keyboard layout depends on your Mac OS X localisation.
    (Keep in mind that there's no need to stick with the default layout; choose whichever one makes sense to you, given your language, habits, and proclivities. Mac OS X comes bundled with quite a few, including some obviously designed for the huddled masses of refugees from the Dark Side, who, in their wretched ignorance, have the unmitigated gall of labelling our native ways "really uncomfortable". Oh well, this, too, shall pass.
    If none of the supplied keyboard layouts fits your needs -- if, for instance, you write your emails in Etruscan -- go out on the 'net, you'll find quite a few. Or write your own with Ukulele, it's not really all that difficult.)
    - Use Keyboard Viewer to familiarise yourself with the current layout and to enter the odd character; but, to be proficient, you should learn your layout to the point that KV is no longer needed.
    - Use Character Viewer to enter the odd character not available in the current keyboard layout.
    Neither Keyboard Viewer nor Character Viewer is an effective tool for more extensive needs, eg, for writing and editing bilingual or multilingual texts. In such cases, you should enable the respective keyboard layouts and switch between them with a keyboard shortcut.
    A few interesting layouts bundled with Mac OS X have already been mentioned. Let me add three.
    - Dvorak: several layouts based on the Dvorak keyboard. It is claimed that the latter is more productive and lessens RSI risk.
    - US Extended: based on QWERTY, it offers a more extensive set of diacritics (eg, caron, breve) via dead keys.
    - Unicode Hex Input: also based on QWERTY, it allows input by Unicode codepoint (in hexadecimal), so it's the most extensive layout of all; eg, to enter the character "Parenthesized Number Twelve" (U+247F), hold down Option, type "247f", release Option.
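    The hex-input example above is easy to sanity-check programmatically; a quick illustration of the codepoint-to-character mapping (Python used purely for demonstration):

```python
import unicodedata

# U+247F is the "Parenthesized Number Twelve" example given above; with
# the Unicode Hex Input layout, holding Option and typing 247f yields it.
ch = "\u247f"
print(ch)                    # ⑿
print(unicodedata.name(ch))  # PARENTHESIZED NUMBER TWELVE
```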

  • How can I build a help file for right-to-left and non-English languages

    hi everybody, i have a big problem, please help me.
    i created a c# application with vs.net 2005 and i want to write
    a help file for it. i need software to create help files for
    non-english languages (such as persian or arabic) with right-to-left
    support. i tried many CHM builder tools but each of them
    had problems. i don't know what to do. what is the best solution
    for creating help files for right-to-left and non-english languages?
    how can i build my help file with F1 help ability? thank you
    so much

    Hi Mr_Sia, and welcome to the RH community.
    You could try looking at the HAT Matrix, a comparison
    of all the help authoring tools around.
    RoboHelp itself, whilst it does support over 35 different languages,
    does not support Persian or Arabic.

  • Cfqueryparam and non-English languages

    Hi,
    I want to store non-English characters in NCHAR, NVARCHAR and
    NTEXT field types.
    In the examples below, the fields LastName and FirstName are
    NVARCHAR and Comments is NTEXT.
    In the past, when I wanted to store non-English (Greek)
    strings or texts in these field types, I was writing
    <!-------------->
    <cfquery name="Q" datasource="Db">
    INSERT INTO Books
    ( LastName,
    FirstName,
    Comments )
    VALUES
    ( N'#var_LastName#',
    N'#var_FirstName#',
    N'#var_Comments#' )
    </cfquery>
    <!-------------->
    Now I want to use cfqueryparam, so the syntax is
    <!-------------->
    <cfquery name="Q" datasource="Db">
    INSERT INTO Books
    ( LastName,
    FirstName,
    Comments )
    VALUES
    ( <cfqueryparam value="#var_LastName#"
    cfsqltype="CF_SQL_VARCHAR" MaxLength="40">,
    <cfqueryparam value="#var_FirstName#"
    cfsqltype="CF_SQL_VARCHAR" MaxLength="30">,
    <cfqueryparam value="#var_Comments#"
    cfsqltype="CF_SQL_LONGVARCHAR"> )
    </cfquery>
    <!-------------->
    but the non-English characters are not stored
    correctly.
    So my question is:
    How can I use cfqueryparam in order to store non-English
    characters in NCHAR, NVARCHAR and NTEXT field types?
    The documentation doesn't have any info on that.
    Thank you in advance for your help
    ANX
    Greece

    You didn't say what version of CF you are using.
    Also, which database driver are you using (ODBC, or which JDBC
    version)?
    Anyway, it looks like you do not have Unicode support
    turned on. See
    http://www.sustainablegis.com/blog/cfg11n/index.cfm?mode=entry&entry=F9553D86-20ED-7DEE-2A913AFD8651643F,
    etc.
    If that doesn't help, post
    sample data that inserts OK without cfqueryparam but not with
    it.
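    Parameterized queries are not the problem in themselves: once the driver's national-character support is on, a cfqueryparam insert round-trips Greek fine. A minimal sketch of the same insert pattern (Python's sqlite3, purely illustrative — the actual CF fix is the datasource's Unicode setting mentioned above, not a query change):

```python
import sqlite3

# In-memory database; the table mirrors the Books example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (LastName NVARCHAR(40), Comments TEXT)")

# Bound parameters (the ? placeholders) carry the Unicode text intact.
conn.execute("INSERT INTO Books VALUES (?, ?)", ("Παπαδόπουλος", "σχόλια"))

row = conn.execute("SELECT LastName, Comments FROM Books").fetchone()
print(row)
```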

  • Reading .txt file and non-english chars

    i added .txt files to my app for translations of text messages.
    the problem is that when i read the translations, non-english characters are read wrong on my Nokia; in the Sun Wireless Toolkit it works.
    the trouble is i don't even know what encoding is expected by the phone:
    UTF-8, ISO Latin 2 or Windows CP1250?
    i'm using CLDC 1.0 and MIDP 1.0
    what's the right way to do it?
    here's what i have:
    String locale = System.getProperty("microedition.locale");
    String language = locale.substring(0, 2);
    InputStream r = getClass().getResourceAsStream("/lang/" + language + ".txt");
    byte[] filetext = new byte[2000];
    int len = 0;
    try {
        len = r.read(filetext);
    } catch (IOException e) {
        // handle the error
    }
    then i get a translation by:
    value = new String(filetext, start, i - start).trim();
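    For what it's worth, the usual culprit here is that new String(bytes, start, len) decodes with the platform default charset, which differs between the Wireless Toolkit and the Nokia. Decoding with an explicit charset removes the ambiguity; on CLDC that is new String(filetext, start, i - start, "UTF-8"), assuming the .txt resources are saved as UTF-8 and the handset ships a UTF-8 decoder (most MIDP phones do). A runnable sketch of the effect (Python, for illustration only):

```python
# Bytes as they would sit in a UTF-8 encoded translation file.
raw = "příliš žluťoučký kůň".encode("utf-8")

wrong = raw.decode("latin-1")  # an implicit platform default: mojibake
right = raw.decode("utf-8")    # explicit charset: round-trips correctly

print(right)
```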

    Not sure what the issue is with the runtime. How are you outputting the file and accessing the lists? Here is a more complete sample:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    public class Foo {
         final private List colons = new ArrayList();
         final private List nonColons = new ArrayList();
         static final public void main(final String[] args)
              throws Throwable {
              Foo foo = new Foo();
              foo.input();
              foo.output();
         }
         private void input()
              throws IOException {
             // Sort each line of the file into one of the two lists
             BufferedReader reader = new BufferedReader(new FileReader("/temp/foo.txt"));
             String line = reader.readLine();
             while (line != null) {
                 List target = line.indexOf(":") >= 0 ? colons : nonColons;
                 target.add(line);
                 line = reader.readLine();
             }
             reader.close();
         }
         private void output() {
              System.out.println("Colons:");
              Iterator itorColons = colons.iterator();
              while (itorColons.hasNext()) {
                   String current = (String) itorColons.next();
                   System.out.println(current);
              }
              System.out.println("Non-Colons");
              Iterator itorNonColons = nonColons.iterator();
              while (itorNonColons.hasNext()) {
                   String current = (String) itorNonColons.next();
                   System.out.println(current);
              }
         }
    }
    The output generated is:
    Colons:
    a:b
    b:c
    Non-Colons
    a
    b
    c
    My guess is that you are iterating through your lists incorrectly. But glad I could help.
    - Saish

  • Problem with Vcard and non-English character

    The vCard feature is what I would like to use, but I have quite a few contacts with non-English (Korean) names.
    I know the iPod can display Korean, but when I create a vCard with Korean characters and copy the vCard file into the /Contacts folder, I can see the file name as the person's name (from Windows Explorer), but I can ONLY see the first character when I display contacts on the iPod.
    Does anyone have tips/tricks for displaying the full file names in iPod contacts?
    Thanks.
      Windows XP Pro  

    Because I use the string nota in a JSP page and print it into a textarea, the text comes out with no newlines, for example:
    <textarea name="nota" rows="4" cols="60"><%= nota %></textarea>
    the text in the textarea is:
    first linesecond linethird line
    but I want the text displayed in the textarea to be equal to the text in the CDATA section:
    first line
    second line
    third line

  • Flex, xml, and non-English characters

    Hello! I have a Flex web app with an AdvancedDataGrid, and I use an HTTPService component to load data into the grid. The .xml file contains non-English characters (Russian, in my case) in attributes, like this:
    <?xml version="1.0" encoding="utf-8" ?>
       <Autoparts>
        <autopart  DESCRIPTION="Барабан" />
    </Autoparts>
    And when I run the app, the AdvancedDataGrid displays it like "Ñ&#129;ПÐ". How can I fix it? I tried changing encoding="utf-8" to some other charsets, but unsuccessfully. Thank you.

    Try changing the xml structure by using CDATA instead of having the russian part as an attribute and see if that makes any difference.
    What I meant is use something like this:
    <?xml version="1.0" encoding="utf-8" ?>
       <Autoparts>
        <autopart>
           <description><![CDATA[Барабан]]></description>
      </autopart>
    </Autoparts>
    instead of the current xml.
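    For what it's worth, "Ñ…/Ð…"-style garbage is the classic signature of UTF-8 bytes being decoded as Latin-1/CP1252 somewhere in the chain, so it's also worth checking that the .xml file is genuinely saved as UTF-8 bytes rather than merely labelled encoding="utf-8". A small Python sketch (illustrative only) reproduces and reverses the symptom:

```python
original = "Барабан"

# UTF-8 bytes wrongly decoded as Latin-1 produce the "Ñ…/Ð…" garbage.
mojibake = original.encode("utf-8").decode("latin-1")

# If the underlying bytes are intact, the damage is reversible.
repaired = mojibake.encode("latin-1").decode("utf-8")
print(repaired)
```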

  • Cfimage and non-english characters

    I've been googling for hours and just cannot find anything
    related to this, which is very strange. I am trying to use ImageDrawText
    to draw Chinese onto an image but could never get it to work. I tried so
    many things: setting the page encoding to UTF-8, cfcontent,
    setEncoding, etc. It displays the Chinese fine on the page, but
    passing it to ImageDrawText and calling WriteToBrowser gives me an
    image with all square boxes instead of the Chinese characters.
    Is anyone else having the same problem with other non-English
    languages?
    thanks

    edwardch wrote:
    > thanks, tried Arial Unicode MS, but unfortunately my hosting server doesn't
    > have the font :(, CF gives an error: Unable to find font: Serif.
    serif? not sure where that's coming from. can you try "@Arial Unicode MS"?
    unfortunately this is image work, so you *have* to have a unicode-capable font physically available (unlike PDF/FlashPaper, where you might "poke & hope" that the font's on the client & simply not embed the font).
    > Have also tried Lucida Sans Unicode, doesn't work for Chinese.
    no it doesn't.
    > and can't find the Arial Unicode MS.ttf file, even if I have the file, how can
    > I install it using ColdFusion code?
    you can't; you either have to add it via the windows control panel (or whatever for linux) or via cfadmin font management. see these for potential fonts, etc.:
    http://en.wikipedia.org/wiki/Arial_Unicode_MS
    http://en.wikipedia.org/wiki/Free_software_Unicode_typefaces
    http://en.wikipedia.org/wiki/Unicode_typefaces
    http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&cat_id=FontDownloads
    or you might simply ask your host what fonts are on the server that can support chinese.

  • [SOLVED] mkvmerge. making mkv's from avis and non-english srt files

    hi! i'm converting all of my huge amount of .avi's to .mkv's with mkvmerge, but i found that with non-english characters, mkvmerge makes wrong subtitles in the resulting mkv file: all letters such as á,é,í,ó,ú,ñ and ¿/¡ result in the subtitle not displaying correctly. any advice?
    i think i need some sort of "Options that only apply to text subtitle tracks: --sub-charset <TID:charset>" but don't know how to use it on the cli
    i only do a "mkvmerge -v -o mkvfile avifile srtfile"
    Last edited by leo2501 (2008-09-01 23:40:53)

    well, i came up with this bash script
    avi2mkv:
    #!/bin/bash
    # Mux an .avi plus any matching subtitle file into an .mkv
    INPUT=$(basename "$1" .avi)
    if [ -f "$INPUT.srt" ]; then
        # plain mux: mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.srt"
        mkvmerge -v -o "$INPUT.mkv" "$1" --sub-charset 0:UTF8 -s 0 -D -A "$INPUT.srt"
        # dual subtitles: mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.eng.srt" --sub-charset 0:UTF8 -s 0 -D -A "$INPUT.spa.srt"
    elif [ -f "$INPUT.sub" ]; then
        mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.sub"
    elif [ -f "$INPUT.ssa" ]; then
        mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.ssa"
    else
        mkvmerge -v -o "$INPUT.mkv" "$1"
    fi
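    A note on the charset flag: --sub-charset declares what encoding the .srt already uses; it does not convert. If a file is actually Latin-1 (common for Spanish subtitles with á/é/ñ/¿), either declare that charset or convert the file to UTF-8 first. A sketch of the conversion (Python, file contents inlined for illustration):

```python
# A typical Spanish .srt line stored as Latin-1 bytes.
srt_bytes = "¿Qué pasó? mañana".encode("latin-1")

# Re-encode to UTF-8 so that declaring --sub-charset 0:UTF8 is accurate.
utf8_bytes = srt_bytes.decode("latin-1").encode("utf-8")
print(utf8_bytes.decode("utf-8"))
```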

  • Error writing file names which contain both English and non-English names

    Hello
    I have this simple vbscript code which is supposed to write all file names in some directory to a text file
    Dim FSO
    Dim FileDirectory
    FileDirectory = "C:\temp"
    Dim FileList
    FileList = "list.txt"
    Dim Fname
    Set FSO = CreateObject("Scripting.FileSystemObject")
    set FileDirectory = FSO.GetFolder(FileDirectory)
    Set objFile = FSO.CreateTextFile(FileDirectory & "\" & FileList ,True)
    for each file in FileDirectory.files
    Fname = file.name
    objFile.Write( ChrW(34) & Fname & ChrW(34) & vbCrLf)
    objFile.Write( ChrW(34) & FileDirectory & "\" & Fname & ChrW(34) & vbCrLf)
    Next
    objFile.Close
    Everything goes fine while the file names are in English, but when a file name is non-English, or mixes English with non-English (right-to-left languages), an error is raised. How can I deal with this writing issue without changing the file names?
    thanks in advance

    Thanks
    jrv for replying
    I tested your code but it didn't work for me (it works fine for English file names but doesn't work with right-to-left languages)
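    The error itself comes from the stream, not the file names: CreateTextFile's optional third argument (unicode) defaults to False, giving an ANSI stream that cannot represent characters outside the system codepage. So FSO.CreateTextFile(FileDirectory & "\" & FileList, True, True) should let the script write right-to-left names. The same failure mode, sketched in Python for illustration:

```python
name = "קובץ-report.txt"  # a mixed Hebrew/English file name

# Roughly what an ANSI-only text stream does with such a name:
try:
    name.encode("ascii")
    ansi_ok = True
except UnicodeEncodeError:
    ansi_ok = False

# What a Unicode (UTF-16) stream writes instead:
utf16 = name.encode("utf-16")
print(ansi_ok, utf16.decode("utf-16") == name)
```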

  • SIGSEGV and non-english?

    I'm also having problems installing Oracle8iR2. While searching through the discussion archives here for a solution, I couldn't help noticing the unusually high number of non-native English speakers who had similar problems. It seems that there could be some sort of connection, e.g. different fonts or language settings.
    Maybe somebody more knowledgeable than myself could give that some thought?
    /Per

    > It seems that there could be some sort of connection, e.g. different fonts or
    > language settings.
    I'm not using any special NLS settings myself. Playing with it a little more, it seems random. Sometimes it dies in libc, sometimes in libXt, and sometimes it doesn't die at all, but works! A few of the scripts like dbassist I've got to work with JDK 1.2.2 by hacking the script, but netasst throws errors under JDK 1.2.2 and JDK 1.3. The OEM stuff works about half as well.

  • GetFirstFile() and non-english chars

    Hi,
    When using GetFirstFile() / GetNextFile(), if a file is encountered with Chinese characters in its filename, each of these characters is replaced with a "?".
    As a result, I can't open the file, as I don't know its full name.
    Does anyone know of a way around this? Some Windows SDK function maybe?
    cheers,
    Darrin.

    Hi Diz@work,
    Thanks for posting on the NI Discussion Forums. 
    I have a couple of questions for you in order to troubleshoot this issue:
    Which language is your Windows operating system set to? Chinese or English?
    When you say that the filename returned contains '?' characters instead of the Chinese characters, do you mean you see this when you output to a message popup panel or print to the console? Are you looking at the values in fileName as you're debugging? Can you take a look at the actual numerical values in the fileName array and see which characters they map to? It's possible that the Chinese characters are being returned correctly, but the function you're using to output them doesn't understand the codes they use.
    Which function are you using to open the file with the fileName you get from GetFirstFile()? Can you take a look at what's being passed to it?
    CVI does include support for multi-byte characters. Take a look at this introduction:
    http://zone.ni.com/reference/en-XX/help/370051V-01/cvi/programmerref/programmingmultibytechars/
    As far as the Windows SDK goes, I did find that the GetFirstFile() and GetNextFile() functions are based on the Windows functions, FindFirstFile() and FindNextFile(). According to MSDN, these functions are capable of handling Unicode characters as well as ASCII:
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418(v=vs.85).aspx
    There may be a discrepancy between how these functions are being called and/or what they're returning to the CVI wrapper functions.
    Frank L.
    Software Product Manager
    National Instruments
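    As background on the "?" symptom: it is the classic sign of a Unicode file name being narrowed through an ANSI codepage that lacks those characters (the A/W split in the Win32 API, e.g. FindFirstFileA vs FindFirstFileW). A one-line illustration (Python):

```python
name = "图纸.txt"  # a file name containing Chinese characters

# Narrowing through a codepage without those characters yields "?"s.
ansi = name.encode("ascii", errors="replace").decode("ascii")
print(ansi)  # ??.txt
```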

  • INSO_FILTER and non-English characters

    Hello!
    A CLOB column contains variously formatted documents (doc, pdf, plain
    text), mostly in the win1251 charset.
    After successful indexing, I get "no result rows" when using
    CONTAINS with a Russian word query, but everything's fine with
    English words. Why did interMedia Text index the CLOB incorrectly, and what
    should I do?
    Also, when I try to retrieve a document with highlighting
    (ctx_doc.markup), all Russian characters are replaced by '*'.
    NLS_LANG = AMIRICAN_AMERICA.CL8MSWIN1251
    Oracle 8.1.7
    Linux Red Hat 6.2
    create index idx_files on files(content)
    indextype is ctxsys.context parameters
    ('filter ctxsys.inso_filter');

    Did you try your NLS_LANG set to Russian?

  • WMIC - different output on English (non-localized) and non-English (localized) OS language

    Hi, I have strange behavior when running a WMIC command to get serial numbers of storage devices across the network:
    wmic /node:"w10","w16" /failfast:on diskdrive get model,serialnumber,description /format:list
    and I get different results:
    C:\>wmic /node:"w10","w16" /failfast:on diskdrive get model,serialnumber,description /format:list
    Description=Дисковый накопитель ("Disk drive" in Russian)
    Model=KINGSTON SV300S37A120G ATA Device <- the same drive
    SerialNumber=303532304236323741333830383143[HIDEBYME] <<< Russian-language OS
    Description=Disk drive
    Model=KINGSTON SV300S37A120G <- the same drive
    SerialNumber=50026B[HIDEBYME] <<< English-language OS
    I checked many of them, and it's for sure that the SerialNumber output depends on the OS language.
    If the system is English, then we have the SN; when not, we have something 40 characters long.
    Maybe someone knows how to fix it? Thanks.

    WMIC is not a script.  It is a system utility.  You should post in the forum for your OS.
    Language settings can affect everything. If a device is not designed to support languages then it may not report correctly. We cannot fix this in a script, as it usually requires low-level API calls.
    If you use PowerShell you could change the language to English when accessing that device.
    ¯\_(ツ)_/¯

  • Problem with Freehand eps and non english char

    Hi, I'm new to this forum and a newbie in FreeHand.
    I'm trying to make an EPS file with FreeHand. The EPS is for a fax cover; I made it and it works fine, but the problem comes when I want to use a special, non-English character, for example Ñ.
    Does somebody know what I can do so that the EPS I make in FreeHand permits special characters?
    Any idea?
    Thanks

    Special characters can't be done, as FreeHand doesn't support Unicode (extended letters with accents, etc.)
    This may help: http://freefreehand.org/forum/viewtopic.php?f=5&t=268
