Non-Latin language

Hi all,
I'm developing a website in the Croatian language. I usually use
charset utf-8 and put characters codified in the code (for
example &egrave; to display è on the page), but I can't
find the Croatian character codes. I've seen they use combinations
like &#382;, but I couldn't find a list with all the combinations...
Can someone help me?
Thanks :-)

On 17 May 2007 in macromedia.dreamweaver, k4yb4 wrote:
> I'm developing a website in the Croatian language. I usually use
> charset utf-8 and put characters codified in the code (for example
> &egrave; to display è on the page), but I can't find the Croatian
> character codes. I've seen they use combinations like &#382;, but I
> couldn't find a list with all the combinations...
This may help:
http://webdesign.maratz.com/lab/utf_table/
If that doesn't have enough, this search should come up with something:
http://www.google.com/search?q=html+utf-8+character+codes+croatian
Although if the page charset is UTF-8, you should be able to enter text
directly into the page and it should display correctly.
Joe Makowiec
http://makowiec.net/
Email: http://makowiec.net/contact.php
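
If you do want a full list of those numeric references rather than typing the characters directly, a small sketch like the one below can print it. This is only an illustration, not part of the original thread: it assumes plain Java, a source file saved as UTF-8, and a made-up class name.

    // Hypothetical helper (not from this thread): print the HTML numeric
    // character references for the Croatian letters.
    public class CroatianEntities {
        public static void main(String[] args) {
            String letters = "čćđšžČĆĐŠŽ"; // source file must be saved as UTF-8
            for (int i = 0; i < letters.length(); i++) {
                char c = letters.charAt(i);
                // e.g. ž prints as "ž -> &#382;"
                System.out.println(c + " -> &#" + (int) c + ";");
            }
        }
    }

For ž this prints &#382;, matching the combination quoted above; as Joe notes, declaring UTF-8 and typing the characters directly avoids the entities altogether.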

Similar Messages

  • Coding in non-latin languages

    I manage a website that is translated into 100+ languages, and they keep adding more. Right now I have to add Sinhala. I have the font, but the problem is that when I copy from the PDF or Word document into Dreamweaver, the text is converted into gibberish while I am in Code view. Does anyone have experience working with these types of languages? The other problem is that I do not speak these languages, so I cannot simply type the text in; I have to rely on the translated text and copy it over.
    I should also mention that the text is stored in variables on a PHP includes page (i.e. $header_description = "sample text where the translation goes.";).
    Dreamweaver CC on Windows 7

    I have built a CMS (content management system) that can be used with multiple languages. Although the maximum number of languages that have been used is four, the system can (in theory) handle any number of languages using any alphabet. I have worked with Vietnamese, Japanese, Chinese Simplified, Spanish and German, but not Sinhala.
    I do not use font files for any language. I use the UTF-8 character set, which, as far as I know, covers it all. The translators (who are in their native countries) paste the text into form textareas. I assume they often work from MS Word. We have not tried pasting from a PDF, which doesn't sound like a good approach.
    Some people convert text with non-Latin characters to HTML entities, but I think that is a really bad idea. If your entire web workflow is set to UTF-8, there should not be a problem.

  • New mail applications/reading mails in non-latin alphabet foreign languages

    I work with China and Korea a lot, and I receive emails in Korean and Chinese.
    How do I allow Mail to display emails in Korean and Chinese? I have a friend who can read his iPhone emails in Japanese, but I cannot read my iPhone emails in Korean, so I wonder if he has some kind of setup that's different from mine.
    Also, I am curious to hear opinions about any other mail applications for Mac that anyone can suggest, and the reasons why they prefer them over the standard Mail application, if they do at all.
    Thanks
    Dan

    I cannot read my iPhone emails in Korean, so I wonder if he has some kind of setup that's different from mine.
    If your problem relates to the operation of your iPhone, please use this forum instead:
    http://discussions.apple.com/forum.jspa?forumID=1139
    The iPhone lacks various language capabilities which are in the full OS X. I think emails may have to be in UTF-8 encoding to be received correctly.
    If you have problems reading non-Latin emails in the full OS X, let us know. Usually they can be solved by switching the encoding in Message > Text Encoding.

  • [Solved]Ctrl+V not working in libreoffice on gnome(non-latin layout)

    When using GNOME Shell with LibreOffice or Calligra, universal keyboard shortcuts such as Ctrl+V or Ctrl+Z don't do anything while a non-Latin layout is active (Hebrew in my case).
    If I want to paste something with Ctrl+V, I have to switch the layout to English, and only then will it work.
    Under MATE the shortcuts work fine regardless of the layout in both applications (and all others I have tried).
    Under GNOME Shell all other applications I tried accept the shortcuts regardless of the layout (Firefox, gedit, Empathy, Nautilus).
    Does anyone have an idea as to what might cause this behavior when using GNOME?
    Thanks.
    EDIT: Solved for LibreOffice by removing the package "libreoffice-gnome". The UI is not as pretty now, but at least the keyboard shortcuts work.
    Last edited by shimi (2013-08-09 09:00:50)

    After months of switching layouts and banging my head against this bug, I thought I should check the LibreOffice settings (I'm using 4.1.5.3 now). And what do you know? I did find something, and in just a few clicks.
    This is not a bug! It's simply a matter of configuration.
    For the regular keyboard shortcuts (like Ctrl+C, Ctrl+V, etc.) to remain operational in LibreOffice applications while using a non-Latin keyboard layout (like Greek or Russian), go to Tools -> Options -> Language Settings -> Languages, check the "Ignore system input language" option, save, and Bob's your uncle.
    Hope this helps.
    Cheers!
    PS
    Technically, though, shortcuts still remain language-dependent. This means that if you enable this option, you will have to set your document languages manually.

  • Non-Latin characters not displaying properly in Safari on iPhone

    I am having an issue where non-English (Chinese, Japanese, etc.) characters are displayed as squares in the iPhone's Safari browser. I've seen this issue on Windows when you haven't installed support for Asian languages.
    It would appear that the version of OS X on the iPhone does not have support for non-Latin character sets.
    Has anyone else experienced this problem?

    I am having an issue where non-English (Chinese, Japanese, etc.) characters are displayed as squares in the iPhone's Safari browser.
    Could you provide the URLs? It may be that the pages have bad coding. Correctly coded pages will display, according to tests I've seen:
    http://homepage.mac.com/thgewecke/iphonesafarilang.jpg
    But unlike OS X, the iPhone has no way yet to manually correct for miscoded pages via a View > Text Encoding menu.

  • Passwords in non-english languages

    Hi all,
    I have some passwords in non-English languages, but I enter them using the English keyboard layout, so they end up looking like "FylhjblDHfpsKexit".
    The standard keyboard doesn't display the other language while it is in the English layout. How do I enter such passwords?
    Recreating the passwords in English would be a bad idea.

    Again: what language are you talking about?
    It sounds to me like your passwords are in fact in Latin characters, but you want to enter them as if you were using some other language's keyboard layout. I think the only way to do that would probably be to use a Bluetooth keyboard.
    You could ask Apple to add the feature you want via
    http://www.apple.com/feedback
    but I cannot imagine they would ever do so.

  • Non latin characters in .cfm filename

    Hi - I have users who want to name files with non-Latin characters, i.e.
    Логотип_БелРусь_2500x1.cfm
    We get a file-not-found error. It is not an IIS issue; we have UTF-8 encoding and are running CF8.
    Yes, we can rename the files, but for now we would like to know if non-Latin characters are allowed in .cfm file names.
    Thank you!
    Sapna

    PaulH wrote:
    en_US is the JRE locale. Is that the same as the OS? And what file encoding?
    (Check via cfadmin.)
    I ask because I'm pretty sure you can't use non-ASCII file names with CF. There's an
    open bug on that:
    http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=77177
    I can only guess that the file encoding isn't Latin-1, etc., and/or that the OS locale
    matches the language of the file name.
    cfadmin gives pretty much the same information. Here's a direct copy:
    Server Product: ColdFusion
    Version: 9,0,0,241018
    Edition: Developer
    Serial Number:
    Operating System: Windows 2000
    OS Version: 5.0
    Update Level: /C:/ColdFusion9/lib/updates/hf900-78588.jar
    Adobe Driver Version: 4.0 (Build 0005)
    JVM Details
    Java Version: 1.6.0_12
    Java Vendor: Sun Microsystems Inc.
    Java Vendor URL: http://java.sun.com/
    Java Home: C:\ColdFusion9\runtime\jre
    Java File Encoding: Cp1252
    Java Default Locale: en_US
    File Separator:
    Path Separator:
    Line Separator: Chr(13)

  • Non-english languages available?

    Hi. I am not sure if this is the correct forum, but my question did not seem to fit anywhere else.
    I need to write emails, names, etc. in Korean. Does the iPhone allow me to type in non-English? If so, how do I go about switching languages?
    Thanks.

    I need to write emails, names, etc. in Korean. Does the iPhone allow me to type in non-English?
    No. Non-Latin input is not yet available from Apple. For a web app workaround for Korean, see:
    http://www.zannavi.com/iphone/hanson2.html

  • Embedded glyphs for alpha not working for non-Latin letters

    Even after reading through all the posts here, I'm still
    confused about why embedding is necessary for alphas on text and
    how embedding works, so maybe someone can sort it out and help me
    with my problem:
    I have a FLA that is trying to present the same script in
    (the user's choice of) several languages -- including non-Latin
    alphabets like Korean, Japanese, Russian, Chinese and Arabic. I'm
    using the Strings library to load translations into my text movie
    clips on the stage.
    The language stuff works great, except that the alpha tweens
    weren't working. So I selected the movie clip symbols (three of
    them) in the library and told Flash to embed all glyphs. Each of
    these symbols uses Trebuchet MS, so my thinking is that I'm not
    exceeding the 65K limit because I'm embedding the same glyphs for
    each. (But I'm not sure if that limit is for the SWF or per
    symbol.) I only have one other font in the FLA, and for that one
    I'm just embedding English alpha characters.
    (Perhaps as an aside, I also included Stone Sans and Trebuchet MS
    as fonts in my library, but I don't understand whether this is
    absolutely necessary or if embedding glyphs eliminates the need
    for these.)
    So with those glyphs embedded, my text alpha tweens work
    beautifully. However -- and this is where I begin to shed tears --
    all the Korean and Cyrillic text is gone. It's just punctuation and
    no characters. I don't have Chinese, Japanese or Arabic text loaded
    yet, but I imagine that these would suffer from the same problem.
    What am I doing wrong? Do I need to create more than one
    movie to achieve my multi-language goal? (Or worse, render each
    language separately? -- yuck!) In my old version of Flash (I just
    upgraded to CS3) I could tween alpha text with no problem, so why is
    all this embedding even necessary?
    Thanks,
    Tim

    Is this just impossible? Or really hard?

  • Non-Latin Character Display

    I have several Java web applications on Tomcat where non-Latin characters function properly, with only one exception. Non-Latin characters, Chinese in this case, can be displayed properly through the JSTL message tag. The related configuration is as follows:
    HTML: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    JSP: <%@ page contentType="text/html;charset=UTF-8" language="java" %>
    The applications take Chinese character input correctly, using a filter that converts the request character encoding to UTF-8 (a sketch of such a filter appears at the end of this message).
    The only problem is that Chinese characters aren't displayed properly when they are typed directly into a JSP file. I have set the Eclipse file text encoding to UTF-8, and the characters are shown correctly in the IDE.
    I believe this is a Tomcat configuration issue, but after adding "set JAVA_OPTS= -Dfile.encoding=UTF-8" to the catalina.bat file, nothing changed.
    How do I solve this problem?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Solved!
    I just needed to add the JSP page directive (shown above) to the affected JSP file.
    Message was edited by:
    vwuvancouver

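    For reference, the request-encoding filter mentioned above usually looks something like the following. This is a minimal sketch against the Servlet API, with a made-up class name, not the poster's actual code:

    import java.io.IOException;
    import javax.servlet.*;

    // Hypothetical filter: force UTF-8 on incoming requests and outgoing responses.
    public class Utf8Filter implements Filter {
        public void init(FilterConfig config) {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            req.setCharacterEncoding("UTF-8");  // decode request parameters as UTF-8
            res.setCharacterEncoding("UTF-8");  // encode the response as UTF-8
            chain.doFilter(req, res);
        }

        public void destroy() {}
    }

    Note that such a filter only affects request and response handling; characters typed directly into the JSP source are governed by the page's declared encoding, which is presumably what the "JSP page tag" fix above addressed.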

  • Template content displayed in non-standard language

    System preferences are set to English. When I view and select the templates in Pages, the content is in a non-English language (maybe German?). How can I change that? Do I need to reinstall the Pages application? I was unable to find this topic in the User Guide.

    German?
    You're kidding!
    It's Latin, and it won't bite you; it is just there to fill up the page and let you know what the text looks like.
    That's called placeholder text, and when you click in it, it is selected and replaced by whatever you type.
    See p. 16 of the Pages09_UserGuide.pdf:
    Placeholder text shows you how your text will look on the page. If you click placeholder text, the entire text area is selected. When you begin typing, the placeholder text disappears and is replaced by what you type. To learn more, see “Using Placeholder Text” on page 75.
    p75
    Using Placeholder Text Templates contain placeholder text, which shows you what text will look like and where it will be placed in the finished document. Most placeholder text appears in Latin (for example, lorem ipsum) in the document body, text boxes, headers, and elsewhere. Other predefined text, such as the title of a newsletter, appears in the language you’re using.
    Peter

  • Non latin 1 .CSV support ?

    Hello,
    When I enable .csv export in my reports, everything works fine with Latin-1 languages.
    For non-Latin-1 languages (Russian, for example) the .csv export contains ¿¿ (BF hex).
    The APEX application displays the non-Latin-1 strings correctly.
    The "Automatic CSV encoding" setting doesn't seem to help (it produces strange characters in the report's .csv export in this case).
    I haven't found any solution so far (I am using APEX 3.1), and I am wondering whether .CSV export works for non-Latin-1 at all.
    Any help is welcome.
    Cheers
    Valéry

    Hello Valéry,
    >> We are using Oracle SE1 with the following versions … Oracle Database 10g Release 10.2.0.1.0 – Production … Should we switch to Apache?
    Yes, you should switch to Apache (or, to be exact, Oracle HTTP Server), because the EPG is not supported under 10g unless you are using the special 10g XE version.
    >> Do you know of a configuration using the 'PL/SQL Gateway' mode where non-Latin-1 .CSV export works?
    When using the proper universal version of Oracle XE with APEX, it supports CSV exports in any language. I didn't have a chance to specifically check it under 11g (the only "regular" database version which supports APEX with EPG), but I'm sure it is also supported in that configuration.
    >> For example, many import functions don't work properly (application import, icon upload, etc.) with the configuration above.
    All the problems you are describing are closely related to the way APEX communicates with the database. I'm pretty sure they have nothing to do with the database version you are using (patched or not) but with configuration issues, which in your environment should be part of the OHS configuration (mainly the DAD).
    You should disable the EPG and install and configure the OHS - http://download.oracle.com/docs/cd/E10513_01/doc/install.310/e10496/db_install.htm#CIHJBAGG - and I'm pretty sure that all your problems will disappear.
    BTW, you don't have to patch your database in order to patch your APEX version. Just download the patchset for 3.1.2 from MetaLink and install it - http://joelkallman.blogspot.com/2008/08/applicaiton-express-312-released.html .
    Regards,
    Arie.

  • BUG?? UTF-8 non-Latin database chars in IR csv export file not export right

    Hello,
    I have this issue: my database character set is UTF-8 (AL32UTF8), and a table used in an IR contains data that are Greek (non-Latin). While I can see them displayed correctly in the IR, and also via select / in the Object Browser in SQL Workshop, when I try to Download as CSV the produced CSV does not have the Greek characters exported correctly, while the Latin ones are OK.
    The problem is the same whether I use IE or Firefox. The export to HTML works successfully, and I see the Greek characters there correctly!
    Is there any issue with UTF-8 and non-Latin characters in exports to CSV from IRs? Can someone confirm this, or does anyone have a similar export problem with a UTF-8 DB and non-Latin characters?
    How could I solve this issue?
    TIA

    Hello Joel,
    Thanks for taking the time to answer my issue. Unfortunately this does not work for my case, as the source of the data (the database character set) is UTF-8. The data inside the database that are shown in the IR on the screen are UTF-8, and they display correctly; you can see this in my example. The actual data in the database come from multiple languages (English, Greek, German, Bulgarian, etc.), which is why I selected the UTF-8 character set when implementing the database; this requirement applied to all character data. Unicode is also the character set Oracle suggests when you create a database and have to support data from multiple languages.
    The requirement is that whatever I see in the IR (I mean on the display) I need to export to a CSV file correctly, and this is what I expect the Download as CSV feature to achieve. I understand that you had Excel in mind when implementing this feature, but a CSV is just an easy way to export the data, a Comma Separated Values file, not necessarily something to open directly in Excel. I also want to add that Excel can import data in UTF-8 encoding when importing from CSV, which is fine for my customer. Also, Excel 2008 and later understands a UTF-8 CSV file if you have placed the UTF-8 BOM character at the start of the file (well, it drops you into the wizard, but it's almost the same as importing); a rough sketch of that trick appears after this message.
    Since the feature you describe, if I understood correctly, always creates an ANSI-encoded file, even when the database character set is UTF-8, it is impossible to export correctly when I have data that are neither Latin nor among the other 128 country-specific characters I choose in the Globalization attributes, and those data are exactly what I see on the display and need to export to CSV. I believe that when the database character set is UTF-8, this feature should create a CSV file that is UTF-8 encoded and export correctly what I see on the screen, and I suspect that others would also expect this behaviour. Or at least you could allow/implement(?) this behaviour when Automatic CSV encoding is set to No. But I strongly believe, especially from the eyes of a user, that having different things on the screen and in the resulting CSV file is a bug, not a feature.
    I would like to have comments on this from other people here too.
    Dionyssis
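
    For what it's worth, the BOM trick mentioned above can be illustrated outside of APEX with a short sketch like this (plain Java; the file name and values are made up, and this is not APEX code): write the three BOM bytes EF BB BF first, then the UTF-8 encoded rows, and Excel will detect the encoding when the file is opened.

    import java.io.*;

    // Hypothetical example: a UTF-8 CSV with a leading byte order mark for Excel.
    public class BomCsvExample {
        public static void main(String[] args) throws IOException {
            OutputStream out = new FileOutputStream("export.csv");
            try {
                out.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF }); // UTF-8 BOM
                Writer w = new OutputStreamWriter(out, "UTF-8");
                w.write("id,name\n");
                w.write("1,Διονύσης\n"); // Greek sample value
                w.flush();
            } finally {
                out.close();
            }
        }
    }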

  • Writing non-latin Character to Log file from Java application

    Hi Everyone,
    I'm encountering a very strange localization issue.
    I'm executing the following code from a J2EE application (although the behavior is replicated exactly from a Java console application):
    File testFile = new File("D:\\Temp\\blah.txt");
    testFile.createNewFile();
    Writer output = null;
    try {
        output = new BufferedWriter( new FileWriter(testFile) ); // FileWriter uses the platform default encoding
        output.write( "אחד שתיים שלוש" ); // Non-Latin (Hebrew) characters
        output.write("one two three");
    } finally {
        if (output != null) output.close();
    }
    I'm writing non-Latin characters (Hebrew in my case) to the file, but these characters are written as question marks:
    ??? ????? ????one two three
    I'm running the project with the following Java arguments:
    -Dfile.encoding=UTF8 -Duser.region=IL -Duser.language=he
    And the Project Properties -> Compiler -> Character Encoding = UTF8
    Also, the local Windows Regional Settings are set to Israel / Hebrew.
    Can someone please help with this issue?
    I've tried this with UTF-8 (instead of UTF8) and still no results.
    I suspect this is a Java / Environment issue as it reproduces on a J2EE app and on a console app.
    Any help will be greatly appreciated.
    Thanks,
    Tal.

    You should open/close it somehow at every write from different processes.
    Personally, though, I would prefer different file names over your forced merging.
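
    A fix that is commonly suggested for this particular symptom, offered here only as a sketch and not as anything from this thread: construct the writer with an explicit UTF-8 charset so that the platform default encoding (and -Dfile.encoding) is never consulted.

    import java.io.*;

    // Sketch, not the poster's code: write through an explicit UTF-8 encoder.
    public class Utf8WriteExample {
        public static void main(String[] args) throws IOException {
            File testFile = new File("D:\\Temp\\blah.txt");
            Writer output = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream(testFile), "UTF-8"));
            try {
                output.write("אחד שתיים שלוש"); // Hebrew sample from the question
                output.write("one two three");
            } finally {
                output.close();
            }
        }
    }

    Whether -Dfile.encoding takes effect can depend on how the JVM or container is launched, so specifying the charset on the writer removes that variable.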

  • How to send non-latin unicode characters from Flex application to a web service?

    Hi,
    I am creating an XML document containing data entered by the user into a TextInput. The XML is then sent to an HTTPService.
    I've tried this
    var xml : XML = <title>{_title}</title>;
    and this
    var xml : XML = new XML("<title>" + _title + "</title>");
    The _title variable is filled with the string taken from the TextInput.
    When the user enters non-Latin characters (e.g. Russian), the web service responds that the XML contains characters that are not UTF-8.
    I ran a sniffer and found that non-printable characters are sent to the web service, like
    <title>����</title>
    How can I encode non-Latin characters to UTF-8?
    I have an idea to use a ByteArray and the pair of functions readMultiByte / writeMultiByte (to write in the current character set and read UTF-8), but I need to determine the current character set that Flex (or the TextInput) is using.
    Can anyone help convert the characters?
    Thanks in advance,
    best regards,
    Sergey.

    Found the answer myself: set System.useCodePage to false.
