Hebrew characters on SW2

Hello,
I've been experiencing problems with Hebrew-language characters in the various SW2 apps (such as the messaging app, the missed calls app, the Gmail app, etc.). Hebrew characters just show up as question marks (??????), which makes them (and the SW2) quite useless for Hebrew-language users. However, the call handling app works flawlessly with Hebrew, and the phonebook and contacts show Hebrew characters without any problem. It's kind of weird that all the other apps don't support it, when Android 4+ natively supports Hebrew and the call handling app handles it.
Is there any fix for this, or is a future fix/support intended? ?????? (not Hebrew, actual question marks this time...)

As I understand it, there's no workaround for this at the moment, but I will pass this feedback on internally.
 - Community Manager Sony Xperia Support Forum

Similar Messages

  • I use classical Hebrew for my work, and Pages will only display English characters even with a Hebrew font selected. If I cut and paste Hebrew characters from another document, as long as the font is supported, it will appear in Pages.  If I type it won't

    I use classical Hebrew for my work, and Pages will only display English characters even with a Hebrew font selected. If I cut and paste Hebrew characters from another document, as long as the font is supported, it will appear in Pages.  If I type it won't continue in Hebrew.  I have tried downloading several fonts, including those from professional societies, but the only way to get Hebrew in my document is to cut and paste.  Does anyone know how to fix this?  I use an older MacBook running OS 10.9.1.  I used to do my Hebrew work in Word, but it is no longer supported by Mac OS.

    Just clarifying:
    Pages '09 has poor support for Hebrew, Arabic, etc., but will accept pasted text.
    Pages 5 has much better support, but with bugs:
    If you have columns, they are in the wrong order, i.e. text starts in the left column and ends in the right column.
    If you type English into Hebrew text, it tends to fall in the wrong position, e.g. it goes to the right of Hebrew punctuation instead of to the left.
    As Tom recommends, the only real solution on the Mac is Mellel.
    By the way, tell Apple; they are amazingly slow to fix this running sore, which has been broken since RtoL support was supposedly introduced in OS X 10.2.3 over a decade ago.
    Peter

  • Re: Hebrew Characters...Chars display as junk after import from 8i to 10g

    Hi Team,
    We have a similar problem with our DBs. I was going through a thread and found it helpful.
    We have a source DB containing a table with a column of Hebrew characters. They are converting it and then sending us a dump in the WE8ISO8859P1 character set.
    Now they are asking us to load it on our end into a database using the WE8ISO8859P1 character set, then use the CONVERT function, and then load it into the destination database, which uses AL32UTF8.
    All of our DBs are on AL32UTF8 only.
    Raja has not confirmed whether this solution worked for him.
    I want to try it, but I need confirmation, as we have a regular export/import scheduled. I will have to ask for the data in a flat file.
    Thanks.


  • Hebrew Characters...Chars display as junk after import from 8i to 10g

    Gurus,
    I have a problem with a customer upgrade...
    Background of the issue: the customer is an Agile PLM customer on version 8.5. The database was hosted on Oracle 8i. He intends to upgrade from Agile 8.5 to Agile 9.2.2.4. During this process he has upgraded the DB from Agile 8.5 to Agile 9.2.2.4, and has also moved the DB platform from 8i to 10g.
    Problem: There were Hebrew characters entered in VARCHAR2 columns (Oracle 8i) which, after the upgrade, are not displaying correctly. Hebrew characters newly entered after the upgrade display correctly in the UI...
    Customer DB parameters: The NLS parameters on the source DB before the upgrade (8i) are American_America.WE8ISO8859P1, and the destination DB parameters are American_America.UTF8.
    What I have done to deal with the issue: I have tried exporting the DB using UTF8 and importing it into 10g on UTF8, but the characters still display as garbage. I have tried various export/import combinations of the WE8ISO8859P1 and IW8ISO8859P8 character sets, as I learned during my research that Hebrew characters are supported in the IW8ISO8859P8 character set and not in WE8ISO8859P1. My suspicion is that the problem lies with the export/import from 8i to 10g and the character conversion happening during that process (this is my guess, and I might be wrong too...).
    Currently this is a hot issue with the customer and needs an immediate fix (to display the Hebrew characters properly after the upgrade).
    I am a layman on NLS settings and couldn't figure out what else to do... I would ask the gurus out there to help us figure out the problem and resolve it.
    Thanks for your help in advance.
    Regards,
    Raja.

    Hebrew characters aren't supported using the ISO 8859-1 character set. In the original system, what must be happening is that the NLS_LANG on the client matches the database character set, which tells the Oracle client not to do character set conversion. This causes Oracle to treat character data just as a binary data stream and to store whatever bits the client sends. So long as the client is really sending ISO 8859-8 data, telling Oracle it is ISO 8859-1, and on the return side asking Oracle for ISO 8859-1 and treating it internally as ISO 8859-8 while Oracle is doing no character set conversions, this will appear to work. But the data in the database is fundamentally wrong. You can configure things so that you can, apparently, store Chinese data in a US7ASCII database using this sort of approach, but you end up with screwed up data.
    This sort of problem usually comes to light when you start using tools (like export) that don't mis-identify the data they send and retrieve the way your application does, or when character set conversion becomes necessary. Since the data in the database isn't valid ISO 8859-1, Oracle has no idea how to translate it to UTF8 properly.
    As previously suggested, the safest option is to move the data with a solution that replicates the behavior of the application. So
    - Set the client NLS_LANG to match the database character set (WE8ISO8859P1)
    - Extract the data to a flat file using SQL*Plus or a C/C++ application
    - This data file will, presumably, really be ISO 8859-8 encoded
    - Use SQL*Loader to load the data into the UTF8 database, specifying the actual character set (ISO 8859-8) in the control file.
    If you're rather more adventurous and working with Oracle Support, it is potentially possible to change the character set of the source database from ISO 8859-1 to ISO 8859-8 and then export and import will work properly. But this would require some undocumented options that shouldn't be used without Oracle Support being involved and introduces a number of risks that the flat file option avoids.
    Justin
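    As an illustration of the flat-file route described above, a minimal SQL*Loader control file for the final step might look like this. The file name, table name, and columns are hypothetical; the essential part is the CHARACTERSET clause, which declares the file's real encoding so SQL*Loader converts the data to the database character set correctly:

    ```
    LOAD DATA
    CHARACTERSET IW8ISO8859P8
    INFILE 'hebrew_data.dat'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    (id, description)
    ```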

  • Toad linked to Oracle 11g doesn't show Hebrew characters

    Hi all,
    I am trying to connect to an Oracle 11g DB containing Hebrew characters using TOAD,
    but for some reason it doesn't show the Hebrew characters (it displays question marks).
    I have been told to check the NLS_LANG variable, but it looks OK.
    Please help.
    Yaniv

    Hi Alex 2023,
    Did you restart SQL Server service (MSSQLSERVER if default instance) without restarting the database engine service? Otherwise the newly installed Oracle ODAC will not work.
    For more information, perhaps this can be of help: Installing 64-bit Oracle ODAC 11.2 to Microsoft SQL Server 2008 R2 x64: http://nampark.wordpress.com/2011/01/20/installing-64-bit-odac-11-2-to-microsoft-sql-server-2008-r2-x64-for-replication/
    OraOLEDB.Oracle is Oracle's provider; please have a look here and at the other links referred to from there.
    Meanwhile, you can refer to this thread with a scenario similar to yours: OraOLEDB.Oracle.1 provider is not registered on the local machine
    http://social.msdn.microsoft.com/Forums/en-US/adodotnetdataproviders/thread/1cd543ac-930e-4c32-b1ba-8a4f2beb9999/
    If it still does not work, you can link to oracle forum for further help.
    Regards, Amber zhang

  • Unable to type Hebrew characters into TextInput control

    Hi All,
    I've created a project using Flash Builder beta 2 and SDK 4.0.
    When testing my project locally I'm able to type Hebrew characters into TextInput fields.
    Strangely, after uploading a release build to the website I'm no longer able to type Hebrew.
    The direction is "rtl", and when pasting Hebrew text into the same fields it is displayed correctly.
    Does anyone have an idea what is wrong?
    Thanks, Yan

    Can you explain what the problem in the HTML wrapper was? Were you using a custom wrapper?
    Gordon Smith
    Adobe Flex SDK Team

  • I have received a ppt with hebrew characters but OS can't read/recognize hebrew characters. what can I do?

    I have received a ppt presentation with Hebrew characters, but the OS doesn't recognise them. What can I do?

    Thanks for sending the file.
    I think it uses non-standard Hebrew fonts that map Hebrew to Latin. Nobody uses these any more except some old Bible study courses. I think you may be able to get them here (bwhebb, bwtransh):
    http://www.bibleworks.com/fonts.html
    I would strongly recommend that anything you create yourself you do with Unicode Hebrew fonts, which have been the international standard for a while now.  Apple provides a number of these with OS X, no need to download anything.

  • View Hebrew characters via a virtual InfoCube with services?

    Hi there!
    Is it possible to view Hebrew characters via a virtual InfoCube with services from an external database?
    Requirement
    Data which is physically stored in an external database needs to be shown in a SAP BW report to the end users. The movement of data must be avoided.
    Concept
    We use a virtual InfoCube with services in connection with UDC in order to retrieve data from an external database.
    System Architecture
    We use BW 3.5 and Web AS 6.40 J2EE on the SAP side. The external database is a Teradata database. All systems communicate in Unicode UTF-8.
    Problem
    Accessing the data using SAP queries in this architecture works fine with "regular" Latin characters and numeric values. When we try to transfer a field that contains Hebrew characters in the external database into an InfoObject of the virtual InfoCube, we get the following error message:
    Messages:
    Value '' (hex. '') of characteristic contains invalid characters
    System error in program SAPLRRK0 and form RSRDR;SRRK0F30-01-
    Can anybody explain this?
    Thank you and best regards
    Klaus-Peter

    Hi Klaus-Peter,
    Execute transaction code RSKC. In the "Permitted Extra Characters" field, enter all your Hebrew characters and save your entries with the "Execute" icon.
    Then go back and re-run your upload. You shouldn't get invalid characters again, provided you have added them all.
    I hope the above helps.
    Please do not forget to award the points.
    Thanks and Regards,
    Jacob

  • How to get hebrew characters to work with data merge?

    I'm trying to work with data merge with Hebrew characters and get gibberish in the panel, in the merge, and on export.
    I've tried changing the CSV file to Unicode and changing the language settings, but it still doesn't work.
    I've worked with data merge before in English and it worked perfectly.
    Any ideas? Is this a bug? A software constraint?

    Try this:
    Save your file as a UTF-16 BE (Big Endian) file.
    Import showing options in ID. I'm on a PC; choose the options below regardless.
    It should look like this in ID.
    Apologies to Farsi-speaking people everywhere. I pulled some text out of a Farsi text file to make up this tab-delimited merge file. I don't speak it, so it is likely severely nonsensical.
    The text editor being used here (first screenshot) is the open-source Notepad++. I am also not using the ME version of ID.
    Take care, Mike
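    The "save as UTF-16 BE" step can also be scripted. This is only a sketch under the assumption of a tab-delimited merge file (the field values here are made up): it writes the byte-order mark 0xFE 0xFF, which identifies the file as UTF-16 BE, followed by the text encoded accordingly:

    ```java
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public class WriteUtf16BE {
        public static void main(String[] args) throws IOException {
            // Hypothetical tab-delimited merge data with a Hebrew field value
            String merge = "name\tgreeting\nDavid\tשלום\n";
            try (FileOutputStream out = new FileOutputStream("merge.txt")) {
                out.write(new byte[] { (byte) 0xFE, (byte) 0xFF }); // UTF-16 BE BOM
                out.write(merge.getBytes(StandardCharsets.UTF_16BE));
            }
        }
    }
    ```

    Note that Java's UTF_16BE charset does not emit a BOM on its own, which is why the two BOM bytes are written explicitly.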

  • AE text replacement script, Hebrew characters and punctuation marks

    Hi,
    I'm using a script to replace text in one of my AE projects. When I use English characters it works perfectly; the problem is that once I try to replace the text with Hebrew, I get small arrows (as shown in the attached screen grab) before and after punctuation marks.
    Any idea what can cause this kind of behavior? Is it script-related, or the base AE file?

    These appear to be glyphs that should be substituted. It could be something to do with specific text flow, or with multi-variate glyphs that are substituted based on the context of neighboring characters. In any case, short of fixing it manually, I don't think there is any way to get this right. AE is quite limited in dealing with any of this stuff, and in the past you actually needed the Middle East versions to get correct RTL writing at all...
    Mylenium

  • A problem with inserting into DB hebrew strings

    Hi,
    I am working with an 8.1.7 DB version and use the thin driver.
    My DB character set is configured as ISO8859P8 (which is visual Hebrew).
    I have no problem making a connection and retrieving strings using SELECT * FROM ...
    and then the ResultSet.getString(String columnName) method.
    I also have no problem inserting Hebrew characters and retrieving them back (I present them in a servlet).
    The only problem I do have is when I try to insert a row into the DB in the following manner:
    INSERT INTO table_name VALUES( Hebrew_String_value1, Hebrew_String_value2, Hebrew_String_value3, Hebrew_String_value4)
    The insertion works fine, but somehow it misplaces the order of the strings, and the insertion is actually in the opposite order:
    Hebrew_String_value4, Hebrew_String_value3, Hebrew_String_value2, Hebrew_String_value1.
    If I use the same INSERT with English strings, there is no problem.
    Does anyone have a solution for inserting the strings in the right order?
    One solution I have is to insert only one column and then update the table for each remaining column, but then, instead of one execute() call, I have to make
    1 execute() + 9 executeUpdate() calls for a 10-column table.

    Can you try specifying the column order in your INSERT statement, i.e.
    INSERT INTO mytable( column1_name,
                         column2_name,
                         column3_name,
                         column4_name )
                 VALUES( column1_string,
                         column2_string,
                         column3_string,
                         column4_string )
    My wild guess, though I can't understand why at the moment, is that because Hebrew is read from right to left, that may be causing the problem.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
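    One way to tell whether the columns were really stored in reverse order, rather than merely displayed in reverse (Hebrew is right-to-left, and a visual-Hebrew client can flip the apparent order of runs on screen), is to dump the Unicode code points of each retrieved string and compare them with what was inserted. A small sketch with made-up sample values:

    ```java
    public class BidiOrderCheck {
        // Returns the logical (storage) order of a string as code points,
        // independent of how a terminal or GUI chooses to display it.
        static String codePoints(String s) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < s.length(); i++) {
                sb.append(String.format("U+%04X ", (int) s.charAt(i)));
            }
            return sb.toString().trim();
        }

        public static void main(String[] args) {
            // Hypothetical values read back from the table
            String[] values = { "אחד", "שתיים" };
            for (String v : values) {
                System.out.println(codePoints(v));
            }
        }
    }
    ```

    If the dumped code points match the inserted values column by column, the data is fine and only the display is reordering it.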

  • Encoding non-English characters with UTF-8 on JSP (Critical!!)

    I am inserting Hebrew characters from a JSP into an Oracle DB, and everything is fine up to this point. But when I try to retrieve the information from the database, the characters are not displayed properly (I get some garbage characters). I am sure that the data stored in the database is correct, but I am not sure why there is a problem displaying the data in the JSP.
    I came across a thread on TSS
    http://www.theserverside.com/discussions/thread.tss?thread_id=28944
    and followed the suggestions given there like having
    <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
    <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
    and also this:
    <%
    //Some JDBC and sql statement query UTF-8 data and then ...
    String str = rs.getString("utf8_data");
    str = new String(str.getBytes("ISO-8859-1"),"UTF-8");
    %>
    <%= str %>
    Now the data being displayed is partly correct; I mean to say, some characters still come out as squares.
    Any ideas will be of great help.

    Even I doubt the database charset for this issue. But what I don't understand is how only certain Hebrew characters are getting stored properly, and why the others are corrupted.
    Also, can anyone let me know how I can view the non-English characters present in the database directly, as TOAD is not able to display them?
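    For what it's worth, the str.getBytes("ISO-8859-1") trick quoted above works because ISO-8859-1 maps every byte value 0x00–0xFF to a character, so a UTF-8 string that was mis-decoded as Latin-1 can be re-encoded losslessly and then decoded correctly. A minimal sketch of that round trip; the remaining squares may mean the container actually used a charset such as Windows-1252, which leaves some byte values unmapped, though that is an assumption on my part:

    ```java
    import java.nio.charset.StandardCharsets;

    public class MojibakeDemo {
        public static void main(String[] args) {
            String original = "שלום"; // Hebrew "shalom"
            // Simulate a driver or page mis-decoding UTF-8 bytes as ISO-8859-1
            byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
            String garbled = new String(utf8, StandardCharsets.ISO_8859_1);
            // The workaround from the thread: re-encode as Latin-1, decode as UTF-8
            String repaired = new String(garbled.getBytes(StandardCharsets.ISO_8859_1),
                                         StandardCharsets.UTF_8);
            System.out.println(original.equals(repaired)); // true: no byte was lost
        }
    }
    ```

    The cleaner fix is to make every layer (page encoding, request encoding, JDBC) agree on UTF-8 so the re-decode hack isn't needed at all.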

  • Font Book is showing fonts in non-English characters

    I've been installing my old fonts and the like into Font Book on Mountain Lion, but I seem to be running into a small issue as I install.
    Besides the fact that I'm getting the 'A' image for some of the fonts, a fair few of the fonts also seem to be showing up in what looks to be some sort of alternate language.
    I uploaded a few examples here as well to help explain the problem I'm running into.
    This is actually a fairly new computer, only a few days old, so there hasn't been a lot I've done that I could assume would be causing this.
    Any help or suggestions would be great, thank you in advance for any assistance you can provide!

    The first and third fonts are showing their Hebrew characters as a sample; the second one is showing its Thai characters. The fourth one is showing a character from the Apple Last Resort font, which indicates a problem accessing the proper character.
    I don't know why Font Book does this, but I haven't seen any reports so far that it affects the usability of the fonts in question in any way. Does it for you?

  • LR 2.1: Unable to Open as Smart Object in PS CS3 due to Swedish characters in folder name

    When I try to open a raw file from Lightroom 2.1 as a smart object in Photoshop CS3 and the folder name contains Swedish characters, like åäöÅÄÖ, it will not open at all -> PS starts up and presents an "Open As" dialog.
    It works fine when I open images that are in a folder with only English characters...
    As I read that some apostrophe-related problems were fixed in the 2.1 release, I hoped that it covered all special characters allowed in folder names... The same problem occurs in version 2.

    My problem might be of a similar origin.
    I have also posted it on a different thread http://www.adobeforums.com/webx/.59b6903c
    I use mostly Hebrew characters for keywording (the file names are in English). After opening in CS3 and saving, the keywords are converted from Hebrew characters to accented characters like åäöÅÄÖ.
    This didn't happen in LR1, and it does not happen if I use CS3 as my secondary editor (not using the new LR2 "edit in CS3" functionality).
    Shai

  • '?' instead of National Characters

    Hello,
    I am using an 8.1.6 database with the UTF8 character set on a Win2000 machine. I have inserted some Hebrew characters into a table, and have been able to read them back properly using SQL*Plus.
    However, when I try to retrieve these values from Java using Oracle JDBC driver 8.1.6, I always get the '?' character. The problem arises when I use the getString method of the result set object.
    What I've noticed, though, is that if I call the getBytes method of the result set and create a new String instance from that data, it works fine, which makes me think maybe there is a bug in the getString method...
    What to do?

    My Hebrew string is four characters long and takes eight bytes... is this OK?
    I checked the DB NLS settings (still UTF8 ;-)
    But I discovered something new: when I change my computer's regional locale (through Regional Settings in the Control Panel) to English (US), it works! But when I change it back to the Hebrew locale, it returns '?' again.
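    Four characters taking eight bytes is exactly what UTF-8 predicts: Hebrew letters sit in the U+0590–U+05FF block, which UTF-8 encodes as two bytes per character. A quick sketch confirming that, plus the getBytes-based rebuild mentioned earlier (the sample word is my own):

    ```java
    import java.nio.charset.StandardCharsets;

    public class Utf8LengthCheck {
        public static void main(String[] args) {
            String hebrew = "שלום"; // four Hebrew letters
            byte[] utf8 = hebrew.getBytes(StandardCharsets.UTF_8);
            // Each Hebrew letter encodes as 2 bytes in UTF-8: 4 chars -> 8 bytes
            System.out.println(hebrew.length() + " chars, " + utf8.length + " bytes");
            // The workaround from the thread: rebuild the String from raw bytes
            String rebuilt = new String(utf8, StandardCharsets.UTF_8);
            System.out.println(hebrew.equals(rebuilt)); // true
        }
    }
    ```

    So the 4-chars/8-bytes observation is normal for UTF-8 and not itself a sign of corruption.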
