Predictive text non-English characters to be made optional

I just filed an enhancement request for this feature, please vote for it HERE:
Predictive text non-English characters to be made optional
The story is: while using predictive text for non-English languages (Polish in my case), the dictionary words are grammatically correct, which means they include special characters like ą, ę, ć, ś, ż, ź, ó, ł, etc. For texting (SMS), operators count each of these as 3 characters, making a message much longer than it looks. So I can tell you that no one uses these characters while texting; people type the plain English equivalents instead (a, e, c, s, z, o, l...), which makes predictive text useless for, e.g., Polish.
I'd like an option to switch these non-English characters off for predictive text. That is grammatically incorrect, but in real life that's how people type.
So basically, if there were an option to disable language-specific characters, I would get the suggestion 'Prosze' instead of the grammatically correct 'Proszę'. 'Prosze' is a 6-character word; 'Proszę' counts as 5+3=8 characters. Considering a single SMS message is 300 characters, that really makes a difference.
A simple solution would be to replace every ą with a, ć with c, ó with o, etc., in each suggested word, for the users who enable this option.
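For illustration only (this says nothing about how any phone's predictive engine is actually implemented), the substitution described above can be sketched as a simple character map; the class and method names are placeholders, and the mappings are the Polish letters listed above:
import java.util.HashMap;
import java.util.Map;

public class PolishFold {
    // Mapping from the Polish letters mentioned above to their plain-ASCII stand-ins
    // (uppercase forms omitted for brevity).
    private static final Map<Character, Character> FOLD = new HashMap<>();
    static {
        FOLD.put('ą', 'a'); FOLD.put('ę', 'e'); FOLD.put('ć', 'c'); FOLD.put('ś', 's');
        FOLD.put('ż', 'z'); FOLD.put('ź', 'z'); FOLD.put('ó', 'o'); FOLD.put('ł', 'l');
    }

    // Replace each language-specific character in a suggested word with its ASCII equivalent.
    static String fold(String suggestion) {
        StringBuilder sb = new StringBuilder(suggestion.length());
        for (char c : suggestion.toCharArray()) {
            Character mapped = FOLD.get(c);
            sb.append(mapped == null ? c : mapped.charValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(fold("Proszę"));  // prints "Prosze"
    }
}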

Hi,
You can write code in the PAI of the main screen; there, by using LOOP AT SCREEN, you can make that field editable or disabled.
Code sample:
loop at screen.
* condition for value check
  if screen-name = 'TEXT_EDIT_NAME'.
    screen-output = 1.
    screen-input = 0.
    modify screen.
  endif.
endloop.
Hope this will help you.

Similar Messages

  • How to Identify non-english characters in a Text

    Hi Experts,
    I have a text coming from KNA1-NAME1 which at times contains non-English characters / languages. I want to identify them in my code so that I can skip them.
    Can you please point me to some command / FM that helps identify these non-English characters?
    Regards,
    Nirmal

    Hi,
    I am fine with English characters A-Z, a-z, digits 0-9, or special characters. But the text sometimes contains Chinese, Japanese, or other non-English characters which I don't want.
    The logic you explained above would require me to list all the valid characters, and it would also be a performance constraint. Hence I wanted an FM or a standard procedure. Can we use ASCII somehow? (See the sketch at the end of this thread.)
    Regards,
    Nirmal
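    A language-agnostic sketch of the ASCII idea mentioned above (this is not an SAP FM; the class and method names are placeholders): treat any character outside the printable ASCII range as non-English.
    public class AsciiCheck {
        // Returns true only if every character lies in the printable ASCII range 0x20..0x7E.
        static boolean isPlainAscii(String s) {
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c < 0x20 || c > 0x7E) {
                    return false;   // found a non-English / non-ASCII character
                }
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isPlainAscii("Smith GmbH"));  // true
            System.out.println(isPlainAscii("山田商事"));      // false
        }
    }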

  • Text to speech non english characters

    I like Alex a lot. And I like how I can highlight text, press a hotkey, and Alex will start reading away.
    However, I read material that includes some non-English characters. It's -very- annoying when Alex announces what language and alphabet he's reading before saying the sound. It would be so much better if he just said the sound, or just skipped it; I could live with the latter just fine. Are there any options anywhere to adjust this?

    see this article:
    http://homepage.mac.com/thgewecke/iwebchars.html
    max

  • [AS] Problem with non English characters in file path

    I wrote a script that exports a pdf file from ID, rasterizes it in PS, applies an action, saves it as another pdf file, and finally creates a Mail message, and attaches the file to it (the last part is written in AppleScript).
    The problem is that it doesn't work when the path to this file contains non English characters.
    This works:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Tetard/Test.pdf"}
    but this doesn't:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Têtard /Test.pdf"}
    I remember vaguely reading somewhere that AppleScript can work with Unicode (in other words, with such characters) starting from some version; I don't remember which exactly, but it seems to me it was Leopard.
    I am on Mac OS X 10.4.11 right now. Will updating solve this problem? Does anybody know any solution to this problem: a scripting addition, some hidden setting, etc.?
    I made a little test: I used a Russian character (ё) and it works, but when I use ê (Dutch) it doesn't. Might it have something to do with the Region setting in the International panel?
    Thanks in advance,
    Kasyan

    Kasyan, as of Leopard AppleScript treats all text as Unicode; before that you can specify 'as Unicode text'. Try a test with these.
    -- Leopard
    set x to POSIX path of (path to desktop)
    -- Pre Leopard
    set x to POSIX path of (path to desktop as Unicode text)
    -- Leopard
    set x to POSIX path of (choose file without invisibles)
    -- Pre Leopard
    set x to POSIX path of ((choose file without invisibles) as Unicode text)

  • Removing non-English characters from data.

    Ours is a global system, and some of the data contains non-English characters. We want to download a file with these non-English characters removed.
    Any suggestions on how we can remove these non-English characters from the file?

    The FM you mentioned:
         Replace non-standard characters with standard characters
       Functionality
         SCP_REPLACE_STRANGE_CHARS processes a text so that it only contains
         simple characters. Special characters and national characters are
         replaced in such a way that the text remains reasonably legible.
         The character set 1146 is used by default. In this case the following
         replacements are made, for example:
          Æ ==> AE        (AE)
          Â ==> A         (Acircumflex)
          Ä ==> Ae        (Adieresis)
          £ ==> L         (sterling)
         Note that the new text can be longer than the old.
    So I don't think it will be useful for eliminating the special characters.
    You would have to check each and every character against the standard 26 letters of the alphabet (a sketch of a related approach follows below).
    Thanks & Regards
    vinsee
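    For illustration only (this is not the SAP FM discussed above, just the same idea in Java): decompose accented letters into a base letter plus combining marks, drop the marks, then drop anything still outside plain ASCII. Unlike the FM, this simply drops characters such as Æ or £ instead of transliterating them.
    import java.text.Normalizer;

    public class StripNational {
        // Replace accented Latin letters with their base letters and drop other non-ASCII characters.
        static String toPlainAscii(String s) {
            String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
            String noMarks = decomposed.replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
            return noMarks.replaceAll("[^\\x20-\\x7E]", "");
        }

        public static void main(String[] args) {
            System.out.println(toPlainAscii("Müller Âcme"));  // prints "Muller Acme"
        }
    }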

  • Non English characters in BIP email

    Hi, my report contains Japanese characters. When I view the output in HTML format, it is displayed properly. But when I click the send button, enter email parameters like to, cc, bcc, subject, etc., and send it, the Japanese characters in the mail I receive are not displayed properly. The same problem occurs for Spanish and Portuguese text, and in general for all non-English characters. I am using Oracle Business Intelligence Publisher Release 10.1.3.4. If someone has faced a similar issue, kindly help. Thanks in advance.

    Suggestions
    1) Try with NLS_LANG as
    SWEDISH_SWEDEN.WE8DEC
    2) Make a paramform and enter via paramform (unencoded)
    (This is just for testing purpose)
    3) Change machine locale to swedish and try
    4) Which reports version is this ?
    Please see
    BUG 2713695 - NLS CHARACTERS FOR PARAMETERS CHANGE TO QUESTION MARKS WHEN PASSED ON URL BAR
    Get in touch with Support to see if this is the issue and if "yes" get a one-off patch.
    [    All Docs for all versions    ]
    http://otn.oracle.com/documentation/reports.html
    [     Publishing reports to web  - 10G  ]
    http://download.oracle.com/docs/html/B10314_01/toc.htm (html)
    http://download.oracle.com/docs/pdf/B10314_01.pdf (pdf)
    [   Building reports  - 10G ]
    http://download.oracle.com/docs/pdf/B10602_01.pdf (pdf)
    http://download.oracle.com/docs/html/B10602_01/toc.htm (html)
    [   Forms Reports Integration whitepaper  9i ]
    http://otn.oracle.com/products/forms/pdf/frm9isrw9i.pdf

  • Encoding non english characters with utf 8 on jsp (Critical!!)

    I am inserting Hebrew characters from a JSP into an Oracle DB, and everything is fine up to this point. But when I try to retrieve the information from the database, the characters are not displayed properly (I get some garbage characters). I am sure that the data stored in the database is correct, but I am not sure why there is a problem in displaying the data in the JSP.
    I came across a thread on TSS
    http://www.theserverside.com/discussions/thread.tss?thread_id=28944
    and followed the suggestions given there, like having
    <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
    <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
    and also this:
    <%
    // Some JDBC and SQL statement queries the UTF-8 data and then ...
    String str = rs.getString("utf8_data");
    str = new String(str.getBytes("ISO-8859-1"), "UTF-8");
    %>
    <%= str %>
    Now the displayed data is partly correct; I mean to say, some characters still come out as squares.
    Any ideas will be of great help.

    Even I doubt the database charset is the issue. But what I don't understand is how only certain Hebrew characters are stored properly while others are corrupted.
    Also, can anyone let me know how I can view the non-English characters present in the database directly, as TOAD is not able to display them?
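    For reference, a minimal standalone sketch of the usual end-to-end UTF-8 approach, assuming the column really holds correct UTF-8 data (e.g. an AL32UTF8 database) and the page is served as UTF-8; the connection string, user, and table are hypothetical. The key point is to let the driver return a proper Java String and not re-encode it via getBytes("ISO-8859-1"):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Utf8Fetch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details, for illustration only.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
                 PreparedStatement ps = con.prepareStatement("SELECT utf8_data FROM demo_table");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // The driver already returns a proper Java String; do not re-encode it.
                    String str = rs.getString("utf8_data");
                    System.out.println(str);  // in a JSP, write it to a response declared as UTF-8
                }
            }
        }
    }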

  • Support issue for non-English characters (in html forms)

    Hi group!
    I just want to post an issue here and see if anyone else has the same problem. First off, I'm running Windows XP MCE, but the French version (not the English version). This may help find out where the problem really is.
    Second, I know a bit of HTML and such, and I'm referring to HTML character entities for this thread; there's a quite complete list here for reference: http://www.faqs.org/docs/htmltut/characterentitiesfamsupp69.html
    I noticed that some, not all, non-English characters written in a textarea (which is, basically, a multi-lined input box) don't pass well, or at all, to the server when sending the form from Safari. Most of the time, the content of the textarea is reduced to its beginning and ends where the first accented character is met.
    The most used French accents (&eacute;, &agrave;) are usually well interpreted by Safari (but may, once in a while, produce that bug too), but &ocirc; and &icirc; don't do so well.
    Oddly, this bug doesn't happen all the time and doesn't "crash" in the same manner every time.
    So I started a thread just to see if there's anyone else having issues with non-English characters, mostly in forms. Flash/Shockwave probably works, but I'm not sure; I have not tested it yet.
    Acer Aspire 5044   Windows XP   Turion 1.8GHz, 1Gb SDRam, ATI 200M xpress

    Yes, it is a known issue. I also noticed that it sometimes works, but most of the time it does not. It will hopefully be solved in the future. According to http://www.apple.com/safari/download/ the changes to come include:
    # Support for International users
    # International text input methods
    # Advanced text (contextual forms, international scripts)
    Sony Vaio   Windows XP  

  • Formula to show non english characters from clob in crystal report

    Hi
    I am using Oracle 11g with a CLOB field in one of the tables that I want to show in a Crystal Report.
    The problem is that when I put the CLOB field in the Crystal Report, it outputs the results perfectly for English characters but not for the Arabic ones, returning a string like (¿¿¿¿ ¿¿¿¿ ¿¿¿ ¿¿¿¿ ¿¿¿¿ ¿¿¿ ¿ ¿¿¿¿¿ ¿¿¿).
    So is there any way to show the Arabic (non-English) characters correctly in a Crystal Report with a CLOB field?

    Hi Azeem,
    Make sure the Arabic font is installed on your system.
    Try this:
    Create a text field in your Crystal Report (a label).
    Place Arabic characters into that field (just by typing them into it in the report definition).
    Run the report. If they display correctly, then it's probably not Crystal; it would instead point to an issue in the data retrieval and supply to Crystal via your dataset (or whatever data source you are using).
    If they don't display, then it's definitely Crystal.

  • Can't CF use non English characters in URLs ???? (critical for SEO)

    Hi all,
    I want to use non English characters (Greek characters) for folders of the URLs.
    eg   http://www.mysite.com/Φάκελος/index.cfm
    where "Φάκελος" is a non English word (Greek).
    When the called page is simple HTML  eg   www.mysite.com/Φάκελος/index.HTM
    it's displayed just fine.
    When the called page is CF page  eg   www.mysite.com/Φάκελος/index.CFM
    I get a "FILE NOT FOUND" error.
    In the page where the link exists everything is UTF-8.
    What's the problem ? Can't CF use non English characters in URLs ????
    It's critical for SEO issues.
    I use CF9.  Any ideas ???
    Thanks in advance.
    Anastassios

    I don't have this setting in the email application. But as far as I know, HTML with Exchange works only with the 2007 version; my server is still 2003, so I think in my case it's plain text only.
    But I'll say it again: the good old E60 with MfE (which I'm now starting to miss) worked very well!

  • Display non-english characters in its own corresponding language in excel

    Hello Experts,
    I have description texts in Chinese and other languages which are visible properly in the debugger in my internal table.
    After downloading the data into an Excel sheet at my file path, the non-English descriptions are displayed as #### when the file is opened.
    Please help me display the non-English descriptions in the Excel sheet in their own corresponding languages.
    Note:  Function module used: GUI_DOWNLOAD
           File type assigned:   'ASC'
    Edited by: keerthi shanker on Mar 14, 2008 11:02 AM

    Hello Vasanth,
    Please explain what you meant by 'Last Button in SAP screen'.
    Well, to reiterate my problem: I have data retrieved from the SAP database with values in multiple languages, and it displays properly in the internal table when checked in the debugger.
    After executing FM 'GUI_DOWNLOAD', when I open the file from my desktop, the non-English characters (Chinese and Japanese) are each displayed as a hash symbol.

  • Formatting non-English Characters in Database Extracts

    Hi
    I am trying to create a flat data file with Oracle SQL extracts. The data file is position based, i.e. positions 1-30 for First Name, 31-50 for Last Name, etc. I encountered problems when the data fields contain non-English characters: the positions shift right by 1 with every non-English character. For example, if there is one Spanish character in First Name, Last Name will start from 32 instead of 31. If there are two Spanish characters, Last Name will start from 33 (a shift of 2). Is there any way, in a database session, to restrict the format of these text fields so that non-English characters do not affect the field positions in the flat file?
    Thanks in advance
    Jason

    An alternative might be to tab- or comma-delimit your data; the sketch below shows why fixed byte positions drift.
    Eric
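    To illustrate why the positions drift (plain Java, just to show the counts): an accented character occupies more than one byte in a multi-byte character set, so padding by bytes and padding by characters disagree.
    import java.nio.charset.StandardCharsets;

    public class WidthDemo {
        public static void main(String[] args) {
            String name = "José";
            System.out.println(name.length());                                 // 4 characters
            System.out.println(name.getBytes(StandardCharsets.UTF_8).length);  // 5 bytes: é takes 2 bytes in UTF-8
            // If the extract pads First Name to 30 bytes, the next field starts one byte late,
            // which is exactly the 1-position shift described in the question.
        }
    }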

  • Encrypting non-English characters

    Hi,
    I have this application which has to do the following
    Scenario (i)
    - Read ENCODED string SE from Network Source NS1 (Native,Non-JAVA)
    - Decode SE to SD using the same charset as NS1
    - Apply some transformation to SD to get SD2
    - Encode SD2 to get SE2 using the same charset as Network Source NS2
    (Native, Non-JAVA)
    - Send SE2 to NS2
    - NS2 gets what it expects without any problems :))
    Scenario (ii)
    - Read ENCODED string SE from Network Source NS1 (Native,Non-JAVA)
    - Decode SE to SD using the same charset as NS1
    - Apply some transformation to SD to get SD2
    - Get the bytes from SD2 as BSD2
    - Encode BSD2 to get BSE2 using the same charset as Network Source NS2
    (Native, Non-JAVA)
    - Encrypt BSE2 to BSE2_Enc
    - Send BSE2_Enc to NS2 (Native,Non-JAVA)
    - NS2 does not get what it expects :((
    (It receives English text OK but it gets ???? for non-English)
    The charset being used is windows-1256 (at NS1,NS2 and my application)
    Encryption is being done using BouncyCastle TwoFish w/ 256 bit keys
    Reading/Writing from/to network is being done over SocketChannel
    Get the bytes from SD2 as BSD2 => byte[] BSD2 = SD2.getBytes();
    It seems the non-English characters are getting lost when I do SD2.getBytes(), and they get encrypted as 'lost non-English characters' ;)
    And when they get decrypted at NS2, they are displayed as 'lost non-English characters' :)) i.e. ??????? and so on.
    Is there a way I can encrypt non-English plain text without losing information?
    (without having to implement a TwoFish engine in my application itself)

    1) Bytes are not characters. Characters are UNICODE and have a byte representation defined by an encoding scheme. It is usually wrong to use the default encoding given by String.getBytes(). One should really use String.getBytes(encoding), e.g. "fred".getBytes("UTF-8");
    Alright ... got that :)) Thanks buddy.
    2) Not having access to your code makes it difficult, but make sure you are not converting encrypted bytes to a String using new String(encryptedBytes);
    No .. I am not doing that.
    3) Again, not having access to your code makes it difficult, but when you display your Strings make sure that you use a Font that has representations for all the UNICODE characters you wish to display. It is normal for any character that does not have a valid glyph in a given font to display as a box.
    That infrastructure exists and is working fine ... as I mentioned, this is working OK when plain text is being used.
    The problem was with using getBytes() rather than getBytes("windows-1256").
    It's working now ... thanks a lot .. again. I wonder why that never occurred to me.
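    A minimal sketch of that charset-safe pattern, using the JDK's built-in AES here instead of the BouncyCastle Twofish engine from the thread (and ECB mode only to keep the demo short): encode with an explicit charset before encrypting, and decode with the same charset after decrypting.
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.Charset;

    public class CharsetSafeCrypto {
        public static void main(String[] args) throws Exception {
            Charset cs = Charset.forName("windows-1256");  // the charset NS1/NS2 expect
            String plain = "نص عربي";                      // sample non-English text

            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
            enc.init(Cipher.ENCRYPT_MODE, key);
            byte[] cipherBytes = enc.doFinal(plain.getBytes(cs));  // explicit charset, not getBytes()

            Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
            dec.init(Cipher.DECRYPT_MODE, key);
            String roundTrip = new String(dec.doFinal(cipherBytes), cs);  // decode with the same charset

            System.out.println(plain.equals(roundTrip));  // true: nothing is lost
        }
    }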

  • SetMnemonic for non-english characters

    Does anybody know how to set a JButton's mnemonic for non-English characters?
    My mnemonic is loaded from a resource bundle, and in the documentation setMnemonic(char) is limited to English; it says the user should call setMnemonic(int) instead.
    So what value should this int contain in order to use the non-English char loaded from the resource bundle?
    Thanks in advance,
    Hanoch

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
    And since those values are basically the English (ASCII) character set + a bunch of function keys, it doesn't solve the original problem - how to specify mnemonics that are not part of the English character set. The more I look at this I don't really understand the reason for making setMnemonic (char mnemonic) obsolete and making setMnemonic (int mnemonic) the default. If anything this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (and is that in fact the case?) It is established practice on other platforms to be able to use characters such as '�', '�' and '�', for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.
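    For reference, a minimal sketch of the usual resource-bundle mnemonic pattern (the bundle name and keys are hypothetical); it only works when the character's uppercase form coincides with a VK_ keycode (A-Z, 0-9), which is exactly the limitation discussed above:
    import javax.swing.JButton;
    import java.util.ResourceBundle;

    public class MnemonicDemo {
        public static void main(String[] args) {
            // Hypothetical bundle "labels" with keys "save.label" and "save.mnemonic".
            ResourceBundle bundle = ResourceBundle.getBundle("labels");
            JButton save = new JButton(bundle.getString("save.label"));

            char m = bundle.getString("save.mnemonic").charAt(0);
            // For a-z/A-Z the VK_ keycode equals the uppercase character value;
            // for accented characters no such keycode exists, which is the problem above.
            save.setMnemonic((int) Character.toUpperCase(m));
            // ... add the button to a JFrame as usual
        }
    }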

  • I have non English characters showing up.

    This morning I wanted to make a change in my Security & Privacy settings, and when prompted to authenticate, the dialogue box contained non-English characters. Even the OK button appeared to be in Chinese or the like.
    Any ideas welcome.  OS X 10.8.1

    It's not all of the text, though. It's just the authentication popup that asks for your password when you want to unlock the padlock to make changes. It comes up with "System Preferences", then two lines of non-English characters, followed by "Type your password to allow this". Then of course you have the Name and Password text fields. The Cancel button is normal, and what would normally be the OK button is in non-English text.
    Everything else system-wide is fine.
    It's got me...
