Entry of non-English characters into database

Hi
We are facing a problem inserting non-English characters into the database. For example, we have a company name field which can accept German characters. This field has been defined as VARCHAR2 of size 50 in the DB. When we enter 49 English characters and then one German character, the database throws an error that the inserted value is too large for the column. Is the German character treated as equivalent to two English characters? Or is there any database-level setting that can be done for this? For the time being we have identified certain critical fields and doubled their size in the DB, but I guess there has to be another solution to this.
Please help.
TIA
Vinoj

Indeed, your German character takes two bytes to store, and a VARCHAR2(50) column counts that limit in bytes by default. Consult the Oracle JDBC Developer's Guide.
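A minimal sketch of the column-level fix, assuming a hypothetical table name and an AL32UTF8 (Unicode) database; declaring the column with character length semantics lets it hold 50 characters regardless of how many bytes each one needs:

-- With the default BYTE semantics, 49 one-byte characters plus one
-- two-byte German character exceed a VARCHAR2(50) column.
CREATE TABLE company_demo (
  name_byte VARCHAR2(50 BYTE),  -- limit counted in bytes (the default)
  name_char VARCHAR2(50 CHAR)   -- limit counted in characters
);

-- LENGTH counts characters, LENGTHB counts bytes; in a UTF-8 database
-- they differ as soon as a non-ASCII character appears.
SELECT LENGTH('Straße') AS chars, LENGTHB('Straße') AS bytes FROM dual;

-- Session-level alternative: make CHAR semantics the default for new DDL.
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;

How many bytes the German character actually occupies depends on the database character set (two in AL32UTF8, one in a single-byte Western European set), so the byte-versus-character distinction above is the first thing to check.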

Similar Messages

  • Formatting non-English Characters in Database Extracts

    Hi
    I am trying to create a flat data file with Oracle SQL extracts. The data file is position based, i.e. positions 1-30 for First Name, 31-50 for Last Name, etc. I encountered problems when the data fields contain non-English characters: the positions shift right by 1 with every non-English character. For example, if there is one Spanish character in First Name, Last Name will start at position 32 instead of 31. If there are two Spanish characters, Last Name will start at 33 (shift 2). Is there any way, in a database session, to restrict the format of these text fields so that non-English characters will not affect the field positions in the flat file?
    Thanks in advance
    Jason

    An alternative might be to tab or comma delimit your data.
    Eric
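    A hedged sketch of both approaches against a made-up emp_names table: a delimited extract sidesteps the positional drift entirely, and if fixed byte positions are mandatory, the padding can compensate for the extra bytes of multi-byte characters:

    -- Delimited extract: column positions in the file no longer matter.
    SELECT first_name || '|' || last_name FROM emp_names;

    -- Fixed-width extract padded by bytes rather than characters: RPAD counts
    -- characters, so subtract the extra bytes each multi-byte character adds.
    SELECT RPAD(first_name, 30 - (LENGTHB(first_name) - LENGTH(first_name)))
           || RPAD(last_name, 20 - (LENGTHB(last_name) - LENGTH(last_name)))
      FROM emp_names;

    This assumes the values never exceed their field widths and that the padding character is a single-byte space.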

  • Non-English characters in DN cannot be retrieved

    We are using Netscape Directory Server 4, protocol v3. We have a problem related to non-English characters appearing in an RDN.
    We publish entries to LDAP using values from the database. For example, we have published an entry to LDAP which, based on the DB values, should have a DN like: ou=Liege BELGIUM ... LGG1a, <other components of DN>. However, when we call the Netscape search API (searching against the uid attribute, which does not contain non-English characters), the search returns the entry, but a further call to getDN() on the returned LDAP entry only returns "Li" instead of the complete DN value.
    It seems the entry is corrupted in LDAP. I wanted to delete the corrupted entry and re-create a new one to test. I tried many ways, but none of them worked; I think it is because the DN is corrupted, so there is no key value to identify the LDAP entry for any operation (modify, delete).
    Your help and insights are much appreciated.
    Thanks.
    Han Shen

    LDAP uses the UTF8 encoding. You must store data in the directory using the UTF8 encoding. This includes DN values. This also means that if you want to be able to view the values in your native character set and font, you must use an application that can convert the UTF8 LDAP data back to the native character encoding. The directory console by default should work for LATIN-1 (ISO 8859) languages if the LOCALE is set correctly.

  • Encoding non-English characters with UTF-8 on JSP (Critical!!)

    I am inserting Hebrew characters from a JSP into an Oracle DB and everything is fine up to this point. But when I try to retrieve the information from the database, the characters are not displayed properly (I get some garbage characters). I am sure that the data stored in the database is correct, but I am not sure why there is a problem in displaying the data in the JSP.
    I came across a thread on TSS
    http://www.theserverside.com/discussions/thread.tss?thread_id=28944
    and followed the suggestions given there like having
    <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
    <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
    and also this:
    <%
    //Some JDBC and sql statement query UTF-8 data and then ...
    String str = rs.getString("utf8_data");
    str = new String(str.getBytes("ISO-8859-1"),"UTF-8");
    %>
    <%= str %>
    Now the data displayed is partly correct; I mean to say, some characters are still coming out as squares.
    Any ideas will be of great help.

    I also suspect the database charset is behind this issue. But what I don't understand is how only certain Hebrew characters are getting stored properly while others are corrupted.
    Also, can anyone let me know how I can view the non-English characters present in the database directly, as TOAD is not able to display them?
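    One hedged way to inspect what the database actually stored, independent of TOAD's display (the table and column names here are placeholders):

    -- Character set name plus the raw bytes in hex that Oracle stored.
    SELECT DUMP(utf8_data, 1016) FROM hebrew_test WHERE id = 1;

    -- The same value with non-ASCII characters escaped as \XXXX code points.
    SELECT ASCIISTR(utf8_data) FROM hebrew_test WHERE id = 1;

    -- The database character set itself.
    SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

    If the dump already shows wrong bytes, the corruption happened on insert; if the bytes are correct, the problem is in how the JSP decodes and renders them.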

  • Formula to show non-English characters from CLOB in Crystal Reports

    Hi
    I am using Oracle 11g with a CLOB field in one of the tables that I want to show in a Crystal Report.
    The problem is that when I put the CLOB field in the Crystal Report, it outputs the results perfectly for English characters but not for the Arabic ones, returning strings like (¿¿¿¿ ¿¿¿¿ ¿¿¿ ¿¿¿¿ ¿¿¿¿ ¿¿¿ ¿ ¿¿¿¿¿ ¿¿¿).
    So is there any way to show the Arabic (non-English) characters correctly in a Crystal Report with a CLOB field?

    Hi Azeem,
    Make sure the Arabic font is installed on your system.
    Try this:
    Create a text field in your Crystal Report (a label).
    Place Arabic characters into that field (just by typing them into it on the report definition).
    Run the report. If they display correctly, then it's probably not Crystal, but rather an issue in the data retrieval and supply to Crystal via your dataset (or whatever data source you are using).
    If they don't display, then it's definitely Crystal.
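    Inverted question marks usually indicate a character-set conversion loss rather than a missing font. A hedged check, with placeholder table and column names, to see whether the '¿' characters are already stored in the CLOB or only appear on the way to Crystal:

    -- UNISTR('\00BF') is the inverted question mark; if it is found in the CLOB,
    -- the data was already damaged before Crystal ever saw it.
    SELECT CASE WHEN INSTR(clob_col, UNISTR('\00BF')) > 0
                THEN 'replacement characters are stored in the data'
                ELSE 'stored data looks intact'
           END AS verdict
      FROM report_source
     WHERE id = 1;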

  • Display non-english characters in its own corresponding language in excel

    Hello Experts,
    I have description texts in Chinese and other languages which are visible properly in the debugger in my internal table.
    After downloading the data into an Excel sheet at my file path, the non-English descriptions are displayed as #### when the file is opened.
    Please help me display the non-English descriptions in the Excel sheet in their own corresponding languages.
    Note: Function module used: GUI_DOWNLOAD
          File type assigned: 'ASC'
    Edited by: keerthi shanker on Mar 14, 2008 11:02 AM

    Hello Vasanth,
    Please explain what you meant by 'Last Button in SAP screen'.
    Well, to reiterate my problem: I have data retrieved from the SAP database with values in multiple languages, and it displays properly in the internal table when checked in the debugger.
    After the execution of FM 'GUI_DOWNLOAD', when I open the file from my desktop, each non-English character (such as Chinese and Japanese) is displayed as a hash symbol.

  • setMnemonic for non-English characters

    Does anybody know how to set a JButton's mnemonic for non-English characters?
    My mnemonic is loaded from a resource bundle, and in the documentation setMnemonic(char) is limited to English; it is written that the user should call setMnemonic(int) instead.
    So what value should this int contain in order to display the non-English char which is loaded from the resource bundle?
    Thanks in advance,
    Hanoch

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
    And since those values are basically the English (ASCII) character set + a bunch of function keys, it doesn't solve the original problem - how to specify mnemonics that are not part of the English character set. The more I look at this I don't really understand the reason for making setMnemonic (char mnemonic) obsolete and making setMnemonic (int mnemonic) the default. If anything this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (and is that in fact the case?) It is established practice on other platforms to be able to use accented characters such as 'å', 'ä' and 'ö', for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.

  • My Firefox cannot display non-English characters, even though I have tried every language encoding I have!

    I am a big fan of Japanese songs and websites, so I was very disappointed when I saw that Firefox could not handle any non-English characters. I have tried every encoding I can, but none work and I just see boxes with numbers and letters inside. I have only just got this older laptop for my birthday - my old laptop which ran Windows Vista and had Firefox 4 had no trouble at all. Please help me!

    Hello muoshui, please enter about:config into the Firefox location bar (confirm the info message in case it shows up) and search for the preference named network.http.accept-encoding; right-click and reset that entry to the default value.
    If this does not resolve the issue already, please also go through the steps offered at "Websites look wrong or appear differently than they should".

  • Word Replacements for Non-English Characters

    Hi
    Does anyone have an idea on implementing word replacements for non-English characters in TCA-DQM 11i?
    We are trying to identify, capture and cleanse common accented characters like à, â, ê.
    However, the default language for replacement is American English, so even if we add these to the existing lists they will not take any effect.
    Is creating a new word replacement list for every language the solution? Any patch recommendations?
    Thanks in advance

  • Non-English characters in SQL*Plus

    Friends,
    9.0.1, 10.2.0.1
    Can non-English characters be displayed at the SQL*Plus prompt?
    I have to print data containing some Hindi characters (such as names) from the report generated via Pro*C.
    What is the workaround for this?
    Thanks

    Hi,
    I have RH Linux 7.3.
    Here is my situation.
    I have some tables with columns in which I need to store data in Hindi.
    The database character set (NLS_CHARACTERSET) is US7ASCII.
    The columns are of VARCHAR2 type.
    The tables are accessed through Pro*C and the Pro*C code generates the report.
    The data in these reports needs to be printed out, and the corresponding data has to be in Hindi.
    So I have to do two things:
    1) On querying the tables, the user should see Hindi data along with English data.
    2) The printed report should also contain Hindi data.
    Questions:
    1) If I change the character set with the command:
    alter database character set AL32UTF8;
    and the OS locale to UTF-8,
    would these changes solve the purpose?
    2) Or do I need to recreate the database with AL32UTF8 and exp/imp all the data?
    3) Any other advice/options?
    Thanks
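    A hedged sketch of the checks usually run before choosing between 1) and 2); the safe migration path away from US7ASCII depends on what these show and on a scan of the existing data:

    -- Current database and national character sets.
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

    -- The client matters too: SQL*Plus can only display Hindi if the client
    -- NLS_LANG (for example AMERICAN_AMERICA.AL32UTF8) and the terminal font
    -- support it.

    Note that ALTER DATABASE CHARACTER SET is only supported when the new character set is a superset of the current one, so recreating the database as AL32UTF8 and exporting/importing the data (option 2) is the more commonly recommended route.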

  • Parsing Non English characters from OPEN DATASET

    Hi Team,
    I'm trying to download a .txt file from the application server using OPEN DATASET in a Windows environment; the system language is set to English.
    The problem is that the .txt file sometimes contains non-English (Spanish) characters. When I download the data into SAP it is not downloaded properly, e.g. ñ comes into SAP as something like A+_.
    My goal is to download the Spanish characters properly into SAP.
    Please advice how to solve this problem.
    Thanks,
    Selvaraj

    Hi Selvaraj!
    I haven't checked the situation for 4.5B, nor can I predict the exact behavior, but you can give the following statement a try:
    SET LOCALE LANGUAGE lg.
    This should even change the content of sy-langu for the whole roll area (internal session), so change the value back afterwards.
    Regards,
    Christian

  • External Tables & Non-English Characters

    Hi
    I am new to external tables. I wanted to know what I need to do to upload data that contains non-English characters.
    Example:
    I need to upload data where a single file will contain data in several languages, including non-English ones:
    1. English
    2. Chinese
    3. Korean
    Let me know if any other alternative exists for the same.
    Regards
    Yram

    Hi,
    a good reference for External Tables is the manual: http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/part_et.htm
    If they are in one file, you can upload them all at once. What matters is how the data is stored in the file (UTF-8, UTF-16); you declare this in the external table with the CHARACTERSET clause, see the manual.
    Your database also needs to support these characters!
    Herald ten Dam
    htendam.wordpress.com
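    A minimal sketch of the CHARACTERSET clause mentioned above, with made-up directory, file and column names; the clause tells the access driver how the flat file is encoded, while the database character set (ideally AL32UTF8) determines what can actually be stored:

    -- Assumes a directory object ext_dir pointing at the file's location.
    CREATE TABLE names_ext (
      name VARCHAR2(100 CHAR)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
        FIELDS TERMINATED BY ','
      )
      LOCATION ('names.txt')
    );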

  • Non-English Characters (Encoding)

    Using the XMP Toolkit I'm having problems reading and writing non-English characters. For example: keywords which should read "casa campesina, cultivos agrícolas, zona cafetera, café, plátano" come back as "casa campesina, cultivos agrÃcolas, zona cafetera, café, plátano". I have the same problem with other languages such as Norwegian.
    Does anyone else have this type of problem?  Or perhaps a suggestion as to what I might be doing to cause such a problem?
    Best regards,
    Glenn Rogers
    Developer of DBGallery: Photo DATAbase System

    Hi Glen,
    if you write non-ASCII characters using our toolkit, you have to make sure to encode your string in UTF-8.
    If you see this while reading, the data in the file might not be valid UTF-8. If it is in a local encoding (for example Mac local encoding) our library will try to convert it based on the OS you are running it on. So if you got Mac local encoding in the EXIF of the file and you are using the toolkit on Windows, this might cause the wrong characters you are seeing.
    In order to avoid this, please always use UTF-8 encoded strings.
    Regards,
    Samy
    XMP Team

  • Loading Non-English Characters using VBA and BAPI

    Hi Experts,
    I am trying to load Non-English characters (Chinese, Korean, Japanese, etc.) into a SAP Table using BAPI and VBA. I have set the connection language and codepage values but when I run the tool, the non-English characters display as ????? or #####. Do you know how to fix this issue?
    Thanks!

    If your language is a Unicode one, then you need to change the options in SAP: in the initial screen, go to Customize Local Layout (Alt+F12), options 118 --> Encoding ....
