Multi-character-set support
We plan to implement Interconnect to get new customer records, created in many databases at distributed locations, populated into a central system.
All the databases run Oracle Database 9.x, and we plan to use DB adapters to access each of them.
Three of these databases store data in double-byte character sets (Traditional Chinese, Simplified Chinese, Japanese); the others use standard US/American English.
My questions are:
1) Does my central system's database have to be set to a particular character set too?
2) Do I have to configure Interconnect to ensure that data from the non-English systems is retrieved and populated correctly into my central database?
Thanks for any response.
Hi,
We are also facing a similar problem. We are using IC version 9.0.2, with an AQ adapter at the source and a DB adapter at the destination. The character set of the DB application database is UTF-8, whereas that of the AQ application database is AR8MSWIN1256. We are using a request/reply scenario, but the reply message that the AQ adapter enqueues to the queue contains '??????' instead of the Arabic data.
We have specified the encoding type in both adapter.ini files, but to no effect.
We also followed the steps stated in Metalink bug 2375248, but nothing works. Can somebody help us?
-Vimala
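For what it's worth, the '??????' symptom usually means a character-set conversion replaced characters that the target set cannot represent; it is a conversion problem, not a transport problem. The effect is easy to reproduce outside Interconnect (a Java sketch; the Arabic sample word is just an illustration):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class ReplacementDemo {

    // Encode text into a charset and decode it back. If the charset cannot
    // represent a character, String.getBytes silently substitutes the
    // charset's replacement byte ('?'), which is exactly what ends up in
    // the enqueued reply message.
    static String roundTrip(String text, Charset target) {
        byte[] bytes = text.getBytes(target);   // lossy encode
        return new String(bytes, target);       // decode back
    }

    public static void main(String[] args) {
        String arabic = "\u0633\u0644\u0627\u0645";  // Arabic "salam"
        // ISO-8859-1 has no Arabic letters, so every character is lost:
        System.out.println(roundTrip(arabic, StandardCharsets.ISO_8859_1)); // ????
        // UTF-8 can represent them all, so the round trip is lossless:
        System.out.println(roundTrip(arabic, StandardCharsets.UTF_8).equals(arabic)); // true
    }
}
```

So the place to look is wherever the AR8MSWIN1256 data passes through a character set that lacks Arabic (for instance, a mismatched NLS_LANG on one of the agent hosts), not the queue itself.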
Similar Messages
-
Hi,
I need to open my application up to international users and support multiple languages, mostly confined to Europe.
I am using Forms 4.5 with Oracle 8.1.6 on HP Unix.
My questions are
Is the UTF8 character set supported by Forms 4.5? I know Forms 6i supports it.
Has anyone tried to achieve this? Can someone share their experience here?
Thanks in advance
Sanjay
Anyone...?
-
Character set support for POST and java.util.zip
Hi friends,
In my application I am sending a file from the client using the POST method and receiving it on the server using servlets. I want that file to be compressed with java.util.zip on the client and uncompressed on the server, but I was unsuccessful; I think the file before the POST (the compressed one) and the retrieved one are not the same. Is the problem a difference in the character sets supported by HTTP and ZIP files, or something else?
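A likely cause (an assumption on my part, since the posting code isn't shown) is that the compressed bytes are being pushed through character-oriented streams or String conversions somewhere; ZIP data is binary, and any character-set translation will corrupt it. Kept strictly as bytes, a compress/uncompress round trip survives intact:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ZipRoundTrip {

    // Compress using byte streams only -- no Reader/Writer, no String,
    // so no character-set conversion can damage the data.
    static byte[] gzip(byte[] data) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(data);
            }
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static byte[] gunzip(byte[] data) {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return gz.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] original = "hello servlet".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] back = gunzip(gzip(original));
        System.out.println(Arrays.equals(original, back)); // true
    }
}
```

On the servlet side the same rule applies: read the upload with ServletRequest.getInputStream() (bytes), never getReader() (characters).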
Please help me.
Just ran into this myself yesterday.
There is a Metalink Note on it.
http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=190281.1
Problem Description
You are using OC4J and trying to connect to a database using JDBC OCI and
are getting:
"java.sql.SQLException: Character Set Not Supported !!: DBConversion"
Solution Description
Replace the <OC4J_HOME>\jdbc\lib\classes12dms.jar with
<ORACLE_HOME>\jdbc\lib\classes12.jar and rename it with classes12dms.jar.
Explanation
It seems there is a mismatch of classes12.zip supplied with OC4J 9.0.2
and the Oracle9i client libraries ocijdbc8.dll or ocijdbc8.so.
OC4J 9.0.2 does not use jdbc\lib\classes12.jar; instead it uses
jdbc\lib\classes12dms.jar. So, in order to use the 9.0.1 client with OC4J, you
will need to make a copy of classes12.jar and rename it to classes12dms.jar.
References
[NOTE:108876.1] Creating Connection gives "No ocijdbc8 in java.library.path"
[NOTE:174808.1] JDev9i and OCI Connections
I copied and renamed the jar (classes12.jar) as they stated.
Note: it should be in the directory you set in JDev.conf; mine is
AddNativeCodePath D:\OraNT\9iDS\bin
I didn't try the other reply's suggestion of setting an environment variable. -
BIG5 and HKSCS Character Set Support
Hi,
We're experiencing some problems inserting a string containing both BIG5 and HKSCS characters to a 7.3.4 Oracle DB using JDBC. The underlying character set used by the DB is ZHT16BIG5 (this cannot be changed). The characters can be inserted correctly if we use SQLPlus/WorkSheet.
Note that pure BIG5 characters can be inserted correctly; the problem occurs only when HKSCS characters are included in the statement.
We have tried a number of ways already but failed to convert the data properly.
We tried converting the data using ByteToCharConverter.getConverter("Big5") but this cannot handle the HKSCS properly.
We even tried using the CharacterSet.ZHT16BIG5_CHARSET provided by the NLS character set but it cannot convert all HKSCS characters correctly.
Any ideas on how to solve this problem? Or is it because the HKSCS character set is NOT supported by the JDBC driver?
Below is a sample text containing both BIG5 and HKSCS characters (the original characters were garbled in transit and cannot be reproduced here):
Any help/suggestion is most welcome.
Thanks,
Cis
I got the exact same problem as you.
(The Oracle version I am using is 8.1.7.)
Can anyone help?? -
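A quick client-side check before blaming the driver is CharsetEncoder.canEncode, which tells you whether a given charset can represent your string before it ever reaches JDBC. This is a general JDK sketch, not specific to the old classes12 drivers, and whether the "Big5-HKSCS" charset is available depends on the JDK build:

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class Big5Check {

    // True if every character of s is representable in the named charset.
    static boolean fits(String charsetName, String s) {
        CharsetEncoder enc = Charset.forName(charsetName).newEncoder();
        return enc.canEncode(s);
    }

    public static void main(String[] args) {
        System.out.println(fits("Big5", "\u4e2d"));  // common Chinese character: true
        System.out.println(fits("Big5", "\u0434"));  // Cyrillic letter: false
        // HKSCS coverage needs the extended charset, where the JDK provides it:
        System.out.println(Charset.isSupported("Big5-HKSCS"));
    }
}
```

If canEncode returns false for your HKSCS samples under every charset the driver maps to, the loss is happening in the conversion layer, and no amount of ByteToCharConverter juggling on top will fix it.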
9iLite and multibyte character set support
Does 9iLite support a character set that will allow for accented characters?
for example: é
"NLS Character Integrity Issues for Consolidator
When Mobile Sync synchronizes with an Oracle database which has a
multibyte character set other than UTF8, the character integrity issue
occurs. Mobile Sync retrieves data from the server database through Oracle
8.1.7 OCI JDBC Driver for Oracle9iAS version 1.0.2.2, and 9i for Oracle9iAS
version 2.0. Character sets are converted from database character sets to
UTF8 by the Oracle Server's NLS functions. During this conversion, some
multibyte characters are garbled because of differences in the character
mapping. This is not a bug in Mobile Sync.
For more Information, see "Character Integrity Issues in NLS Environment"
technical paper on Oracle Technology Network (technet.oracle.com)
Java/SQLJ & JDBC section in Technologies category."
from the Readme file with the media (read the manual I guess) -
Hi,
I am trying to execute an Oracle procedure from JDBC. The procedure accepts a nested table as an input parameter; the definition of the nested table is given below.
Database – Oracle 10g.
Application Server – JBOSS 4.2.1
I get the following exception:
java.sql.SQLException: Non supported character set: oracle-character-set-178
at oracle.gss.util.NLSError.throwSQLException(NLSError.java:46)
I. JDBC code
Session s = HibernateUtil.getSession();
Transaction tr = s.beginTransaction();
con=s.connection();
oraConn = (OracleConnection) con.getMetaData().getConnection();
TableObject obj=new TableObject();
obj.setId(new Integer(123)); // tested OK, storing in DB works
obj.setDescr("test"); // this line throws the error
obj.setCre_user(new Integer(456));
obj.setUpd_user(new Integer(789));
obj.setXfr_flag("Y");
ArrayList al=new ArrayList();
al.add(obj);
Object[] objAray = al.toArray();
ArrayDescriptor arrayDescriptor =ArrayDescriptor.createDescriptor("T_TEST_SYN", oraConn);
ARRAY oracleArray = new ARRAY(arrayDescriptor, oraConn, objAray);
cs = (OracleCallableStatement)oraConn.prepareCall("call PKG_OBJ_TEST.accept_ui_input(?) ");
cs.setArray(1, oracleArray);
cs.execute();
tr.commit();
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

public class TableObject implements SQLData {

    private String sql_type = "T_OBJ_TEST";
    private int id;
    private String descr;
    //private Date cre_date;
    private int cre_user;
    //private Date upd_date;
    private int upd_user;
    private String xfr_flag;

    public TableObject() {
    }

    public TableObject(int id, String descr, int cre_user, int upd_user, String xfr_flag) {
        // this.sql_type = sql_type;
        this.id = id;
        this.descr = descr;
        //this.cre_date = cre_date;
        this.cre_user = cre_user;
        //this.upd_date = upd_date;
        this.upd_user = upd_user;
        this.xfr_flag = xfr_flag;
    }

    public String getSQLTypeName() throws SQLException {
        return "T_OBJ_TEST";
    }

    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        //sql_type = typeName;
        id = stream.readInt();
        descr = stream.readString();
        //cre_date = stream.readDate();
        cre_user = stream.readInt();
        //upd_date = stream.readDate();
        upd_user = stream.readInt();
        xfr_flag = stream.readString();
    }

    public void writeSQL(SQLOutput stream) throws SQLException {
        // Debug printlns and the catch block that swallowed SQLException
        // have been removed; writeSQL should let the exception propagate.
        stream.writeInt(this.id);
        stream.writeString(this.descr);
        //stream.writeDate(cre_date);
        stream.writeInt(this.cre_user);
        //stream.writeDate(upd_date);
        stream.writeInt(this.upd_user);
        stream.writeString(this.xfr_flag);
    }

    /** @return the id */
    public int getId() {
        return id;
    }

    /** @param obj the id to set */
    public void setId(Object obj) {
        this.id = ((Integer) obj).intValue();
    }

    /** @return the descr */
    public String getDescr() {
        return descr;
    }

    /** @param obj the descr to set */
    public void setDescr(Object obj) {
        this.descr = (String) obj;
    }

    /** @return the cre_user */
    public int getCre_user() {
        return cre_user;
    }

    /** @param obj the cre_user to set */
    public void setCre_user(Object obj) {
        this.cre_user = ((Integer) obj).intValue();
    }

    /** @return the upd_user */
    public int getUpd_user() {
        return upd_user;
    }

    /** @param obj the upd_user to set */
    public void setUpd_user(Object obj) {
        this.upd_user = ((Integer) obj).intValue();
    }

    /** @return the xfr_flag */
    public String getXfr_flag() {
        return xfr_flag;
    }

    /** @param obj the xfr_flag to set */
    public void setXfr_flag(Object obj) {
        // Original read "this.xfr_flag = (String) xfr_flag;", which assigned
        // the field to itself instead of using the argument.
        this.xfr_flag = (String) obj;
    }
}
II. Oracle database object details
Details of Object and Nested table created in the database.
T_TEST_SYN is a public synonym created for t_tab_obj_test
CREATE OR REPLACE TYPE t_obj_test as object (
id number(10),
descr varchar2(100),
--cre_date date,
cre_user number(10),
--upd_date date,
upd_user number(10),
xfr_flag varchar2(1),
CONSTRUCTOR FUNCTION t_obj_test ( id IN NUMBER DEFAULT NULL,
descr IN varchar2 default null,
--cre_date in date default null,
cre_user in number default null,
--upd_date in date default null,
upd_user in number default null,
xfr_flag in varchar2 default null ) RETURN SELF AS RESULT ) ;
CREATE OR REPLACE TYPE BODY t_obj_test as
CONSTRUCTOR FUNCTION t_obj_test ( id IN NUMBER DEFAULT NULL,
descr IN varchar2 default null,
--cre_date in date default null,
cre_user in number default null,
--upd_date in date default null,
upd_user in number default null,
xfr_flag in varchar2 default null ) RETURN SELF AS RESULT IS
BEGIN
SELF.id := id ;
SELF.descr := descr ;
--SELF.cre_date := cre_date ;
SELF.cre_user := cre_user ;
--SELF.upd_date := upd_date ;
SELF.upd_user := upd_user ; -- was cre_user, a copy-paste bug
SELF.xfr_flag := xfr_flag ;
RETURN ;
END ;
END ;
CREATE OR REPLACE TYPE t_tab_obj_test AS TABLE OF t_obj_test ;
CREATE OR REPLACE PACKAGE BODY PKG_OBJ_TEST AS
PROCEDURE accept_ui_input ( p_tab_obj_test in T_TAB_OBJ_TEST ) IS
BEGIN
FOR row IN p_tab_obj_test.First .. p_tab_obj_test.LAST
LOOP
INSERT INTO OBJ_TEST ( ID,
DESCR,
CRE_DATE,
CRE_USER,
UPD_DATE,
UPD_USER,
XFR_FLAG )
VALUES ( p_tab_obj_test(row).ID,
p_tab_obj_test(row).DESCR,
NULL,
p_tab_obj_test(row).CRE_USER,
NULL,
p_tab_obj_test(row).UPD_USER,
p_tab_obj_test(row).XFR_FLAG ) ;
END LOOP ;
COMMIT ;
END accept_ui_input ;
END PKG_OBJ_TEST;
/
Check your CLASSPATH environment variable. Try adding something like c:\Ora10g\jlib\orai18n.jar.
From the "JDBC Developer's Guide and Reference":
orai18n.jar
Contains classes for globalization and multibyte character set support.
This solved the same error in my case. -
Use of UTF8 and AL32UTF8 for database character set
I will be implementing Unicode on a 10g database, and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize storage requirements for primarily English-based string data.
Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
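To make the supplementary-character point concrete: a character beyond the Basic Multilingual Plane occupies a surrogate pair (two UTF-16 code units, i.e. four bytes in AL16UTF16) and four bytes in AL32UTF8, and it is precisely these characters that the older UTF8 charset (Unicode 3.0) cannot represent as single code points. A small Java illustration:

```java
import java.nio.charset.StandardCharsets;

public class SupplementaryDemo {
    public static void main(String[] args) {
        // U+1D11E MUSICAL SYMBOL G CLEF, a supplementary character,
        // written here as its surrogate pair:
        String clef = "\uD834\uDD1E";

        System.out.println(clef.codePointCount(0, clef.length()));         // 1 character
        System.out.println(clef.length());                                 // 2 UTF-16 code units
        System.out.println(clef.getBytes(StandardCharsets.UTF_8).length); // 4 bytes in UTF-8
    }
}
```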
Thanks in advance for any counsel.
I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/NVARCHAR columns do not handle Oracle NCHAR/NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
I've not run into any barriers, no. The two most common speedbumps I've seen are
- I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy.
- Making sure that the client NLS_LANG properly identifies the character set of the data going in to the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (i.e. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/ VARCHAR2 or NCHAR/ NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
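The byte-versus-character distinction behind NLS_LENGTH_SEMANTICS is easy to see from the client side (a generic Java sketch, nothing Oracle-specific): an n-character string can need more than n bytes in UTF-8, so a column declared with BYTE semantics can reject a value that fits when counted in characters.

```java
import java.nio.charset.StandardCharsets;

public class LengthSemantics {
    public static void main(String[] args) {
        String s = "r\u00e9sum\u00e9";  // "résumé": 6 characters

        // What VARCHAR2(6 CHAR) measures:
        System.out.println(s.length());                                // 6
        // What VARCHAR2(6 BYTE) measures -- too long, insert fails:
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 8
    }
}
```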
Justin -
How to change the character set encoding not being a superset one
Hi, I have a freshly installed database, but I realize the character set support in my client is not good enough. Since it's not a production database, I want to change the character set to another one that is not a superset of the old one.
I have tried "ALTER DATABASE CHARACTER SET WE8MSWIN1252", but it fails, claiming I need a superset.
I guess it's just a single step to do it; I have all the privileges, just not the time to reinstall and set everything up.
Do you know what that does to the existing data, though? My hunch would be that any characters which have a different binary representation in the source and target character sets would be corrupted.
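The corruption mechanism is visible with any two character sets that assign the same byte to different characters (a Java sketch; windows-1252 stands in for WE8MSWIN1252):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class SameBytesDemo {
    public static void main(String[] args) {
        byte[] stored = {(byte) 0x80};  // one byte already sitting in the database

        // Interpreted as windows-1252, the byte is the euro sign:
        System.out.println(new String(stored, Charset.forName("windows-1252"))); // €

        // Interpreted as ISO-8859-1, the same byte is an unprintable control code:
        System.out.println((int) new String(stored, StandardCharsets.ISO_8859_1).charAt(0)); // 128

        // ALTER DATABASE CHARACTER SET changes only the label, not the stored
        // bytes, which is why a non-superset change corrupts existing data.
    }
}
```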
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Does anyone know if the oracle odbc drivers support multi language character sets?
I am trying to retrieve Chinese (prc) characters from the database (it is stored correctly and I have the Microsoft Multilanguage service pack installed). Odbc won't retrieve them correctly (actually stops after 1 row).
If I use the OLE DB driver, it does retrieve them. Is there a converter inside the OLE DB driver that ODBC doesn't have, or is there a setting I'm missing? (The tool I want to use this with does not recognize OLE DB; is there a way to make it use OLE DB while defining an ODBC connection?)
Cheers
Chris
The version number you're providing doesn't seem to make sense to me. Oracle's ODBC drivers are versioned to match the version of the Oracle client they work with, i.e. 8.1.7.8 is the latest Oracle ODBC driver for the 8.1.7 Oracle client. In the Oracle 7 days, there was a 2.5x series of Oracle ODBC drivers. So far as I'm aware, there has never been a 4.x series of Oracle ODBC drivers.
AMERICAN_AMERICAN.UTF8 would be the option I'd tend to prefer on the client, particularly if you'll be working with more than just Chinese data (i.e. English & Chinese). I'm not sure what AMERICAN_AMERICAN.<some Chinese character set> would end up doing. There's a lot of info out there about NLS settings (including an NLS discussion forum) that might be helpful to you.
What OLE DB provider are you using that works?
Justin -
Message uses a character set that is not supported by the internet service
Does any one have any advice on how to fix this problem?
E-mails sent from my iphone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...." The problem e-mails also include an attachment that contains an unformatted text file containing the original message surrounded by what appears to be lots of formatting data that is displayed as gibberish.
This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account configured on the iPhone using IMAP. I have tried setting the Gmail account to use both of the available formatting options for mail, but neither fixes the problem.
I have also upgraded to 2.01 and restored a few times, without effect.
Hi,
I have a somewhat similar problem with special characters (the German umlauts ä, ö, ü, ...).
I create a file with Java that contains special characters. If I open this file, I can see the special characters in it. But if I attach the file and send it using the following code, the receiver cannot see the umlaut characters; they get replaced by _ or ?:
MimeBodyPart mbp2 = new MimeBodyPart();
FileDataSource fds = new FileDataSource(fileName);
mbp2.setDataHandler(new DataHandler(fds));
mbp2.setFileName(fds.getName()); // was output.getName(); 'output' is not defined in this snippet
Multipart mp = new MimeMultipart();
mp.addBodyPart(mbp2);
msg.setContent(mp);
Transport.send(msg);
From your message it looks like you are able to send the mail attachment correctly (preserving the special characters).
Can you tell me what might be wrong in my code?
I appreciate your efforts in advance.
Prasad -
Need suggestion on Multi currency and Unicode character set use in ABAP
Hi All,
Need a suggestion. In one of the requirements I saw "multi-currency and Unicode character set experience in FICO".
Can you please explain how ABAPers are involved in multi-currency work, as I thought this was a FICO functional area?
Also, what is Unicode character set experience? Please share a document if you have one.
Thanks
Sreedevi
Moderator message - This isn't the place to prepare for interviews - thread locked
Edited by: Rob Burbank on Sep 17, 2009 4:45 PM
Use the default parser.
By default, WebLogic Server is configured to use the default parser and transformer to parse and transform XML documents. The default parser and transformer are those included in the JDK 5.0.
The built-in WebLogic Server DOM factory implementation class is com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl.
The DocumentBuilderFactory.newInstance method returns the built-in parser.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); -
Character Set Migration - Arabic & English Language Support
Hi,
Sofware Specifications:
OS Version : Windows 2003 EE Server, SP2, 32-Bit
DB Version : 9.2.0.1
Application : Lotus Domino 6.5
Existing Set Up:
DB CHAR SET : WE8MSWIN1252
National Character Set : AL16UTF16
NLS_LANG : NA
Now the customer extended their business in EGYPT.
They need the existing database to support ARABIC & ENGLISH Languages.
Kindly let me know how to do this character set migration and achieve the client specification.
Regards
Suresh
Check Metalink:
Note:179133.1
Subject: The correct NLS_LANG in a Windows Environment
Note:187739.1
Subject: NLS Setup in a Multilingual Database Environment
Note:260023.1
Subject: Difference between AR8MSWIN1256 and AR8ISO8859P6 characterset
Also, please list all the steps you have performed till now -
How to set Multi Byte Character Set ( MBCS ) to Particular String In MFC VC++
I Use Unicode Character Set in my MFC Application ( VC++) .
Now I get output like "ठ桔湡潹⁵潦獵" and I want to convert this text into English (i.e. MBCS). But I need Unicode in my application: when I switch the project to the multi-byte character set, this output becomes correct English, but other objects (TreeCtrl selection) then behave wrongly. So I need to convert just this particular string to MBCS. How can I do that in MFC?
I assume the string read from your hardware device is a plain "C" string (an ANSI string). This type of string has one byte per character; Unicode has two bytes per character.
From the situation you explained, I'd convert the string returned by the hardware to a Unicode string using e.g. MultiByteToWideChar with CP_ACP. You may also use mbstowcs or some similar function to convert your string to a Unicode string.
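The diagnosis can be verified directly: CJK-looking garbage like that is what plain 8-bit ANSI bytes look like when reinterpreted as UTF-16 little-endian, because each pair of adjacent bytes fuses into one 16-bit code unit. Reproducing it takes a couple of lines (shown in Java only because it is compact; on the MFC side the repair is the MultiByteToWideChar conversion described above):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        // The ANSI bytes of "Thankyou" ...
        byte[] ansi = "Thankyou".getBytes(StandardCharsets.US_ASCII);

        // ... wrongly decoded as UTF-16LE: bytes 'T' (0x54) and 'h' (0x68)
        // combine into the single code unit 0x6854, a CJK character.
        String garbled = new String(ansi, StandardCharsets.UTF_16LE);
        System.out.println(garbled); // 桔湡祫畯
    }
}
```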
Best regards
Bordon
Note: Posted code pieces may not have good programming style and may not be perfect. It is also possible that they do not work in all situations. Code pieces are only intended to explain something in particular. -
Cdrtools package, support for nls/utf8 character sets
Hello ppl,
I've been trying desperately to burn a CD/DVD (with K3b) with Greek file and directory names. I ended up with file names like "???????????????????? (invalid unicode)".
After a lot of searching, I managed to isolate and solve the problem: there is a patch (http://bugs.gentoo.org/attachment.cgi?id=52097) for cdrtools to support NLS/UTF-8 character sets.
I guess that 90%+ of people using Arch and burning CDs/DVDs ignore the problem because they just burn discs with standard English file names.
For all others here it is :
# Patched cdrtools to support nls/utf8 character sets
# Contributor: Akis Maziotis <[email protected]>
pkgname=cdrtools-utf8support
pkgver=2.01.01
pkgrel=3
pkgdesc="Tools for recording CDs patched for nls/utf8 support!"
depends=('glibc')
conflicts=('cdrtools')
source=(ftp://ftp.berlios.de/pub/cdrecord/alpha/cdrtools-2.01.01a01.tar.gz http://bugs.gentoo.org/attachment.cgi?id=52097)
md5sums=('fc085b5d287355f59ef85b7a3ccbb298' '1a596f5cae257e97c559716336b30e5b')
build() {
cd $startdir/src/cdrtools-2.01.01
msg "Patching cdrtools ..."
patch -p1 -i ../attachment.cgi?id=52097
msg "Patching done"
make || return 1
make INS_BASE=$startdir/pkg/usr install
}
It's a modified pkgbuild of the official arch cdrtools package (http://cvs.archlinux.org/cgi-bin/viewcv … cvs-markup) patched to support nls/utf8 character sets.
Worked like a charm.
If you want to install it, you should first uninstall the cdrtools package:
pacman -Rd cdrtools
P.S.: I've filed this as a bug at http://bugs.archlinux.org/task/3830 but nobody seemed to care... :cry:
Hi Bharat,
I have created a Oracle 8.1.7 database with UTF8 character set
on WINDOWS 2000.
Now , I want to store and retrieve information in other languages
say Japanese or Hindi .
I set the NLS language and NLS territory to HINDI and INDIA in the SQL*Plus session but could not see the information.
You cannot view Hindi using SQL*Plus. You need iSQL*Plus
(Available as a download from OTN, and requiring the Oracle HTTP
server).
Then you need the fonts (either Mangal from Microsoft or
Code2000).
Have your NLS_LANG settings in your registry to
AMERICAN_AMERICA.UTF8. (I have not tried with HINDI etc, because
I need my solution to work with 806,817 and 901, and HINDI was
not available with 806).
Install the language pack for Devanagari/Indic languages
(c_iscii.dll) on Windows NT/2000/XP.
How can I use Forms 6i to support these languages?
I am not sure about that.
Do write back if this does not solve your problem.
--Shirish -
Oracle 10G support for both Cyrillic and Western European Character Sets
Dear all,
Our DB currently supports western EU characters sets but we need to also support Russian Characters.
Is there a common character set for both? or some trick that does the job?
Thanks.
DB: Oracle 10G R2
OS: Linux
Current Char Set:
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_NCHAR_CHARACTERSET AL16UTF16
AL32UTF8 will always do the job.
CL8ISO8859P5 or
CL8MSWIN1251
could do the job, according to http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/applocaledata.htm#sthref1960.
Edited by: P. Forstmann on Aug 9, 2011 17:41
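The reason a single legacy character set rarely covers both alphabets is easy to check from a client: the Cyrillic sets drop the Western accented letters and vice versa, while a Unicode set holds both. A JDK sketch (ISO-8859-1 corresponds to WE8ISO8859P1, ISO-8859-5 to CL8ISO8859P5):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CoverageCheck {

    // True if every character of s is representable in the given charset.
    static boolean fits(Charset cs, String s) {
        return cs.newEncoder().canEncode(s);
    }

    public static void main(String[] args) {
        String western = "d\u00e9j\u00e0";   // "déjà", Western European
        String cyrillic = "\u0434\u0430";    // "да", Russian

        Charset latin1 = StandardCharsets.ISO_8859_1;   // ~ WE8ISO8859P1
        Charset cyr = Charset.forName("ISO-8859-5");    // ~ CL8ISO8859P5

        System.out.println(fits(latin1, western));   // true
        System.out.println(fits(latin1, cyrillic));  // false: no Cyrillic in Latin-1
        System.out.println(fits(cyr, cyrillic));     // true
        System.out.println(fits(cyr, western));      // false: no accented Latin in 8859-5
        System.out.println(fits(StandardCharsets.UTF_8, western + cyrillic)); // true
    }
}
```

Which is why migrating the database character set to AL32UTF8, rather than swapping one single-script set for another, is the robust answer.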