Arabic character encoding issue
Hi,
Our JMS plugin receives XML text messages encoded in ISO-8859-1. The messages are originally in ISO-8859-6 and are converted to ISO-8859-1 before being put on the queue. I convert each message back to ISO-8859-6 on receipt, as shown below.
For some unknown reason, some of the Arabic characters show up as ? marks while others are converted properly.
[Arabic sample text, original]
[Arabic sample text with ? marks, after conversion]
Our plugin runs on Solaris 10 with JDK 1.5. The Solaris locale is set to en_US.ISO8859-15.
Code
String in = ((TextMessage)message).getText ();
String msgISO6 = new String (in.getBytes("ISO-8859-1"), "ISO-8859-6");
Does anyone have any thoughts on the possible cause of this issue?
Thanks in advance
Sohan
String in = ((TextMessage)message).getText ();
String msgISO6 = new String (in.getBytes("ISO-8859-1"), "ISO-8859-6");
That's wrong. Read the javadoc for the String class. in.getBytes("ISO-8859-1") gives you a byte array encoded as ISO-8859-1, and you are then decoding that array as ISO-8859-6. The second argument is not the encoding of the String that you are creating: a String is always encoded internally as Unicode/UTF-16.
That means you don't need to convert the encoding at all if you are getting Strings, but you do need to specify an encoding when you output them, e.g. on a web page.
Kaj
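Kaj's point can be made concrete with a small round-trip sketch (stdlib only, not the plugin code): the re-decode only "works" when the String still holds ISO-8859-6 bytes that were mis-decoded as ISO-8859-1 upstream; once the String holds a real Arabic character, getBytes("ISO-8859-1") substitutes '?', which would explain why some characters survive and some do not.

```java
public class ArabicRoundTrip {
    public static void main(String[] args) throws Exception {
        String arabic = "\u0628"; // ARABIC LETTER BEH

        // Case 1: ISO-8859-6 bytes were mis-decoded as ISO-8859-1 upstream.
        // The byte values survive, so re-decoding repairs the text.
        String misDecoded = new String(arabic.getBytes("ISO-8859-6"), "ISO-8859-1");
        String repaired = new String(misDecoded.getBytes("ISO-8859-1"), "ISO-8859-6");
        System.out.println(repaired.equals(arabic)); // true

        // Case 2: the String already holds a real Arabic character.
        // getBytes("ISO-8859-1") cannot represent it and substitutes '?',
        // which no later decode can undo -- hence the ? marks.
        String damaged = new String(arabic.getBytes("ISO-8859-1"), "ISO-8859-6");
        System.out.println(damaged); // ?
    }
}
```

Since some characters in the messages survive and others don't, a plausible cause is that the sender or the JMS layer already decoded part of the text with a charset that loses some of the Arabic range before the plugin ever sees it.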
Similar Messages
-
Character encoding issue in SQL Server
Hi Team,
We have a table with more than 20 columns. Some of these columns hold data extracted from data warehouse appliances as a one-time load.
The problem is that we have two columns which may have the same set of values for some records and different sets for other records.
The values below are an example of the same set of records, but the values were changed while importing into the SQL Server database.
2pk Etiquetas Navide‰as 3000-HG
2pk Etiquetas Navideñas 3000-H
Is there anyway to change the first column values into the second column value ?
By looking at the data we can say it's a code page issue (a character encoding issue), but how do we convert it?
Converting (2pk Etiquetas Navide‰as 3000-HG)
to get 2pk Etiquetas Navideñas 3000-H in the select query?
Then it seems that you can do the obvious: replace it.
DECLARE @Sample TABLE ( Payload NVARCHAR(255) );
INSERT INTO @Sample
VALUES ( N'2pk Etiquetas Navide‰as 3000-HG' );
UPDATE @Sample
SET Payload = REPLACE(Payload, N'‰', N'ñ');
SELECT S.Payload
FROM @Sample S; -
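If the same cleanup as the REPLACE statement above ever has to run outside the database, the one-for-one substitution can be done in application code; a sketch in Java (hypothetical helper; only the '‰' to 'ñ' pair from the example is mapped, so extend the map as more mismatches are found):

```java
import java.util.Map;

public class MojibakeFix {
    // Known bad-character substitutions seen in the imported data
    // (hypothetical: only the pair from the example above is listed).
    static final Map<Character, Character> FIXES = Map.of('\u2030', '\u00f1'); // ‰ -> ñ

    static String fix(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            sb.append(FIXES.getOrDefault(c, c));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(fix("2pk Etiquetas Navide\u2030as 3000-HG"));
        // 2pk Etiquetas Navideñas 3000-HG
    }
}
```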
I'm using the code below to send mail in the Turkish language.
MimeMessage msg = new MimeMessage(session);
msg.setText(message, "utf-8", "html");
msg.setFrom(new InternetAddress(from));
Transport.send(msg);
But my customer says that he sometimes gets unreadable characters in the mail. I'm not able to understand how to solve this character encoding issue.
Should I ask him to change his mail client's character encoding settings?
If yes, which one should he set?
Send the same characters using a different mailer (e.g., Thunderbird or Outlook).
If they're received correctly, compare the message from that mailer with the message
from JavaMail. Most likely other mailers are using a Turkish-specific charset instead
of UTF-8. -
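For reference when comparing the two messages: non-ASCII header values travel as RFC 2047 "encoded-words", so the charset is visible right in the raw headers. A small sketch of what a UTF-8 mailer emits for a Turkish subject (hand-rolled here with stdlib Base64 purely for illustration; JavaMail's setSubject(subject, "UTF-8") produces this for you):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodedWord {
    /** Builds an RFC 2047 Base64 encoded-word, as mailers emit for non-ASCII headers. */
    static String encodeHeader(String value) {
        String b64 = Base64.getEncoder()
                .encodeToString(value.getBytes(StandardCharsets.UTF_8));
        return "=?UTF-8?B?" + b64 + "?=";
    }

    public static void main(String[] args) {
        // "Türkçe", written with escapes to keep this source file ASCII-safe
        System.out.println(encodeHeader("T\u00fcrk\u00e7e"));
    }
}
```

If the customer's raw message shows something like =?iso-8859-9?...?= instead, a Turkish-specific charset is in play.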
FF character encoding issue in Mageia 2 ?
Hi everyone,
I'm running Mozilla Firefox 17.0.8 in a KDE distro of Linux called Mageia 2. I'm having character encoding problems with certain web pages: certain icons, like the ones next to menu entries (Login, Search box, etc.) and in section headlines, don't appear properly. Instead they appear either as some Arabic character or as little grey boxes with numbers and letters written in them.
I've tried experimenting with different encodings: Western (ISO 8859-1), (ISO 8859-15), (Windows 1252), Unicode (UTF-8), Central European (ISO 8859-2), but none of them does the job. Currently the character encoding is set to UTF-8. The same web page in Chrome (UTF-8) gives no such problem.
Can you help me, please? Thank you!
I solved my problem; however, I find the fonts are too small on certain web pages compared to Chrome (see attached pictures of nytimes.com).
Chrome's font size is set to "Medium". -
JSF myfaces character encoding issues
The basic problem I have is that I cannot get the copyright symbol or the chevron symbols to display in my pages.
I am using:
myfaces 2.0.0
facelets 1.1.14
richfaces 3.3.3.final
tomcat 6
jdk1.6
I have tried a ton of things to resolve this, including:
1.) creating a filter to set the character encoding to UTF-8.
2.) overriding the view handler to force calculateCharacterEncoding to always return UTF-8.
3.) adding <meta http-equiv="content-type" content="text/html;charset=UTF-8" charset="UTF-8" /> to my page.
4.) setting different combinations of 'URIEncoding="UTF-8"' and 'useBodyEncodingForURI="true"' in Tomcat's server.xml.
5.) etc., like trying to set the encoding on an f:view, using f:verbatim, and specifying the escape attribute on some output components.
all with no success.
There is a lot of great information on BalusC's site regarding this problem (http://balusc.blogspot.com/2009/05/unicode-how-to-get-characters-right.html) but I have not been able to resolve it yet.
I have 2 test pages I am using.
If I put these symbols in a JSP (which does NOT go through the faces servlet), it renders fine and the page info shows that it is in UTF-8.
<html>
<head>
<!-- <meta http-equiv="content-type" content="text/html;charset=UTF-8" /> -->
</head>
<body>
<br/>copy tag: ©
<br/>js/jsp unicode: ©
<br/>xml unicode: ©
<br/>u2460: \u2460
<br/>u0080: \u0080
<br/>arrow: »
<p />
</body>
</html>
If I put these symbols in an xhtml page (which does go through the faces servlet), I get the black diamond symbols with a ? even though the page info says that it is in UTF-8.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:ui="http://java.sun.com/jsf/facelets"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:h="http://java.sun.com/jsf/html"
xmlns:rich="http://richfaces.org/rich"
xmlns:c="http://java.sun.com/jstl/core"
xmlns:a4j="http://richfaces.org/a4j">
<head>
<meta http-equiv="content-type" content="text/html;charset=UTF-8" charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
</head>
<body>
<f:view encoding="utf-8">
<br/>amp/copy tag: ©
<br/>copy tag: ©
<br/>copy tag w/ pound: #©
<br/>houtupt: <h:outputText value="©" escape="true"/>
<br/>houtupt: <h:outputText value="©" escape="false"/>
<br/>js/jsp unicode: ©
<br/>houtupt: <h:outputText value="©" escape="true"/>
<br/>houtupt: <h:outputText value="©" escape="false"/>
<br/>xml unicode: ©
<br/>houtupt: <h:outputText value="©" escape="true"/>
<br/>houtupt: <h:outputText value="©" escape="false"/>
<br/>u2460: \u2460
<br/>u0080: \u0080
<br/>arrow: »
<br/>cdata: <![CDATA[©]]>
<p />
</f:view>
</body>
</html>
On a side note, I have another application using myfaces 1.1, facelets 1.1.11, and richfaces 3.1.6, and the unicode symbols work fine there.
I had another developer try my test xhtml page in his Mojarra implementation, and it works fine there using facelets 1.1.14, but NOT with myfaces or richfaces.
I am convinced that somewhere between the view handler and the faces servlet the encoding is being set or reset, but I haven't been able to resolve it.
If anyone at all can point me in the right direction, I would be eternally grateful.
Thanks in advance.
UPDATE:
I was unable to get the page itself to consume the various options for unicode characters like the copyright symbol.
Ultimately the content I am trying to display is coming from a web service.
I resolved this issue by calling the web service from my backing bean instead of using ui:include on the webservice call directly in the page.
for example:
public String getFooter() throws Exception {
    HttpClient httpclient = new HttpClient();
    GetMethod get = new GetMethod(url);
    httpclient.executeMethod(get);
    String response = get.getResponseBodyAsString();
    return response;
}
I'd still love to have a solution for the page usage of the unicode characters, but for the time being this solves my problem. -
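The "black diamond symbols with a ?" described above are U+FFFD, the Unicode replacement character: it appears when bytes valid in one encoding are decoded as another. A small stdlib-only demonstration of both directions of a UTF-8/Latin-1 mismatch (illustrative only, unrelated to the JSF stack itself):

```java
import java.nio.charset.StandardCharsets;

public class ReplacementDemo {
    public static void main(String[] args) {
        // The copyright sign encodes to two bytes in UTF-8: 0xC2 0xA9.
        byte[] utf8 = "\u00a9".getBytes(StandardCharsets.UTF_8);

        // Decoding UTF-8 bytes as ISO-8859-1 shows two characters ("Â©")...
        String latin1View = new String(utf8, StandardCharsets.ISO_8859_1);

        // ...while decoding a lone Latin-1 byte as UTF-8 is invalid and
        // yields U+FFFD, rendered as the black diamond with a '?'.
        String badView = new String(new byte[]{(byte) 0xA9}, StandardCharsets.UTF_8);

        System.out.println(latin1View); // Â©
        System.out.println(badView);    // the replacement character
    }
}
```

Seeing U+FFFD on the page therefore means the response bytes were written in one charset and decoded in another somewhere between the view handler and the servlet.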
Special Character Encoding issue
Hi all
Am using OAS9i. I've deployed a webservice. I submit a payload request whose data has some unicode characters like "§". The data is base64Binary encoded, and the type of the element in the schema is base64Binary. When I retrieve the payload in the Java implementation code, the character is displayed as � in the console. Please advise how to fix this issue. I tried setting the JVM option file.encoding=utf-8, but it didn't work.
Thanks
Shiny
When you use a UDF and you have programmed a SAX parser, make sure that the parser works with the correct encoding. So when the result of the webservice is ISO-8859-1, assign this encoding to the parser.
In principle the encoding should be part of the XML header. Make sure that the encoding of the response is the same as declared in the XML header. -
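Assigning the encoding to a SAX parser, as the answer above suggests, can be done through InputSource.setEncoding. A self-contained sketch (the element name and the § payload are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SaxEncodingDemo {
    public static void main(String[] args) throws Exception {
        // XML bytes in ISO-8859-1 with no <?xml?> declaration: a parser
        // would assume UTF-8 and choke on the 0xA7 byte for '§'.
        byte[] xml = "<a>\u00a7</a>".getBytes(StandardCharsets.ISO_8859_1);

        StringBuilder text = new StringBuilder();
        InputSource src = new InputSource(new ByteArrayInputStream(xml));
        src.setEncoding("ISO-8859-1"); // tell the parser the real encoding
        SAXParserFactory.newInstance().newSAXParser().parse(src, new DefaultHandler() {
            @Override public void characters(char[] ch, int start, int len) {
                text.append(ch, start, len);
            }
        });
        System.out.println(text); // §
    }
}
```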
Urgent: SQL Loader Arabic Character Set Issue
HI all,
I am loading Arabic characters into my database using SQL*Loader with a fixed-length data file. I have set my character set and NLS_LANG to UTF8. When I try to load the Arabic character 'B', i.e. ' لا ', it gets loaded as junk in the table. All other characters are loaded correctly. Please help me with this issue; it's very urgent.
Thanks,
Karthik
Hi,
Thanks for the responses.
Even after setting the character set to Arabic, the problem continues to persist. This problem occurs only with the character 'B'.
Please find my sample control file, input file and NLS parameters below:
My control file
LOAD DATA
characterset UTF8
LENGTH SEMANTICS CHAR
BYTEORDER little endian
INFILE 'C:\sample tape files\ARAB.txt'
replace INTO TABLE user1
TRAILING NULLCOLS
name POSITION(1:2) CHAR(1),
id POSITION (3:3) CHAR(1) ,
salary POSITION (4:5) CHAR(2)
My Input file - Fixed Format
?a01
??b02
?c03
The ? indicates Arabic characters. Arabic fonts must be installed to view them.
NLS_PARAMETERS
PARAMETER VALUE
NLS_LANGUAGE ARABIC
NLS_TERRITORY UNITED ARAB EMIRATES
NLS_CURRENCY ?.?.
NLS_ISO_CURRENCY UNITED ARAB EMIRATES
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD/MM/RR
NLS_DATE_LANGUAGE ARABIC
NLS_SORT ARABIC
NLS_TIME_FORMAT HH12:MI:SSXFF PM
NLS_TIMESTAMP_FORMAT DD/MM/RR HH12:MI:SSXFF PM
NLS_TIME_TZ_FORMAT HH12:MI:SSXFF PM TZR
NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH12:MI:SSXFF PM TZR
NLS_DUAL_CURRENCY ?.?.
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS CHAR
NLS_NCHAR_CONV_EXCP FALSE -
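One detail worth checking for ' لا ' in particular: what renders as a single glyph (the lam-alef ligature) is actually two characters, U+0644 followed by U+0627, i.e. four bytes in UTF-8. A fixed-position field sized for a single character, like the CHAR(1) fields in the control file above, could therefore truncate it while every genuinely single-character letter loads fine. This is a hypothesis to verify, not a confirmed diagnosis; a quick check of the character count:

```java
public class LamAlef {
    public static void main(String[] args) throws Exception {
        String lamAlef = "\u0644\u0627"; // renders as the single glyph 'لا'
        System.out.println(lamAlef.length());                 // 2 chars
        System.out.println(lamAlef.getBytes("UTF-8").length); // 4 bytes
    }
}
```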
Integration Gateway - JDBC - Character Encoding Issue
Hello,
I'm using SMP 3.0 SP06, getting data from MS SQL through the JDBC interface, and I can get all the data successfully.
The problem is:
there is a column in the database containing Arabic data (a right-to-left language),
and when executing the OData service, if the data in Arabic is "هذه للتجربة", what reaches me is "هذه للتجربة".
I think this is the same data but in a different encoding/decoding.
Do you have any idea ?
Thanks
Hossam
By the way, I have checked it again: it works fine when requesting data in the default XML format.
The problem occurs only when requesting the service with the format parameter "?$format=json",
and it even works fine when calling it from the Advanced REST client.
So I think it is just a problem in the browser while displaying the data, especially Chrome, since Chrome displays JSON files as plain text without any formatting or decoding; it works fine with IE, which saves the file to the PC, and if I open that file in Notepad++ I find the data correctly decoded.
It seems it is not an SMP or Integration Gateway issue; sorry for the confusion. -
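The pattern Hossam saw is recognizable: UTF-8 bytes re-decoded as a Windows-1252-style single-byte charset, which turns each Arabic letter into two Latin symbols, typically starting with Ù or Ø. A sketch of the mechanism (the charset pair here is the usual suspect for this pattern, not confirmed from SMP):

```java
import java.nio.charset.StandardCharsets;

public class Utf8AsCp1252 {
    public static void main(String[] args) throws Exception {
        String heh = "\u0647"; // Arabic letter HEH, the first letter of "هذه"
        byte[] utf8 = heh.getBytes(StandardCharsets.UTF_8); // 0xD9 0x87
        String mojibake = new String(utf8, "windows-1252");
        System.out.println(mojibake); // Ù‡  -- matches the start of the garbled output
    }
}
```

Seeing this pattern in a display tool, while the raw bytes are correct (as Notepad++ confirmed), is consistent with the conclusion that the service itself is fine.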
Multi-byte character encoding issue in HTTP adapter
Hi Guys,
I am facing a problem with multi-byte character conversion.
Problem:
I am posting data from SAP CRM to a third-party system using XI as middleware. I am using the HTTP adapter to communicate from XI to the third-party system.
I have set the XML encoding to UTF-8 in the XI payload manipulation block.
When I try to post Chinese characters from SAP CRM to the third-party system, junk characters arrive at the third-party system. My assumption is that it is double encoding.
Can you please guide me on how to proceed?
Please let me know if you need more info.
Regards,
Srini
Srinivas,
Can you go through the url:
UTF-8 encoding problem in HTTP adapter
---Satish -
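The double-encoding suspicion raised above can be checked by counting bytes: text encoded to UTF-8, mis-read as Latin-1, and encoded to UTF-8 again doubles in size, and it can be reversed by exactly undoing the middle step. A stdlib sketch (the Chinese character is an arbitrary sample, not CRM data):

```java
import java.nio.charset.StandardCharsets;

public class DoubleEncoding {
    public static void main(String[] args) {
        String original = "\u4e2d"; // 中

        byte[] once = original.getBytes(StandardCharsets.UTF_8);        // 3 bytes
        String misread = new String(once, StandardCharsets.ISO_8859_1); // "ä¸­"
        byte[] twice = misread.getBytes(StandardCharsets.UTF_8);        // 6 bytes

        // Repair: undo the extra encode/decode pair in reverse order.
        String repaired = new String(
                new String(twice, StandardCharsets.UTF_8)
                        .getBytes(StandardCharsets.ISO_8859_1),
                StandardCharsets.UTF_8);

        System.out.println(once.length + " -> " + twice.length); // 3 -> 6
        System.out.println(repaired.equals(original));           // true
    }
}
```

If the third-party system receives six bytes per Chinese character instead of three, some hop in the chain is encoding an already-encoded payload.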
Shockwave to Javascript - character encoding issue !
Hi !
I have given up on sending messages from JavaScript to a Shockwave movie, as I have found all existing methods unreliable (in the worst scenario a Flash blocker is installed and the localConnection trick with a Flash gateway fails).
But as a consequence, I have to send a message (a search string) from Shockwave to JavaScript.
That seems easy with the following Lingo:
goToNetPage("javascript:void myJSfunction('" & aString
But the problem is with encoding possible non-ASCII characters.
I presume the browser page is using charset=UTF-8.
Any idea how to properly encode 'aString' so it will preserve non-ASCII characters while being transferred to JavaScript?
It is really urgent!
Rgs,
Ziggi
> I have resigned from sending messages from JavaScript to Shockwave movie
I replied a little late to your earlier thread, but take a look at
<http://dasdeck.de/staff/valentin/lingo/dir_js/> -
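A common way to make arbitrary text safe on a javascript: URL is to percent-encode its UTF-8 bytes and call decodeURIComponent on the JavaScript side. Shown here with Java's stdlib URLEncoder purely to illustrate the byte-level result (in Lingo you would need an equivalent percent-encode routine):

```java
import java.net.URLEncoder;

public class PercentEncode {
    public static void main(String[] args) throws Exception {
        // Polish sample text; each non-ASCII letter becomes %-escaped UTF-8 bytes
        String encoded = URLEncoder.encode("\u017c\u00f3\u0142w", "UTF-8"); // "żółw"
        System.out.println(encoded); // %C5%BC%C3%B3%C5%82w
    }
}
```

On the JavaScript side, decodeURIComponent('%C5%BC%C3%B3%C5%82w') returns the original string, so the non-ASCII characters survive the handoff.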
Russian character encoding issue
Hi All,
I wrote a Java program (using JDK 1.5) to capture wevtutil.exe output on a Windows 2008 Russian OS. I use the command "wevtutil qe Security /c:1 /rd:true /f:text".
When I run the above command in the command prompt I get the proper output, but when I execute the same command from the Java program the output is garbled. I also tried running it with the encoding Cp866, but still no use. Which encoding should I use for wevtutil output in Russian? Herewith I attach my Java code, its output, and the original command output.
Please help me to resolve this.
Thanks and Regards,
Rama
Java code:
import java.util.*;
import java.io.*;

public class evtquery {
    public static void main(String args[]) {
        String cmd = "wevtutil qe Security /c:1 /rd:true /f:text";
        Runtime run1 = Runtime.getRuntime();
        try {
            Process p = run1.exec(cmd);
            BufferedReader rd[] = new BufferedReader[2];
            rd[1] = new BufferedReader(new InputStreamReader(p.getErrorStream()));
            rd[0] = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line = null;
            while ((line = rd[0].readLine()) != null)
                System.out.println(line);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
output:
C:\rama>java evtquery
Event[0]:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 2012-01-08T22:57:38.703
Event ID: 4648
Task: ┬їюф т ёшёЄхьє
Level: ╤тхфхэш
Opcode: ╤тхфхэш
Keyword: └єфшЄ єёяхїр
User: N/A
User Name: N/A
Computer: WIN-KBTGH8I5IBE
Description:
┬√яюыэхэр яюя√Єър тїюфр т ёшёЄхьє ё тэ√ь єърчрэшхь єўхЄэ√ї фрээ√ї.
╤єс·хъЄ:
╚─ схчюярёэюёЄш: S-1-5-21-3933732538-1582820160-195548471
1-500
╚ь єўхЄэющ чряшёш: Administrator
─юьхэ єўхЄэющ чряшёш: WIN-KBTGH8I5IBE
╩юф тїюфр: 0xbc15a
GUID тїюфр: {00000000-0000-0000-0000-000000000000}
┴√ыш шёяюы№чютрэ√ єўхЄэ√х фрээ√х ёыхфє■∙хщ єўхЄэющ чряшёш:
╚ь єўхЄэющ чряшёш: eguser
─юьхэ єўхЄэющ чряшёш: WIN-KBTGH8I5IBE
GUID тїюфр: {00000000-0000-0000-0000-000000000000}
╓хыхтющ ёхЁтхЁ:
╚ь Ўхыхтюую ёхЁтхЁр: RAMA-PC
─юяюыэшЄхы№э√х ётхфхэш : RAMA-PC
╤тхфхэш ю яЁюЎхёёх:
╚фхэЄшЇшърЄюЁ яЁюЎхёёр: 0x4
╚ь яЁюЎхёёр:
╤тхфхэш ю ёхЄш:
╤хЄхтющ рфЁхё: -
╧юЁЄ: -
─рээюх ёюс√Єшх тючэшърхЄ, ъюуфр яЁюЎхёё я√ЄрхЄё т√яюыэшЄ№ тїюф ё єўхЄэющ чряшё№
■, тэю єърчрт хх єўхЄэ√х фрээ√х. ▌Єю юс√ўэю яЁюшёїюфшЄ яЁш шёяюы№чютрэшш ъюэЇш
уєЁрЎшщ яръхЄэюую Єшяр, эряЁшьхЁ, эрчэрўхээ√ї чрфрў, шыш т√яюыэхэшш ъюьрэф√ RUNA
S.
C:\rama> java -Dfile.encoding=Cp866 evtquery
Event[0]:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 2012-01-08T22:57:38.703
Event ID: 4648
Task: ┬їюф т ёшёЄхьє
Level: ╤тхфхэш
Opcode: ╤тхфхэш
Keyword: └єфшЄ єёяхїр
User: N/A
User Name: N/A
Computer: WIN-KBTGH8I5IBE
Description:
┬√яюыэхэр яюя√Єър тїюфр т ёшёЄхьє ё тэ√ь єърчрэшхь єўхЄэ√ї фрээ√ї.
╤єс·хъЄ:
╚─ схчюярёэюёЄш: S-1-5-21-3933732538-1582820160-195548471
1-500
╚ь єўхЄэющ чряшёш: Administrator
─юьхэ єўхЄэющ чряшёш: WIN-KBTGH8I5IBE
╩юф тїюфр: 0xbc15a
GUID тїюфр: {00000000-0000-0000-0000-000000000000}
┴√ыш шёяюы№чютрэ√ єўхЄэ√х фрээ√х ёыхфє■∙хщ єўхЄэющ чряшёш:
╚ь єўхЄэющ чряшёш: eguser
─юьхэ єўхЄэющ чряшёш: WIN-KBTGH8I5IBE
GUID тїюфр: {00000000-0000-0000-0000-000000000000}
╓хыхтющ ёхЁтхЁ:
╚ь Ўхыхтюую ёхЁтхЁр: RAMA-PC
─юяюыэшЄхы№э√х ётхфхэш : RAMA-PC
╤тхфхэш ю яЁюЎхёёх:
╚фхэЄшЇшърЄюЁ яЁюЎхёёр: 0x4
╚ь яЁюЎхёёр:
╤тхфхэш ю ёхЄш:
╤хЄхтющ рфЁхё: -
╧юЁЄ: -
─рээюх ёюс√Єшх тючэшърхЄ, ъюуфр яЁюЎхёё я√ЄрхЄё т√яюыэшЄ№ тїюф ё єўхЄэющ чряшё№
■, тэю єърчрт хх єўхЄэ√х фрээ√х. ▌Єю юс√ўэю яЁюшёїюфшЄ яЁш шёяюы№чютрэшш ъюэЇш
уєЁрЎшщ яръхЄэюую Єшяр, эряЁшьхЁ, эрчэрўхээ√ї чрфрў, шыш т√яюыэхэшш ъюьрэф√ RUNA
S.
command output:
C:\rama>wevtutil qe Security /c:1 /rd:true /f:text
Event[0]:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 2012-01-08T22:57:38.703
Event ID: 4648
Task: Вход в систему
Level: Сведения
Opcode: Сведения
Keyword: Аудит успеха
User: N/A
User Name: N/A
Computer: WIN-KBTGH8I5IBE
Description:
Выполнена попытка входа в систему с явным указанием учетных данных.
Субъект:
ИД безопасности: S-1-5-21-3933732538-1582820160-195548471
1-500
Имя учетной записи: Administrator
Домен учетной записи: WIN-KBTGH8I5IBE
Код входа: 0xbc15a
GUID входа: {00000000-0000-0000-0000-000000000000}
Были использованы учетные данные следующей учетной записи:
Имя учетной записи: eguser
Домен учетной записи: WIN-KBTGH8I5IBE
GUID входа: {00000000-0000-0000-0000-000000000000}
Целевой сервер:
Имя целевого сервера: RAMA-PC
Дополнительные сведения: RAMA-PC
Сведения о процессе:
Идентификатор процесса: 0x4
Имя процесса:
Сведения о сети:
Сетевой адрес: -
Порт: -
Данное событие возникает, когда процесс пытается выполнить вход с учетной запись
ю, явно указав ее учетные данные. Это обычно происходит при использовании конфи
гураций пакетного типа, например, назначенных задач, или выполнении команды RUNA
S.
C:\rama>
Hi,
Thanks for your reply. I tried the code with the changes you specified, but it doesn't work.
The output is
C:\rama>java evtquery
Event[0]:
Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 2012-01-09T04:05:16.718
Event ID: 4672
Task: ??????????? ????
Level: ???????а
Opcode: ???????а
Keyword: ????? ??????
User: N/A
User Name: N/A
Computer: WIN-KBTGH8I5IBE
Description:
???╖???:
?? ????????????: S-1-5-18
??а ??????? ??????: ???????
????? ??????? ??????: NT AUTHORITY
??? ?????: 0x3e7
??????????: SeAssignPrimaryTokenPrivilege
SeTcbPrivilege
SeSecurityPrivilege
SeTakeOwnershipPrivilege
SeLoadDriverPrivilege
SeBackupPrivilege
SeRestorePrivilege
SeDebugPrivilege
SeAuditPrivilege
SeSystemEnvironmentPrivilege
SeImpersonatePrivilege
Thanks & Regards,
Rama -
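One likely missing piece in the posted program: InputStreamReader, when given no charset, decodes the process output with the JVM default (the ANSI code page, typically Cp1251 on a Russian Windows), while console programs such as wevtutil usually write in the OEM code page, Cp866. The fix would be to pass the charset to the reader itself rather than via -Dfile.encoding. A self-contained sketch of the decode step (the Cyrillic sample stands in for wevtutil's output):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;

public class Cp866Reader {
    public static void main(String[] args) throws Exception {
        // Bytes as a Cp866 console program would emit for "Вход"
        byte[] oem = "\u0412\u0445\u043e\u0434".getBytes("Cp866");

        // Decode with an explicit charset instead of the platform default:
        BufferedReader rd = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(oem), "Cp866"));
        System.out.println(rd.readLine()); // Вход
    }
}
```

In the posted program this corresponds to new InputStreamReader(p.getInputStream(), "Cp866"). Note that System.out may still garble the display if the console charset differs, so comparing the decoded characters programmatically is a more reliable test than eyeballing the console.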
Export arabic character into csv file turns into question mark in sql dev
Hi,
I am trying to export a table's output that contains mixed Arabic and English data. When I export it as CSV, all the Arabic characters change into question marks (?), but the same export works fine as .xls or .xlsx.
Since we have to export a huge amount of data and CSV is much faster than .xls, kindly let me know why this is happening and what the solution is.
We are on Oracle 11gR2 and SQL Developer 3.1.
Appreciate your time and experience sharing.
Regards.
Hi,
Since you say it works for xls but not csv, and the only applicable preference setting ( Tools|Preferences|Database|Utilities|Export|Encoding ) should apply to all export formats, I would imagine this is a bug. Especially if the encoding specified in Preferences is consistent with the client OS settings. I was not able to find any prior bug logged against the Export utility for this issue.
I logged a bug for this:
Bug 13993410 - FORUM: ARABIC CHARACTER ENCODING RESPECTED FOR XLS BUT NOT CSV
Regards,
Gary
SQL Developer Team -
Reading Advance Queuing with XMLType payload and JDBC Driver character encoding
Hi
I've got a problem retrieving the message from the queue with XMLType payload in Java.
It was working fine on a 10g database, but after the switch to 11g it returns a corrupted string instead of the real XML message. The database NLS_LANG setting is AL32UTF8.
It is said that the JDBC driver should deal with that automatically, but it obviously doesn't in this case. When I dequeue the message using database functionality (the DBMS_AQ package) it looks fine, but not when using the JDBC driver, so I think it is a character encoding issue or something similar. The message itself is enqueued by the database and is supposed to be retrieved by a dedicated EJB.
Driver file used: ojdbc6.jar
Additional libraries: aqapi.jar, xdb.jar
All file taken from 11g database installation.
What should I do to get the XML message correctly?
Do you mean NLS_LANG is AL32UTF8 or the database character set is AL32UTF8? What is the database character set (SELECT value FROM nls_database_parameters WHERE parameter='NLS_CHARACTERSET')?
Thanks,
Sergiusz -
iTunes support for foreign character encoding
A friend burned two mix CDs for me to listen to on my move back to the US from Korea. The songs are Korean and have Korean title/album information. I thought I would import the songs into my iBook. When I add them to my library, however, a majority of them have unintelligible song information. Only about 25-30% of the songs import successfully in Korean font.
Finder reads the CD with no problems. Looking through the disc shows all the information clearly. Drop the tracks into iTunes, however (or select "Add to Library"), and they get scrambled.
I'm guessing this is a character encoding issue. I don't know where my friend got the tracks from, so I'll have to assume he got them from sources which copied them using different encoding methods. But if Finder can support them all, why can't iTunes? Is there a way I can adjust character support in iTunes? Or should I be looking at something else?
1.2GHz iBook G4, 1.25GB RAM, Mac OS X (10.4.5), iTunes 6.0.3
Try setting the encoding of your OutputStream to UTF-8.
-
Character Encoding and File Encoding issue
Hi,
I have a file whose data is encoded using the default locale.
I start the JVM in the same default locale and try to read the file.
I took 2 approaches:
1. Read the file using InputStreamReader() without specifying the encoding, so that the default one based on the locale is picked up.
-- This approach worked fine.
-- I also printed the system property "file.encoding", which matched the current locale's encoding (the unix command to get this is "locale charmap").
2. In this approach, I read the file using InputStream as an array of raw bytes and passed it to the String constructor to convert the bytes to a String.
-- The String contained garbled data, meaning the decoding failed.
I tried printing the encoding used by the JVM (via an internal class) and the "file.encoding" property as well.
These 2 values do not match; there is a weird difference.
E.g., for the locale ja_JP.eucjp on a Linux box:
byte-character uses EUC_JP_LINUX encoding
file.encoding system property is EUC-JP-LINUX
To get byte to character encoding, I used following methods (sun.io.*):
ByteToCharConverter btc = ByteToCharConverter.getDefault();
System.out.println("BTC uses " + btc.getCharacterEncoding());
Do you have any idea why it is failing?
My understanding was that the file encoding and the character encoding should always be the same by default.
But, because of this behaviour, I am a little perplexed.
But there's no character encoding set for this operation:
baos.write("���".getBytes());
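Whichever converter the JVM picks internally, the ambiguity disappears once the charset is passed explicitly: on modern JDKs, new String(bytes) and new String(bytes, Charset.defaultCharset()) agree, so a difference observed through the old sun.io classes is converter aliasing (e.g. EUC_JP_LINUX vs EUC-JP-LINUX), not two different defaults. A minimal sanity check, assuming a modern JDK:

```java
import java.nio.charset.Charset;

public class DefaultCharsetCheck {
    public static void main(String[] args) {
        Charset def = Charset.defaultCharset();
        byte[] bytes = "charset test".getBytes(); // encoded with the default

        // Decoding with no argument and with the explicit default should agree.
        String implicit = new String(bytes);
        String explicit = new String(bytes, def);

        System.out.println(def + ": " + implicit.equals(explicit)); // ... true
    }
}
```

Passing the charset explicitly to both getBytes and the String constructor sidesteps the question of which alias the platform reports.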