Multibyte characters (Arabic) in Uzbl
Hello, I'm experiencing a problem using Google Translator with uzbl-browser. When I try to translate text from Arabic into English, it gives me:
I Tھz ¨ "Q † ŘŞ
and, interestingly, it doesn't render the original Arabic text in the pop-up window correctly either; it gives:
ط¨طھظ„ ظ†طھ
instead of:
بتل نت
Does anyone have an idea?
P.S. I have all the necessary Arabic fonts installed (Chromium translates the same text without a problem).
Possibly related: when opening a Cyrillic document encoded as UTF-8 without a BOM, uzbl-browser doesn't display the letters correctly.
Last edited by zandgreen (2010-10-11 19:54:08)
Similar Messages
-
Multibyte characters not displaying on report
Hi there,
I am having a problem displaying multibyte characters on my report (Oracle Reports 6i). These characters are needed for barcode encoding, e.g. chr(203), but when I run my report they are missing.
Also, when I run the following SQL in Oracle 11g (the version of the database the report runs against):
select chr(203) from dual;
I get the error:
ORA-29275: partial multibyte character
though it works fine on Oracle 8.
Any help much appreciated.

Everything depends on your NLS parameters. If I do this on Oracle 11g I get:
select chr(203) from dual;
C
Ë

For bar coding you should use a special bar code font, e.g.:
http://www.idautomation.com/font-encoders/oracle-reports/
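A character-set-independent way to produce 'Ë' (U+00CB) is UNISTR, which takes the Unicode code point rather than a raw byte value; a minimal sketch:
select unistr('\00CB') from dual;
chr(203) only means 'Ë' when the database character set is a single-byte Latin one (e.g. WE8ISO8859P1); on an AL32UTF8 database the lone byte 203 is an incomplete UTF-8 sequence, which is exactly what ORA-29275 complains about. -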
IMPDP SQLFILE : multibyte characters in constraint_name leads to ORA-00972
Hi,
I'm currently dealing with constraint names made of multibyte characters (for example: constraint_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem...).
The scenario is as follows :
0 - I'm supposed to do a "remap_schema". Everything in the schema SCOTT should now be in a schema NEW_SCOTT.
1 - The scott schema is exported via datapump
2 - I do an impdp with SQLFILE in order to get all the DDL (table, packages, synonyms, etc...)
3 - I do some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK)
4 - Once the modified sqlfile is executed, I do an impdp with DATA_ONLY.
(The scenario was imagined from this thread : {message:id=10628419} )
I'm getting some ORA-00972: identifier is too long at step 4 when executing the sqlfile.
I see that some DDL for constraint creation in the file (generated at step 2) is written as follows:
ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
Obviously, the original name of the constraint, with cedilla and tilde, gets translated to something else which is longer than 30 bytes...
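As an aside, candidates for this problem can be listed up front: any constraint whose name contains multibyte characters has a byte length greater than its character length. A sketch along the lines of the lengthb/lengthc check in my testcase further down:
select owner, constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc
from dba_constraints
where lengthb(constraint_name) > lengthc(constraint_name);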
As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
In order to create a testcase for this thread, I tried to reproduce on my sandbox database... but, there, I don't have the issue. :-(
The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
My sandbox database is a (non-RAC) 11.2.0.1 on RHEL4, also AL32UTF8.
The constraint_name is the same on both systems: I checked byte by byte using DUMP() on the constraint_name.
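For reference, that byte-by-byte check can be done like this (DUMP in hex; the table name is taken from the DDL quoted above):
select constraint_name, dump(constraint_name, 16) dmp
from dba_constraints
where table_name = 'B_TRANSC';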
Feel free to shed any light and/or ask for clarification if needed.
Thanks in advance to those who'll take the time to read all this.
I decided to include my testcase from my sandbox database, even though it does NOT reproduce the issue (maybe I'm missing something obvious...).
I used the following files.
- createTable.sql:
$ cat createTable.sql
drop table test purge;
create table test
(id integer,
val varchar2(30));
alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
from user_constraints where table_name='TEST';
- expdpTest.sh:
$ cat expdpTest.sh
expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
- impdpTest.sh:
$ cat impdpTest.sh
impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
This is the run:
[oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @createTable
Table dropped.
Table created.
Table altered.
CONSTRAINT_NAME LB LC
DMP
VALIDA_CONFIRMAÇÃO_PREÇO13 29 26
Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
,80,82,69,195,135,79,49,51
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "SCOTT"."TEST" 0 KB 0 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
/home/oracle/scott_dir/testNonAscii.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
[oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_SQL_FILE_TABLE_01": scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
[oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
-- CONNECT SCOTT
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SCOTT"."TEST"
( "ID" NUMBER(*,0),
"VAL" VARCHAR2(30 BYTE)
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
TABLESPACE "MYTBSCOMP" ;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;I was expecting to have the cedilla and tilde characters displayed incorrectly....
Edited by: Nicosa on Feb 12, 2013 7:13 PM

Srini Chavali wrote:
If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?

Hi Srini,
On the real system, the schema move will be to and from different machines (but the same DB version).
I'm not doing the real move for the moment, just trying to validate a way to do it, but I guess it's important to say that the dump being used for the moment comes from the same database (the long story being that a column using an object datatype caused an error in the remap, so I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object datatype...).
So yes, the file will have to move, but in the current test it doesn't.
Srini Chavali wrote:
Obviously something is different in production than test - can you post the output of this command from both databases ?
SQL> select * from NLS_DATABASE_PARAMETERS;
Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in impdp, as Data Pump is supposed to be NLS_LANG/charset-proof (whereas traditional imp/exp was really sensitive to those settings).
The result on the Exadata where I have the issue:
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 11.2.0.3.0
The result on my sandbox DB:
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET AL32UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 11.2.0.1.0
------
Richard Harrison wrote:
Hi,
Did you set NLS_LANG also when you did the import?

Yes, that is one of the differences between the Exadata and my sandbox.
My environment on the sandbox has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, whereas the Exadata doesn't have the variable set.
I tried to add it, but it didn't change anything.
Richard Harrison wrote:
Also, not sure why you are doing the sed part? Do you have hard-coded schema references inside some of the PL/SQL?

Yes, that is why I chose to use sed. The (ugly) code has:
- Procedures inside the same package that reference one another with the schema prepended
- Triggers with PL/SQL code referencing tables with the schema prepended
- Dynamic SQL that "builds" queries with the schema prepended
- Object Types that do some %ROWTYPE on tables with the schema prepended (that will be solved by dropping the columns based on those types, as they obviously are not needed...)
- A data model with objects whose names use non-ASCII characters
(In France we call this kind of thing a "gas plant", to convey what a mess it is: pipes everywhere, going who-knows-where...)
The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to consolidate several existing databases onto the Exadata:
one schema for each country, hence the renaming of the schemas to include the country code.
I actually have a workaround already: rename the objects that have funky characters in their names before doing the export.
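For the record, that rename needs no data movement; a sketch, with the names taken from the examples above purely for illustration:
alter table tw_pri.b_transc rename constraint "VALIDA_CONFIRMAÇÃO_PREÇO13" to VALIDA_CONFIRMACAO_PRECO13;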
But I was curious to understand why the SQLFILE messed up the constraint_name on one system when it doesn't on another... -
Adobe AIR help; breadcrumb navigation doesn't work with multibyte characters?
Hi there,
I created an Adobe AIR application with RoboHelp 9 (using FrameMaker files,
which are written in Japanese and English), only to find that the breadcrumb navigation at the top doesn't work.
Is it a known limitation that breadcrumb navigation doesn't support multibyte characters? Or is there any workaround?
Many thanks for your kind support in advance.

See my reply to your other post. You can also test this in the new project and raise it with Adobe Support at the same time as the other problem.
See www.grainge.org for RoboHelp and Authoring tips
@petergrainge -
Adobe AIR help; question regarding search criteria with multibyte characters
Hi,
I created an Adobe AIR application with RoboHelp 9 (using FM 10 files as source; the texts are written in Japanese and English),
and happened to find that the search function in the AIR application doesn't catch keywords correctly.
For example,
1. If you type "文字" and "スタイル" with single byte space in search window, the result appears for both "文字" and "スタイル".
2. If you type "文字" and "スタイル" with double byte space in search window, the result doesn't match for anything.
3. If you type "文字スタイル" (in one word) in search window, the result doesn't match for anything.
Same thing happens for the case "文字種" (literally, "文字"+"種", the meaning is almost the same).
But if you type search words that are all in Katakana, the results seem to be fine.
Is there any limitation on multibyte character support? Or is this behaviour a feature?
If so, how can I make the AIR application "hit" the correct words?
Thank you very much for your kind help in advance!

On this one your best course of action is to contact Adobe Support. They will likely require your project, and there is one thing I would suggest you do first: create a new project with just a few topics to prove the problem exists there as well. If it does, it will be a simpler upload and you will know the problem is repeatable.
See www.grainge.org for RoboHelp and Authoring tips
@petergrainge -
Hi,
I have created a procedure which sends e-mail using UTL_SMTP.
The procedure has a part in which we add the attachments to the e-mail.
Now, the issue is that when I am adding an attachment which contains multibyte characters, these characters are replaced with '?'.
Can anyone provide any guidance on this?

First, you should not append 'charset="us-ascii"' in this line:
UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: ' || IN_ATT_MIME_TYPE ||'charset="us-ascii"'||'; name="' || IN_ATT_FILE_NAME || '"' || UTL_TCP.CRLF);
The default IN_ATT_MIME_TYPE has this clause already, hence you would have a duplicate. Moreover, you add it without the required preceding semicolon. Further, in the Content-Type, you should pass the original character set of the file, not "us-ascii". This character set must support characters included in the file.
Second, the NCLOB is not written correctly either. UTL_ENCODE.BASE64_ENCODE expects a RAW value. If you give it an NVARCHAR2 value returned by DBMS_LOB.SUBSTR, then PL/SQL will implicitly apply HEXTORAW to the value. HEXTORAW fails if the NCLOB content is not a valid sequence of hex digits. Treating the content of the NCLOB as a string of hex digits is obviously not your goal. You should use UTL_I18N.STRING_TO_RAW to convert the NVARCHAR2 from DBMS_LOB.SUBSTR to the desired target encoding (the one specified in Content-Type) and cast it to RAW at the same time. UTF-8 (i.e. AL32UTF8) is usually the best choice for the target encoding. You should then apply UTL_RAW.CAST_TO_VARCHAR2 to change the RAW representation of the base64-encoded value to the VARCHAR2 expected by UTL_SMTP.WRITE_DATA.
Of course, passing DBMS_LOB.SUBSTR result directly to UTL_ENCODE.BASE64_ENCODE would make sense for a BLOB attachment. However, even then the encoded result should be passed to UTL_RAW.CAST_TO_VARCHAR2, not UTL_RAW.CAST_TO_RAW.
Third, if you use UTF-8 as Content-Type encoding, you may want to prepend three bytes (0xEF 0xBB 0xBF) to the NCLOB value before base64 encoding. This three-byte character is the UTF-8 Byte Order Mark. It helps some editors, such as Notepad, to recognize the file as encoded in UTF-8.
Fourth, if the target encoding is UTF-8, l_step should be no more than 8191. This is to avoid intermediate values exceeding 32767 bytes.
Fifth, the whole procedure will not work well on an EBCDIC platform. Contrary to what the documentation says, UTL_SMTP.WRITE_DATA does not seem to convert data to US7ASCII before sending (unless the package is ported separately by platform vendors). I guess this is not your worry, but I thought I would mention it, just in case.
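Putting the second and fourth points together, a minimal sketch of the chunk loop (the procedure name and parameters are illustrative, not from the original code; note also that for concatenated base64 segments to stay valid, each chunk's encoded byte count must be a multiple of 3, which holds at 8191 characters only when every character encodes to 3 bytes):
CREATE OR REPLACE PROCEDURE write_nclob_base64 (
  p_conn  IN OUT NOCOPY UTL_SMTP.CONNECTION,
  p_nclob IN NCLOB
) IS
  l_step PLS_INTEGER := 8191;  -- <= 8191 keeps intermediate values under 32767 bytes
  l_pos  PLS_INTEGER := 1;
BEGIN
  WHILE l_pos <= DBMS_LOB.GETLENGTH(p_nclob) LOOP
    -- NVARCHAR2 chunk -> UTF-8 RAW -> base64 RAW -> VARCHAR2 for WRITE_DATA
    UTL_SMTP.WRITE_DATA(
      p_conn,
      UTL_RAW.CAST_TO_VARCHAR2(
        UTL_ENCODE.BASE64_ENCODE(
          UTL_I18N.STRING_TO_RAW(
            DBMS_LOB.SUBSTR(p_nclob, l_step, l_pos), 'AL32UTF8')))
      || UTL_TCP.CRLF);
    l_pos := l_pos + l_step;
  END LOOP;
END write_nclob_base64;
/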
Thanks,
Sergiusz -
JSPs, UTF-8 & multibyte characters
In our project we have a situation where we must output some multibyte characters
to a JSP page. The data is retrieved from an Oracle database using BEA ELink and
XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to be ok
down to the JSP level, because I can output it to a file and it's properly UTF-8
encoded.
But when I try to write the data to the final reply (using <%=dataObject.getData()%>),
the results definitely are not UTF-8 encoded. On the client browser they show
up as garbage, occupying more than twice the actual length of the data. The response
headers and META-tags are all set to UTF-8 encoding, and the browser is set to
use UTF-8.
The funny part is, that the string seems to be encoded twice or something similar
as is shown by the next example:
This is the correct UTF-8 byte sequence for the first two characters (they are
just generated data for debugging purposes):
C3 89 C3 A5
Which translates to Unicode characters 00C9 and 00E5.
But on the final page that is sent to the client this sequence has been changed
to:
C3 83 E2 80 B0 C3 83 C2 A5
Which just doesn't make sense since it shows up as five different garbage characters.
Does anyone have any ideas what is causing the problem and any suggestions? What
are those extra characters in the final encoding?
Pete.
It sounds like the Object.toString is coming back already encoded in UTF8,
and thus the JSP writer encodes that UTF8 using UTF8 again, which is what
you see. Try making the String value be:
> ... characters 00C9 and 00E5.
... instead of:
> C3 89 C3 A5
Then it will be encoded correctly.
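The double encoding can be verified outside Java, and the intermediate misreading looks like Windows-1252 rather than Latin-1 (0x89 maps to '‰', U+2030, whose UTF-8 form is E2 80 B0). A sketch in Oracle SQL, assuming an AL32UTF8 database (chr() builds the raw UTF-8 bytes, so the client encoding doesn't interfere):
select dump(convert(chr(195)||chr(137)||chr(195)||chr(165),  -- bytes C3 89 C3 A5, i.e. UTF-8 'Éå'
                    'AL32UTF8', 'WE8MSWIN1252'), 16) double_encoded
from dual;
This reads the UTF-8 bytes as if they were Windows-1252 and re-encodes the result as UTF-8, giving exactly c3,83,e2,80,b0,c3,83,c2,a5: the nine garbage bytes quoted above.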
Peace,
Cameron Purdy
Tangosol Inc.
<< Tangosol Server: How Weblogic applications are customized >>
<< Download now from http://www.tangosol.com/download.jsp >>
"Petteri Räisänen" <[email protected]> wrote in message
news:[email protected]...
>
> In our project we have a situation where we must output some multibyte
characters
> to a JSP page. The data is retrieved from an Oracle database using BEA
ELink and
> XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to
be ok
> down to the JSP level, because I can output it to a file and it's properly
UTF-8
> encoded.
>
> But when I try to write the data to the final reply (using
<%=dataObject.getData()%>
> the results definitely are not UTF-8 encoded. On the client browser they
show
> up as garbage, occupying more than twice the actual length of the data.
The response
> headers and META-tags are all set to UTF-8 encoding, and the browser is
set to
> use UTF-8.
>
> The funny part is, that the string seems to be encoded twice or something
similar
> as is shown by the next example:
>
> This is the correct UTF-8 byte sequence for the first twice characters
(they are
> just generated data for debugging purposes):
>
> C3 89 C3 A5
>
> Which translates to Unicode characters 00C9 and 00E5.
>
> But on the final page that is sent to the client this sequence has been
changed
> to:
>
> C3 83 E2 80 B0 C3 83 C2 A5
>
> Which just doesn't make sense since it shows up as five different garbage
characters.
>
>
> Does anyone have any ideas what is causing the problem and any
suggestions? What
> are those extra characters in the final encoding?
>
> Pete.
-
Hi there,
I am writing a MIDlet where I handle some Arabic words. I have a problem: I
cannot display Arabic characters. I used the following code; unfortunately
it does not work, and I get a sequence of question marks
displayed in the text field. In addition, I tried different emulators: Nokia S40, S60,
and the Sun Java emulator.
try {
textField1.setString(new String("الكتاب".getBytes("UTF-8")));
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
thank you for your help
A.E.K.

Use the Unicode representation of the characters that you want to display on the device.
Also, the font engine on the device on which you are testing your app must support the Arabic charset in order to render the characters.
~Mohan -
Junk characters (arabic)
Hi everyone,
I am having a hard time converting my database characters into Arabic; they are appearing as junk like ????. I've got a 2-tier architecture: an Oracle 9i R2 database and 6i Forms. In the database itself the data appears as junk (????), whereas at the client side some reports are being generated as junk (????). My concern is how do I change the junk into Arabic at the DB level. I'll post some information that may be helpful to you all. Thanks for reading.
SQL> select * from nls_database_parameters;
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
In the registry I have Oracle, and when I click on the + sign I have its Home... I have two NLS_LANG settings in the registry: for Oracle there's nothing (the value data is NA), but for HOME0 the NLS_LANG is ARABIC_UNITED ARAB EMIRATES.AR8MSWIN1256.
SQL> SELECT DUMP (NAME,'1017') FROM SHR.PRV_USERS;
DUMP(NAME,'1017')
Typ=1 Len=14 CharacterSet=UTF8: da,c8,cf,c7,e1,e1,e5, ,c7,e1,e4,ca,c7,dd
Typ=1 Len=12 CharacterSet=UTF8: c7,cd,e3,cf, ,c7,e1,db,c7,e3,cf,ed
Typ=1 Len=10 CharacterSet=UTF8: e3,cd,e3,cf, ,da,d3,ed,d1,ed
Typ=1 Len=12 CharacterSet=UTF8: c7,cd,e3,cf, ,c7,e1,d4,d1,c7,d1,ed
Typ=1 Len=9 CharacterSet=UTF8: cd,c7,e3,cf, ,da,c7,de,e1
Typ=1 Len=11 CharacterSet=UTF8: ce,c7,e1,cf, ,e6,de,ed,ca,c7,e4
Typ=1 Len=12 CharacterSet=UTF8: ed,c7,d3,d1, ,c7,e1,cc,c8,d1,ca,ed
Typ=1 Len=10 CharacterSet=UTF8: d3,da,cf, ,c7,e1,de,d5,ed,d1
Typ=1 Len=13 CharacterSet=UTF8: e3,e3,cf,e6,cd, ,c7,e1,e3,d1,cd,c8,ed
Typ=1 Len=11 CharacterSet=UTF8: dd,ed,d5,e1, ,c7,e1,cd,d1,c8,ed
Typ=1 Len=11 CharacterSet=UTF8: cc,c7,c8,d1, ,c7,e1,da,e3,d1,ed
SQL> SELECT sys_context('userenv','language') from dual;
SYS_CONTEXT('USERENV','LANGUAGE')
ARABIC_SAUDI ARABIA.UTF8
SQL> SELECT name,value$ from sys.props$ where na
NAME VALUE$
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS
NLS_CHARACTERSET UTF8
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 9.2.0.1.0
Regards,
katheri

Please see your duplicate post here - Database characters(junk)
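One observation on the DUMP output above: byte values like da,c8,cf,c7,e1,e1,e5 are not valid UTF8 sequences, but they are exactly the AR8MSWIN1256 codes for the Arabic letters عبدالله. That suggests Windows-1256 bytes were inserted unconverted ("pass-through") into the UTF8 database, typically because the inserting client's NLS_LANG charset matched the database character set, so no conversion took place. Assuming that diagnosis, the stored bytes can be reinterpreted as a test (a sketch, not a fix):
select convert(name, 'UTF8', 'AR8MSWIN1256') from shr.prv_users;
The real cure is to set NLS_LANG correctly on the inserting clients (e.g. the ARABIC_UNITED ARAB EMIRATES.AR8MSWIN1256 seen in the HOME0 registry entry above) and reload the data.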
Srini -
Multibyte characters are not printing correctly.
Hi all,
When I read a multibyte character from an InputStream I am getting a negative value. The code is something like this:
int c = in.read();
System.out.println((char) c);
It is printing ? instead of �. I know why it is printing the ? mark: when I read a multibyte character from the InputStream it returns a negative value, so when I type-cast the negative value and try to print it, it prints ?. My question is why it returns a negative value when I read the multibyte character.
Please help me.
Thanks

What kind of encoding are you using?
A multibyte character is composed of two bytes, and the first bit of each byte is 1.
As a result, you get a negative value when you cast the multibyte character to int. -
Question marks in outgoing emails for multibyte characters
I have an application that stores and displays Japanesse characters correctly on the screen. But when I try to email them, the characters come out as ?????.
I am on version 4.02.
I am setting the following items at the top:
MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
The body is started with a <html> tag.
Any ideas?

After installation of BI Publisher 10.1.3.3.1 Base (standalone, OC4J):
Directory of F:\bip\jdk\lib\fonts
13/10/2007 21:16 15 196 128R00.TTF
13/10/2007 21:16 18 473 348 ALBANWTJ.ttf
13/10/2007 21:16 18 777 132 ALBANWTK.ttf
13/10/2007 21:16 18 676 084 ALBANWTS.ttf
13/10/2007 21:16 18 788 600 ALBANWTT.ttf
13/10/2007 21:16 276 384 ALBANYWT.ttf
13/10/2007 21:16 12 860 B39R00.TTF
13/10/2007 21:16 18 800 MICR____.TTF
13/10/2007 21:16 6 580 UPCR00.TTF
Directory of F:\bip\jdk\jre\lib\fonts
01/08/2006 19:25 75 144 LucidaBrightDemiBold.ttf
01/08/2006 19:25 75 124 LucidaBrightDemiItalic.ttf
01/08/2006 19:25 80 856 LucidaBrightItalic.ttf
01/08/2006 19:25 344 908 LucidaBrightRegular.ttf
01/08/2006 19:25 317 896 LucidaSansDemiBold.ttf
01/08/2006 19:25 698 236 LucidaSansRegular.ttf
01/08/2006 19:25 234 068 LucidaTypewriterBold.ttf
01/08/2006 19:25 242 700 LucidaTypewriterRegular.ttf
Directory of F:\bip\jre\1.4.2\lib\fonts
24/03/2004 19:12 75 144 LucidaBrightDemiBold.ttf
24/03/2004 19:12 75 124 LucidaBrightDemiItalic.ttf
24/03/2004 19:12 80 856 LucidaBrightItalic.ttf
24/03/2004 19:12 344 908 LucidaBrightRegular.ttf
24/03/2004 19:12 317 896 LucidaSansDemiBold.ttf
24/03/2004 19:12 698 236 LucidaSansRegular.ttf
24/03/2004 19:12 234 068 LucidaTypewriterBold.ttf
24/03/2004 19:12 242 700 LucidaTypewriterRegular.ttf
What is wrong?
In Adobe Reader's Document Properties -> Fonts
+Helvetica:
Type: Type1
Encoding: Ansi
Actual Font: ArialMT
Actual Font Type: TrueType
I feel BIP uses the wrong encoding... -
How to extract multibyte characters?
Hi,
I have used an Acrobat plug-in and could extract ASCII characters to a text file.
I want to extract multibyte characters (Chinese, Korean).
Could you help me if you know how to do it?
Another question: how can I debug the plug-in (for example, step by step) in an IDE?

Sorry, this file is the correct one.
Attachments:
Test.vi 7 KB -
Multibyte characters(Chinese Data)
Hi,
We have a table in Oracle (10g) which stores Chinese data as well as English characters.
Production is designed in such a way that we can't increase the size of the columns.
Is there any way to handle multibyte characters like Chinese within the same length?
Please help me out on this, and my heartfelt thanks for any replies.

You need some basic understanding of how language and territory are handled by the database; read this article:
NLS_LANG FAQ
http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
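If the columns use byte semantics (the default when NLS_LENGTH_SEMANTICS=BYTE), one option worth evaluating is character length semantics, which makes a VARCHAR2(30) hold 30 characters regardless of bytes per character (the datatype byte maximums still apply). A hedged sketch with placeholder names:
alter table my_table modify (my_col varchar2(30 char));
select column_name, char_used from user_tab_columns where table_name = 'MY_TABLE';
char_used shows 'C' for character semantics and 'B' for byte semantics; whether this counts as "increasing the size" under your production constraints is something to check with the designers. -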
Entering MultiByte characters using OA Extension JDev forms
Hello
We have certain custom OA Extension JDeveloper forms running on Oracle Applications 11.5.10 with a 10g database. Recently we converted our DB to be UTF-8 compliant. While I can query Chinese characters through the custom JDev forms, when I try to enter Chinese characters using the custom forms, they are stored as junk. In the preferences I did set the Client Character Encoding to UTF-8, but that did not help. Any clues as to how to get this working would be appreciated.
Thanks
PHK

When you say it stores junk, do you see the junk characters from the back end, or on the OA page on further retrieval of the same value?
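A quick way to answer that from the back end is DUMP (a sketch; the table and column names are placeholders):
select dump(my_column, 1016) from my_table where rownum <= 5;
The 1016 format prints the bytes in hex together with the character set name. If the stored bytes are not valid UTF-8 sequences for the intended Chinese characters, the corruption happens on the way in (client-side encoding), not on retrieval.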
--Shiv -
How can I parse multibyte characters in java?
Is there any API available to do that?
I would like to eliminate such characters while I am saving the data in the database; they give me a problem with XML parsing later, e.g. a character typed from a Japanese-script keyboard.