Extended ASCII

We are using Spring's JdbcTemplate with MySQL as the database and AES encryption.
Data containing extended ASCII characters is saved correctly, and when we retrieve it, the ResultSet contains the correct data. However, when we display the ResultSet, question marks (?) appear in place of the extended ASCII characters.
Why does it not display the right extended ASCII characters?
Thanks in advance.

Well, I don't understand. The characters that I am using are ����� and you can easily see and type them. Then how come they are "UTF-8"? Please advise.

Based on your responses in this thread and in http://forum.java.sun.com/thread.jspa?threadID=5176938&messageID=9689678#9689678 I don't think you have a good enough knowledge of Java or cryptography to take this forward. I suggest you spend some time reading about Java and the JCE and experimenting with it before writing another line of production code.

Similar Messages

  • Problem converting certain extended ASCII characters

    I'm having problems with the extended ASCII characters in the range 128-159. I'm working in a SQL Server environment using Java. When I did a 'select char_col from my_table', I always got junk when I tried to retrieve the value from the ResultSet with 'String str = rs.getString(1)'. For example, char_col would contain the character 0x83, but when I retrieved it from the database, my str equaled 0x192. I'm aware there is a gap in the range 128-159 in the ISO-8859-1 charset. I've tracked the problem down to a charset issue converting the extended ASCII characters in ISO-8859-1 into Java's Unicode charset.
    I looked on the forum and it said to specify the charset when reading from the ResultSet, so I did 'String str = new String(rs.getBytes(1), "ISO-8859-1")' and it was able to read the characters 128-159 correctly, except for five characters (129, 141, 143, 144, 157). These characters always came back as character 63 (0x3f, '?'). Does anyone know what's happening here? Why didn't these characters work? Is there a workaround? I can only use Java and its default charsets, and I don't want to switch to the Windows Cp1252 charset because I'm using the Java code in a Unix environment as well.
    Thanks.
    -B

    Normally your JDBC driver should understand the charset used in the database, and it should use that charset to produce a correct value from getString(). However, it does sometimes happen that the database was populated by programs in some other language that ignored the database's charset and did their own encoding, bypassing the database's facilities. That problem is often difficult to deal with, because the custodians of those other programs don't have a problem: everything is consistent for them, and they will not allow you to "repair" the database.
    I don't mean to say that really is your problem, but it is a possibility. You are using a SQL Server JDBC driver, aren't you? Does its connection URL allow you to specify the charset? If so, try specifying that SQL-Latin1 charset and see if it works.
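    The symptom can be reproduced without a database. In this sketch, decoding byte 0x83 as windows-1252 yields U+0192 (the florin sign), which matches the "0x192" described above, while ISO-8859-1 maps it to an unprintable C1 control character:

    ```java
    import java.nio.charset.StandardCharsets;

    public class CharsetDemo {
        public static void main(String[] args) throws Exception {
            byte[] raw = { (byte) 0x83 };

            // windows-1252 assigns 0x83 to the florin sign, U+0192.
            String cp1252 = new String(raw, "windows-1252");
            // ISO-8859-1 maps 0x83 to the C1 control character U+0083.
            String latin1 = new String(raw, StandardCharsets.ISO_8859_1);

            System.out.printf("cp1252: U+%04X%n", (int) cp1252.charAt(0)); // U+0192
            System.out.printf("latin1: U+%04X%n", (int) latin1.charAt(0)); // U+0083
        }
    }
    ```

    The bytes 129, 141, 143, 144 and 157 are exactly the positions left undefined by windows-1252, which is why a driver that round-trips through that code page turns them into '?'.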

  • Display extended ascii characters as question mark in xml file

    I am creating an XML file with encoding UTF-8. Some tag values contain extended ASCII characters. When I run the Java program to create the file on Windows, the extended ASCII characters are displayed correctly, but on Linux they are displayed as ? (question marks).
    I am not able to rectify this. Can anyone help me? It's urgent.
    Thanks in advance.
    Message was edited by:
    Rosy_Thomas@Java

    Probably the locale is not set for the shell you are running in. The default 'C' locale uses the ASCII encoding, which defines only 128 characters. See if giving the command export LC_CTYPE=en_US.UTF-8 before starting the program fixes the issue.
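    Independently of the locale, the program itself can avoid the problem by never relying on the platform default encoding. A minimal sketch (the file name and element content are placeholders):

    ```java
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class Utf8XmlWriter {
        public static void main(String[] args) throws IOException {
            // An explicit charset makes the bytes on disk the same on Windows
            // and Linux, regardless of the default locale.
            try (Writer w = new OutputStreamWriter(
                    new FileOutputStream("out.xml"), StandardCharsets.UTF_8)) {
                w.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
                w.write("<name>Fran\u00E7ais</name>\n");
            }
        }
    }
    ```

    With this, the declared encoding in the XML prolog and the actual bytes always agree.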

  • SQL Developer, UTF8 Oracle DB, extended ascii characters appear as blocks

    I have this value stored on the database:
    (Gestion Económica o Facturaci
    Notice the second word has an extended ascii character in it. When I use SQL Developer on my windows machine to view the data, I get a box in place of the o, kinda like this:
    (Gestion Econ�mica o Facturaci
    If I log on to the AIX server where the oracle database in question is and run sqlplus from there, I see things properly. I also managed to regedit oracle home to get sql plus on my windows machine to display this properly. I still cannot get sql developer to work though...
    Details about sql developer:
    font: arial Unicode MS
    environment encoding: UTF-8
    NLS Lang: American
    NLS Territory: America
    windows regional options:
    English (United States)
    Location: United States
    Database NLS settings:
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     mm/dd/yyyy hh24:mi:ss
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_CHARACTERSET     UTF8
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    Any ideas on how I can fix this? I'd rather NOT log onto the server to run queries. Thanks in advance for your thoughts!
    Edited by: user10939448 on Jan 31, 2012 1:51 PM

    user10939448 wrote:
    > This problem is quite strange in that when I've been able to manually set American_america.utf8, things work.
    Sorry to say, but it seems you may have an incorrect setup.
    In general, you should set the character-set part of NLS_LANG to let Oracle know the code page used by the client. With win-1252, NLS_LANG should include .WE8MSWIN1252.
    The display from sqlplus was "lying", due to incorrectly stored data coupled with an incorrect NLS_LANG setting (the character-set part). The pass-through, or GIGO, scenario can be dangerous this way. Search the Globalization forum for the term 'pass-through' for previous discussions on the theme.
    The setting on the AIX server may be incorrect as well, but it depends how you use it (e.g. for database export or data loads with UTF-8 encoded files it may be correct).
    > The output of the query you recommended looks odd to me:
    > (Gestion Econ�mica o Facturaci     Typ=1 Len=30 CharacterSet=UTF8:
    > 28,47,65,73,74,69,6f,6e,20,45,63,6f,6e,f3,6d,69,63,61,20,6f,20,46,61,63,74,75,72,61,63,69
    This is the telling part. The 0xF3 is not legal in UTF8. The code units for ó, U+00F3 Latin small letter o with acute, are C3 B3, so instead of f3 you should have expected c3,b3 in the dump output.
    > So it looks like what's under the covers is correct, but I'm still not seeing the correct character in sql developer.
    The opposite is true. The data is incorrectly stored, and SQL Developer is correctly showing you this. Sqlplus is not the best tool in Unicode environments; SQL Developer is better.
    > ACP according to my windows registry is 1252. OEMCP is 437
    Also, if you use database clients in console mode (such as sqlplus), NLS_LANG should include .US8PC437 to properly indicate that the code page in use is 437.
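    The point about 0xF3 versus C3 B3 can be checked in a few lines of Java (a sketch; no database involved):

    ```java
    import java.nio.charset.StandardCharsets;

    public class Utf8Validity {
        public static void main(String[] args) {
            // A lone 0xF3 is not a complete UTF-8 sequence; the decoder
            // substitutes U+FFFD, the replacement character.
            String bad = new String(new byte[] { (byte) 0xF3 },
                    StandardCharsets.UTF_8);
            // C3 B3 is the correct UTF-8 encoding of U+00F3 ('ó').
            String good = new String(new byte[] { (byte) 0xC3, (byte) 0xB3 },
                    StandardCharsets.UTF_8);

            System.out.printf("bad:  U+%04X%n", (int) bad.charAt(0)); // U+FFFD
            System.out.println("good: " + good);                      // ó
        }
    }
    ```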

  • Need to find out extended ASCII characters in database

    Hi All,
    I am looking for a query that can fetch the list of all tables and columns where there is an extended ASCII character (from 128 to 255). Can anyone help me?
    Regards
    Yadala

    yadala wrote:
    Hi All,
    I am looking for a query that can fetch the list of all tables and columns where there is an extended ASCII character (from 128 to 255). Can anyone help me?
    Regards
    Yadala

    This should match your requirement:
    select t.TABLE_NAME, t.COLUMN_NAME from ALL_TAB_COLUMNS t
    where length(asciistr(t.TABLE_NAME))!=length(t.TABLE_NAME)
    or length(asciistr(t.COLUMN_NAME))!=length(t.COLUMN_NAME);

    The ASCIISTR function returns an ASCII version of the string in the database character set.
    Non-ASCII characters are converted to the form \xxxx, where xxxx represents a UTF-16 code unit.
    The CHR function is the opposite of the ASCII function. It returns the character based on the NUMBER code.
    ASCII code 174
    SQL> select CHR(174) from dual;
    CHR(174)
    Ž
    SQL> select ASCII(CHR(174)) from dual;
    ASCII(CHR(174))
                174
    SQL> select ASCIISTR(CHR(174)) from dual;
    ASCIISTR(CHR(174))
    \017D

    ASCII code 74
    SQL> select CHR(74) from dual;
    CHR(74)
    J
    SQL> select ASCII(CHR(74)) from dual;
    ASCII(CHR(74))
                74
    SQL> select ASCIISTR(CHR(74)) from dual;
    ASCIISTR(CHR(74))
    J
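    The length comparison the query relies on works because ASCIISTR leaves a string unchanged only when every character is plain ASCII. The same test can be expressed in Java (a sketch; the method name is made up for illustration):

    ```java
    public class NonAsciiCheck {
        // True when the string contains any character above U+007F,
        // i.e. when Oracle's ASCIISTR would have to rewrite it.
        static boolean hasNonAscii(String s) {
            return s.chars().anyMatch(c -> c > 127);
        }

        public static void main(String[] args) {
            System.out.println(hasNonAscii("J"));      // false
            System.out.println(hasNonAscii("\u017D")); // true: 'Ž'
        }
    }
    ```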

  • [FIXED 1.5.3] 1.5: Export Data to clipboard can't handle extended ASCII

    Hi,
    Exporting table data to the clipboard through the Export Data entry in the context menu, extended ASCII shows as "�". This is especially painful for languages with accented vowels, like Spanish.
    FWIW, this was working in 1.2...
    Regards,
    K.

    Hi Kristof,
    I have exported the following table structure successfully to the clipboard, pasting into Microsoft Word on an English-language XP SP2 Windows box:
    CREATE TABLE "TOTIERNE"."SPANISHMÁSÓN"
    (     "SPANISHMÁSÓN" VARCHAR2(100 BYTE),
         "ID" NUMBER(38,0)
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS" ;
    Can you give a more precise testcase?
    -Turloch

  • Printing extended ascii characters

    How can I print an extended ASCII character from a Java program if I know its ASCII value? I tried this:
    for (int i = 0; i < 256; i++)
        System.out.print((char) i);
    and I got the character '?' printed on the console for some of the characters.

    > How can I print an extended ASCII character through a Java program if I know its ASCII value? I tried this:
    > for (int i=0;i<256;i++)
    > System.out.print((char)i);
    > and I got the character '?' printed on the console for some of the characters.
    According to this site: http://www.pantz.org/html/symbols-hex/htmlcodes.shtml
    [[[HTML 4.01, ISO 10646, ISO 8879, Latin extended A and B]]]
    the value 8240 == �
    According to this site: http://www.idevelopment.info/data/Programming/ascii_table/PROGRAMMING_ascii_table.shtml
    [[[ISO 10646, ISO 8879, ISO 8859-1 Latin alphabet No. 1]]]
    the value 137 == �
    It seems like it's a Windows ISO-8859-1 issue.
    From everything I've read, it appears there are no printable characters at the DEC values 128-159 inclusive. DEC value 127 is DEL (delete) and DEC value 160 is � in the Latin-1 encoding, ISO-8859-1.
    I have a program that writes these values to the screen. If I try to force it to print a value in the 150s, it fails, returning the symbol ?
    However, it can print every other value that I've tried (in the output file, the � symbol prints correctly, but when it's posted on the forum it doesn't show up). Here's the output (the input sample was lost in posting):
    [�]          [40]          [1]               [402]
    [�]          [41]          [1]               [381]
    [�]          [42]          [1]               [8364]
    [�]          [43]          [1]               [171]
    [�]          [44]          [1]               [182]
    [�]          [45]          [1]               [174]
    [�]          [47]          [1]               [8240]
    [�]          [49]          [1]               [255]
    [�]          [50]          [1]               [214]
    [�]          [51]          [1]               [220]
    [�]          [52]          [1]               [162]
    [�]          [53]          [1]               [163]
    Trying to force (char)155
    out.write( (char)155 + lineSep );
    Prints: ?
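    The '?' comes from the PrintStream behind System.out: any character its encoding cannot represent is written as '?'. One workaround, sketched here, is to wrap stdout in a PrintStream with an explicit charset (this constructor needs Java 10+, and the terminal itself must be set to a matching code page):

    ```java
    import java.io.PrintStream;
    import java.nio.charset.StandardCharsets;

    public class UnicodeConsole {
        public static void main(String[] args) {
            // A PrintStream with an explicit UTF-8 charset; characters such
            // as U+2030 (per mille) survive instead of degrading to '?'.
            PrintStream out = new PrintStream(System.out, true,
                    StandardCharsets.UTF_8);
            out.println("\u2030 \u0152 \u00D2"); // per mille, OE ligature, O grave
        }
    }
    ```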

  • Extended Ascii Code in Swing

    Hi master,
    I have found a problem viewing extended ASCII codes in Swing. The platform I am using is Windows Me.
    Characters with ASCII codes from 128 to 255 are not displayed properly.
    Any idea would be appreciated.
    Thanks in advance!

    Hi,
    This character is a special one, so you can't print it out. In SAP, all special characters that can't be printed are replaced with the placeholder #. They are intended to serve their special purpose in the flat file once you open it.
    Do the following:
    - use the function module HR_KR_XSTRING_TO_STRING to convert an xstring with value '1F' (31 decimal) to a string
    - write this string to the desired place in the internal string table (where you want to use it later in the file)
    - download this table to a file
    - the special character will do its purpose in the text file
    This is the same functionality as placing a Line Feed (0x0A, 10 decimal). It would be shown as # but once you open the downloaded file you don't see the hash # anymore; you see a new line in that place.
    Hope this is clear
    Regards
    Marcin

  • Would like to use/see extended ascii characters in...

    I've got an E72 and I love it, but I would like to be able to see and use extended ascii characters.  Here is an example: σ__σ .  It looks like an upside-down, mirror-image capital Q on either side of the regular underline character.  To get the character I use ALT-229 on my Windows keyboard.  I can type this into my sms's or emails using ctrl-229, but it looks like a small "a" with a little "u" over it.  It looks the same way when it gets to my Windows email, so the character is not being sent properly from my E72.
    Am I just using the wrong ascii code?  Help!

    I did a little testing.
    SMS:
    when I create a text message to send to myself as a test, I can use ctrl-963 and on my original text message it looks like the lower case sigma, the character I want.
    When I get the text message, the character looks like an upper case sigma (the "M" on its side).
    The character is not supported at all in the regular Nokia Messaging app.
    EMAIL:
    When I attempt to use the character in creating a message, it just gives me a question mark character.
    When I get an email with the character, again it just gives me a question mark character.
    It appears that this is indeed a matter of software.  I-sms will support showing me that character, but not receiving it.  Profimail (the mail app I am using) doesn't support it either way.
    This isn't critical.  I will keep it in the back of my mind when evaluating this kind of app, and try suggesting to the programmers of both apps that supporting the extended character set would be handy, particularly in text messaging.

  • How to write extended ASCII characters to a file

    Here is a distilled fragment from some larger script. Its purpose is to create a text file containing some characters from the extended ASCII character set.
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | out-file d:\test.txt -Encoding ascii
    What I want in the target file is exactly the 7 characters that make up $String. The above method fails to deliver this result. How can I do it?

    Hi,
    Try using Add-Content or Set-Content instead:
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | Set-Content .\test.txt

  • Contains query fails for extended ascii characters

    I have an Oracle 9.2 instance whose characterset is WE8MSWIN1252. I'm using the same characterset on my client. If I have a LONG column that contains extended-ascii characters (the example I'm using has the Euro character '€', but I've seen the same problem with other characters), and I'm using the Intermedia service to index that column, then this select statement returns no records even though it should find several:
    select id from table1 where (contains(long_col,'€',1) > 0);
    However, the same select statement looking for something else, like 'e', works just fine.
    What am I doing wrong? I can do a "like" query against a VARCHAR2 column with a Euro character, and it works correctly. I can do a "dbms_lob.instr" query against a CLOB column with a Euro character, and it also works. It's just the "contains" query against a LONG column that fails.

    There are a number of limitations in using Long datatypes. If you check the SQL Reference you will see: "Oracle Corporation strongly recommends that you convert LONG columns to LOB columns as soon as possible. Creation of new LONG columns is scheduled for desupport.
    LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."

  • How to find an extended ASCII character

    Hi,
    I have a problem while reading from an InputStream with a BufferedReader. I'm trying to find out if an extended ASCII character is within a String produced by readLine(), but it always says no. Example:
    myString.indexOf('\u00D2')
    While debugging, I see that the character is a '?'.
    How can I find this extended ASCII character ('\u00D2')? Is it an encoding matter?
    Please help!!

    You are using the default character encoding when you use BufferedReader like that, so the text you read is probably translated to some other character in some cases (two or more bytes can sometimes make up a different Unicode character, depending on the encoding).
    You could try this:
    BufferedReader br = new BufferedReader(new InputStreamReader(in, "ISO-8859-1"));
    where in is your InputStream.

  • Extended ASCII changing to UNICODE in Oracle9i?

    Hello,
    We're just getting to verifying support for our applications against Oracle9i database. Historically, we've been supporting Oracle8 and Oracle8i, and they work just peachy.
    On some of our tables, we have a varchar column that is 255 characters long. We often import data that is exactly 255 chars in length. With 9i, if the data is 255 chars long, and contains any extended ASCII chars (such as degree symbol or plus/minus symbol, both of which we use), that row will fail to be imported. My personal impression is that it is being converted to UNICODE, which of course means that it becomes a two-byte character, and that means that this 255 char string is now 256 chars (bytes, actually, but you know what I mean), and can't be loaded into a varchar(255).
    We are willing to change our schema, but cannot do so until our next release. We need to get this release working on 9i, without changing the schema.
    Is it possible to import (using sqlldr) extended ASCII characters without changing them into Unicode characters?
    I have tried changing my NLS_LANG settings to US7ASCII (which is definitely wrong, it changes the extended chars into zeros) and I have tried WE8MSWIN1252, which does preserve the symbols, but does not preserve the ASCII encoding...
    I have tested the application against a changed schema (just extended the varchar(255) to varchar(265)), so I know it works, but we've already frozen this release, so I can't include the new schema...
    I am totally open to any suggestion that does not involve schema changes...
    Thank you,
    William
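    The arithmetic behind the failure is easy to demonstrate in isolation: a 255-character string containing one degree symbol needs 256 bytes in UTF-8 but only 255 in a single-byte charset. A sketch of just that reasoning, not of sqlldr itself (String.repeat needs Java 11+):

    ```java
    import java.nio.charset.StandardCharsets;

    public class ByteLengthDemo {
        public static void main(String[] args) {
            // 254 ASCII characters plus one degree symbol: 255 characters.
            String s = "x".repeat(254) + "\u00B0";

            System.out.println(s.length());                                    // 255 chars
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);     // 256 bytes
            System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 255 bytes
        }
    }
    ```

    A column declared with byte-length semantics rejects the 256-byte form even though it is still 255 characters.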

    My previous post is not really relevant to your problem.
    What character sets are you using in Oracle 8, Oracle 8i
    and Oracle 9i?
    For example:
    SQL> select * from nls_database_parameters
    2 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
    PARAMETER VALUE
    NLS_CHARACTERSET WE8ISO8859P15
    NLS_NCHAR_CHARACTERSET AL16UTF16
    According to Oracle's documentation,
    up to three character set conversions may be required for data definition language
    (DDL) during an export/import operation:
    1. Export writes export files using the character set specified in the NLS_LANG
    environment variable for the user session. A character set conversion is
    performed if the value of NLS_LANG differs from the database character set.
    2. If the export file's character set is different than the import user session
    character set, then Import converts the character set to its user session character
    set. Import can only perform this conversion for single-byte character sets. This
    means that for multibyte character sets, the import file's character set must be
    identical to the export file's character set.
    3. A final character set conversion may be performed if the target database's
    character set is different from the character set used by the import user session.
    To minimize data loss due to character set conversions, ensure that the export
    database, the export user session, the import user session, and the import database
    all use the same character set.

  • How to set extended ascii table to ISO Latin-1?

    Hi,
    I need to specify an ASCII table set to ISO Latin-1... does anyone know how to do this?
    thanks in advance

    226 is the correct integer value for the character 'â'. The "extended ASCII codes" on the page you cited come from the [cp437|http://en.wikipedia.org/wiki/Codepage_437] encoding, which is not what you should be using. The reason you see that other character (I assume it's 'Γ') is that the console on English-locale Windows machines uses cp437 by default. So your program is outputting the value in the platform-default windows-1252 encoding, but the console is mistakenly decoding the output as cp437. You can force the console to use the correct encoding with the CHCP command, e.g. CHCP 1252.
    Internally, Java strings use the [UTF-16|http://en.wikipedia.org/wiki/UTF-16] encoding, whose values are identical to [ISO-8859-1|http://en.wikipedia.org/wiki/ISO/IEC_8859-1#ISO-8859-1] for the first 256 characters. If you're going to refer to Latin characters by their integer values, that's the conversion table you should be using.
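    The value-versus-glyph distinction above can be seen in two lines (a sketch; what actually appears on a console still depends on its code page):

    ```java
    public class Latin1CodePoint {
        public static void main(String[] args) {
            // In Unicode, and therefore in ISO-8859-1's range, 226 is 'â',
            // not the cp437 glyph shown in "extended ASCII" tables.
            System.out.println((int) '\u00E2');         // 226
            System.out.println((char) 226 == '\u00E2'); // true
        }
    }
    ```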

  • Reg Extended Ascii Characters...

    Hi,
    I have state name data with Ascii extended characters, the examples of which are given below:
    Bouches-du-Rhône
    Corrèze
    Côte-d''Or
    Côtes-d''Armor
    Finistère
    Hérault
    Isère
    Lozère
    Nièvre
    Puy-de-Dôme
    Pyrénées-Atlantiques
    Pyrénées (Hautes)
    Pyrénées-Orientales
    Rhône
    Saône (Haute)
    Saône-et-Loire
    Sèvres (Deux)
    Vendée

    I need to:
    1) insert these data into a table.. do I have to use ASCII codes for it, or is there another way?
    2) How will a user be able to search the above states? I mean, what will be the behaviour of the above states when searched with the LIKE operator or an = search?
    3) Will indexes on the state column be used in the above cases?
    4) Any other things that I need to keep in mind while working with such data?
    Thx

    What is your database character set?
    What application(s) will you be using to modify the data?
    What are the NLS_LANG settings on the client machine(s)?
    Assuming that the database character set supports the characters in the first place, and that the client supports them, they should behave just like any other character. Searching, indexing, etc. will all continue to work as normal.
    Justin
