Converting bytes to chars.

Can't get the right ByteToCharConverter...
This works (but not quite as it is supposed to):
    InputStreamReader isr;
    isr = new InputStreamReader(socket.getInputStream());
Whereas this
    InputStreamReader isr;
    isr = new InputStreamReader(socket.getInputStream(), "UTF-8");
does not work.
I get the UnsupportedEncodingException:
java.io.UnsupportedEncodingException: UTF-8 [Could not load class: sun.io.ByteToCharUTF-8]
     at sun/io/ByteToCharConverter.getConverter (ByteToCharConverter.java)
     at java/io/InputStreamReader.<init> (InputStreamReader.java)
How do I solve this?
The code that doesn't crash makes all the special characters (non-English letters, the copyright sign, ...) wrong!
Please help me!
Ragnvald

HURRAY!!!
UTF8 not UTF-8 !!!!!
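
For the record, a minimal sketch of the fix (assuming the socket from the post above and java.io imports; old Sun JDKs knew only the canonical name "UTF8", while modern JVMs accept both spellings):
    InputStreamReader isr;
    try {
        // Old Sun JDKs recognize only the canonical charset name "UTF8".
        isr = new InputStreamReader(socket.getInputStream(), "UTF8");
    } catch (UnsupportedEncodingException e) {
        // Fall back to the platform default encoding - the variant above
        // that runs but mangles the special characters.
        isr = new InputStreamReader(socket.getInputStream());
    }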

Similar Messages

  • Converting bytes to character array problem

    I am trying to convert a byte array into a character array; however, no matter which character set I choose, I always get the '?' character for some of the bytes, which means it was unable to convert the byte to a character.
    What I want is very simple, i.e. map the byte array to its extended-ASCII form, which is actually no change in the bit representation of the actual byte. But I cannot seem to do this in Java.
    Does anyone have any idea how to get around this problem?

    Thanks for responding.
    I have, however, arrived at a solution. A little inelegant, but it seems to do the job.
    I am converting each byte into a char manually with this algorithm:
        for (int i = 0; i < compressedDataLength; i++) {
            if (((int) output[i]) < 0) {
                // Map negative bytes into 128..255 (equivalent to output[i] & 0xFF).
                k = 256 + ((int) output[i]);
                outputChr[i] = ((char) k);
            } else {
                k = ((int) output[i]);
                outputChr[i] = ((char) k);
            }
            System.out.println(k);
        }
    where output is the byte array and outputChr is the character array
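
    The same byte-preserving mapping is also available without a manual loop, since ISO-8859-1 maps bytes 0x00-0xFF one-to-one onto the first 256 Unicode code points (a sketch, assuming the output and compressedDataLength variables from the post above):
        char[] outputChr = null;
        try {
            // ISO-8859-1 decodes each byte 1:1 to U+0000..U+00FF,
            // so no byte ever turns into '?'.
            String s = new String(output, 0, compressedDataLength, "ISO-8859-1");
            outputChr = s.toCharArray();
        } catch (UnsupportedEncodingException e) {
            // Cannot happen: ISO-8859-1 is required on every JVM.
        }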

  • From English Char ( Single Byte ) TO Double Byte ( Japanese Char )

    Hello EveryOne !!!!
    I need Help !!
    I am new to Java. I got an assignment where I need to check a String: if it contains any non-Japanese characters (a~z, A~Z, 0~9), they should be replaced with their double-byte (Japanese) equivalents...
    I am using Java 1.2 ..
    Please guide me ...
    thanks and regards
    Maruti Chavan

    hello ..
    As you all asked for the detailed requirement, here I am pasting C code where 'a' is passed as the input character; after processing it gives me the double-byte Japanese "a". Using this I am able to convert alphanumerics from single byte to double byte (Japanese).
    I want the same program in Java, so please guide me.
    #include <stdio.h>
    #include <string.h>
    int main( int argc, char *argv[] )
    {
        char c[2];
        char d[3];
        strcpy( c, "a" );      /* a is the input char */
        d[0] = 0xa3;           /* EUC-JP lead byte for the full-width alphanumeric row */
        d[1] = c[0] + 0x80;    /* shift the ASCII code into the trail-byte range */
        d[2] = '\0';           /* terminate the two-byte string */
        printf( ":%s:\n", c ); /* original single-byte char */
        printf( ":%s:\n", d ); /* converted double-byte char */
        return 0;
    }
    please ..
    thax and regards
    Maruti Chavan
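
    A sketch of the same conversion in Java rather than C (an assumption, not from the thread: instead of assembling EUC-JP bytes by hand like the C code, this shifts printable ASCII into the Unicode full-width forms block, where '!'..'~' (0x21-0x7E) sit at U+FF01-U+FF5E; encoding the result as EUC-JP would then yield the same double-byte sequences):
        static String toFullWidth(String s) {
            StringBuffer sb = new StringBuffer(); // StringBuffer, since the post targets Java 1.2
            for (int i = 0; i < s.length(); i++) {
                char ch = s.charAt(i);
                if (ch >= 0x21 && ch <= 0x7E) {
                    sb.append((char) (ch + 0xFEE0)); // e.g. 'a' (U+0061) -> full-width 'a' (U+FF41)
                } else if (ch == ' ') {
                    sb.append('\u3000'); // ideographic (full-width) space
                } else {
                    sb.append(ch); // leave everything else untouched
                }
            }
            return sb.toString();
        }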

  • Converting byte[] to Unicode, help needed.

    need help, folks.
    I need to convert a byte[] to Unicode, in byte[] form.
    Say I have already loaded a bunch of data:
        byte[] bytes = {........} // bunch array of bytes
    and I read the bytes in as a stream in the native form:
        ByteArrayInputStream stream = new ByteArrayInputStream(bytes);
        InputStreamReader isr = new InputStreamReader(stream, "GB18030");
    How do I get the bytes back in Unicode? I've been trying all kinds of methods but don't seem to get what I want. I'm a novice programmer; someone please guide me? Thx.

        String s;
        StringBuffer buffer = new StringBuffer();
        try {
            ByteArrayInputStream stream = new ByteArrayInputStream(bytes);
            InputStreamReader isr = new InputStreamReader(stream, "GB18030");
            Reader in = new BufferedReader(isr);
            int ch;
            while ((ch = in.read()) > -1) {
                buffer.append((char) ch);
            }
            in.close();
            s = buffer.toString();
            // "UnicodeLittle" = UTF-16 little-endian with a byte-order mark.
            bytes = s.getBytes("UnicodeLittle");
            out.write(bytes); // out: an OutputStream defined elsewhere in the post
        } catch (IOException e) {
            e.printStackTrace();
            //return null;
        }
    Ah, never mind, I found a better solution; I was a little confused before this. The code above works fine.
    Well, thx a lot.
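
    The whole round trip above can also be collapsed into one step, since decoding plus re-encoding is all the stream plumbing does (a sketch; both calls declare the checked UnsupportedEncodingException, and "UTF-16LE" would be the BOM-less variant of "UnicodeLittle"):
        // Decode the GB18030 bytes to a String, then re-encode as little-endian UTF-16.
        byte[] unicodeBytes = new String(bytes, "GB18030").getBytes("UnicodeLittle");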

  • Convert byte array to table of int

    Hello friends.
    I'm pretty new to PL/SQL.
    I have code that runs well on MSSQL, and I want to convert it to PL/SQL with no luck.
    The code converts a byte array to a table of int.
    The byte array is actually an array of int that was converted to bytes in C# for sending as a parameter.
    The TSQL code is:
    CREATE FUNCTION dbo.GetTableVarchar(@Data image)
    RETURNS @DataTable TABLE (RowID int primary key IDENTITY,
                              Value Varchar(8000))
    AS
    BEGIN
      --First test that the data is of type Varchar.
      IF(dbo.ValidateExpectedType(103, @Data)<>1) RETURN
      --Loop thru the list inserting each
      --item into the variable table.
      DECLARE @Ptr int, @Length int,
              @VarcharLength smallint, @Value Varchar(8000)
      SELECT @Length = DataLength(@Data), @Ptr = 2
      WHILE(@Ptr<@Length)
      BEGIN
        --The first 2 bytes of each item are the length of the
        --varchar; a negative number designates a null value.
        SET @VarcharLength = SUBSTRING(@Data, @Ptr, 2)
        SET @Ptr = @Ptr + 2
        IF(@VarcharLength<0)
          SET @Value = NULL
        ELSE
        BEGIN
          SET @Value = SUBSTRING(@Data, @Ptr, @VarcharLength)
          SET @Ptr = @Ptr + @VarcharLength
        END
        INSERT INTO @DataTable (Value) VALUES(@Value)
      END
      RETURN
    END
    It's taken from http://www.codeproject.com/KB/database/PassingArraysIntoSPs.aspx?display=Print.
    The C# code is:
    public byte[] Convert2Bytes(int[] list)
    {
        if (list == null || list.Length == 0)
            return new byte[0];
        byte[] data = new byte[list.Length * 4];
        int k = 0;
        for (int i = 0; i < list.Length; i++)
        {
            byte[] intBytes = BitConverter.GetBytes(list[i]);
            // Copy in reverse so each int is stored big-endian on little-endian platforms.
            for (int j = intBytes.Length - 1; j >= 0; j--)
                data[k++] = intBytes[j];
        }
        return data;
    }
    I tried to convert the TSQL code to PL/SQL and that's what I've got:
    FUNCTION GetTableInt(p_Data blob)
    RETURN t_array --t_array is table of int
    AS
      l_Ptr    number;
      l_Length number;
      l_ID     number;
      l_data   t_array;
    BEGIN
      l_Length := dbms_lob.getlength(p_Data);
      l_Ptr := 1;
      WHILE (l_Ptr <= l_Length)
      LOOP
        l_ID := to_number(DBMS_LOB.SUBSTR(p_Data, 4, l_Ptr));
        IF (l_ID < -2147483646) THEN
          IF (l_ID = -2147483648) THEN
            l_ID := NULL;
          ELSE
            l_Ptr := l_Ptr + 4;
            l_ID := to_number(DBMS_LOB.SUBSTR(p_Data, 4, l_Ptr));
          END IF;
        END IF;
        l_data(l_data.count) := l_ID;
        l_Ptr := l_Ptr + 4;
      END LOOP;
      RETURN l_data;
    END GetTableInt;
    This doesn't work.
    This is the error:
    Error report:
    ORA-06502: PL/SQL: numeric or value error: character to number conversion error
    06502. 00000 - "PL/SQL: numeric or value error%s"
    I think the problem is in this line:
    l_ID := to_number( DBMS_LOB.SUBSTR (p_Data, 4, l_ptr));
    but I don't know how to fix that.
    Thanks,
    MTs.

    I'd found the solution.
    I need to write:
    l_ID := utl_raw.cast_to_binary_integer( DBMS_LOB.SUBSTR(p_Data, 4,l_ptr));
    instead of:
    l_ID := to_number( DBMS_LOB.SUBSTR (p_Data, 4, l_ptr));
    The performance isn't good (it takes 2.8 sec to convert 5000 ints), but it works.
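
    For reference, the packing done on the C# side (each int as four big-endian bytes) has a direct Java equivalent, which also makes the unpacking that the PL/SQL attempts easy to test (a sketch using java.nio.ByteBuffer, whose byte order defaults to big-endian):
        // Pack: int[] -> byte[], 4 big-endian bytes per int (what Convert2Bytes does).
        static byte[] toBytes(int[] list) {
            java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(list.length * 4);
            for (int i = 0; i < list.length; i++) buf.putInt(list[i]);
            return buf.array();
        }

        // Unpack: byte[] -> int[] (what GetTableInt is trying to do server-side).
        static int[] toInts(byte[] data) {
            java.nio.ByteBuffer buf = java.nio.ByteBuffer.wrap(data);
            int[] out = new int[data.length / 4];
            for (int i = 0; i < out.length; i++) out[i] = buf.getInt();
            return out;
        }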

  • How to find out if a column is defined as VARCHAR2 in bytes or char?

    Hello,
    I'd like to know if it is possible to find out whether a table (or view) column is defined as VARCHAR2 in bytes or in chars on Oracle 10g.
    When I do a DESC, it shows only VARCHAR2 with its length, but not whether that is bytes or chars. How can I know for sure?
    Thanks,

    SQL> create table t (
           id    varchar2 (10 char),
           id2   varchar2 (10 byte)
         );
    Table created.
    SQL> select column_name, data_type, char_used
           from cols
          where table_name = 'T';
    COLUMN_NAME   DATA_TYPE   CHAR_USED
    ID            VARCHAR2    C
    ID2           VARCHAR2    B
    2 rows selected.

  • Converting bytes to pixels???

    Hi Everyone,
    My Jdev version is 11.1.2.3.0.
    I have developed an ADF application which is working fine.
    Now I have added a table to the page which has 3 columns. The width of each column on the ADF page should be equal to the width of the corresponding column in the database.
    How can I convert bytes to pixels on my ADF page?
    The three columns' widths in the database are 200 bytes, 250 bytes, and 150 bytes. Now I need to specify the widths in pixels in ADF.
    How can I do that?
    Any ways to do that?
    Thanks.
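
    There is no exact conversion: bytes measure storage, pixels measure rendering. A rough heuristic (a sketch, not from the thread; it assumes a single-byte character set, so bytes roughly equal characters, and an assumed average glyph width - measure your real font rather than trusting the constant):
        // Hypothetical helper: approximate a pixel width from a column's byte length.
        static int approxPixelWidth(int columnLengthInBytes) {
            final int AVG_GLYPH_WIDTH_PX = 7; // assumption for a typical 11-12px font
            return columnLengthInBytes * AVG_GLYPH_WIDTH_PX;
        }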

    Yeah I meant e[i] - sorry about that.
    When you say "diagram the to / from arrays" do you mean you want to see their definition & initialisation? If so, please see below:
    numBands, length & width are the three fields read in from the image header.
    byte[] imageDataByte = new byte[numBands * length * width];
    ra.read(imageDataByte, 0, numBands * length * width); // ra: presumably a RandomAccessFile opened earlier
    int[] iD = new int[numBands * length * width];
    int count = 0;
    int[] imageData = new int[numBands * length * width];
    for (int x = 0; x < length; x++) {
        for (int y = 0; y < numBands; y++) {
            for (int z = 0; z < width; z++) {
                imageData[(length*z+x) + (length*width*y)] += imageDataByte[(length*z+x) + (length*width*y)] << ((length*z+x) + (length*width*y)*512);
                count = count + 1;
                System.out.println(count + ": " + imageData[(length*z+x) + (length*width*y)]);
            }
        }
    }
    Any help would be greatly appreciated.
    Many thanks,
    CG.

  • Converting byte[] to Class without using defineClass() in ClassLoader

    so as to de-serialize objects that do not have their class definitions loaded yet, i am over-riding resolveClass() in ObjectInputStream .
    within resolveClass() i invoke findClass() on a network-based ClassLoader i created .
    within findClass() , i invoke
    Class remoteClass = defineClass(className, classImage, 0, classImage.length);
    and that is where i transform a byte[] into a Class , and then finally return a value for the resolveClass() method.
    this seems like a lengthy process. i mean, within resolveClass() i can grab the correct byte[] over the network.
    then, if i could only convert this byte[] to a Class within resolveClass() , i would never need to extended ClassLoader and over-ride findClass() .
    i assume that the only way to convert a byte[] to a Class is using defineClass() which is hidden deep within ClassLoader ? there is something going on under the hood i am sure. otherwise, why not a method to directly convert byte[] to a Class ? the byte[] representation of a Class always starts with hex CAFEBABE, and then i'm sure there is a standard way to structure the byte[] .
    my core issue is:
    i am sending objects over an ObjectInputStream created from a Socket .
    at the minimum, i can see that resolveClass() within ObjectInputStream must be invoked at least once .
    but then after that, since the relevant classes for de-serialization have been gotten over the network, i don't want to cascade all the way down to where i must invoke:
    Class remoteClass = defineClass(className, classImage, 0, classImage.length);
    again. so, right now, within resolveClass() i am using a Map<String, Class> to create the following class cache:
    cache.put(objectStreamClass.getName(), remoteClass);
    once loaded, a class should stay loaded (even if it's loaded at run-time), i think? but this is not working. each de-serialization cascades down to defineClass() .
    so, i want to short-circuit the need to get the class within the resolveClass() method (ie. invoke defineClass() only once),
    but using a Map<String, Class> cache looks really stupid and certainly a hack.
    that is the best i can do to explain my issue. if you read this far, thanks.
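
    For what it's worth, the JVM already maintains the cache being hand-rolled here: ClassLoader.loadClass() checks findLoadedClass() before delegating to findClass(), so defineClass() runs at most once per class name as long as the same loader instance is reused. A sketch under that assumption (java.io imports assumed; fetchClassBytes() is a hypothetical stand-in for the network fetch described above):
        class NetworkClassLoader extends ClassLoader {
            protected Class findClass(String name) throws ClassNotFoundException {
                byte[] classImage = fetchClassBytes(name); // hypothetical network fetch
                return defineClass(name, classImage, 0, classImage.length);
            }
            private byte[] fetchClassBytes(String name) throws ClassNotFoundException {
                // ... grab the byte[] over the socket, as in the original post ...
                throw new ClassNotFoundException(name);
            }
        }

        class CachingObjectInputStream extends ObjectInputStream {
            private final NetworkClassLoader loader;
            CachingObjectInputStream(InputStream in, NetworkClassLoader loader) throws IOException {
                super(in);
                this.loader = loader;
            }
            protected Class resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
                // Class.forName with an explicit loader hits the loader's own cache; no Map needed.
                return Class.forName(desc.getName(), false, loader);
            }
        }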

    ok. stupid question:
    "for me to use URLClassLoader , i am going to need to write a bare-bones HTTP server to handle the class requests, right?"
    Wrong. You have to deploy one, but what's wrong with Apache for example? It's free, for a start.
    "and, i should easily be able to do this using the com.sun.net.httpserver library, right?"
    Why would you bother when free working HTTP servers are already available?

  • Oracle Best practices for changing Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if that NAME column was defined using BYTE, how would you know what length to use for the column? Fifty BYTES will seldom be long enough, and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke, why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
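
    The character-vs-byte arithmetic described above is easy to see from the client side (a sketch, assuming an AL32UTF8 database character set, where one character can take up to four bytes; getBytes("UTF-8") declares a checked UnsupportedEncodingException that real code must handle):
        String name = "日本語のデータ";                // 7 characters
        int chars = name.length();                     // 7  -> fits in VARCHAR2(7 CHAR)
        int bytes = name.getBytes("UTF-8").length;     // 21 -> would need VARCHAR2(21 BYTE)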

  • Converting a 6 char string to date

    Please help..
    How can I convert a 6-char date (ddmmyy) to a date format
    that can be passed as input to a function module?

    Hi,
    Try FM,
    CONVERT_DATE_TO_INTERNAL
    Run this sample code,
    DATA:
      my_date(6) TYPE c VALUE '300708',
      w_date     TYPE sy-datum.
    CALL FUNCTION 'CONVERT_DATE_TO_INTERNAL'
      EXPORTING
        date_external                  = my_date
    *   ACCEPT_INITIAL_DATE            = ACCEPT_INITIAL_DATE
    IMPORTING
       date_internal                  = w_date
    EXCEPTIONS
       date_external_is_invalid       = 1.
    WRITE: w_date.
    Regards
    Adil

  • NLS_LENGTH_SEMANTICS from BYTE TO CHAR

    Hi,
    For supporting multibyte characters, we need to change the NLS_LENGTH_SEMANTICS parameter from BYTE to CHAR. But this parameter setting takes effect only for new database tables created thereafter. To change the storage characteristics of existing tables, we explicitly executed ALTER statements for table columns having datatype VARCHAR2 or CHAR.
    Problem:
    ======
    Since the number of database tables in PRODUCTION is very high, with approx. 600 million records spread over 400 database tables, we cannot afford the time that will be spent altering these tables in PRODUCTION.
    We ran the alter script in the System Test environment, and altering 150 tables holding 200 million records took almost 16-20 hrs.
    APPROACHES WE HAVE IN MIND
    ==========================
    1. Alter all the table columns (we tried this and it takes too much time)
    2. Export/import with NLS_LENGTH_SEMANTICS set to CHAR (we discussed this approach with our DBA and found that it will also take too much time, and there is a RISK of data inconsistency)
    3. Drop the indexes on the table, run the alter script changing the storage type from BYTE to CHAR, and rebuild the indexes (this also takes too much time).
    All of the above approaches are more costly in time than we can afford.
    If anyone has a better solution, please suggest it.
    thanks in advance
    Syed

    Hi
    We are also facing a similar problem.
    We ran the ALTER TABLE scripts and are now compiling the objects,
    and that is taking a lot of time.
    We have around 4000 invalid objects; parallel recompilation brought that down to 2000, but the remaining 2000, which are mostly packages, are giving us a hard time.
    If anyone has faced/found a similar issue/solution, please post, or mail directly to me.
    Sunil Choudhary

  • Change NLS_LENGTH_SEMANTICS from BYTE to CHAR on Oracle 9i2 problem!

    Hi,
    I have created a new database on Oracle 9i Release 2; I did not find the correct pfile parameter for the NLS_LENGTH_SEMANTICS setting.
    So I created a standard UTF8 database, and now I am trying to change the default NLS_LENGTH_SEMANTICS=BYTE to CHAR. When I execute the command "ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH" in SQL*Plus,
    the system tells me that the command executed successfully.
    But when I look at the NLS_DATABASE_PARAMETERS table I do not see any change for the NLS_LENGTH_SEMANTICS parameter.
    I have also restarted the instance but still everything is the same as it was before.
    Do you know what I am doing wrong?
    Regards
    RobH

    Hi,
    Yeah, you are right: NLS_LENGTH_SEMANTICS in nls_session_parameters for the app user is set to CHAR.
    Does this mean that NLS_DATABASE_PARAMETERS shows the view from the SYS or SYSTEM user?
    Thanks a lot
    Regards
    RobH

  • Converting byte from dec to hex

    Hi All,
    I'm having a problem converting a byte from decimal to hex - I need the following result:
    if 127 (dec) is entered, the output should be 7f (hex).
    The following method fails, of course because of NumberFormatException.
        private byte toHexByte(byte signedByte)
        {
            int unsignedByte = signedByte;
            unsignedByte &= 0xff;
            String hexString = Integer.toHexString(unsignedByte);
            // Fails here: BigInteger(String) parses base 10, so hex digits a-f throw NumberFormatException.
            BigInteger bigInteger = new BigInteger(hexString);
            //byte hexByte = Byte.parseByte(hexString);
            return bigInteger.byteValue();
        }

    You get the NumberFormatException because a lot of hex digits cannot be parsed as decimal just like that (i.e. 'f' is not a digit in decimal). Here's some code that I used for a PDP-11 assembler IDE. It is for 16-bit two's complement in binary/octal/decimal/hex, but it might be useful as a reference example:
        public static String getBase(short i, int base) {
            String res = (i >= 0) ? Integer.toString((int) i, base)
                    : Integer.toString((int) 65536 + i, base) + " (" + Integer.toString((int) i, base) + ")";
            StringBuffer pad = new StringBuffer();
            for (int x = 0; x < 16 - res.length(); x++) {
                pad.append("0");
            }
            res = pad.toString() + res;
            return res;
        }
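
    For the concrete question (127 -> "7f"), no BigInteger detour is needed; the masked value can be rendered directly, and the natural result type is String, since 0x7f and 127 are the same byte value displayed differently (a minimal sketch):
        static String toHex(byte signedByte) {
            return Integer.toHexString(signedByte & 0xff); // 127 -> "7f", -1 -> "ff"
        }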

  • Byte or CHAR? - as Unit

    Hi all,
    DB - 9.2
    Which is the better option for the unit, BYTE or CHAR, when using the VARCHAR2 datatype?
    If the unit is BYTE, is it not cumbersome to calculate the max number of characters that can be stored in the field?

    Hi!
    I'd propose CHAR, because with BYTE the number of characters you can store in your field depends on the character set. This means -> calculate :)
    Best regards,
    Daniel

  • Function module to convert byte stream to TIFF format

    Hi All,
    Is there any function module to convert a byte stream to TIFF format?
    Thanks&regds,
    Srinivas.
    Edited by: srinivas balla on Dec 24, 2007 6:28 AM

    try this
    SCP_1TO1_BYTE_CONVERTER_MAP
    Award Points if useful
    bhupal
