Char encoding problem with Hibernate EM + TimesTen 11.2.1

Hi,
I have a very simple Java application on TimesTen 11.2.1 which inserts a VARCHAR2 value into a table and then reads it back into a String. The insert works (I have checked it with the ttIsql command line), but I cannot read the inserted value back correctly: what is returned is unreadable characters.
If I run my application against TimesTen 7 (same code and same configuration, just with the ttjdbc.jar for TimesTen 7.x), the inserted value is returned correctly.
The DatabaseCharacterSet of TimesTen 11.2.1 is WE8MSWIN1252 and my Hibernate configuration looks like this:
<property name="hibernate.connection.driver_class" value="com.timesten.jdbc.TimesTenDriver"/>
<property name="hibernate.dialect" value="org.hibernate.dialect.TimesTen7Dialect"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.connection.charSet" value="UTF-8"/>
<property name="hibernate.connection.useUnicode" value="true"/>
I use JDK 1.5 and Hibernate EntityManager 3.4.0.GA; ttjdbc5.jar and orai18n.jar are both on the classpath.
Does anyone have experience with Hibernate EM + TimesTen 11.2.1, or am I forgetting something?
Any help or advice would be appreciated!
(If someone is interested in testing, I can paste my whole source code here.)
Thanks in advance
stratocomit

Hi Simon,
The setting for TimesTen 7 is:
connect "DSN=strato;pwd=secret;oraclepwd=secret";
Connection successful: DSN=strato;UID=strato2;DataStore=/etc/TimesTen/tt70/data/strato;DatabaseCharacterSet=WE8ISO8859P1;ConnectionCharacterSet=US7ASCII;DRIVER=/etc/TimesTen/tt70/lib/libtten.so;OracleId=stratodata;Authenticate=0;PermSize=20480;TempSize=1024;TypeMode=0;
(Default setting AutoCommit=1)
and for TimesTen 11.2.1 it is:
connect "DSN=ORADB33;uid=STRATO2;pwd=secret;oraclepwd=secret";
Connection successful: DSN=ORADB33;UID=STRATO2;DataStore=/etc/TimesTen/tt1121/info/DataStore/ORADB33;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=WE8MSWIN1252;LogFileSize=512;DRIVER=/etc/TimesTen/tt1121/lib/libtten.so;OracleId=ORADB33;LogDir=/etc/TimesTen/tt1121/info/DataStore/logs;PermSize=10240;TempSize=128;PassThrough=2;TypeMode=0;OracleNetServiceName=ORADB33;
(Default setting AutoCommit=1)
What I inserted is plain ASCII characters, and ASCIISTR() does not help. I also tried running the JDBC example shipped with TimesTen 11.2.1 (e.g. TTJdbcExamples.java) and I still get the unreadable characters. :(
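To narrow this down further, here is a minimal JDBC-only sketch that takes Hibernate out of the picture entirely. The direct-mode URL format, the charset_check table and the credentials are assumptions based on the ttIsql session above, so adjust them for your environment:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimesTenCharsetCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("com.timesten.jdbc.TimesTenDriver");
        // Direct-mode connection to the same DSN used in the ttIsql session.
        Connection con = DriverManager.getConnection(
                "jdbc:timesten:direct:dsn=ORADB33", "STRATO2", "secret");
        Statement st = con.createStatement();
        st.executeUpdate("CREATE TABLE charset_check (val VARCHAR2(100))");
        PreparedStatement ins = con.prepareStatement("INSERT INTO charset_check VALUES (?)");
        ins.setString(1, "plain ASCII test value");
        ins.executeUpdate();
        ResultSet rs = st.executeQuery("SELECT val FROM charset_check");
        while (rs.next()) {
            String v = rs.getString(1);
            // Print the code points so any garbling is visible even on a console
            // that cannot render the characters themselves.
            for (int i = 0; i < v.length(); i++) {
                System.out.printf("U+%04X ", (int) v.charAt(i));
            }
            System.out.println("-> " + v);
        }
        rs.close();
        st.close();
        con.close();
    }
}

If this still prints garbage, the problem sits in the driver or connection attributes rather than in Hibernate; if it is clean, the hibernate.connection.charSet property above is the first suspect.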

Similar Messages

  • Encoding problem with XSL

    Hi,
    I have problems when printing the result of processing XML with an XSL that contains locale specific chars.
    Here is a sample:
    XML :
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <ListePatients>
    <Patient>
    <Nom>Zeublouse</Nom>
    <NomMarital/>
    <Prinom>Agathe</Prinom>
    </Patient>
    <Patient>
    <Nom>Stick</Nom>
    <NomMarital>Laiboul</NomMarital>
    <Prinom>Ella</Prinom>
    </Patient>
    <Patient>
    <Nom>`ihnotvy</Nom>
    <NomMarital/>
    <Prinom>Jacques</Prinom>
    </Patient>
    </ListePatients>
    XSL :
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="html"/>
    <xsl:template match="*|/"><xsl:apply-templates/></xsl:template>
    <xsl:template match="text()|@*"><xsl:value-of select="."/></xsl:template>
    <xsl:template match="/">
    <HTML>
    <HEAD>
    <META http-equiv='Content-Type' content='text/html; charset=iso-8859-1'/>
    <TITLE>Liste de patients</TITLE>
    </HEAD>
    <BODY>
    <xsl:apply-templates select='ListePatients'/>
    </BODY>
    </HTML>
    </xsl:template>
    <xsl:template match='ListePatients'>
    <TABLE>
    <xsl:for-each select='Patient'>
    <xsl:sort select='Nom' order='ascending' data-type='text'/>
    <TR TITLE='`ihnotvy'>
    <TD> <xsl:value-of select='Nom'/> </TD>
    <TD> <xsl:value-of select='NomMarital'/> </TD>
    <TD> <xsl:value-of select='Prinom'/> </TD>
    </TR>
    </xsl:for-each>
    </TABLE>
    </xsl:template>
    </xsl:stylesheet>
    Test program (from Oracle sample) :
import java.net.URL;
import java.io.*;
import oracle.xml.parser.v2.DOMParser;
import oracle.xml.parser.v2.XMLDocument;
import oracle.xml.parser.v2.XMLDocumentFragment;
import oracle.xml.parser.v2.XSLStylesheet;
import oracle.xml.parser.v2.XSLProcessor;

public class XSLSampleOTN {
    /**
     * Transforms an XML document using a stylesheet.
     * @param args input XSL and XML documents
     */
    public static void main(String[] args) throws Exception {
        DOMParser parser;
        XMLDocument xmldoc, xsldoc;
        URL xslURL;
        URL xmlURL;
        try {
            if (args.length != 2) {
                // Must pass in the names of the XSL and XML files
                System.err.println("Usage: java XSLSampleOTN xslfile xmlfile");
                System.exit(1);
            }
            // Parse the XSL and XML documents
            parser = new DOMParser();
            parser.setPreserveWhitespace(true);
            // Parse the input XSL file
            xslURL = DemoUtil.createURL(args[0]);
            parser.parse(xslURL);
            xsldoc = parser.getDocument();
            // Parse the input XML file
            xmlURL = DemoUtil.createURL(args[1]);
            parser.parse(xmlURL);
            xmldoc = parser.getDocument();
            // Instantiate a stylesheet
            XSLStylesheet xslSS = new XSLStylesheet(xsldoc, xslURL);
            XSLProcessor processor = new XSLProcessor();
            // Display any warnings that may occur
            processor.showWarnings(true);
            processor.setErrorStream(System.err);
            // Process the XSL
            XMLDocumentFragment result = processor.processXSL(xslSS, xmldoc);
            // Print the transformed document
            result.print(System.out);
            // Another way to print; it does not produce the same output!
            processor.processXSL(xslSS, xmldoc, System.out);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
When printing the transformed document with DocumentFragment.print() it works fine, but when using processXSL(xslSS, xmldoc, System.out) it does not work for locale-specific chars and a second <META> tag appears. Why?
    with DocumentFragment.print(), it's Ok :
    <HTML>
    <HEAD>
    <META http-equiv="Content-Type"
    content="text/html; charset=iso-8859-1"/>
    <TITLE>Liste de patients</TITLE>
    </HEAD>
    <BODY>
    <TABLE>
    <TR TITLE="`ihnotvy">
    <TD>`ihnotvy</TD>
    <TD/>
    <TD>Jacques</TD>
    </TR >
    <TR TITLE="`ihnotvy">
    <TD>Stick</TD>
    <TD>Laiboul</TD>
    <TD>Ella</TD>
    </TR>
    <TR TITLE="`ihnotvy">
    <TD>Zeublouse
    </TD>
    <TD/>
    <TD>Agathe</TD>
    </TR>
    </TABLE>
    </BODY>
    </HTML>
    With processXSL(xslSS, xmldoc, System.out), it's not Ok :
    <HTML>
    <HEAD>
    <META http-equiv="Content-Type" content="text/html">
    <META http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
    <TITLE>Liste de patients</TITLE>
    </HEAD>
    <BODY>
    <TABLE>
    <TR TITLE="C C)C(C.C/C4C6C9">
    <TD>C C)C(C.C/C4C6C9</TD>
    <TD></TD>
    <TD>Jacques</TD>
    </TR>
    <TR TITLE="C C)C(C.C/C4C6C9">
    <TD>Stick</TD>
    <TD>Laiboul</TD>
    <TD>Ella</TD>
    </TR>
    <TR TITLE="C C)C(C.C/C4C6C9">
    <TD>Zeublouse</TD>
    <TD></TD>
    <TD>Agathe</TD>
    </TR>
    </TABLE>
    </BODY>
    </HTML>
    TIA
    Didier

    Two other problems with XSL and print:
The first one:
XSL:
<SCRIPT langage="Javascript" type="text/javascript" src="scripts/erreur.js"></SCRIPT>
DocumentFragment.print() produces:
<SCRIPT langage="Javascript" type="text/javascript" src="scripts/erreur.js"/>
=> IE5.5 does not load the file! It requires the <SCRIPT ...></SCRIPT> syntax in order to load it.
The second one:
XSL:
<TD><IMG src="images/menuleft.gif"/></TD>
DocumentFragment.print() produces:
<TD>
<IMG src="images/menuleft.gif">
</TD>
processXSL(xslSS, xmldoc, System.out) produces:
<TD><IMG src="images/menuleft.gif">
</TD>
Why a carriage return? It causes a presentation failure when you want to specify the size of the cell!
    TIA
    Didier
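Since the two print paths disagree, one way to take the XDK serializer out of the equation is to run the same transformation through the standard JAXP API and pin the output encoding explicitly. This is only a cross-check sketch, not the Oracle-recommended fix; it assumes the stylesheet, input file and output file are passed as arguments:

import java.io.FileOutputStream;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class JaxpTransformCheck {
    // args[0] = stylesheet, args[1] = input XML, args[2] = output HTML file
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(args[0]));
        // Force the serializer to emit ISO-8859-1 so the bytes match the
        // charset declared in the <META> tag of the stylesheet.
        t.setOutputProperty(OutputKeys.METHOD, "html");
        t.setOutputProperty(OutputKeys.ENCODING, "ISO-8859-1");
        t.transform(new StreamSource(args[1]),
                    new StreamResult(new FileOutputStream(args[2])));
    }
}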

  • Encoding Problems with WRT300N V2

    Hello guys,
I have the following problem:
I have a WRT300N router and it seems to have problems with WPA2 encryption. I have two different devices (an SMC Ethernet bridge and a PS3) trying to connect to my Linksys wireless router. No chance in WPA2 mode, but when I try WEP mode, both devices can connect to it.
Are there known encryption problems?
Any idea what I can do?

It is able to connect, but my router always drops the connection. WEP is no problem, but then what about the low security, and Wireless-N not being enabled??

  • Encoding problem with Application adapter for OEBS

    Hello All!
We are going to pass a value via the application adapter from EBS Forms to a Content Server page. Link to documentation: Configuring the Managed Attachments Solution - 11g Release 1 (11.1.1)
We have done all the settings, and this function is working now.
It works fine with English letters and numbers, but when we try to use Cyrillic we have an encoding problem on the Content Server page.
Data in the OEBS table uses 'CL8ISO8859P5', but the service XML for the adapter always has a header with charset=utf-8.
The systems are:
    OEBS 12.1.3
    Webcenter Content 11.1.1.6
    Adapter 11.1.1.6 + patch 16463891
    Could you help us?

    Hi Denis ,
    Try this solution :
    1) Log on to the APPS EBS schema and re-create the AXF_SOAPCall function using the following SQL:
create or replace function AXF_SOAPCall (url varchar2, soapmsg varchar2, secure varchar2, walletid varchar2, walletpass varchar2) return varchar2 as
  http_req      utl_http.req;
  http_resp     utl_http.resp;
  response_env  varchar2(32767);
  v_newcharset  VARCHAR2(40 BYTE) := 'UTF8';
  v_Dbcharset   VARCHAR2(40 BYTE);
  v_Raw1        RAW(32767);
  v_Stmt1       VARCHAR2(6000 BYTE);
begin
  v_newcharset := 'AMERICAN_AMERICA.' || v_newcharset;
  v_Dbcharset  := 'AMERICAN_AMERICA.' || utl_i18n.map_charset(fnd_profile.value('ICX_CLIENT_IANA_ENCODING'), 0, 1);
  v_Raw1 := UTL_RAW.CAST_TO_RAW(soapmsg);
  v_Raw1 := UTL_RAW.CONVERT(v_Raw1, v_newcharset, v_Dbcharset);
  if (secure = 'true' or secure = 'TRUE') then
    utl_http.set_wallet(walletid, walletpass);
  end if;
  http_req := utl_http.begin_request(url, 'POST', utl_http.HTTP_VERSION_1_1);
  utl_http.set_header(http_req, 'Content-Type', 'text/xml; charset=utf-8');
  utl_http.set_header(http_req, 'Content-Length', utl_raw.length(v_Raw1));
  utl_http.write_raw(http_req, v_Raw1);
  http_resp := utl_http.get_response(http_req);
  utl_http.read_text(http_resp, response_env);
  dbms_output.put_line('Response: ');
  dbms_output.put_line(response_env);
  utl_http.end_response(http_resp);
  return response_env;
end;
2) Log on to EBS and confirm that you can now open the Managed Attachments window for all records and forms, regardless of the use (and length) of multibyte string values.
This was caused by multibyte characters used with non-UTF-8 language strings.
I presume you are seeing that when you invoke the "Managed Attachments" option through the ZOOM button from an E-Business Suite (EBS) form, nothing happens. If you have a support ID, check note 1409703.1 on the MyOracleSupport portal.
    Hope this helps.
    Thanks,
    Srinath

  • Encoding problem with convert and CLOB involving UTF8 and EBCDIC

    Hi,
    I have a task that requires me to call a procedure with a CLOB argument containing a string encoded in EBCDIC. This did not go well so I started narrowing down the problem. Here is some SQL to illustrate it:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> select value from v$nls_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    SQL> select convert(convert('abc', 'WE8EBCDIC500'), 'AL32UTF8', 'WE8EBCDIC500')
    output from dual;
    OUT
    abc
    SQL> select convert(to_char(to_clob(convert('abc', 'WE8EBCDIC500'))), 'AL32UTF8', 'WE8EBCDIC500') output from dual;
    OUTPUT
    ╒╫¿╒╫¿╒╫¿
    So converting to and from EBCDIC works fine when using varchar2, but (if I am reading this right) fails when involving CLOB conversion.
    My question then is: Can anyone demonstrate how to put correct EBCDIC into a CLOB and maybe even explain why the examples do what they do.

In order to successfully work with XML DB it is recommended that you use 9.2.0.4 or above; yours seems to be a lower version.
Now, regarding the problem: if the data you want to send to the attributes is not greater than 32767 bytes, then you can use the PL/SQL VARCHAR2 datatype to hold the data rather than a CLOB and avoid this problem.
Here is the sample. Use a function with the PL/SQL below to return the desired output.
    SQL> declare
      2   l_clob     CLOB := 'Hello';
      3   l_output   CLOB;
      4  begin
      5    select  xmlelement("test", xmlattributes(l_clob AS "a")).getclobval()
      6      into l_output from dual;
      7  end;
      8  /
      select  xmlelement("test", xmlattributes(l_clob AS "a")).getclobval()
    ERROR at line 5:
    ORA-06550: line 5, column 44:
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got CLOB
    ORA-06550: line 5, column 3:
    PL/SQL: SQL Statement ignored
    SQL> declare
      2   l_vchar     varchar2(32767) := 'Hello';
      3   l_output   CLOB;
      4  begin
      5    select  xmlelement("test", xmlattributes(l_vchar AS "a")).getclobval()
      6      into l_output from dual;
      7    dbms_output.put_line(l_output);
      8  end;
      9  /
    <test a="Hello"></test>
    PL/SQL procedure successfully completed.
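Coming back to the original EBCDIC question: a CLOB always stores text in the database character set, so EBCDIC byte sequences do not survive the round trip. If the real goal is to hand EBCDIC-encoded data around, one option is to keep it as bytes (RAW/BLOB) end to end. Below is a minimal JDBC sketch of that idea; the connection string and the ebcdic_payloads table are hypothetical, and it assumes the IBM500 charset is available in your JRE:

import java.nio.charset.Charset;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class EbcdicBlobSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
        // EBCDIC is a byte encoding, so keep it as bytes instead of CLOB text.
        byte[] ebcdic = "abc".getBytes(Charset.forName("IBM500"));
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO ebcdic_payloads (id, payload) VALUES (?, ?)");
        ps.setInt(1, 1);
        ps.setBytes(2, ebcdic);   // lands unchanged in a RAW/BLOB column
        ps.executeUpdate();
        ps.close();
        con.close();
    }
}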

  • JSF encoding problem with Russian

    Hi,
I am new to JSF, trying to build a prototype i18n JSF app with MyFaces 1.1.5.
I mostly care about IE6/7 and do not officially support other browsers.
I do have resource bundles; the app is localized and internationalized.
So the user is supposed to work in either English or Russian.
It all works except for a few issues:
1) When the user enters something in Russian in an inputText field on the first page,
I try to pass it to the next page via a managed bean.
But instead of showing it in Russian, it shows some garbled text in outputText or outputLabel: Îëåã
FF3 & Chrome show it as: &#1054;&#1083;&#1077;&#1075; [&#xxxx, where xxxx is a 4-digit number]
It does pick up labels from the correct bundle on the next page, so the locale is changed correctly.
Actually, when I switch locale on the same first page, I get the same problem (after it is refreshed).
It seems to work OK with Spanish.
    Is that incorrect Cyrillic encoding, JSF encoding or broken Unicode or locale?
    How can we fix that ?
    Less important issues:
2) When we switch locale (with correct values in the bundles), the screen does not change currencyCode, currencySymbol, or TimeZone;
it always shows the first one it picked, although date/time and number formatting change correctly;
3) In IE6 the "alt" attribute produces garbled characters in Russian (black vertical squares), which looks like an IE6 bug; FF3 & Chrome work fine.
    Please help !
    TIA,
    Oleg.

    For 1, you are not passing the characters in the correct encoding from one page to the next when you pass the parameter (most likely).
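If the request encoding really is the culprit, a common fix with MyFaces 1.1.x is a servlet filter that forces UTF-8 before JSF reads the posted parameters. A minimal sketch (the filter still has to be mapped in web.xml ahead of the FacesServlet, and your pages must be served as UTF-8 as well):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class Utf8EncodingFilter implements Filter {
    public void init(FilterConfig config) {
    }

    // Force UTF-8 on every request before any parameter is read.
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (req.getCharacterEncoding() == null) {
            req.setCharacterEncoding("UTF-8");
        }
        chain.doFilter(req, res);
    }

    public void destroy() {
    }
}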

  • Encoding problem with xml

    Hello,
I'm trying to save to disk an XML document encoded in UTF-8 and compressed with the "gzip" algorithm. For various reasons I have to use a BufferedWriter, so I must convert the byte[] to a String.
Afterwards, I read this document, decompress it, and save it in a String variable.
If I do all this using the UTF-8 encoding, the decompression throws an exception: Not in GZIP format.
But if I use ISO-8859-1, everything works OK.
I don't understand why ISO works and UTF-8 does not (when the web service specification says that the XML documents are sent in UTF-8).
The code is the following (the static "myCharset" is the key: when I set ISO it works, and setting UTF-8 does not work):
import java.io.*;
import com.vpfw.proxy.services.compress.CompressionService;

public class testCompress {

    public static String xmlOutput = "<?xml version=\"1.0\" encoding=\"utf-8\"?><soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"><soap:Body><consultaCiudadesPorPaisResponse xmlns=\"http://tempuri.org/\"><consultaCiudadesPorPaisResult><xml funcion=\"ConsultaCiudadesPorPais\" xmlns=\"\"><ROK>TRUE</ROK></xml></consultaCiudadesPorPaisResult></consultaCiudadesPorPaisResponse></soap:Body></soap:Envelope>";
    public static String myCharset = "iso-8859-1";

    // Writes the compressed document to disk.
    public static void writeDocument() throws Exception {
        BufferedWriter writer = null;
        try {
            // Compress "xmlOutput", converting the string to bytes as "utf-8" (as the specification says).
            // Afterwards, convert the resulting byte[] to a String using "myCharset".
            String compressedFile = new String(CompressionService.compress(xmlOutput, "utf-8", "gzip", 8), myCharset);
            // And write it to disk using "myCharset" as the encoding of the OutputStreamWriter.
            writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("c:/cache"), myCharset), 8192);
            writer.write(compressedFile);
        } catch (Exception e) {
            throw e;
        } finally {
            if (writer != null) {
                try { writer.close(); writer = null; } catch (IOException ioe) {}
            }
        }
    }

    // Reads the compressed document from disk.
    public static byte[] readDocument() throws Exception {
        BufferedReader reader = null;
        StringBuilder sb = new StringBuilder();
        char[] buffer = new char[8192];
        try {
            // Open the file using "myCharset" for reading chars.
            reader = new BufferedReader(new InputStreamReader(new FileInputStream("c:/cache"), myCharset));
            int numchars = 0;
            while ((numchars = reader.read(buffer, 0, 8192)) >= 0) sb.append(buffer, 0, numchars);
            // And return the result as a byte[] encoded with "myCharset".
            return sb.toString().getBytes(myCharset);
        } catch (Exception e) {
            throw e;
        } finally {
            if (reader != null) {
                try { reader.close(); } catch (IOException ioe) {}
            }
        }
    }

    public static void main(String[] args) throws Exception {
        writeDocument();
        byte[] file = readDocument();
        // DECOMPRESS FAILS IF myCharset = "utf-8", and works if myCharset = "iso-8859-1"
        System.out.println(CompressionService.decompress(file, "utf-8", "gzip", 8));
    }
}

    Okay, here's what's happening. You created a byte[] by encoding some text as UTF-8, then you ran that byte[] through a gzip deflater. The result is binary data that can only be understood by a gzip inflater; to any other software it just looks like garbage. Now you're taking a randomly-chosen encoding and pretending the binary data is really text that was encoded with that encoding.
    Most encodings have limits on what kinds of input they can accept. For example, US-ASCII only uses the low-order seven bits of each byte; any byte with a value larger than 127 is invalid. When the encoder encounters such a byte, it inserts the standard replacement character, U+FFFD, in that spot. When you try to decode the string again as US-ASCII, the replacement character is what you see in that position; the original byte value is lost. In UTF-8, the bytes have to conform to [certain patterns|http://en.wikipedia.org/wiki/UTF-8#Description]; for example, any byte with a value greater than 127 has to be part of a valid two-, three- or four-byte sequence.
    ISO-8859-1 is different. It's a single-byte encoding like ASCII, but it uses all eight bits of every byte. Furthermore, every possible byte value (0..255) maps to a character, so you can throw any random byte at it and tell it the byte represents a character, and it will believe you. Some of those values may map to control characters that would look like garbage if you displayed them, but they're valid. That means you can re-encode the string as ISO-8859-1 and get back the exact byte sequence you started with.
    So that's why your code "works" when you use ISO-8859-1, but I strongly recommend that you find another way; making binary data masquerade as text is dangerously fragile. Why do you have to use a Writer anyway? Is it for transmission over a medium that only accepts text data? If so, you should use a Base64 encoder or similar tool that's designed for that purpose.

  • Char Encoding Problem

Hi everyone. I looked over the forums to check whether this problem had been discussed before and didn't find anything.
I am running a JSP application that accesses a MySQL database, reading and displaying Arabic records. This is the code:
               String url   = "jdbc:mysql://localhost:3306/phpbb?characterEncoding=UTF-8";     // I also tried "windows-1256" and "cp-1256"
               String query = "SELECT * FROM phpbb_posts_text";
               try {
                    Class.forName("com.mysql.jdbc.Driver");
                    Connection con = DriverManager.getConnection(url, "root", "password");
                    Statement stmt = con.createStatement();
                    ResultSet rs = stmt.executeQuery(query);
                    while (rs.next()) {
                         out.print(rs.getString(1));
                         out.print(rs.getString(3));
                         out.print(rs.getString(4));
                    }
                    rs.close();
                    stmt.close();
                    con.close();
               } catch (Exception ex) {
                    out.print(ex);
               }
The problem is with the output: it comes out in the same form I see it in the MySQL shell (if I use the windows-1256 Arabic encoding).
If I use UTF-8, the output comes out as a series of question marks: "?? ???? ??? ??"
Can anybody help me with this issue?

    please try
    <%@ page language="java" contentType="text/html; charset=UTF-8" %>
    <%@ page import="java.sql.*" %>
    <%
     String url   = "jdbc:mysql://localhost:3306/phpbb?requireSSL=false&useUnicode=true&characterEncoding=utf8";
     String query = "SELECT * FROM phpbb_posts_text";
     try {
          Class.forName("com.mysql.jdbc.Driver");
          Connection con = DriverManager.getConnection(url, "root", "password");
          Statement stmt = con.createStatement();
          ResultSet rs = stmt.executeQuery(query);
          while (rs.next()) {
               out.print(rs.getString(1));
               out.print(rs.getString(3));
               out.print(rs.getString(4));
          }
          rs.close();
          stmt.close();
          con.close();
     } catch (Exception ex) {
          out.print(ex);
     }
%>

  • Encoding Problems with 16:9

Hi, I just started creating 16:9 projects; however, I don't think iDVD likes 16:9, as any time I try to burn it fails and says "there was a problem during encoding". Anyone else experience this?

    This may be of assistance to you and others experiencing the wonderfully enigmatic " Encoding Video - There was an error during movie encoding. "
    SHORT VERSION of SOLUTION: Downloaded Handbrake. Converted source video using Handbrake's default settings. Imported resulting videos into iDVD. Everything worked.
    LONG VERSION of SOLUTION: I have a few 16:9 mp4 videos that I was attempting to burn onto a relatively simple DVD. No matter what theme I chose, I received the encoding error. After reading a number of other posts on the subject, on a lark, I thought I would try to change the encoding of the movies to some other format. Turns out this solved the problem in a roundabout way. Handbrake (http://handbrake.fr/?article=download) was key to the solution. I tried converting to MKV first; that didn't work. Then I noticed something. When I chose the mp4 original and left Handbrake with default settings, the specifications for the source video showed 1280x720, but the specifications for the output video showed 1278x720. I converted the mp4 video to ... well ... mp4 (Handbrake's default format). Handbrake's resulting file had an m4v extension, but I don't think that has any relevance. So, back in iDVD, I imported the new versions of the video and everything worked. I hope that is of some assistance to you.

  • Url char-encoding problem

    I am connecting to a web-server. The URL has Japanese characters embedded in it.
    When run from NetBeans 6.8 everything works ok.
    When run from the shell, the server appears to not understand the Japanese characters. For example, if the Japanese characters represent a user name, none of the names are ever understood.
    Default charset when run from IDE is "UTF-8".
    Default when run from shell: "Cp1252".
    I don't want a systemic solution [for example a command-line switch, or config file setting]. I want to control every IO stream's encoding manually.
    This is the test that comes closest to what I think should work:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
OutputStreamWriter baosStreamWrt = new OutputStreamWriter(baos, utf8.newEncoder());
BufferedWriter bufWrt = new BufferedWriter(baosStreamWrt);
bufWrt.write(url_with_jp_characters);
bufWrt.flush();
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
InputStreamReader inStreamRdr = new InputStreamReader(bais, utf8.newDecoder());
BufferedReader bufRdr = new BufferedReader(inStreamRdr);
String faulty_url = bufRdr.readLine();
URL webpage = new URL(faulty_url);
URLConnection urlConn = webpage.openConnection();
BufferedReader webIn = new BufferedReader(new InputStreamReader(urlConn.getInputStream(), utf8.newDecoder()));
BufferedWriter response = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("/tmp/dump.txt"), utf8.newEncoder()));
response.write("checking the url: ");
response.write(faulty_url);
response.newLine();
while (true) {
    String webLine = webIn.readLine();
    if (webLine == null) { break; }
    response.write(webLine);
    response.newLine();
}
response.close();
webIn.close();
    ....Inspecting the response from the server in the file "/tmp/dump.txt":
    (1) the file is formatted "UTF-8".
    (2) the url, written in the first line of the file, is valid and a cut/paste into a browser works correctly.
    (3) many correctly formed Japanese words are in the response from the server that is saying: "I have no idea how to understand/(decode?) the user name you sent me in the URL."
    At this point I have a choice:
    (1) I don't understand the source of the problem?
    (2) I need to keep banging away at trying to get the url correctly encoded.
    Finally, how can I debug this???
    (1) It works in the IDE, so I don't have those debugging tools.
(2) My terminal cannot display Asian characters.
    (3) Writing output to files involves another encoder for the FileWriter which taints everything in the file.
    (4) I don't have a webserver to act as a surrogate for the real one.
    thanks.

    Kayaman wrote:
    rerf wrote:
And I don't know why NetBeans allowed me to get around not using this command.
Because of the default charset of UTF-8 that's set in NetBeans.
I see that, but something more subtle was happening, I think. I still don't fully understand. Here is my best explanation. Exact code:
....for (File f : files) {
    BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(f), utf8.newDecoder()));
    String jpName = in.readLine().split(";")[0];
    String jpNameForUrlUsage = URLEncoder.encode(jpName, "UTF-8");
    String dumpFileName = outputDir + f.getName().split(".txt")[0] + "-dump.txt";
    BufferedWriter out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(dumpFileName), utf8.newEncoder()));
    URL url = new URL("http://www.abc.cp.jp?name=" + jpNameForUrlUsage);
    URLConnection urlConn = url.openConnection();
}....
In my first post, I formed the complete URL and then encoded it to UTF-8 using ByteArray[Input/Output]Streams. But my understanding is that you can't just encode Japanese characters to UTF-8 and append them to a URL. A special method call is needed to transform the Japanese characters into something the URL can carry (and it is not as simple as encoding the characters to UTF-8). After posting, I actually made an effort to read all of the javadoc for java.net.URL:
    java.net.URL
    The URL class does not itself encode or decode any URL components according to the escaping mechanism defined in RFC2396. It is the responsibility of the caller to encode any fields, which need to be escaped prior to calling URL, and also to decode any escaped fields, that are returned from URL. Furthermore, because URL has no knowledge of URL escaping, it does not recognise equivalence between the encoded or decoded form of the same URL. For example, the two URLs:
    http://foo.com/hello world/ and http://foo.com/hello%20world
    would be considered not equal to each other.
    Note, the URI class does perform escaping of its component fields in certain circumstances. The recommended way to manage the encoding and decoding of URLs is to use URI, and to convert between these two classes using toURI() and URI.toURL().....
Then it all made sense. I had already diagnosed that it was just the Japanese-character part of the URL that caused the failure. And I remember reading that the Chinese were upset that they could not use Asian characters in URLs. So even though in my browser it looks like a Japanese character is in the URL, it really is not. Chrome is doing some transform between what I see in the browser URL bar and what the actual URL is. That is why cutting/pasting the URL I was forming in Java into the browser is not a good representation of what is going on. Chrome, behind the scenes, does what the URLEncoder class does.
    Yet, I could very easily be wrong:
On the one hand NetBeans worked and the shell did not, the relevant difference being the default charset.
But on the other hand, in my initial post, I completely encoded that URL into UTF-8 using ByteArray[Input/Output]Streams. I am no expert on character encodings, but if the URL I created in the initial posting is not UTF-8 encoded, then I need a few pointers on how to character-encode.
Final thought:
I can't just encode a Japanese character to UTF-8, append it to a URL, and expect it to work. A browser can make it look that way, so it's deceiving. That is why there is the URLEncoder class. I don't understand why NetBeans appears to invoke it for me without my knowledge. And maybe it's not. Having the default charset as UTF-8 might obviate the need for a URLEncoder call (but I don't see why it should).
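For reference, a minimal sketch of the two ways discussed above to get the Japanese query value escaped properly; the host, path and sample name are hypothetical:

import java.net.URI;
import java.net.URL;
import java.net.URLEncoder;

public class JapaneseQuerySketch {
    public static void main(String[] args) throws Exception {
        String jpName = "\u5c71\u7530";   // sample Japanese name ("Yamada")

        // Option 1: percent-encode just the query value, then build the URL.
        String encoded = URLEncoder.encode(jpName, "UTF-8");
        URL direct = new URL("http://www.example.com/search?name=" + encoded);

        // Option 2: let URI escape the query component, as the javadoc recommends.
        URI uri = new URI("http", "www.example.com", "/search", "name=" + jpName, null);
        URL viaUri = uri.toURL();

        System.out.println(direct);
        System.out.println(viaUri);
    }
}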

  • Encoding problem with servlet

I am using a Java servlet to submit an Arabic string to some Java class and save this Arabic string in a text file.
The problem is with the encoding: the Arabic query is not saved correctly inside the text file. The same program works fine as plain Java.
Getting the query (Unicode: Arabic Windows) from the servlet:
     String query = request.getParameter("query").trim();
    Saving it in the text file:
         ps = new PrintStream(new FileOutputStream(
                        "C:/file/1.txt"),true,"utf-8");
              ps.println(query);
As I said, the same task is done correctly in plain Java, where I submit a UTF-8 string and save it inside the text file.
I also tried to convert the Arabic Windows text to UTF-8 and then save it inside the text file, but this doesn't help either: query = new String(query.getBytes("utf-8"));
    Any suggestions?
    Thanks
    Edited by: [email protected] on Oct 6, 2009 6:52 AM

    I've tried the following code and it worked for me :
                   response.setContentType("text/html");
              String param = request.getParameter("test");
              String query =new String(param.getBytes("ISO-8859-1"),"UTF-8");
              PrintStream ps = new PrintStream(new FileOutputStream(
              "C:/1.txt"));
              ps.write((query.getBytes("UTF-8")));
              ps.close();

  • Character encoding problem with german umlaut in propertie files

    Hi,
I use properties files to translate the application into multiple languages.
These files contain German umlauts (e.g. "Wareneingänge").
If I rebuild my application, these files are copied from ../src/view to ../classes/view.
The file in ../classes/view contains "Wareneing\ufffdnge" instead of "Wareneingänge", which is displayed as "Wareneing�nge".
My browser, project and application settings are UTF-8.
Previously the settings for project and application were "Windows-1252".
I have found a workaround, but maybe this is a bug in JDeveloper TP4.
Therefore I post this problem. Maybe someone can confirm this behaviour.
Workaround:
Replace "Wareneingänge" with "Wareneing\u00e4nge" in the ../src/view file
(the Zaval JRC Editor does this for you :-) )
    regards
    Peter

    Hi,
I seem to remember that the same was required for properties in 10.1.3 as well. Not sure if this is an issue in JDeveloper 11. I'll take a note and have a look though.
    Frank
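For completeness, a minimal sketch that sidesteps the \uXXXX escaping, but only if you load the file yourself rather than through ResourceBundle; it relies on Properties.load(Reader) (Java 6+), and the file name and key are made up for the example:

import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.util.Properties;

public class Utf8PropertiesSketch {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // Reading through an explicit UTF-8 Reader keeps the umlauts intact,
        // so "Wareneingänge" survives without \u00e4 escapes.
        InputStreamReader reader = new InputStreamReader(
                new FileInputStream("src/view/Messages.properties"), "UTF-8");
        try {
            p.load(reader);
        } finally {
            reader.close();
        }
        System.out.println(p.getProperty("label.wareneingaenge"));
    }
}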

  • JSON encoding problem with entities

I am trying to pass ActionScript strings that contain entities such as the degree symbol (as in 175°; not sure if the degree symbol shows here).
I have a string like:
private var step7b:String = "Cool the custard to below 70° F by stirring it over the ice bath. ";
When the PHP attempts to write out this line of text it either fails completely or prints the wrong entity code, such as "Preheat oven to 325% u02DAF".
What do I need to do to correctly pass symbols such as the degree sign from an ActionScript string to PHP?
Note: entities loaded from XML work fine. It's only AS string variables. This whole setup worked correctly when I used a WSDL service; the JSON is new, and that is when this problem arose. I am using this code block:
    // create the JSON
                        var objSend:Object = new Object();
                        var dataString:String = JSON.encode(packagedData);
                        dataString = escape(dataString);
                        objSend.jsonSendData = dataString;

    Hi,
    You are using something like dataString = escape(jsonString);
The escape function in ActionScript does about the same as urlencode(addslashes(jsonString)) in PHP.
So you could try what happens if you leave out the escape, or, if you are able to adjust the PHP side, use a urldecode(jsonString) there or something similar.

  • Character encoding problems with weblogic stax implementation?

    Hello all,
While using StAX to parse some XML, we encounter the following exception when the processor reaches the UTF-8 byte sequence C3 B1 ('ñ'):
    Caused by: Error at Line:1, token:[CLOSETAGBEGIN]Unbalanced ELEMENT got:StudentRegistration expected:LastName
    at weblogic.xml.babel.baseparser.BaseParser.parseSome(BaseParser.java:374)
    at weblogic.xml.stax.XMLStreamReaderBase.advance(XMLStreamReaderBase.java:199)
    We suspect that the processor's encoding might somehow be set to ANSI instead of UTF-8. I have read, in other posts, of a startup property related to web services:
    -Dweblogic.webservice.i18n.charset=utf-8
    However, this XML is not a web service request, but rather a file being read from disk after an MDB's onMessage() method is called.
Could this setting be affecting StAX parsing outside of web services? Any other ideas?
    Thanks!

    As far as I know, we don't support changing outbound message encoding charset in 9.x. Both 8.x and 10.x support it. Check [url http://docs-stage/wls/docs100/webserv/client.html#wp230016]here
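One way to rule out the platform-default-charset suspicion on the file-reading side is to hand the StAX factory raw bytes instead of a Reader, so the parser honours the encoding declared in the XML prolog. A minimal sketch, assuming a standard JAXP StAX parser rather than anything WebLogic-specific:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class StaxEncodingSketch {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        FileInputStream in = new FileInputStream(args[0]);
        try {
            // Passing the InputStream (bytes) plus an explicit encoding avoids
            // decoding the file with the platform default charset, which is
            // what happens if you wrap it in a FileReader first.
            XMLStreamReader reader = factory.createXMLStreamReader(in, "UTF-8");
            while (reader.hasNext()) {
                reader.next();   // just walk the document to surface any error
            }
            reader.close();
        } finally {
            in.close();
        }
    }
}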

  • Flash cs5.5 FLVplayback Encoding Problems with Air for Android

    Dear All,
I'm trying to play FLV in my AIR application using the FLVPlayback component in Flash CS5.5.
I can play the FLV on the PC but not on the Samsung Galaxy Tab (Android platform).
If anyone out there has tried FLVPlayback with an AIR for Android application, where the FLV files are packaged within the app, or streamed via normal http://, any help would be greatly appreciated.
    Thanks!

First disable autoplay; it gave errors for me. Try to make the movie play with the play() command.
To embed the movie in your AIR bundle, go to the publish settings for your Flash project, then in that screen go to the player settings.
In the first GENERAL tab you'll see at the bottom that you can add files to your project.
Probably your .swf and an .xml file are already in there.
Using the + icon you can add your video.
Make sure your video is in the same directory as your .fla file and you can use it as is (by name).
If, for example, your Flash file is in c:\mytest\mytest.fla and the video in c:\mytest\videos\myvideo.flv, then you will have to load your video as "videos/myvideo.flv" with a FORWARD slash; never use \.
Good luck
