SciTE 1.60 UTF-8 problem

I can't write Russian or Hebrew when I set "File -> Encoding -> UTF-8".
In Kate, for example, it works properly.

@leX wrote: I can't write Russian or Hebrew when I set "File -> Encoding -> UTF-8".
In Kate, for example, it works properly.
I can write German umlauts like ü, ä, ö without problems. I also copied a text
from the Russian Pravda into it and the text displayed correctly.
My crystal ball says you don't have the Pango engine enabled in SciTE. Instead
you are stuck with the ordinary GTK text rendering. To enable it, you need to specify
your font settings in /usr/share/scite/SciTEGlobal.properties (or your
local copy of it in ~) with an exclamation mark in front of the font name.
Here is a copy of mine:
# Give symbolic names to the set of fonts used in the standard styles.
if PLAT_WIN
    font.base=font:Verdana,size:10
    font.small=font:Verdana,size:8
    font.comment=font:Comic Sans MS,size:9
    font.code.comment.box=$(font.comment)
    font.code.comment.line=$(font.comment)
    font.code.comment.doc=$(font.comment)
    font.text=font:Times New Roman,size:11
    font.text.comment=font:Verdana,size:9
    font.embedded.base=font:Verdana,size:9
    font.embedded.comment=font:Comic Sans MS,size:8
    font.monospace=font:Courier New,size:10
    font.vbs=font:Lucida Sans Unicode,size:10
if PLAT_GTK
    font.base=font:!bitstream vera sans mono,size:10
    font.small=font:!bitstream vera sans mono,size:9
    font.comment=font:!bitstream vera sans mono,italics,size:10
    font.code.comment.box=$(font.comment)
    font.code.comment.line=$(font.comment)
    font.code.comment.doc=$(font.comment)
    font.text=font:!bitstream vera sans mono,size:10
    font.text.comment=font:!bitstream vera sans mono,size:10
    font.embedded.base=font:!bitstream vera sans mono,size:9
    font.embedded.comment=font:!bitstream vera sans mono,size:9
    font.monospace=font:!andale mono,size:10
    font.vbs=font:!bitstream vera sans mono,size:10
font.js=$(font.comment)
Try this, HTH
bye neri

Similar Messages

  • OC4J 9.0.2.0.1 UTF-8 problems

    We were using OC4J 1.0.2.2.1 with default-charset="UTF-8" in global-application.xml, and UTF-8 was working fine.
    We recently upgraded to 9.0.2.0.1 and the same setting no longer works. We also tried setting the default-charset in
    application-deployments/**/wev/orion-web.xml; that did not work either.
    Please respond if anyone is experiencing the same problem and has workarounds for it.
    Thanks
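For reference, the attribute under discussion sits on the root element of orion-web.xml; a minimal sketch (element and attribute names as mentioned in this thread, exact placement hedged):

```xml
<?xml version="1.0"?>
<!-- orion-web.xml: per-web-module override of the response charset -->
<orion-web-app default-charset="UTF-8">
    <!-- existing servlet/JSP configuration unchanged -->
</orion-web-app>
```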

    Srinivas,
    Are you seeing this with servlets, JSPs, or both?
    We recently had a similar problem reported with JSPs.
    thanks,
    -Prasad

  • Robohelp HTML, JBoss: UTF-8 Problem

    Hello
    We use RoboHelp 8 to create a web help for our JBoss-based project. Unfortunately, all files that RoboHelp creates use UTF-8, but the default of our JBoss is ISO-8859-1.
    So if I open the help using Firefox in the web container, I receive only "" and the encoding in the browser is set to ISO. IE8 can display the help correctly; I suppose that browser recognizes the different character encoding.
    The only thing I can do to make it work in FF is to change the RoboHelp output HTML files and add these lines to each(!) .htm file of the help:
    <?xml version="1.0"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    To show German special characters like ä, ö, ü I also have to remove the charset in the <meta> tag of the output files...
    But every time I regenerate the output I have to redo this. Is there some way to change the character encoding in my RoboHelp sources? Unfortunately it seems impossible to change these things in master files or RoboHelp pages in the UI.
    Has somebody a workaround?
    Best regards
    Stefan
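    For what it's worth, the "" garbage is exactly what UTF-8 bytes look like when a server or browser decodes them as ISO-8859-1. A minimal Java sketch of the mismatch (plain standard-library code, not RoboHelp- or JBoss-specific):

    ```java
    import java.nio.charset.StandardCharsets;

    public class CharsetMismatch {
        public static void main(String[] args) {
            String original = "\u00e4 \u00f6 \u00fc"; // "ä ö ü"
            // Bytes as RoboHelp writes them: UTF-8, two bytes per umlaut.
            byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
            // What the reader sees when the server declares ISO-8859-1 instead:
            String misread = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
            System.out.println(misread); // each umlaut becomes two junk characters
        }
    }
    ```

    Each two-byte UTF-8 sequence is shown as two separate Latin-1 characters, which is the corruption described above.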

    Hi, Stefan. Yes, others have seen this problem - specifically with your ISO charset.
    There isn't a fix in RoboHelp8. The solution appears to be to enable UTF-8 on your application server.
    There are several posts about the problem on this forum:
    http://forums.adobe.com/message/2095270#2095270
    http://forums.adobe.com/message/961768#961768
    These two seem most useful, but if you're really technical, you can search for ISO-8859-1 on the Adobe Forums. If you want just RoboHelp posts, enter  "ISO-8859-1 +RoboHelp".
    HTH,
    Elisa

  • More Cf + MySQL 5 + Unicode/UTF-8 Problems

    Here is the problem:
    I am using a MySQL database that stores data in Unicode/UTF-8
    (the website/database are in Lao).
    Settings:
    CF 7.0.2
    MySQL 5.0.26
    MySQL Defaults: latin1_swedish_ci collation, latin1 encoding
    Database Defaults: utf8_general_ci collation, utf8 encoding
    These are same on my local computer and on the host
    (HostNexus)
    The only difference is that my CF uses
    mysql-connector-java-3.1.10 driver while the host uses MySQL 3.x
    driver (org.gjt.mm.mysql.Driver class).
    On my local computer everything works just fine, even without
    any extra CF DSN settings (i.e. connection string and/or JDBC URL
    do not need the useUnicode=true&characterEncoding=UTF-8 strings
    added to show Lao text properly).
    On the host, even with the
    useUnicode=true&characterEncoding=UTF-8 added (I have even
    tried adding
    &connectionCollation=utf8_unicode_ci&characterSetResults=utf8
    to the line as well), I only get ??????? instead of Lao text from
    the db.
    The cfm pages have <cfprocessingdirective> and
    <cfcontent> tags set to utf-8 and also have html <meta>
    set to utf-8. All static Lao text in cfm pages shows just fine.
    Is this the problem with the MySQL driver used by the host?
    Has anyone encountered this before? Is there some other setting I
    have to employ with the org.gjt.mm.mysql.Driver class on the host?
    Please help!

    Thanks for your reply/comments, Paul!
    I also think it must be the db driver used on the host... I
    just don't understand why the DSN connection string
    (useUnicode=true&characterEncoding=UTF-8 [btw, it doesn't really
    matter whether it's utf8 or UTF-8 - it works with both; I think the proper way
    actually is UTF-8, since that is the encoding's name used in
    Java...]) wouldn't work with it??? I have the hosting tech support
    totally puzzled over this.
    Don't know if you can help any more, but I have added answers
    to your questions in the quoted text below.
    quote:
    Sabaidee wrote:
    > Here is the problem:
    > I am using a MySQL database that stores data in Unicode/UTF-8 (the
    > website/database are in Lao).
    well that's certainly different.
    I mean, they are in Lao language, not that they are hosted in
    Laos.
    > Database Defaults: utf8_general_ci collation, utf8 encoding
    how was the data entered? how was it uploaded to the host?
    could the data have
    been corrupted loading or uploading to the host?
    The data was entered locally, then dumped into a .sql file using
    utf8 charset and then the dump imported into the db on the host,
    once again with utf8 charset. I know the data in the database is
    correct: when I browse the db tables with phpMyAdmin, all Lao text
    in the db is displayed in proper Lao...
    > The only difference is that my CF uses mysql-connector-java-3.1.10 driver
    > while the host uses MySQL 3.x driver (org.gjt.mm.mysql.Driver class).
    and does that driver support mysql 5 and/or unicode?
    I am sure it does support MySQL5, as I have other MySQL5
    databases hosted there and they work fine. I am not sure if it
    supports Unicode, though.... I am actually more and more sure it
    does not... The strange thing is, I am not able to find the Java
    class that driver is stored in, to try and test it on my local
    server... I have asked the host to email me the .jar file they are
    using, but have not heard back from them yet...
    > On my local computer everything works just fine, even without any extra CF DSN
    > settings (i.e. connection string and/or JDBC URL do not need the
    > useUnicode=true&characterEncoding=UTF-8 strings added to show Lao text
    > properly).
    and what happens if you do use those? what locale for the
    local server?
    If I use just that line, nothing changes (apart from the 2 mysql
    variables, which then default to utf8 instead of latin1) -
    everything works just fine locally.
    The only difference I have noticed between MySQL setup on my
    local comp and on the host is that on my comp the
    character_set_results var is not set (shows [empty string]), but on
    the host it is set to latin1. When I set it to latin1 on my local
    comp using &characterSetResults=ISO8859_1 in the JDBC URL
    string, I get exactly the same problem as on the host: all ???????
    instead of Lao text from db. If it is not set, or set to utf8,
    everything works just fine.
    For some reason, we are unable to make it work on the host:
    whatever you add to the JDBC URL string or enter in the Connection
    String box in CF Admin is simply ignored...
    Do you know if this is a particular problem of the driver
    used on the host?
    > The cfm pages have <cfprocessingdirective> and <cfcontent> tags set to utf-8
    > and also have html <meta> set to utf-8. All static Lao text in cfm pages
    > shows just fine.
    db driver then.
    I think so too...

  • UTF-8 problem in LR 3.4RC, not in 3.3

    I've just found a new issue in LR 3.4RC that was not present in 3.3:
    When exporting images to JPG, if the IPTC fields contain words with umlauts (ä, ö, ü), at first glance the results seem to be the same as in 3.3: the IPTC data is read the same way it used to be. But when opening these JPGs in exiftool, you can see the difference: the recently exported JPGs are encoded UTF-8, whereas JPGs exported from LR 3.3 do not have a UTF-8 flag.
    This issue is a severe problem when creating a web gallery, e.g. with JAlbum, because the umlauts are then not displayed correctly, but appear as corrupted letters.
    First I thought this was a JAlbum issue, but after examining the matter it is clear that LR 3.4 has changed the export format from 3.3.
    Is this a bug or a feature?
    Regards
    Thokra

    Hi adda,
    are you German? I am.
    Exactly the same problem here. JAlbum can manage only one kind of character encoding. So the best thing would be if we could convert all the formerly exported files to the new standard. I tried simply writing the metadata again with CTRL+S, but no success.
    Anybody around who knows a method to convert the files without having to retype the hundreds of names with German characters?
    Regards
    Thokra

  • Display UTF-8 problem

    configuration:
    1. HP-UNIX 11
    2. Oracle 8.1.7
    3. Weblogic6 sp1
    Our application needs to support English, Big5 Chinese and Portuguese,
    and I have set NLS_CHARACTERSET=UTF8 and NLS_NCHAR_CHARACTERSET=UTF8 in
    Oracle.
    I have also set NLS_LANG=AMERICAN_AMERICA.UTF8 in the shell that starts up
    WebLogic.
    However, you can only read English and Portuguese in the JSP file. The Chinese
    characters cannot be displayed correctly.
    I have tried using WebLogic 5.1 sp9 to retrieve data from Oracle; the
    characters displayed correctly.
    Is there any problem with my settings or my code?
    Driver myDriver = (Driver) Class.forName("weblogic.jdbc.oci.Driver").newInstance();
    Properties props = new Properties();
    props.put("weblogic.oci.min_bind_size", "660");
    props.put("weblogic.codeset", "UTF8");
    props.put("user", "oracle");
    props.put("password", "oracle");
    props.put("server", "devdb");
    try {
        Connection conn = myDriver.connect("jdbc:weblogic:oracle", props);
        Statement stmt = conn.createStatement();
        stmt.execute("select * from test");
        ResultSet rs = stmt.getResultSet();
        while (rs.next()) {
            out.println(" loginid = " + rs.getString("loginid"));
            out.println("<br>");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }

    Do you have the following page directive in the jsp files?
    <%@ page contentType="text/html; charset=utf-8" %>
    "shall" <[email protected]> wrote in message
    news:[email protected]..

  • UTF-8 problem on Mac OS X

    Hello,
    We have a 1.4.1 Java app on Mac OS X.
    This app writes out a text file, encoded using UTF-8, to a Windows folder. The following code is used:
    FileOutputStream fout = new FileOutputStream(<windows path>, false);
    OutputStreamWriter osw = new OutputStreamWriter(fout, "UTF-8");
    osw.write(str, 0, str.length());
    osw.flush();
    fout.flush();
    fout.close();
    The Windows app tries to read that file as UTF-8 and has a problem doing so. When you pull it up in Notepad, it's wrong.
    When we run the same Java app on Windows and write out a UTF-8 encoded file, other Windows apps understand it.
    This only happens when the encoded characters are not in the ASCII range. Therefore accented French characters, Japanese characters, etc. are all wrong.
    It seems to be the OS X Java implementation that "decomposes" accented characters, as apparently required by the HFS file system.
    Has anyone else run into this?
    What would be a work around - or are we doing something wrong here?
    Thanks, any help will be greatly appreciated!!!
    Dave Raskin

    The main problem here seems to be that you are writing a plain text file. Text files do not contain metadata, so applications like Notepad cannot tell what encoding was used to write the file and will assume a default encoding.
    Depending on what you are trying to do you could either change the encoding you use to write the text file to match the default encoding used on your target system or use a file format that contains meta-data like rtf, html, pdf, or doc.
    Thomas
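    On the decomposition point: if the text really is being written in decomposed (NFD) form, a workaround on later Java versions is to recompose the string to NFC before writing it out. This is only a sketch; java.text.Normalizer arrived in Java 6, so it would not be available on the 1.4.1 setup described above:

    ```java
    import java.text.Normalizer;

    public class NfcBeforeWrite {
        public static void main(String[] args) {
            // "é" in decomposed (NFD) form: 'e' followed by a combining acute accent.
            String decomposed = "caf\u0065\u0301";
            // Recompose to NFC so Windows readers see a single precomposed character.
            String composed = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
            System.out.println(composed.length()); // one code unit shorter than the input
        }
    }
    ```

    Normalizing just before the OutputStreamWriter.write call would keep the rest of the pipeline unchanged.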

  • UTF-8 Problems in Flex Builder 3

    I created locale properties files that contained characters
    requiring UTF-8 encoding using Flex Builder 3 for Mac OS. Some of
    the characters displayed incorrectly when viewed through the web
    browser. Opening the properties files in an editor (BBEdit) produced
    a warning that the UTF-8 format was corrupted and the characters
    did not display in BBEdit correctly. I fixed the format and
    characters through BBEdit and found they worked and displayed as
    expected. Opening the properties in Flex Builder 3 show the
    characters correctly, but as soon as I saved the file from Flex
    Builder the characters stopped displaying. BBEdit again showed the
    UTF-8 format was corrupted.
    Does anyone have any suggestions for fixing this problem? Any
    workarounds?

    FB saves files as ISO-8859-1 by default. Right-click the file,
    select Properties, then select "Other" in the Text File Encoding
    area, and select UTF-8 there.

  • SOAP + UTF-8 problem

    Hi,
    I have this problem: I need to execute a call to a web service, passing it an XML. I write in Spanish, so in my XML there are letters like à, á, Ò, Ó...
    I'm trying to encode in UTF-8, because the web service decodes from UTF-8 to ASCII, but I don't know how. I tried this:
    /* theRequest is the XML */
    var cRequest = util.streamFromString(theRequest, "utf-8");
    theRequest = util.stringFromStream(cRequest);
    But I think this is not right; doesn't this encode the request twice?
    The SOAP message that I send to the web service starts like this:
    <?xml version="1.0"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><SOAP-ENV:Header>....
    Is this "<?xml version="1.0"?>" OK? Should I set encoding="utf-8" there?
    The web service returns this error:
    <?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><soapenv:Fault><faultcode>soapenv:Server.userException</faultcode><faultstring>java.io.UTFDataFormatException: Invalid byte 2 of 4-byte UTF-8 sequence.</faultstring><detail><ns1:hostname xmlns:ns1="http://xml.apache.org/axis/">nut</ns1:hostname></detail></soapenv:Fault></soapenv:Body></soapenv:Envelope>
    Any idea?
    Thanks, and sorry for my English :P

    Yes, I create the connection. This is the function:
    theURL is the URL of the service dispatcher connection,
    theHeader is the SOAP header,
    theRequest is the XML of the PDF,
    theAction is the action of the service dispatcher I'm calling.
    function doWS(theURL, theHeader, theRequest, theAction) {
        try {
            var oResponse = "";
            SOAP.wireDump = true;
            /* var cRequest = util.streamFromString(theRequest, "utf-8");
            theRequest = util.stringFromStream(cRequest);
            var cHeader = util.streamFromString(theHeader, "utf-8");
            theHeader = util.stringFromStream(cHeader); */
            var oService = SOAP.connect(theURL);
            if (oService != "[object SOAPService]") {
                oResponse = "ERROR - No connection";
            } else {
                // Create the processRequest parameter
                var processRequest = {
                    soapType: "xsd:string", // anyType
                    soapValue: theRequest
                };
                // Create the action
                var mySOAPAction = theAction;
                var sendHeader = {
                    soapType: "xsd:string",
                    soapValue: theHeader
                };
                // Create the responseHeader parameter
                var responseHeader = {};
                responseHeader[theAction] = {};
                // Do the request
                oResponse = SOAP.request({
                    cURL: theURL,
                    oRequest: processRequest,
                    cAction: mySOAPAction,
                    bEncoded: false,
                    oReqHeader: sendHeader,
                    oResHeader: responseHeader,
                    cResponseStyle: SOAPMessageStyle.XML
                });
            }
            oService = null;
            return oResponse;
        } catch (e) {
            console.println("ERROR - " + e);
            oService = null;
            return ("ERROR - " + e);
        }
    }
    I'm new on this job, and they used to do it like that, so I suppose this is correct. But they have problems when they work with àáèé...
    I thought Adobe Designer works entirely in UTF-8, so I don't understand how the problem could be in the UTF encoding.
    Maybe the problem is on the server???
    Thank you so much!

  • Charset or UTF-8 problems

    Is there any solution to display Hungarian (or other) special characters in an HTML DB application (or in an HTML page generated by dynamic PL/SQL)?
    Vasek

    I don't think this is an HTML DB problem; if you are using the default XE database, which is created with the Western European Latin-1 character set, it will not be able to store and process the accented Hungarian characters correctly. You should see the same problem in SQL*Plus too.
    There will be a Universal language version of XE (created using UTF-8 as the database character set) available in the production release of Oracle Database 10g Express Edition.

  • Text track russian (utf-8) problems in windows

    Hi,
    I am working with QuickTime Pro on a Windows 7 PC and need to create several subtitled versions of a movie.
    For European languages everything went fine, but I have problems with Russian, Polish and Turkish.
    When I open my UTF-8 encoded *.txt file with QuickTime, I only get gibberish characters.
    I spent at least 2 hours searching the internet for similar issues, but couldn't find anything.
    Can anybody help me with this, please?
    thanks a lot!

    The character set in your e-mail headers is set to iso-8859-1 (Western European). Change it to utf-8:
    $headers = "MIME-Version: 1.0\n"
                ."From: \"".$name."\" <".$email.">\n"
                ."Content-type: text/html; charset=utf-8\n";
    Nancy O.

  • No more data to read from socket UTF instance problem

    I'm using oracle jdbc thin driver and SunOne Application Server 7 environment.
    I'm trying to call the stored procedure which has one IN parameter that is of type CLOB.
    My code looks like this:
    conn = DriverManager.getConnection(url, username, password);
    conn.setAutoCommit(false);
    clob = CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
    Writer wr = clob.getCharacterOutputStream();
    wr.write(m_data);
    wr.flush();
    wr.close();
    PreparedStatement pstmt = conn.prepareCall(procedureCall);
    pstmt.setClob(1, clob);
    pstmt.execute();
    but when I run it, it throws this (at wr.write(m_data) statement):
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: java.io.IOException: No more data to read from socket
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.dbaccess.DBError.SQLToIOException(DBError.java:716)
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.driver.OracleClobWriter.flushBuffer(OracleClobWriter.java:270)
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.driver.OracleClobWriter.write(OracleClobWriter.java:172)
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: at java.io.Writer.write(Writer.java:150)
    [29/Jan/2003:15:07:25] WARNING ( 9340): CORE3283: stderr: at java.io.Writer.write(Writer.java:126)
    I tried using this instead of Writer:
    clob.putString(1, m_data);
    but the same error occurs.
    I then tried to do both of these:
    InputStream reader = new StringBufferInputStream(m_data);
    PreparedStatement pstmt = conn.prepareCall(procedureCall);
    pstmt.setUnicodeStream(1, reader, reader.available());

    Reader reader = new StringReader(m_data);
    PreparedStatement pstmt = conn.prepareCall(procedureCall);
    pstmt.setCharacterStream(1, reader, m_data.length());
    but in both cases I got this (at pstmt.setCharacterStream() or pstmt.setUnicodeStream()):
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: java.sql.SQLException: Data size bigger than max size for this type: 76716
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.ttc7.TTCItem.setArrayData(TTCItem.java:95)
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.dbaccess.DBDataSetImpl.setBytesBindItem(DBDataSetImpl.java:2414)
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.driver.OraclePreparedStatement.setItem(OraclePreparedStatement.java:1134)
    [29/Jan/2003:16:06:00] WARNING ( 9340): CORE3283: stderr: at oracle.jdbc.driver.OraclePreparedStatement.setUnicodeStream(OraclePreparedStatement.java:2633)
    But the greatest mystery of all is that the code with the temporary CLOB works fine when I create the instance with default settings. The problem occurs when I create the instance with the UTF coding scheme. But we are forced to use a Unicode coding scheme, because of local special characters.
    We are using Oracle 9i on Solaris UNIX platform and jdbc drivers supplied with it.
    The CLOB I am trying to pass is a XML file and it is possible to be up to 400 KB in size.
    Please help. I'm at my wit's end!

    Hi,
    I have a similar problem. This is the code that I used. Can you please help me?
    oracle.sql.CLOB newClob = oracle.sql.CLOB.createTemporary(((org.apache.commons.dbcp.PoolableConnection) con).getDelegate() , true, oracle.sql.CLOB.DURATION_SESSION);
              newClob.open(oracle.sql.CLOB.MODE_READWRITE);
              Writer wr = newClob.getCharacterOutputStream();
              wr.write(valuesXml);
              wr.flush();
              wr.close();
              //newClob.putString(1,valuesXml);
              pst.setClob(1,newClob);
    These are the versions that I use:
    Java version 1.4.2_06
    on Linux - gij (GNU libgcj) version 3.2.3 20030502 (Red Hat Linux 3.2.3-49)
    Oracle version 9.2.0.4.0
    The exception I see is
    java.io.IOException: No more data to read from socket
         at oracle.jdbc.dbaccess.DBError.SQLToIOException(DBError.java:716)
         at oracle.jdbc.driver.OracleClobWriter.flushBuffer(OracleClobWriter.java:270)
         at oracle.jdbc.driver.OracleClobWriter.flush(OracleClobWriter.java:204)

  • UTF 8 Problem

    I am using UTL_FILE to write data into an Excel file. The problem is that it shows garbage data ("ä"), as there are UTF8 characters in it. I have tried using utl_file.fopen_nchar and other nchar functions, but it still does not display correctly.
    Database version is 9.2.0.5.0
    NLS_CHARACTERSET is UTF8
    NLS_NCHAR_CHARACTERSET is UTF8
    Thanks in Advance!
    Rk

    It appears that there was an extra ? at the end of the URL. Try
    What's the best way to export UTF8 data into Excel?
    instead.
    Justin
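    Not an answer on the UTL_FILE side, but a commonly suggested workaround when Excel misreads UTF-8 text files is to prefix the file with a UTF-8 byte order mark (EF BB BF) so Excel detects the encoding. A hedged sketch of the idea in Java (the file name is made up for illustration):

    ```java
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class BomCsv {
        public static void main(String[] args) throws Exception {
            try (FileOutputStream out = new FileOutputStream("export.csv");
                 Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
                w.write('\uFEFF');          // BOM: encoded as EF BB BF in UTF-8
                w.write("name\nK\u00e4se\n"); // UTF-8 data Excel should now detect
            }
        }
    }
    ```

    The same trick applies whatever tool writes the file, as long as the first three bytes are the BOM.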

  • Utf-16 problems when using writeUTFBytes and readUTFBytes

    I'm really confused about the 'unicode' charset, as it does not seem to do what I would expect.
    Every UTF-16 character is represented by 2 bytes. So a "C" would have 2 bytes [67,0], and a Chinese character would have 2 bytes also.
    var testStr:String = '漢語';
    var testBA:ByteArray = new ByteArray();
    testBA.writeMultiByte(testStr,'unicode');
    testBA.position = 0;
    for(var a=0;a<testBA.length;a++)
        trace(testBA[a]);
    var str:String = testBA.readMultiByte(testBA.length,'unicode');
    trace(str);
    OUTPUT:
    63
    63
    The testBA should have 4 bytes, shouldn't it?
    Can anyone show an example of how to convert my test String to UTF-16 and back?

    Surely there is a simple answer to this? Why have the string conversions if they don't work?
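    The expected byte layout can be checked outside Flash; here is the same experiment in Java, using UTF-16LE (which is what a little-endian 'unicode' charset should produce), just to confirm what the ByteArray ought to contain:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class Utf16Bytes {
        public static void main(String[] args) {
            // "C" is two bytes in UTF-16LE: 67, 0.
            System.out.println(Arrays.toString("C".getBytes(StandardCharsets.UTF_16LE)));
            // Each of the two CJK characters is also two bytes, four in total.
            System.out.println("\u6F22\u8A9E".getBytes(StandardCharsets.UTF_16LE).length);
        }
    }
    ```

    Getting 63 ('?') back instead, as in the trace above, usually means the characters were replaced during an encoding step because the chosen charset could not represent them.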

  • Utf-7 Problem

    Hi,
    I have to analyze the bodyPart of an email. It is encoded in UTF-7 and Java doesn't understand it.
    I found this to convert UTF-7 to UTF-8:
    String strInUTF8 = new String(this.toScan.toString().getBytes("UTF-8"));
    But it doesn't work.
    Here is an example of the conversion:
    fff+AEA-xxx.com
    The real conversion should be [email protected]
    How can I do that, please?

    see UTF-7 Definition->Rule 2
    http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc1642.html.
    I quote:
    " The "+" signals that subsequent octets are to be interpreted as elements of the Modified Base64 alphabet until a character not in that alphabet is encountered.Such characters include control characters such as carriage returns and line feeds; thus, a Unicode shifted sequence always terminates at the end of a line. As a special case, if the sequence terminates with the character "-" (US-ASCII decimal 45) then that character is absorbed; other terminating characters are not absorbed and are processed normally. "
    So, for those characters you must use base64 decoding.
    You can find a working class which provides decoding at:
    http://kevinkelley.mystarband.net/java/Base64.java
    The other chars remain unchanged.
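    For the single shifted run in the example above, the rule can be tried out with the standard-library decoder. This is only a sketch: real UTF-7 uses a modified Base64 alphabet and needs proper scanning for the '+' and '-' shift characters, but for one clean run the plain decoder suffices:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class Utf7Run {
        public static void main(String[] args) {
            // The shifted sequence "+AEA-" carries the Base64 run "AEA",
            // which decodes to the UTF-16BE bytes {0x00, 0x40}.
            byte[] utf16 = Base64.getDecoder().decode("AEA");
            String decoded = new String(utf16, StandardCharsets.UTF_16BE);
            System.out.println(decoded); // the '@' character
        }
    }
    ```

    The surrounding "fff" and "xxx.com" are plain US-ASCII and pass through unchanged, as the quoted RFC text says.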
    Regards
    BG
