Verifying 8859-2 characters

I have a document that contains Polish characters (Latin-2, ISO 8859-2).
When I sign the document and then try to verify it, verification fails.
When I remove the Polish characters, I can sign and verify the document with no problem.
We load the document into a DOM before doing the signing etc. (Apache XML Security package).
Even before writing it to a file, we try to verify the signature on the byte stream, and that fails as well.
The big problem is that we do not know where the real signed documents will come from or what character set the data is signed in. If it is signed natively in 8859-2 and Java converts it to UTF-8 before verifying, then we will never get a match.
Has anyone worked with signatures in a non-ASCII character set?
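
One thing worth checking: Canonical XML, the form that XML Signature digests, is always emitted as UTF-8, so the document's original character set should not matter as long as both sides decode the bytes correctly. In practice that means parsing from an InputStream rather than a Reader, so the parser honours the encoding named in the XML declaration. Below is a minimal verification sketch against the Apache XML Security API; treat the exact class and method names as assumptions to check against your library version.

import java.io.FileInputStream;
import java.security.PublicKey;
import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.xml.security.signature.XMLSignature;
import org.apache.xml.security.utils.Constants;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class VerifyLatin2 {
    public static boolean verify(String file, PublicKey key) throws Exception {
        org.apache.xml.security.Init.init();
        // Parse from a byte stream so the parser applies the encoding named
        // in the XML declaration (e.g. ISO-8859-2) rather than a platform
        // default; namespace awareness is required for XML Signature.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new FileInputStream(file));
        Element sigEl = (Element) doc.getElementsByTagNameNS(
                Constants.SignatureSpecNS, "Signature").item(0);
        XMLSignature signature = new XMLSignature(sigEl, "");
        // Canonicalization re-encodes the signed content as UTF-8 on both
        // the signing and verifying side, so a document signed natively in
        // 8859-2 should still verify if the parse above decoded correctly.
        return signature.checkSignatureValue(key);
    }
}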


Similar Messages

  • Problems reading Latin2 (ISO 8859-2) characters

    Hello!
    I want to read the content of an MS Access table (in an MDB file) using the JDBC-ODBC bridge driver.
    The program works well but there is a character conversion problem when I read text fields from the table.
    The Latin2 (ISO 8859-2) characters like áéíóőűüöÁÉÍÓÜÖŰŐ are replaced by the "?" character.
    I use the ResultSet object's getString() method.
    Any idea about how to solve this problem?

    Try to change the session encoding from the default to iso-8859-2.
    This would probably help:
    http://download.oracle.com/javase/1.4.2/docs/guide/jdbc/bridge.html
    What's New with the JDBC-ODBC Bridge?
    * A jdbc:odbc: connection can now have a charSet property, to specify a Character Encoding Scheme other than the client default.
    For possible values, see the Internationalization specification on the Web Site.
    The following code fragment shows how to set 'Big5' as the character set for all character data.
    // Load the JDBC-ODBC bridge driver
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    // setup the properties
    java.util.Properties prop = new java.util.Properties();
    prop.put("charSet", "Big5");
    prop.put("user", username);
    prop.put("password", password);
    // Connect to the database
    con = DriverManager.getConnection(url, prop);
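    For the Latin-2 case in the question above, the same pattern should apply with the Java name for ISO 8859-2. A sketch, assuming a DSN called "MyAccessDsn" and that your JDK accepts ISO8859_2 as the encoding name:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;
    public class Latin2Connect {
        public static void main(String[] args) throws Exception {
            // Load the JDBC-ODBC bridge driver (shipped up to JDK 7)
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Properties prop = new Properties();
            prop.put("charSet", "ISO8859_2"); // Java encoding name for Latin-2
            prop.put("user", "username");     // placeholder credentials
            prop.put("password", "password");
            Connection con = DriverManager.getConnection("jdbc:odbc:MyAccessDsn", prop);
            // ResultSet.getString() should now decode Latin-2 text correctly
            con.close();
        }
    }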

  • ISO-8859-1 characters in xmlDom.domDocument

    -- Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    -- JServer Release 8.1.7.0.0 - Production
    -- Oracle XML Parser 2.0.2.9.0 Production
    -- OS: Windows 2000 Professional
    -- NLS_LANG in Oracle is: AMERICAN_AMERICA.UTF8
    -- NLS_LANG in registry (client) is set to: SWEDISH_SWEDEN.WE8ISO8859P1
    -- Description: Getting corrupt characters instead of the
    -- Swedish characters "edv" after parsing the clob.
    -- Output after running this script in sqlplus:
    --| BEFORE
    --| -----------------------------------------------------------------------
    --| AFTER
    --| -----------------------------------------------------------------------
    --| <?xml version="1.0" encoding="ISO-8859-1"?><asdf>aaa edv aaa</asdf>
    --| <?xml version = '1.0' encoding = 'ISO-8859-1'?>
    --| <asdf>aaa ??? aaa</asdf>
    --|
    set serveroutput on
    drop table xmltest;
    create table xmltest
    (before clob
    ,after clob);
    declare
    beforeClob clob;
    afterClob clob;
    xdoc xmldom.domdocument;
    parser xmlparser.Parser;
    begin
    insert into xmltest
    values('<?xml version="1.0" encoding="ISO-8859-1"?><asdf>aaa edv aaa</asdf>', empty_clob())
    returning after into afterClob;
    select before
    into beforeClob
    from xmltest;
    parser := xmlparser.newParser;
    xmlparser.parseCLOB(parser,beforeClob);
    xdoc := xmlparser.getDocument(parser);
    xmlparser.freeParser(parser);
    dbms_output.put_line('Parsed xml charset: '||xmldom.getCharset(xdoc));
    xmldom.writeToClob(xdoc, afterClob, 'WE8ISO8859P1');
    commit;
    end;
    select * from xmltest;

    Hi,
    This is a known issue. Within a CLOB, the Oracle DB will always store data in UTF-8, so the encoding settings will not take effect.
    Thanks.

  • Polish (iso-8859-2) characters in JSP don't display properly...

    I created a test JSP file:
    <%@page contentType="text/html; charset=iso-8859-2"%>
    <html>
    <head>
    <meta http-equiv="Content-type" content="text/html; charset=iso-8859-2">
    </head>
    <body>
    2+2=<%= 2+2 %><br>
    <!-- these are Polish-specific characters -->
    ąćęłńóśźż
    ĄĆĘŁŃÓŚŹŻ
    <br>
    </body>
    </html>
    The problem is that one of the Polish-specific characters gets turned into
    a question mark (o-acute: "ó" and the capitalized "Ó").
    I searched the group archives but didn't find anything related to this
    problem.
    Lukasz Kowalczyk

    Try using ISO8859_2 in place of iso-8859-2 in the @page directive and in the charset= of the meta tag. Also, in the weblogic.properties file, in the WEBLOGIC JSP PROPERTIES section, add the following lines:
    verbose=true,\
    encoding=ISO8859_2
    It will work. I have done the same thing for SJIS just now.
    Keep me informed about it.
    Nikhil
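    Put together, Nikhil's suggestion amounts to a page like this (a sketch; ISO8859_2 is the Java-style name for the Latin-2 encoding, used in both places as he describes):
    <%@page contentType="text/html; charset=ISO8859_2"%>
    <html>
    <head>
    <meta http-equiv="Content-type" content="text/html; charset=ISO8859_2">
    </head>
    <body>
    <!-- Polish-specific characters -->
    ąćęłńóśźż
    ĄĆĘŁŃÓŚŹŻ
    </body>
    </html>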

  • Problems with reading XML files with ISO-8859-1 encoding

    Hi!
    I am trying to read an RSS file. The code below works with XML files in UTF-8 encoding but not in ISO-8859-1. How can I fix it so it works with both?
    Here's the code:
    import javax.xml.parsers.*;
    import org.w3c.dom.*;
    /** @author gustav */
    public class RSSDocument {
        /** Creates a new instance of RSSDocument */
        public RSSDocument(String inurl) {
            try {
                DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
                Document doc = builder.parse(inurl);
                NodeList nodes = doc.getElementsByTagName("item");
                for (int i = 0; i < nodes.getLength(); i++) {
                    Element element = (Element) nodes.item(i);
                    NodeList title = element.getElementsByTagName("title");
                    Element line = (Element) title.item(0);
                    System.out.println("Title: " + getCharacterDataFromElement(line));
                    NodeList des = element.getElementsByTagName("description");
                    line = (Element) des.item(0);
                    System.out.println("Des: " + getCharacterDataFromElement(line));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        public String getCharacterDataFromElement(Element e) {
            Node child = e.getFirstChild();
            if (child instanceof CharacterData) {
                CharacterData cd = (CharacterData) child;
                return cd.getData();
            }
            return "?";
        }
    }
    And here's the error message:
    org.xml.sax.SAXParseException: Teckenkonverteringsfel: "Malformed UTF-8 char -- is an XML encoding declaration missing?" (radnumret kan vara för lågt).
    [The Swedish parts mean "character conversion error" and "(the line number may be too low)".]
        at org.apache.crimson.parser.InputEntity.fatal(InputEntity.java:1100)
        at org.apache.crimson.parser.InputEntity.fillbuf(InputEntity.java:1072)
        at org.apache.crimson.parser.InputEntity.isXmlDeclOrTextDeclPrefix(InputEntity.java:914)
        at org.apache.crimson.parser.Parser2.maybeXmlDecl(Parser2.java:1183)
        at org.apache.crimson.parser.Parser2.parseInternal(Parser2.java:653)
        at org.apache.crimson.parser.Parser2.parse(Parser2.java:337)
        at org.apache.crimson.parser.XMLReaderImpl.parse(XMLReaderImpl.java:448)
        at org.apache.crimson.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:185)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
        at getrss.RSSDocument.<init>(RSSDocument.java:25)
        at getrss.Main.main(Main.java:25)

    I read files from the web, but there is an XML tag
    with the encoding attribute in the RSS file.

    If you are quite sure that you have an encoding attribute set to ISO-8859-1, then I expect that your RSS file contains a non-ISO-8859-1 character, though I thought all bytes from -128 to 127 were valid ISO-8859-1 characters!
    Many years ago I had a problem with an XML file containing invalid characters. I wrote a simple filter (using FilterInputStream) that made sure all the bytes it processed were ASCII. My problem turned out to be characters with value zero, which the Microsoft XML parser failed to process. It put the parser in an infinite loop!
    In the filter, as each byte is read you could write out its hex value. That way you should be able to find the offending character(s); see the sketch below.
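    A minimal sketch of such a filter (the class name and the reporting format are illustrative, not from the original post):
    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    /** Passes bytes through unchanged while reporting any NUL or non-ASCII
     *  byte, so the offending characters can be located. */
    public class AsciiAuditInputStream extends FilterInputStream {
        private long pos = 0;
        public AsciiAuditInputStream(InputStream in) {
            super(in);
        }
        @Override
        public int read() throws IOException {
            int b = in.read();
            if (b != -1) {
                if (b > 0x7F || b == 0x00) {
                    System.err.printf("offset %d: suspicious byte 0x%02X%n", pos, b);
                }
                pos++;
            }
            return b;
        }
        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            // Route bulk reads through read() so every byte is audited
            int n = 0;
            for (; n < len; n++) {
                int b = read();
                if (b == -1) {
                    return n == 0 ? -1 : n;
                }
                buf[off + n] = (byte) b;
            }
            return n;
        }
    }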

  • HTTP-Receiver: Code page conversion error from UTF-8 to ISO-8859-1

    Hello experts,
    In one of our interfaces we are using the payload manipulation of the HTTP receiver channel to change the payload code page from UTF-8 to ISO-8859-1. And from time to time we are facing the following error:
    "Code page conversion error UTF-8 from system code page to code page ISO-8859-1"
    I'm quite sure that this error occurs because of non-ISO-8859-1 characters in the processed message. And here comes my question:
    Is it possible to change the error behaviour of the code page converter, so that the error will be ignored?
    Perhaps the converter could replace the disruptive character with e.g. "#"?
    Thank you in advance.
    Best regards,
    Thomas

    Hello.
    I'm not 100% sure if this will help, but it's good reading material on the subject (:
    [How to Work with Character Encodings in Process Integration (NW7.0)|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42]
    The part about the XSLT / Java mapping might come in handy in your situation:
    you can check for problematic characters in the code, as in the sketch below.
    Good luck,
    Imanuel Rahamim.
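    Not SAP-specific, but if you do handle it in a Java mapping, the JDK's CharsetEncoder supports exactly this replacement behaviour. A minimal sketch (the wrapper's class and method names are illustrative):
    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;
    import java.nio.charset.CodingErrorAction;
    public class Latin1Sanitizer {
        /** Re-encodes text as ISO-8859-1, replacing any character that has
         *  no Latin-1 mapping with '#' instead of raising an error. */
        public static byte[] toLatin1(String text) throws CharacterCodingException {
            CharsetEncoder encoder = Charset.forName("ISO-8859-1").newEncoder()
                    .onMalformedInput(CodingErrorAction.REPLACE)
                    .onUnmappableCharacter(CodingErrorAction.REPLACE)
                    .replaceWith(new byte[] { '#' });
            ByteBuffer out = encoder.encode(CharBuffer.wrap(text));
            byte[] bytes = new byte[out.remaining()];
            out.get(bytes);
            return bytes;
        }
        public static void main(String[] args) throws Exception {
            // The euro sign has no ISO-8859-1 mapping, so it becomes '#'
            byte[] b = toLatin1("price: 5 \u20ac");
            System.out.println(new String(b, "ISO-8859-1")); // price: 5 #
        }
    }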

  • Displaying Chinese characters in SQL*Plus

    DB version: 11.2
    OS Version : AIX 6.1
    DB characterset:AL32UTF8
    To display Chinese characters in SQL*Plus, I did the following:
    $ export LANG=zh_CN.UTF-8
    $ export LC_ALL=zh_CN.GB2312
    $ export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"
    $
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on ÐÇÆÚÈý 5ÔÂ 2 15:52:33 2012
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning option
    SQL> ALTER SESSION SET NLS_LANGUAGE='SIMPLIFIED CHINESE';
    Session altered.
    SQL> ALTER SESSION SET NLS_TERRITORY='CHINA';
    Session altered.
    SQL> select unistr('\8349') from dual;  ---- not 100% sure whether this is the way to verify that Chinese characters can be displayed.
    UN
    ²Ý                 ----> getting a junk character instead of Chinese
    If I were using PuTTY, would the above steps be enough to get Chinese characters displayed?
    Our SSH client is Tectia (not PuTTY).
    According to the ML Note below, the SSH client has to be configured correctly to use globalization features.
    +The correct NLS_LANG setting in Unix Environments [ID 264157.1]+
    Googling "Tectia + Chinese" didn't return useful results.

    I understand that you are talking about a Windows SSH client.
    For Putty, you should set:
    $ export LANG=zh_CN.UTF-8
    $ export LC_ALL=zh_CN.UTF-8
    $ export NLS_LANG="SIMPLIFIED CHINESE_CHINA.AL32UTF8"
    and configure Putty in Window->Translation to use UTF-8.
    There is nothing about this subject on the Tectia website or in their manuals, so my best guess is that the client requires the Windows code page to work correctly. In that case you need to set your Windows system default locale (the locale for non-Unicode programs) to Chinese and use the following settings on the server:
    $ export LANG=zh_CN.GBK
    $ export LC_ALL=zh_CN.GBK
    $ export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK"
    Verify with 'locale -a' that the setting zh_CN.GBK is supported on your system.
    -- Sergiusz

  • [SOLVED]Gnumeric and problem with non-US characters

    I am trying to get Gnumeric to recognize UTF-8 or ISO-8859-15 characters, but there doesn't even seem to be a locale option in the menus. Googling didn't turn up anything useful.
    I tried this:
    LC_ALL=fr_FR gnumeric &
    but got an error on the command line:
    (gnumeric:7559): Gtk-WARNING **: Locale not supported by C library.
            Using the fallback 'C' locale.
    And everything is still in the US format, even the way dates are represented.

    Thanks, rdoggsv - with your advice I got the menus in my language in Gnumeric and, as a bonus, the euro sign now works too.
    But I still don't get characters like ä, ö, õ, ü in Gnumeric. They show up fine in kwrite, konsole, konqueror, OpenOffice and Opera.
    This missing-characters problem seems to be GNOME-specific. For example, I can't get those characters to display in Gimp, graveman or galculator. In AbiWord they are visible in the text editing field, but not, for example, when I try to save a file.

  • Big5 to ISO-8859-1

    Hi, I want to convert a Big5 Chinese character to its ISO-8859-1 representation (&#xxxxx;).
    Here is my code:
    String record = "���~"; // a Big5 string
    System.out.println("BIG5: " + record); // displays OK
    byte[] b = record.getBytes("ISO-8859-1");
    String target = new String(b, "ISO-8859-1");
    System.out.println("Target: " + target);
    But I don't get an ISO-8859-1 code like &#xxxxx;; it gives me ??? instead.
    Please advise.

    ISO-8859-1 DOES have Chinese characters!
    In JSP, when I submit a Chinese value in a form, I'll receive the Chinese characters in ISO-8859-1 encoding and get: &#22909;&#20154; (in Chinese: �n�H)
    And when I put these ISO-8859-1 characters in the value field of a form, the HTML gives me the Chinese characters in Big5 correctly.
    I just don't know how to convert ISO-8859-1 to Big5 in Java and vice versa. Please help.

    ISO-8859-1 doesn't support Chinese characters.
    The reason you can receive Chinese characters from an HTML form is that the charset of that HTML page was set to Big5. When a user makes a request, all request parameters and values are encoded with the charset of the HTML page. The &#22909;&#20154; you are seeing are HTML numeric character references, not ISO-8859-1 codes; a conversion sketch follows.
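    A minimal sketch of the conversion, assuming your JRE ships with the Big5 charset (the class name and example bytes are illustrative; 0xA66E 0xA448 is Big5 for the two characters behind &#22909;&#20154;, and the sketch assumes BMP characters only):
    import java.nio.charset.Charset;
    public class Big5Ncr {
        /** Decodes Big5 bytes to a Java (UTF-16) String, then writes any
         *  non-Latin-1 character as an HTML numeric character reference. */
        public static String toNcr(byte[] big5Bytes) {
            String s = new String(big5Bytes, Charset.forName("Big5"));
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c <= 0xFF) {
                    out.append(c); // fits in ISO-8859-1 as-is
                } else {
                    out.append("&#").append((int) c).append(';');
                }
            }
            return out.toString();
        }
        public static void main(String[] args) {
            byte[] big5 = { (byte) 0xA6, (byte) 0x6E, (byte) 0xA4, (byte) 0x48 };
            System.out.println(toNcr(big5)); // prints &#22909;&#20154;
        }
    }
    For the reverse direction, String.getBytes with the Big5 charset encodes a Java string back into Big5 bytes.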

  • Why does the Oracle XML parser "parse" the DTD comments?

    Hi all,
    I always use the header
    <?xml version = '1.0' encoding='ISO-8859-1' ?>
    to be able to use foreign characters in the XML documents.
    The Oracle XML parser handles this correctly.
    My problem is that when I write comments inside the DTD, the
    parser reports "Invalid UTF8 encoding".
    Why does the parser "parse" the comments? (They are protected by <!-- and -->.)
    How do I say that the DTD encoding is different from UTF-8, like
    ISO-8859-1?
    Here is an example of a correct DTD and corresponding XML that report
    problems, related to the 2nd comment in the DTD specification,
    written with ISO-8859-1 characters.
    The DTD:
    <!-- valid.dtd -->
    <!ELEMENT valid ( B, C ) >
    <!-- valid represents the concept "Identificação" -->
    <!ELEMENT B (#PCDATA) >
    <!ELEMENT C (#PCDATA) >
    The XML:
    <?xml version = '1.0' encoding='ISO-8859-1' ?>
    <!DOCTYPE valid SYSTEM 'valid.dtd'>
    <valid>
    <B>How are you, Conceição</B>
    <C>I'm fine, thank you.</C>
    </valid>
    The parser output:
    [jgr@frontera test-dtd]$ java oracle.xml.parser.v2.oraxml -v
    valid.xml
    Error while parsing input source valid.xml (Invalid UTF8 encoding.)
    Thank you for any help.
    Jorge Gustavo Rocha

    I was wrong in saying that the attributes are not added to the element. My main aim is to add an array of elements to the root node.
    Is there a more efficient manner of adding the elements, rather than adding them individually with the appendChild method?
    Thanks in advance.

  • PL/SQL + Parser: CLOB/Encoding

    I'm trying to create my XML document in PL/SQL, but the method
    setCharset('ISO-8859-1')
    does not work and is ignored. It works only with
    setCharset('UTF8').
    Does anybody know why? Is this a bug?
    There is also another issue:
    We're using the Oracle XML Parser for PLSQL 1.0.2.0.0 on a Windows NT platform.
    We're using the very latest available version of Oracle8i and Java.
    When using the xmldom.WriteToCLOB procedure, most of the non-US-ASCII characters get
    converted to an inverted question mark.
    When we use xmldom.WriteToFile the file contents are correct
    (i.e. all ISO-8859-1 characters show correctly on the NT platform).
    Is it a bug in WriteToCLOB? If yes, is there any chance that it will be fixed shortly?

    1/ We have filed a bug for your first question.
    2/ For your second question, we have noted a limitation in the current PL/SQL API.
    This is mostly because of a CLOB limitation:
    As is known, the character repertoire of a CLOB is limited by the database character set. This limitation may cause data loss if the database character set isn't UTF8.
    Please just ignore this API. In a future release, we will accept a new datatype "xmlobject". This will solve the problem.
    Thanks for pointing out this problem.

  • How to change the UTF-8 encoding for the XML parser (PL/SQL)?

    Hello,
    I'm trying to parse an XML file stored in a CLOB.
    p := xmlparser.newParser;
    xmlparser.parseCLOB(p, CLOB_xmlBody);
    The standard PL/SQL parser encoding is UTF-8, but my XML CLOB contains ISO-8859-2 characters.
    Can you advise me, please, how to change the encoding for the parser?
    Any help would be appreciated.

    Do your documents contain an XML declaration like this at the top?
    <?xml version="1.0" encoding="ISO-8859-2"?>
    If not, then they need to. The XML 1.0 specification says that if an XML declaration is not present, the processor must default to assuming UTF-8 encoding.

  • Charsets - Java - Oracle

    Hi,
    I am trying to transfer data from an Oracle LONG field to a
    CLOB field in another Oracle database.
    The transfer of the data works fine, except for the transfer
    of the euro currency sign.
    The character sets of the databases are set to ISO8859-15.
    This charset is supported on my operating system
    (checked via Charset.availableCharsets()). When I output the hex representation of the
    euro sign I get the following:
    ffffffef, ffffffbf, ffffffbd
    According to the ISO8859-15 code chart this should be 'A4', I guess.
    I have tried various scenarios, e.g. reading the input with
    the encoding "ISO8859-15", and reading it with "windows-1252" and
    converting it to ISO8859-15, but without any success.
    When I read the data via sqlplus (which also works with ISO8859-15)
    I get the euro currency sign and a byte value of 128 (which seems
    to be correct). Here the charset conversion seems to be OK.
    I am using Windows NT with the Oracle JDBC thin client.
    Somehow the Oracle client does some character set conversion that
    I am missing, but nevertheless I am just reading the bytes and
    getting the wrong hex digits!?
    Thanks for any help,
    regards,
    alex
    InputStream druckdat = rs.getBinaryStream(colDruckdat);
    try {
        //inputReader = new InputStreamReader(druckdat, "ISO8859-15");
        byte[] byteArr = new byte[10000];
        int len = druckdat.read(byteArr);
        printHexString(byteArr, len);
        //String string = new String(byteArr, "windows-1251");
    } catch (IOException e) {
        e.printStackTrace();
    }

    void printHexString(byte[] byteArr, int len) {
        for (int k = 0; k < len; k++) {
            if (k % 8 == 0)
                System.out.println();
            // Mask to 0..255: a negative byte sign-extends to an int,
            // which is why 0xEF prints as ffffffef above.
            System.out.print(Integer.toHexString(byteArr[k] & 0xFF));
            System.out.print(", ");
        }
    }

    I am not sure if the problems I had with Oracle are identical to yours, but anyway, here is what I found out:
    I had an Oracle 9i with a database character set of ISO8859P1 and a national character set of UTF8. The problem was that when I wanted to insert or read non-8859-1 characters I just got garbage. The reason was the automatic charset conversion done by the thin driver, which first converts to the database character set before sending the data to the DB. If one uses an NCHAR data type, the DB then converts it to the national character set in use there. This obviously is a problem when the database character set is only a subset of the national character set, resulting in information loss.
    Unfortunately one has to use an Oracle-specific API (OraclePreparedStatement and its setFormOfUse() method, passing the NCHAR form) so that the thin driver doesn't apply this conversion. The problem continues if one dynamically creates a SQL statement and wants to do an executeQuery() (standard JDBC API): it again gets converted automagically by the thin driver, and OracleStatement has no method to suppress the conversion. So to avoid charset conversion one has to use the UNISTR function and encode the string as UTF-16 code points.
    Of (only some) help were the JDBC developer's guide and the globalization docs from Oracle, which you can find on tahiti.oracle.com.
    I suspect you suffer from the same problem, as 8859-1 and 8859-15 differ most prominently in the euro sign, and I guess your database character set is set to ISO8859P1.
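    One extra data point, not from the original thread: once the sign-extension above is masked off, the three bytes EF BF BD are exactly the UTF-8 encoding of U+FFFD, the Unicode replacement character, which would mean the euro sign had already been replaced during an earlier failed conversion. A quick check:
    public class CheckBytes {
        public static void main(String[] args) throws Exception {
            // The bytes the poster observed, with sign-extension masked off
            byte[] seen = { (byte) 0xEF, (byte) 0xBF, (byte) 0xBD };
            String decoded = new String(seen, "UTF-8");
            // Prints 65533, i.e. U+FFFD, the Unicode replacement character
            System.out.println((int) decoded.charAt(0));
        }
    }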

  • Need guidelines on deciding over varchar or nvarchar

    Hi All,
    I need to know following information regarding sorting.
    1. Is linguistic sorting possible for char and varchar columns, or is it only available for nchar and nvarchar columns?
    2. Performance-wise, which would be better (given that my database character set is already UTF-8): char/varchar columns or nchar/nvarchar columns?
    3. I have some database columns of varchar datatype which presently store ISO-8859-1 characters. In the future they are going to store Asian characters as well. My database character set is presently UTF-8. In this situation, would it be better to change the datatype of these columns to nvarchar, or would increasing the length of the columns by 3-4 times be the better choice?
    Any input/pointer would be highly appreciated.
    Regards,
    Sourav

    Hello,
    Before I forget, you should take a look at the whitepapers on the Globalization home page at:
    http://www.oracle.com/technology/tech/globalization/index.html
    Linguistic sorting is supported for varchar and char columns. Performance-wise you may do better migrating your columns to your UTF8 database versus using extra NCHAR columns. You can expand the column sizes as needed to store Asian data, as you said, by a factor of 3 or 4, or you can use character length semantics. Should you decide to use character length semantics, I would advise doing it for the entire database and not just selected columns. You can read more about character length semantics in the Globalization Support Guide.

  • Character problems with xsql:include-xsql reparse="yes"

    I have a problem retrieving XML fragments from CLOB columns.
    Danish ISO-8859-1 characters (aelig, oslash, aring: æ, ø, å) are returned as "?" from Apache/JServ when using xsql:include-xsql reparse="yes".
    My platform is Solaris 9 / Oracle 9.2.0.2 / XDK 9.2.0.4.
    The database character set is we8iso8859p1.
    I'm using the Apache/JServ that comes with Oracle 9.2.0.1.
    Steps to reproduce problem:
    -- Table data:
    create table tab1 (id number,clob_col clob);
    insert into tab1 values(1, '<x>æøå</x>');
    /* inserted characters are aelig(230), oslash(248), aring(229) */
    commit;
    -- test.xsql:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <testdata xmlns:xsql="urn:oracle-xsql" connection="pnrtest">
    <xsql:include-xsql reparse="yes" href="inc.xsql" />
    </testdata>
    -- inc.xsql:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <?xml-stylesheet type="text/xsl" href="unquote_clob_col.xsl"?>
    <xsql:query
    xmlns:xsql="urn:oracle-xsql"
    connection="pnrtest"
    tag-case="lower"
    >
    select clob_col
    from tab1
    </xsql:query>
    -- unquote_clob_col.xsl:
    <xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    version="1.0">
    <xsl:output method="xml" indent="yes" omit-xml-declaration="no" encoding="ISO-8859-1"/>
    <xsl:include href="identity.xsl"/>
    <xsl:template match="clob_col">
    <clob_col>
    <xsl:value-of select="." disable-output-escaping="yes"/>
    </clob_col>
    </xsl:template>
    </xsl:stylesheet>
    -- identity.xsl:
    <!-- The Identity Transformation -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Whenever you match any node or any attribute -->
    <xsl:template match="node()|@*">
    <!-- Copy the current node -->
    <xsl:copy>
    <!-- Including any attributes it has and any child nodes -->
    <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
    </xsl:template>
    </xsl:stylesheet>
    -- Notes:
    Running test.xsql works fine from the XSQL command line, but FAILS through Apache/JServ (Danish characters are returned as "?").
    inc.xsql works fine through both the XSQL command line and Apache/JServ; the problem only happens with xsql:include-xsql reparse="yes" (i.e. test.xsql).
    xsql:include-xml works fine, but I cannot use it, because in my real business case I'm selecting more than one row from the database.
    I've checked and double-checked my jserv.properties several times, and believe it to be correct.
    The xsql:include-xsql reparse="yes" technique works fine in our Solaris9/Oracle-8.1.7/iAS-1.0.2.2 environment.
    Any suggestions?
    -- Peter ([email protected])

    If I put the following line in jserv.properties:
    wrapper.env=LANG=en_US.ISO8859-1
    the problem with xsql:include-xsql reparse="yes" seems to go away.
    Really strange, since Oracle products in my experience normally only use NLS_LANG, not LANG.
    Also, we're accessing several databases with different character sets from the same Apache/JServ installation, so I don't understand why LANG (or NLS_LANG) should be set to one particular value.
    Can anybody explain?
    -- Peter
