ISO-8859 & UTF-8 default encoding

Hi:
My data comes to me in the ISO-8859 encoding, so it contains
umlauts (double dots over a character) and eszetts. Whenever I
try to run
java OracleXML putXML
I get the error 'Invalid UTF8 encoding'. I have even tried
declaring the encoding in the first line of the data as
<?xml version="1.0" encoding="ISO-8859-1?>
or
<?xml version="1.0" encoding="ISO8859_1"?>
As a test, I manually removed those characters from the input
file and tried again. This time the error is
'PI names starting with 'xml' are reserved'
which means I have to remove the first line as well. So I assume
the parser is not reading the first line at all.
Otherwise, could you please tell me where exactly I am making a
mistake?
Thanks

Prakash (guest) wrote:
: Yeah, I am using v2.0.0.2, which I downloaded from OTN a few
: days back. Since I was facing the problem of uploading control
: characters with your previous version, I switched to v2. But
: here I am facing this problem.
: V Prakash
: Oracle XML Team wrote:
: : Prakash (guest) wrote:
: : : [snip: original question quoted above]
: : Are you using v2.0.0.2 of the Java XML Parser? If not, please
: : try it.
: : Oracle XML Team
: : http://technet.oracle.com
: : Oracle Technology Network
Please try again with the production version (2.0.2) of the
parser, which has just been released. If you are still having
problems, please send an email to [email protected] with your
XML file and DTD (if applicable) as attachments.
Oracle XML Team
http://technet.oracle.com
Oracle Technology Network
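
A general workaround, independent of the parser version, is to transcode the data file to UTF-8 before loading it, so the bytes actually match a UTF-8 default. A minimal sketch using only java.io (file names are hypothetical; this assumes the input really is ISO-8859-1):

import java.io.*;

public class Latin1ToUtf8 {
    public static void main(String[] args) throws IOException {
        // Decode the bytes as ISO-8859-1 and re-encode them as UTF-8, so a
        // parser defaulting to UTF-8 sees well-formed multibyte sequences.
        Reader in = new InputStreamReader(new FileInputStream("data-latin1.xml"), "ISO-8859-1");
        Writer out = new OutputStreamWriter(new FileOutputStream("data-utf8.xml"), "UTF-8");
        char[] buf = new char[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        in.close();
        out.close();
    }
}

After transcoding, the XML declaration (if kept) should say encoding="UTF-8"; note also that the declaration must be the very first characters of the file, with both quotes closed, or the parser may reject it as a reserved 'xml' processing instruction.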

Similar Messages

  • Encoding from a UTF-8 encoded String to the Microsoft Project default encoding

    Hi experts,
    I have a problem encoding a String from UTF-8 in order to write an MPX (Microsoft Project) file. My database encoding is UTF-8, and I want to write an MPX file using the MPXJ library, but the result contains '?' characters. I think this is because I haven't yet converted to Shift_JIS (the Microsoft Project default encoding). I tried encoding the String as Shift_JIS, but the same result appeared. I have looked for another way, but found nothing.
    I hope some expert would help me to solve this problem.
    Thank you,
    Alfian B.

    Totally wrong. A String doesn't have an encoding.
    Now, if you had an array of bytes which were encoded using one charset, and you wanted to convert that to an array of bytes encoded using a second charset, you would use code like this:
    byte[] bytes = ...;                          // the bytes encoded in UTF-8, let's say
    String s = new String(bytes, "UTF-8");       // decode that into a String
    byte[] newbytes = s.getBytes("windows-31j"); // encode the String into windows-31j
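    To make that concrete, a small self-contained example of the same round trip (the sample text is arbitrary; windows-31j is Windows code page 932, a Shift_JIS superset):

    import java.io.UnsupportedEncodingException;

    public class Transcode {
        public static void main(String[] args) throws UnsupportedEncodingException {
            byte[] utf8 = "プロジェクト".getBytes("UTF-8");      // bytes encoded in UTF-8
            String s = new String(utf8, "UTF-8");                // decode them into a String
            byte[] sjis = s.getBytes("windows-31j");             // re-encode as windows-31j
            System.out.println(new String(sjis, "windows-31j")); // prints プロジェクト again
        }
    }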

  • Message mapping : UDF parameter string type versus default UTF-8 encoding

    Hi,
    I'm facing an issue with character encoding when using a UDF to transform into base64 encoding.
    While thinking about the subject, I'm not 100% sure it's possible to get it to work correctly.
    Given:
    - The input XML is encoded in UTF-8 (with a special character).
    - The UDF is generated with the Java parameter type 'string' (= a sequence of 16-bit Unicode characters).
    Doubts:
    - What is supposed to happen when node content (of a message encoded in UTF-8) is used as input for the UDF string parameter? Is the node content decoded/encoded correctly by PI automatically (at input/output, versus the internal 16-bit Unicode character string)?
    (I would assume yes.)
    - Is the default charset of the underlying JVM relevant? Or does PI always use explicit charsets when encoding/decoding?
    (I would assume it's not relevant.)
    The UDF Java code treats the string as an array of chars while processing it; it uses the methods .length() and .charAt() on the input string.
    The result is that I get an ISO-8859 encoded string! (after decoding it back from the base64)
    What could cause this?
    regards
    Dirk
    PS: If I simply use default functions (concat etc.) then the resulting XML stays correctly encoded...

    Hi,
    But that would mean that a UTF-8 encoded byte array is passed unconverted into the UTF-16 Unicode string parameter?
    Shouldn't that trigger an exception?
    I'm going to run some tests and see whether they improve my (empirical) understanding.
    I'll keep you updated,
    thanks
    dirk
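
    For what it's worth, in plain Java the usual culprit for this symptom is calling String.getBytes() without an explicit charset before base64-encoding: that uses the JVM's default charset, which can be ISO-8859-1 or cp1252. A minimal sketch with the charset made explicit (plain Java 8+, not PI-specific; whether PI's UDF runtime behaves the same way is an assumption to verify):

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class B64Utf8 {
        // Encode the string's characters as UTF-8 bytes explicitly before
        // base64; a bare s.getBytes() would use the platform default charset.
        public static String encode(String s) {
            return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
        }

        public static String decode(String b64) {
            return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
        }
    }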

  • How to make UTF-8 the default encoding in Firefox 4.0

    How do I make UTF-8 the default encoding in Firefox 4.0?

    The default encoding that you set in Firefox/Tools > Options > Content is only used if a server doesn't send an encoding via the HTTP response header and if there is also no meta tag or other indication in the file.

  • Romaji yen sign in Terminal in the UTF-8 encoding

    Hello all,
    I have a MacBook Pro with a Japanese keyboard running Mac OS X 10.6.2. In Romaji mode, the Japanese keyboard has a dedicated yen sign (¥) key, and Option-¥ produces a backslash (\). In Terminal, for some reason, the ¥ key produces \ without the Option modifier. (Option-¥ also produces \ in Terminal, which is normal behavior.)
    A similar situation was discussed in an older topic, http://discussions.apple.com/thread.jspa?messageID=10665836 , where the problem was diagnosed as having the Shift JIS encoding enabled in Terminal. However, this doesn't reflect my situation, since the only encoding that is enabled in my Terminal is UTF-8, and there's certainly a yen sign available in UTF-8.
    I am able to type other UTF-8 characters in Terminal in Romaji mode; for example, I can type Option-e e to produce é, and entering the command *echo é | od -x* within Terminal shows that the correct UTF-8 byte sequence is generated for é. Since the command *echo -e '\0302\0245'* within Terminal will produce a yen sign there, the problem seems to be connected to the key mapping rather than to a stty interface problem.
    Is there anyone running 10.6.2 with a Japanese keyboard who can type the ¥ key in Romaji mode in Terminal with the UTF-8 encoding enabled, and have a yen sign appear rather than a backslash?
    (This topic was initially posted in the +Installation and Setup+ forum, and I've taken the advice of a kind soul there to repost the topic in this forum.)

    I don't know the exact reason why ¥ is forcefully converted to \ in Terminal (even in the UTF-8 encoding), and in any case it would be better to have an option to turn off this conversion (or there may already be a hidden option I can't find).
    But the conversion may be helpful for many users, for the following reasons:
    I guess there is no key for backslash on the Japanese keyboard of the MacBook Pro. If that is the case, then being able to input \ by just hitting the ¥ key (instead of typing Option-¥) may be useful for many Terminal users, because \ is used much more frequently than ¥ in programs. Kotoeri has an option to swap the ¥ and Option-¥ keys (so that hitting the ¥ key inputs \ and Option-¥ inputs ¥), but this setting is global (i.e., not restricted to Terminal.app), so making it the default would confuse most Japanese users: they may not use Terminal.app at all, but they do use ¥ as the currency symbol in other apps. Even Terminal users would use ¥ more frequently than \ in apps other than Terminal, and so would not want to modify the global setting.
    Another reason may be that there are still many Japanese programming textbooks which use ¥ as the escape character (I guess you know why). For example, the first C program looks like: printf("Hello World!¥n"); Many beginners will try to input ¥ exactly as written in the textbook, without knowing that the escape character should be \, not ¥. Converting ¥ to \ may be helpful for these users (of course they will be surprised to see \ rather than ¥ appear on the screen, but the program will work).
    You can send a bug report or feature request at:
    http://www.apple.com/feedback/macosx.html

  • CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

    Hi All,
    I have some issues converting an ANSI encoded file to a UTF-8 encoded file. Let me explain in detail.
    I have installed language support for Thai on my operating system.
    Now, when I open Notepad, add Thai characters to a file, and save it with ANSI encoding, it saves perfectly, and I can see the text when I open the file again.
    This file needs to be read by my application, stored in the database, and the Thai characters displayed on a JSP after fetching the data from the database. Currently the JSP shows junk characters, because my database (a UTF-8 compliant database) has junk data; it has junk data because my application is not able to read the file correctly.
    If I save the file with UTF-8 encoding, it works fine, but my business requirement is that the file is system generated and encoded in ANSI format by default. So I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
    Regards
    Gaurav Nigam

    Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
    As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voilá" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.
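
    Following that advice, a minimal transcoding sketch (TIS-620 is the Thai charset suggested above, and is an assumption to verify against how the file is actually generated; file names are hypothetical):

    import java.io.*;

    public class AnsiToUtf8 {
        public static void main(String[] args) throws IOException {
            // Decode the bytes with the Thai charset, re-encode them as UTF-8.
            Reader in = new InputStreamReader(new FileInputStream("input-ansi.txt"), "TIS-620");
            Writer out = new OutputStreamWriter(new FileOutputStream("output-utf8.txt"), "UTF-8");
            char[] buf = new char[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            in.close();
            out.close();
        }
    }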

  • UTF-8 encoding trouble

    I need to use UTF8 encoding throughout a site. For that purpose, I have the following
    tags on JSP:
    <%@ page contentType="text/html; charset=UTF-8" %>
    <meta http-equiv="Content-Type" CONTENT="text/html; charset=UTF-8">
    Next, in my weblogic.xml, I have the following:
    <jsp-param>
         <param-name>encoding</param-name>
         <param-value>UTF8</param-value>
    </jsp-param>
    <charset-params>
         <input-charset>
              <resource-path>*.jsp</resource-path>
              <java-charset-name>UTF8</java-charset-name>
         </input-charset>
    </charset-params>
    Having configured this, I have two simple JSP files. The first one submits a field
    (whose contents I enter in Greek), and the second page writes them to a file. The
    code for writing to a file looks like this:
    FileOutputStream of = new FileOutputStream (fileName, false);
    OutputStreamWriter ow = new OutputStreamWriter (of, "UTF-8");
    ow.write (request.getParameter("test"));
    When I enter the Greek character Alpha as input, the file has a weird string +I in
    it. To fix the problem, I did the following (and it works):
    String s = request.getParameter ("TestName");
    byte b[] = new byte [5000];
    b = s.getBytes ();
    s = new String (b, "UTF-8");
    writeToFile (s);
    This means that for some reason the page gets the right String, but it seems to have been decoded with the default encoding (not UTF-8). When I convert it into bytes and create another String from the same bytes with a different encoding, I get a correct UTF-8 encoded string. Please also note that the same problem occurs with the DB as well (Oracle 8.1.7 with UTF8 on Win2k), and fixing the code as above fixes the problem at both the file and the database level.
    Rather than the above workaround, what's the proper way to accomplish this?
    Thanks,
    Raja

    In GlassFish I have now changed the following. Under each listener, both for Network Listeners and Protocols, there is an HTTP tab, and under that I have changed these settings:
    Network Config
    Network Listeners
    http-listeners-1
    http-listeners-2
    admin-listeners
    Protocols
    http-listeners-1
    http-listeners-2
    admin-listeners
    URI Encoding: UTF-8
    Default Response Type: text/plain; charset=UTF-8
    Forced Response Type: text/plain; charset=UTF-8
    So when I run curl in a terminal window I get this response:
    Macintosh:~ jespernyqvist$ curl -I http://neptunediving.com/neptune/index.jsp
    HTTP/1.1 200 OK
    Date: Mon, 17 May 2010 04:14:17 GMT
    Server: GlassFish v3
    X-Powered-By: JSP/2.1
    Content-Type: text/html;charset=UTF-8
    Content-Language: en-US
    Transfer-Encoding: chunked
    Set-Cookie: JSESSIONID=478269c08e050484d1d6fa29fc44; Path=/neptune
    As you can see, my HTTP header now looks good: no more charset=iso-8859-1. The only problem is that there is no space in text/html;charset=UTF-8. Shouldn't it be text/html; charset=UTF-8 instead? I have noticed that these values can be case sensitive, so maybe this is a problem for me?
    At the top of my page I have this:
    <%@page import="com.neptunediving.*"%>
    <%@include file="WEB-INF/include/LangSupport.jsp"%>
    <%@page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
    In my header I have this:
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    I have changed the Eclipse preferences to use UTF-8, and I have gone through all the properties files in my project and changed them to UTF-8 as well. So what else is there to change?
    Still my page is not displayed properly, in any browser: Safari, Firefox, Opera, or Internet Explorer. So what is wrong with my page, since this doesn't work for me? Can anybody please explain this to me?
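
    Back to the original WebLogic question: the standard servlet-level fix is to force the request encoding before the first getParameter() call, instead of re-decoding the bytes afterwards. A minimal filter sketch (javax.servlet API; the class name is made up, and the filter must be mapped in web.xml to take effect):

    import java.io.IOException;
    import javax.servlet.*;

    public class Utf8Filter implements Filter {
        public void init(FilterConfig config) {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // Must run before any getParameter() call, or the container will
            // already have decoded the request body with its default charset.
            req.setCharacterEncoding("UTF-8");
            chain.doFilter(req, res);
        }

        public void destroy() {}
    }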

  • UTF-8 encoding

    Hi,
    I'm having trouble parsing XML stored in an NCLOB column using UTF-8 encoding.
    Here is what I'm running:
    Windows NT 4.0 Server
    Oracle 8i (8.1.5) EE
    JDeveloper 3.0, JDK 1.1.8
    Oracle XML Parser v2 (2.0.2.5?)
    The following XML sample that I loaded into the database contains two UTF-8 multi-byte characters:
    <?xml version="1.0" encoding="UTF-8"?>
    <G><A>GBotingen, BrC<ck_W</A></G>
    G(0xc2, 0x82)otingen, Br(0xc3, 0xbc)ck_W
    If I'm not mistaken, both multibyte characters are valid UTF-8 encodings and they are defined in ISO-8859-1 as:
    0xC2 LATIN CAPITAL LETTER A WITH CIRCUMFLEX
    0xFC LATIN SMALL LETTER U WITH DIAERESIS
    I wrote a Java stored function that uses the default connection object to connect to the database, runs a Select query, gets the OracleResultSet, calls the getCLOB method and calls the getAsciiStream() method on the CLOB object. Then it executes the following piece of code to get the XML into a DOM object:
    DOMParser parser = new DOMParser();
    parser.setPreserveWhitespace(true);
    parser.parse(istr); // istr is the stream from getAsciiStream()
    XMLDocument xmldoc = parser.getDocument();
    Before the stored function can do other things, this code throws an exception complaining that the above XML contains "Invalid UTF8 encoding".
    Now, when I remove the first mutlibyte character (0xc2, 0x82) from the XML, it parses fine.
    Also, when I do not remove this character but connect via the jdbc:oracle:thin driver (note that in this case I'm no longer running inside the RDBMS as a stored function), the XML is parsed with no problem and I can do whatever I want with the XMLDocument. Note that I loaded the sample XML into the database using the thin JDBC driver.
    One more thing, I tried two database configurations with WE8ISO8859P1/WE8ISO8859P1 and WE8ISO8859P1/UTF8 and both showed the same problem.
    I'll appreciate any help with this issue. Thanks...

    I inserted the document once by using the oci8 driver and once by using the thin driver. Then I used the DBMS_LOB package to look at the individual characters and convert those characters using the ASCII function.
    It looks like when I inserted the document using the OCI8 driver, the characters got converted into a pair of 191 (0xbf) characters. However, when I used the thin driver, they ended up being stored as 195 (0xc3) and 130 (0x82).
    So it looks like the OCI8 driver is corrupting the individual characters, and if the characters are not corrupted they cause the following exception to be thrown:
    Error: 440, SQL execution error, ORA-29532: Java call terminated by uncaught Java exception: java.io.UTFDataFormatException: Invalid UTF8 encoding. ORA-06512: at "SYSTEM.GETWITHSTYLE", line 0 ORA-06512: at line 1
    Note that my other example of a multi-byte character (C<) also gets corrupted by the OCI8 driver, but does not cause the above exception if it's inserted via the thin driver.
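
    One detail worth double-checking in the stored function itself: getAsciiStream() on a CLOB is lossy for anything outside 7-bit ASCII, so reading the LOB as a character stream is safer for multibyte data. A hedged sketch (standard java.sql.Clob API; whether your version of the v2 DOMParser accepts a Reader is an assumption to verify):

    import java.io.Reader;
    import java.sql.Clob;
    import java.sql.ResultSet;
    import oracle.xml.parser.v2.DOMParser;
    import oracle.xml.parser.v2.XMLDocument;

    public class ClobXml {
        // Parse the XML in column 1 of the current row, reading it as
        // characters so multibyte data is not mangled by getAsciiStream().
        public static XMLDocument parse(ResultSet rs) throws Exception {
            Clob clob = rs.getClob(1);
            Reader reader = clob.getCharacterStream(); // not getAsciiStream()
            DOMParser parser = new DOMParser();
            parser.setPreserveWhitespace(true);
            parser.parse(reader); // assumption: this parser version takes a Reader
            return parser.getDocument();
        }
    }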

  • Parsing a UTF-8 encoded XML Blob object

    Hi,
    I am having a really strange problem. I am fetching a database BLOB object containing XML and then parsing the XML. The XML contains some UTF-8 encoded characters, and when I read the XML from the BLOB, these characters lose their encoding. I have tried several things, but no matter what, I am not able to retain their UTF-8 encoding. The characters causing real problems are mainly double quotes, inverted commas, and apostrophes. I am attaching the piece of code below, and you can see certain things I ended up doing. What else can I try? I am using the JAXP parser, but I don't think changing the parser would help, because I store the XML file as I get it from the database, and it gets corrupted at that very first stage; I have to retain the UTF-8 encoding. I tried to get the encoding info from the XML and it reports cp1252 encoding. Where did that come from, and how do I get back to UTF-8?
    Here, temp.xml itself gets corrupted. I have spent some 3 days on this issue. Help needed!!!
    ResultSet rs = null;
        Statement stmt = null;
        Connection connection = null;
        InputStream inputStream = null;
        long cifElementId = -1;
        //Blob xmlData = null;
        BLOB xmlData=null;
        String xmlText = null;
        RubricBean rubricBean = null;
        ArrayList arrayBean = new ArrayList();
          rs = stmt.executeQuery(strQuery);
         // Iterate till result set has data
          while (rs.next()) {
            rubricBean = new RubricBean();
            cifElementId = rs.getLong("CIF_ELEMENT_ID");
                    // get xml data which is in Blob format
            xmlData = (oracle.sql.BLOB)rs.getBlob("XML");
            // Read Input stream from blob data
             inputStream =(InputStream)xmlData.getBinaryStream(); 
            // Reading the inputstream of data into an array of bytes.
            byte[] bytes = new byte[(int)xmlData.length()];
             inputStream.read(bytes);  
           // Get the String object from byte array
             xmlText = new String(bytes);
           // xmlText=new String(szTemp.getBytes("UTF-8"));
            //xmlText = convertToUTF(xmlText);
            File file = new File("C:\\temp.xml");
            file.createNewFile();
            // Write to temp file
            java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));
            out.write(xmlText);
            out.close();

    What the code you posted is doing:

    // Read Input stream from blob data
    inputStream = (InputStream)xmlData.getBinaryStream();

    Here you have a stream containing binary octets which encode some text in UTF-8.

    // Reading the inputstream of data into an array of bytes.
    byte[] bytes = new byte[(int)xmlData.length()];
    inputStream.read(bytes);

    Here you are reading between zero and xmlData.length() octets into a byte array. read(byte[]) returns the number of bytes read, which may be less than the size of the array, and you don't check it.

    xmlText = new String(bytes);

    Here you are creating a string from the data in the byte array, using the platform's default character encoding. Since you mention cp1252, I'm guessing your platform is Windows.

    // xmlText = new String(szTemp.getBytes("UTF-8"));

    I don't know what szTemp is, but xmlText = new String(bytes, "UTF-8"); would create a string from the UTF-8 encoded characters; you don't need to create a string here anyway, though.

    //xmlText = convertToUTF(xmlText);
    File file = new File("C:\\temp.xml");
    file.createNewFile();
    // Write to temp file
    java.io.BufferedWriter out = new java.io.BufferedWriter(new java.io.FileWriter(file));

    This creates a Writer that writes to the file using the platform's default character encoding, i.e. cp1252.

    out.write(xmlText);

    This writes the string to the file using cp1252.

    So you have created a string treating UTF-8 as cp1252, then written that string to a file as cp1252, which is to be read as UTF-8. So it gets mis-decoded twice.

    As the data is already UTF-8 encoded and you want UTF-8 output, just write the binary data to the output file without converting it to a string and back again:

    // not tested, as I don't have your Oracle classes
    final InputStream inputStream = new BufferedInputStream((InputStream)xmlData.getBinaryStream());
    final int length = (int)xmlData.length();
    final int BUFFER_SIZE = 1024;                // these two can be
    final byte[] buffer = new byte[BUFFER_SIZE]; // allocated outside the method
    final OutputStream out = new BufferedOutputStream(new FileOutputStream(file));
    for (int count = 0; count < length; ) {
        final int bytesRead = inputStream.read(buffer, 0, Math.min(BUFFER_SIZE, length - count));
        if (bytesRead < 0) break; // stream ended early
        out.write(buffer, 0, bytesRead);
        count += bytesRead;
    }
    out.close();

    Pete

  • Soap adapter default encoding

    Hi there,
    we have an XI <-> web service scenario, where the XML messages are sent in a string (the WSDL is wrapped, literal) with digital signing.
    As of now, we have a problem with the digital signing, since the web service won't validate the signature. We ran some tests, and this is what I've concluded.
    From what I've observed, there are three ways of sending XML file content in a string (let me know if there are more). They are:
    1. putting the xml content between "<![CDATA[" and "]]>";
    2. replacing '<', '>' and '"' with "&lt;", "&gt;" and "&quot;";
    3. replacing '<' and '>' with "&#60;" and "&#62;".
    2 and 3 appear to be different encodings.
    We tried sending three messages to a test web service, each with a different way of defining the string, using the receiver SOAP adapter. This test web service was located on a local machine and we ran a sniffer on it. The three incoming messages were logged by the sniffer, but the upsetting news was that all three had the same encoding in the input string, namely the third one above ("&#60;" and "&#62;", which, by the way, seems to be the default with UTF-8).
    This only occurs when using the SOAP adapter. I tested the File adapter and it sent the file exactly as delivered by the sender service (meaning, with the different encodings or with CDATA). We even thought about using a module processor to replace the characters in the message, but since any custom module processor runs before the main adapter module processor, it won't have any effect on the final message.
    Using XMLSpy, I sent some test messages to our digital signer (which is also a web service) in the three ways, and it returned a different output when using the "&lt;" encoding in the input. Then I sent this signed message to the web service and it validated the signature!
    We suspect that the web server (which is IBM's WebSphere) always converts the input string to "&lt;", and when they calculate the message hash it's different from what we sent (since we always sign a "&#60;" encoded message).
    My question is: is there any way to change the default encoding that the SOAP adapter module processor uses? Or at least to keep it from changing the encoding of the payload? If that were possible, we would send the message to the signer web service with "&lt;" encoding and the problem would be solved.
    Thanks in advance,
    Henrique.

    Morten,
    actually, we've found out that &#60; = &lt;, in both UTF-8 and ISO-8859-1 encodings. It makes no difference if you use one or the other.
    The point that was troublesome for us was the special characters (like á, â, à, ã, é, etc.), which have different encodings in UTF-8 and ISO-8859-1. Since we had identified that the service we were accessing wasn't able to handle these characters properly, we just removed them from the string fields, and that worked for us.
    Hope it helps you out.
    Regards,
    Henrique.
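
    The equivalence Henrique mentions is easy to check with any XML parser: &#60; is the numeric character reference and &lt; the predefined entity for the same character, so both parse to a plain '<'. A minimal sketch with JAXP:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class EntityCheck {
        public static void main(String[] args) throws Exception {
            DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            // Two spellings of the same document: both elements contain '<'.
            Document a = db.parse(new ByteArrayInputStream("<r>&lt;</r>".getBytes("UTF-8")));
            Document b = db.parse(new ByteArrayInputStream("<r>&#60;</r>".getBytes("UTF-8")));
            System.out.println(a.getDocumentElement().getTextContent()); // <
            System.out.println(b.getDocumentElement().getTextContent()); // <
        }
    }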

  • Problems with Default Encoding in Mail

    I have been trying to change my default encoding setting in Apple Mail to Unicode (UTF-8). This has become necessary because some of the Outlook clients that my colleagues use see my messages with control characters/question marks. I know this problem has been visited before, and from past posts I understand that closing Mail and typing the following in the Terminal will do the trick:
    defaults write com.apple.mail NSPreferredMailCharset "UTF-8"
    I tried that several times and for some reason when I try to compose a new message, the encoding is set to "Automatic".
    I would appreciate ANY help in this regard. I have been trying to get more information on this but my search has not yet been fruitful.
    Thank you very much.
    MacBook Pro   Mac OS X (10.4.7)  

    I have the same problem as Steathford does. I tried to add a dingbat (a scissor) without success; I still have the same "Chinese signs" everywhere.
    I've never seen a case where the dingbat did not work. Could you send me an example? You need to go to your Sent folder, choose a message where you did this, do Message > Send Again, and replace the original addressee with my email (tom at bluesky dot org).
    When adding a dingbat, you must use the Character Palette as shown here, not just change your font.
    http://homepage.mac.com/thgewecke/dingbat.jpg

  • Japanese characters in a UTF-8 encoded .txt file are not showing up correctly since upgrade to 4.0

    I frequently use Firefox to look at Japanese language UTF-8 .txt files on my local disk. Since the 4.0 upgrade the Japanese text looks like this "牢獄 ろうごく". I've played with various options but nothing seems to work.

    Fixed the problem. I set the default encoding to UTF-8 under Content > Fonts & Colors > Advanced. It doesn't seem to have a negative effect on other pages either.

  • Incorrect UTF-8 encoded date in XML reports under German Win in March (IE error)

    TestStand XML reports are marked as UTF-8 encoded, but reports generated under German Win2k in March (written "März" in German) cannot be displayed in Internet Explorer, because the umlaut character of the month name is not correctly UTF-8 encoded.

    Hi
    I have attached the modified modelsupport2.dll and the ReportGen_Xml.seq which fixes the problem. I also attached the modified report.c and modelsupport2.fp files.
    If you have not made changes to modelsupport2.dll and reportgen_xml.seq you can add the modified files to \Components\User\Models\TestStandModels\ folder and the TestStand engine should use the version under the user folder.
    If you have made changes to ReportGen_Xml.seq and ModelSupport2.dll then you will need to move the changes in the below files to the files under the User folder.
    FYI: If you want to create a new component or customize a TestStand component, copy the component files from the NI subdirectory to the User subdirectory before customizing. This ensures that installations of newer versions of TestStand do not overwrite your customization. If you copy the component files as the basis for creating a new component, be sure to rename the files so that your customizations do not conflict with the default TestStand components.
    The TestStand Engine searches for sequences and code modules using the TestStand search directory path. The default search precedence places the \Components\User directory tree before the \Components\NI directory tree. This ensures that TestStand loads the sequences and code modules that you customize instead of loading the default TestStand versions of the files.
    I hope this helps.
    Regards
    Anand Jain
    National Instruments.
    Attachments:
    ModifiedFiles.zip (384 KB)

  • How to change UTF-8 encoding for XML parser (PL/SQL)?

    Hello,
    I'm trying to parse an XML file stored in a CLOB.
    p := xmlparser.newParser;
    xmlparser.parseCLOB(p, CLOB_xmlBody);
    The standard PL/SQL parser encoding is UTF-8, but my XML CLOB contains ISO-8859-2 characters.
    Can you advise me, please, how to change encoding for parser?
    Any help would be appreciated.

    Do your documents contain an XML declaration like this at the top?
    <?xml version="1.0" encoding="ISO-8859-2"?>
    If not, they need to. The XML 1.0 specification says that if an XML declaration is not present, the processor must default to assuming the document is in UTF-8 encoding.

  • Default Encoding for Mail

    Mail on the iPod touch seems to show all mail in ISO-8859 or so.
    The problem is I sometimes have to read mails in Chinese/Japanese encoded in EUC-CN/EUC-JP etc.
    Is it possible to change the default encoding for Mail?

    Same problem here. Sometimes I can read Chinese/Japanese mails; sometimes the mails are unreadable. Why is Mail on the Mac able to read all mails, while Mail on the iPod touch has encoding problems? I thought the iPod touch would be a better portable device than Windows CE in terms of internationalization.

  • Changing Apple Mail default encoding to unicode

    Is it possible to change the mail default encoding to Unicode (UTF-8)?
    thanks

    BlueBird1958 wrote:
    After sending my mails, they become unreadable symbols.
    What does that mean? Are the recipients unable to read them? Or are you talking about how they look on your machine, for example in the Sent folder of the Mail app?
    Normally Hebrew would always be sent in Unicode.
    But it certainly cannot hurt to add a Unicode dingbat to a test message and see whether it makes a difference for your problem.
