Translation UTF8 - ISO-8859-1

Hi there,
When translating a UTF-8 file to ISO-8859-1 while loading that file (using dbms_lob.loadclobfromfile),
I notice that a UTF-8 right single quotation mark ’ (U+2019) is converted to an inverted question mark ¿.
The left single quotation mark is (correctly) translated to U+0060, grave accent `.
My question is: how can I get the right single quotation mark (U+2019) converted to U+00B4, acute accent?
Thanks in advance,
Art

It looks like the Latin-1 string was not converted to UTF-8 during the insertion from ASP, so you are storing Latin-1 encoded data inside a UTF-8 database.
"I thought it would automatically be handled by OO4O." True, but did you set the character set in the NLS_LANG environment variable for the OO4O client to WE8ISO8859P1? If it was set to UTF8, Oracle assumes the data coming through from the ASP page is already UTF-8, so no conversion takes place.
Also, maybe you should check the CODEPAGE directive and the Charset property in your ASP page?
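For illustration only (a hedged Java sketch, not the solution discussed in this thread; the string literal is an assumption): U+2019 simply has no ISO-8859-1 code point, so it has to be remapped to a Latin-1 character such as U+00B4 before the conversion, otherwise a replacement character is substituted.

import java.nio.charset.StandardCharsets;

public class QuoteRemapSketch {
    public static void main(String[] args) {
        String s = "It\u2019s";  // contains RIGHT SINGLE QUOTATION MARK (U+2019)

        // U+2019 has no ISO-8859-1 code point, so a straight conversion loses it:
        // Java substitutes '?', and Oracle's conversion shows an inverted question mark.
        byte[] lossy = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(new String(lossy, StandardCharsets.ISO_8859_1));    // It?s

        // Remapping U+2019 to U+00B4 (ACUTE ACCENT) before converting keeps a
        // printable Latin-1 character, which is what the question asks for.
        byte[] remapped = s.replace('\u2019', '\u00B4').getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(new String(remapped, StandardCharsets.ISO_8859_1)); // It´s
    }
}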

Similar Messages

  • IE 9 incorrectly encoding Unicode characters in URIs to ISO-8859-1 instead of UTF8

    Let's take the example word
    präsentation
    In Firefox, if I specify that as a CGI parameter, on the receiving end I receive:
    pr\303\244sentation
    which, decoded as UTF-8, gives me pr{U+00E4}sentation, i.e. my submitted word präsentation.
    What does IE give me? Well, let's see:
    pr\344sentation
    which doesn't decode as UTF-8, because octal 344 is the single byte 0xE4, which is not a valid UTF-8 sequence on its own.
    ä in Unicode is at code point U+00E4, which, as we've seen above, is encoded in UTF-8 as
    0xC3 0xA4
    So the question boils down to this.
    Why does IE9 use ISO-8859-1 instead of UTF8 for non-ASCII characters in URIs?

    Hi,
    As I understand it, you can choose the encoding yourself:
    Change your Internet Explorer 9 language encoding settings
    Alex Zhao
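    Purely as an illustration of the difference described in the question (a hedged sketch, not part of the original answer), the two forms of the query parameter can be reproduced in Java:

    import java.net.URLEncoder;

    public class UriEncodingDemo {
        public static void main(String[] args) throws Exception {
            String word = "präsentation";
            // UTF-8 percent-encoding, as Firefox sends it: pr%C3%A4sentation
            System.out.println(URLEncoder.encode(word, "UTF-8"));
            // ISO-8859-1 percent-encoding, matching the single-byte form IE 9 sent: pr%E4sentation
            System.out.println(URLEncoder.encode(word, "ISO-8859-1"));
        }
    }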

  • ISO-8859-1 / Invalid UTF8 encoding

    Hello!
    I have nearly the same problem as V Prakash (ISO8859 & UTF 8
    default encoding).
    I am parsing an XML document with ISO-8859-1 encoding. The file
    has no XML-Declaration.
    I use the "setEncoding(String)"-method of class
    oracle.xml.parser.v2.XMLDocument to set the right encoding.
    The result is a
    java.io.UTFDataFormatException: Invalid UTF8 encoding
    What's wrong with my code, or is it a bug in Oracle XML Parser
    v2.0.0.2?
    Thanks
    Peter

    Attachments: epm00001.xml (UTF-8 Encoding Error, application/octet-stream, 2719)
    Interesting...again, the web page said I didn't have permission.
    Here goes nothing...
    Oracle XML Team wrote:
    : Scott Sosna (guest) wrote:
    : : No go...I'm going to try to attach the files, but the last
    : time
    : : I tried to do so, it didn't work.
    : : Oracle XML Team wrote:
    : : : Scott Sosna (guest) wrote:
    : : : : Here's the test case you wanted.
    : : : : <!ELEMENT B (#PCDATA|PERF)*>
    : : : : <!ELEMENT I (#PCDATA|ALBUM|RATING|B)*>
    : : : : <!ELEMENT P (#PCDATA|B|I|RATING)*>
    : : : : <!ELEMENT EPM (ARTICLE*)>
    : : : : <!ELEMENT ARTICLE
    : : : : (HEADWORD,BIOGRAPHY,
    : : : (DISCOGRAPHY|VIDEOGRAPHY|BIBLIOGRAPHY|FILMOG
    : : : : RAPHY|COMPILATIONS)*,COPYRIGHT)>
    : : : : <!ATTLIST ARTICLE Performer CDATA
    #IMPLIED>
    : : : : <!ATTLIST ARTICLE ArtID CDATA #IMPLIED>
    : : : : <!ELEMENT HEADWORD (P)>
    : : : : <!ELEMENT BIOGRAPHY (#PCDATA|P)*>
    : : : : <!ELEMENT DISCOGRAPHY (I|P)*>
    : : : : <!ELEMENT COPYRIGHT (P)>
    : : : : <!ELEMENT PERF (#PCDATA)>
    : : : : <!ATTLIST PERF LINK CDATA
    #REQUIRED>
    : : : : <!ELEMENT ALBUM (#PCDATA|B)*>
    : : : : <!ATTLIST ALBUM LINK CDATA
    #REQUIRED>
    : : : : <!ELEMENT RATING (#PCDATA)*>
    : : : : <!ATTLIST RATING rank CDATA
    #REQUIRED>
    : : : : <!ATTLIST RATING text CDATA
    #REQUIRED>
    : : : : <!ELEMENT FILMOGRAPHY (P)>
    : : : : <!ELEMENT COMPILATIONS (P)>
    : : : : <!ELEMENT BIBLIOGRAPHY (P)>
    : : : : <!ELEMENT VIDEOGRAPHY (P)>
    : : : : <?xml version="1.0" standalone="no"?>
    : : : : <!DOCTYPE EPM SYSTEM "epm.dtd">
    : : : : <EPM>
    : : : : <ARTICLE Performer="Johansson, Jan"
    : : : : ArtID="13994"><HEADWORD>
    Johansson,
    : : : : Jan</P></HEADWORD><BIOGRAPHY>
    b. 16 September 1931,
    : : : Swderhamn,
    : : : : Sweden, d. 9 November 1968, Stockholm, Sweden. Johansson
    : was
    : : a
    : : : : pianist, composer and arranger who became known to
    : European
    : : : : audiences as a member of <PERF LINK="Getz, Stan">Stan
    : : : : Getz</PERF> &#237;s quartet touring with <PERF
    : : : : LINK="Granz, Norman">Norman Granz</PERF> &#237;s
    JATP
    : : : : concerts in 1960. The following year he joined Arne
    : : : : Domn...rus &#237;s band, and also the Swedish Radio
    : Jazz
    : : : : Group, in 1967, composing and arranging for both.
    : Johansson
    : : : also
    : : : : wrote for film, theatre, ballet (e.g. Rwrelser )
    : and
    : : : : television (e.g. the Pippi Lngstrwmpa tune). He reached
    a
    : : : broad
    : : : : audience with sensitive renditions of Swedish folk
    songs.
    : : His
    : : : : considerable talents as composer and arranger were
    : : especially
    : : : : apparent in his experimental writing for the Radio Jazz
    : : Group,
    : : : : which fused and reinterpreted European art music, folk
    and
    : : : jazz
    : : : : in ways that went beyond the <PERF LINK="Basie,
    : : : Count">Count
    : : : : Basie</PERF> or <PERF LINK="Kenton, Stan">Stan
    : : : : Kenton</PERF> formats for big band jazz ( Den
    Korta
    : : : : Fristen ). As a jazz pianist in a trio context,
    : : Johansson
    : : : : adopted a swinging <PERF LINK="Kelly, Wynton">Wynton
    : : : : Kelly</PERF> -like style, but his skill in fully
    : : exploring
    : : : : the potentialities of a song far exceeded the American
    : : : musician
    : : : : (e.g. &#235;Willow Weep For Me&#237; on 8 Bitar
    : : Not
    : : : to
    : : : : be confused with Jan Johansson, the guitarist and music
    : : : : teacher.</P></BIOGRAPHY>
    : : : : <DISCOGRAPHY>
    Rwrelser (Megafon, 1963)<RATING
    : : : : rank="3" text="Good" >***</RATING>, Jazz P
    Svenska
    : : : : (Megafon 1964), Younger Than Springtime (Artist
    : : : : 1972)<RATING rank="4" text="Excellent" >****</RATING>,
    : 8
    : : : : Bitar/Innertrio (Megafon 1989)<RATING rank="3"
    : : text="Good"
    : : : : >***</RATING>, Jan Johansson &#038; Radiojazzgruppen:
    : Den
    : : : : Korte Fristen (Megafon 1991)<RATING rank="3"
    : text="Good"
    : : : : >***</RATING>, 300.000 Km/h (Heptagon
    1994)<RATING
    : : : : rank="4" text="Excellent" >****</RATING>, Musik Genom
    : : Fyra
    : : : : Sekler Med Jan Johansson (Heptagon 1994)<RATING
    : rank="3"
    : : : : text="Good" >***</RATING>, Jan Johansson Spelar Musik
    : P
    : : : Sitt
    : : : : Eget Vis (Heptagon 1995)<RATING rank="3" text="Good"
    : : : : >***</RATING>, En Resa I Jazz Och Folkton
    (Heptagon
    : : : : 1995)<RATING rank="3" text="Good" >***</RATING>, Jazz
    : P
    : : : : Ungerska/In Pleno (Heptagon 1996)<RATING rank="3"
    : : : : text="Good" >***</RATING>.</P></DISCOGRAPHY>
    : : : : <COPYRIGHT>
    Encyclopedia of Popular Music
    : : : : Copyright Muze UK Ltd. 1989 -
    : 1999</P></COPYRIGHT></ARTICLE>
    : : : : </EPM>
    : : : You need to start with
    : : : <?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
    : : : otherwise it will use UTF-8 as the default encoding. If this
    : : : does not solve your problem please post a new message using the
    : : : Attach File option as we have found the act of cutting and
    : : : pasting from a message can "fix" encoding problems.
    : : : Oracle XML Team
    : : : http://technet.oracle.com
    : : : Oracle Technology Network
    : It worked the last time with your DTD file as I see it as an
    : attachment. Simply start a new thread and attach the XML file.
    : Oracle XML Team
    : http://technet.oracle.com
    : Oracle Technology Network

  • XMLReader throws "Invalid UTF8 encoding." - Need parser for ISO-8859-1 chrs

    Hi,
    We are facing an issue when we try to send data encoded in the "ISO-8859-1" charset (German characters) via the EMDClient (agent), which tries to parse it using oracle.xml.parser.v2.XMLParser. The parser, while trying to read it, is unable to determine the charset encoding of our data, assumes that the encoding is "UTF-8", and when it tries to read it throws the
    "java.io.UTFDataFormatException: Invalid UTF8 encoding." exception.
    I looked at the XMLReader's code and found that it tries to read the first 4 bytes (Byte Order Mark - BOM) to determine the encoding. It probably expects us to send data whose first line is:
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    But, the data that our application sends is typically as below:
    ========================================================
    # listener.ora Network Configuration File: /ade/vivsharm_emsa2/oracle/work/listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = semsa2)
    (ORACLE_HOME = /ade/vivsharm_emsa2/oracle)
    LISTENER =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = stadm18.us.oracle.com)(PORT = 15100))
    ========================================================
    the first 4 bytes in our case will be, int[] {35, 32, 108, 105} == chars {#, SPACE, l, i},
    which does not match any of the encodings predefined in oracle.xml.parser.v2.XMLReader.pushXMLReader() method.
    How do we ensure that the parser identifies the encoding properly and instantiates the correct parser for "ISO-8859-1"...
    Should we just add the line <?xml version="1.0" encoding="ISO-8859-1" ?> at the beginning of our data?
    We have tried constructing the inputstream (ByteArrayInputStream) by using String.getBytes("ISO-8859-1") and passing that to the parser, but that does not seem to work.
    Please suggest.
    Thanks & Regards,
    Vivek.
    PS: The exception we get is as below:
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:450)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2229)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:994)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2788)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:502)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:205)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:180)
    at org.xml.sax.helpers.ParserAdapter.parse(ParserAdapter.java:431)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.readXML(RemoteOperationInputStream.java:363)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.readHeader(RemoteOperationInputStream.java:195)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.read(RemoteOperationInputStream.java:151)
    at oracle.sysman.emSDK.emd.comm.EMDClient.remotePut(EMDClient.java:2075)
    at oracle.sysman.emo.net.util.agent.Operation.saveFile(Operation.java:758)
    at oracle.sysman.emo.net.common.WebIOHandler.saveFile(WebIOHandler.java:152)
    at oracle.sysman.emo.net.common.BaseWebConfigContext.saveConfig(BaseWebConfigContext.java:505)

    Vivek
    Your message is not XML. I believe the XMLParser is going to have problems with that as well. Perhaps you could wrap the message in an XML tag set and begin the document, as you suggested, with <?xml version="1.0" encoding="ISO-8859-1"?>.
    You are correct in that the parser uses only the first 4 bytes to detect the encoding of the document. It can only determine whether the document is ASCII or EBCDIC based. If it is ASCII based, it can only distinguish between UTF-8 and UTF-16. It will need the encoding attribute to recognize the ISO-8859-1 encoding.
    hope this helps
    tom
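    A rough sketch of that suggestion, using the standard JAXP API rather than the Oracle-specific classes in the stack trace (the file name and root element are assumptions, not from this thread): wrap the raw payload, declare the encoding, and hand the parser already-decoded characters so the byte-level detection never has to guess.

    import java.io.*;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class WrapAndParse {
        public static void main(String[] args) throws Exception {
            // Read the raw listener.ora-style payload as ISO-8859-1 text (file name is assumed).
            StringBuilder payload = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("payload.txt"), "ISO-8859-1"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    payload.append(line).append('\n');
                }
            }

            // Wrap it in a root element and declare the encoding, as suggested above.
            // (CDATA is fine for a sketch; real payloads containing "]]>" would need escaping.)
            String xml = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n"
                       + "<payload><![CDATA[" + payload + "]]></payload>";

            // Parsing from a Reader hands the parser decoded characters, so byte-level
            // encoding auto-detection is bypassed entirely.
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            System.out.println(doc.getDocumentElement().getTextContent());
        }
    }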

  • OSB - Code Page Conversion - From UTF8 to iso-8859-1; cp1252 etc...

    Hi,
    I have a requirement to convert UTF-8 data to other encodings such as cp1252 and ISO-8859-1. Please let me know how this can be done in OSB. Appreciate your response.
    Regards...

    Hi,
    Yes, you can change it on the transport configuration tab. Please follow the link below for more details.
    http://docs.oracle.com/cd/E17904_01/doc.1111/e15866/transports.htm#i1268967
    Thanks,
    Durga

  • How to convert back from UTF8 to ISO-8859-1 encoding?

    hi,
    I have a bunch of XML files which were wrongly encoded, and we lost all our accent characters.
    i.e.: é became Ã©
    so how can I recover my XML files using PowerShell?
    I want to change all the UTF-8 encoded characters back to the original ISO accent characters:
    Ã© -> é
    I tried this:
    $iso = [System.Text.Encoding]::GetEncoding("iso-8859-1")
    $utf8 = [System.Text.Encoding]::UTF8
    $utfBytes = $utf8.GetBytes("é")
    $isoBytes = [System.Text.Encoding]::Convert($utf8, $iso, $utfBytes)
    $iso.GetString($isoBytes)
    but it doesn't work.
    so is there a way to do this in powershell?
    I have to scan hundreds of files...
    thanks.

    You can't. UTF-8 strips all of the information from the characters, so you cannot know which characters are which. If you know which characters you need to fix (which requires knowing the spelling of the words), you could possibly develop a matrix of
    replacements. There is no simple one-line method.
    ¯\_(ツ)_/¯

  • UTF8 to ISO 8859-2

    I have the following problem:
    I have a character-mode report that contains national characters from ISO 8859-2.
    I have set the symbol set in the .prt file to ISO 8859-2, but I still get garbage for the national characters.
    Even in Report Builder I don't get good characters when the report is run in preview?!
    Can anybody help?
    I didn't modify any other configuration file besides the .prt files for my character report; is there some file I should also modify?
    I am using Oracle 10g.
    It is urgent; any help is welcome.
    Thanks and regards ,
    Rade

    You can use CONVERT to convert text from one character set to another:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#SQLRF00620
    e.g.
    select convert('<your_text>', 'EE8ISO8859P2', 'AL32UTF8') from dual
    to convert UTF-8 text to Latin-2 (the destination character set is the second argument, the source the third). The text should of course be encoded in the character set you pass as the source.
    cheers

  • WIN-1252 characters in ISO-8859-1 textfields

    My question concerns the following situation:
    The APEX DAD is configured as WE8ISO8859P1, and the database has the same character set.
    When submitting a text field on an APEX page, a user can submit Windows-1252 characters (chr 128-159), although the page encoding is
    ISO-8859-1.
    These characters are even displayed on the APEX page (in IE, FF on Windows) as the represented Windows-1252 characters.
    However, when these characters are inserted into the database, I get non-displayable characters.
    My question is: how can I prevent these characters from being entered into the database?
    I've tried the following:
    * I can assume that all input arrives in Windows-1252 and convert all data in the database (which seems not to be the optimal solution).
    * I can configure the DAD to be UTF8, so there will be a conversion from UTF-8 -> ISO-8859-1.
    However, some characters aren't translated properly.
    Partly this is acceptable: chr(128) (euro) isn't available in ISO-8859-1, but chr(146), the right single quote, is 'translated' to ¿ (inverted question mark).
    Perhaps someone had the same experience and can give me some more info or tips.
    Thanks in advance,
    Art

    Hello Art,
    >> Do you know why this is / should be ?
    This setting became mandatory in version 2.0, when the AJAX framework was introduced. Since then, the APEX environment itself has been using this technology heavily. Although the XMLHttpRequest object can support different character sets, its default setting is UTF-8. Using AL32UTF8 in the DAD simplifies the “behind the scenes” AJAX support. This configuration also simplifies import/export of APEX applications and data, minimizing some client-server character set conversions, and I probably don’t know all the reasons that led to making this configuration mandatory.
    The important thing to remember is that it is a mandatory setting, and ignoring it will definitely cause you problems in functionality, both in the development environment and the run-time environment.
    Regards,
    Arie.

  • Problem with charset ISO-8859 with dynamic action

    Hello
    I migrated my application from Apex 3.2 to Apex 4.2.3, but I have a problem when I use a Dynamic Action with ISO-8859.
    When I use the filter "ação" in a dynamic action, the filter built by APEX presents 'aÃ§Ã£o'.
    If I force the use of PlsqlNLSLanguage = BRAZILIAN PORTUGUESE_BRAZIL.UTF8 in the DAD, it works correctly.
    Please help me! I need to keep ISO-8859!
    Thanks

    Using another example to illustrate the problem, I
    found an apparent solution using "encodeURI".
    See below:
    BEFORE:
    function SaveMsg() {
        if (document.sender.message.value == "")
            return false;
        var mensagem = document.sender.message.value;
        ds2.setURL('responsexml.asp?action=add&msg=' + mensagem);
        ds2.loadData();
        document.sender.message.value = "";
        document.sender.message.focus();
    }
    When the input of the variable "mensagem" was
    “João”, it was recorded in the XML file as
    “Joo” ...
    input: "João"
    output: "Joo"
    AFTER:
    function SaveMsg() {
        if (document.sender.message.value == "")
            return false;
        var mensagem = encodeURI(document.sender.message.value);
        ds2.setURL('responsexml.asp?action=add&msg=' + mensagem);
        ds2.loadData();
        document.sender.message.value = "";
        document.sender.message.focus();
    }
    Now ...
    input: "João"
    output: "João"
    Would this be a good solution? I won't have problems,
    right? Any opinion/suggestion on this solution?

  • Changing character encoding in ps xml pub. from utf-8 to iso-8859-1

    I am using XML Publisher to generate a report in PDF format. Now my problem is that a user has entered a comment which is not supported by UTF-8, but with ISO-8859-1 it works fine.
    I tried to change the encoding in PeopleCode, the XML doc file, the schema and the XLIFF file, but the old formatting still exists; should I change it somewhere else?
    Following is the error I get when trying to generate the PDF: "Error generating report output: (235,2309) Error occurred during the process of generating the output file from template file, XML data file, and translation XLIFF file." The parser is not able to recognise it with UTF-8 encoding.

    I had the same issue. I created the XML through a rowset and used the string Substitute function, and it's working.
    Sample:
    &inXMLDoc = CreateXmlDoc("");
    &ret = &inXMLDoc.CopyRowset(&rsHdr);
    &sXMLString = &inXMLDoc.GenFormattedXmlString();
    &sXMLString = Substitute(&sXMLString, "<?xml version=""1.0""?>", "<?xml version=""1.0"" encoding=""ISO-8859-1""?>");
    hope this helps!
    GN.

  • Reverting from UTF-8 to ISO-8859-1

    Hi,
    I have a database installed with UTF-8; it's a new installation and the guides I had didn't mention any character set restrictions for the teams that were migrating.
    Well, the problem is some teams are moving some of their projects to the new server and can't insert, for example, the word "não" into a VARCHAR2(3).
    My question is: can I change the whole database to ISO-8859-1 instead of UTF-8 in order to have words like "não" inserted correctly? If so, is it a simple ALTER DATABASE or a more complicated operation?
    Another question: is there any possibility of leaving the database as is and making it work without expanding the fields' length restriction?
    Alx

    You can't change a database character set from UTF8 to ISO-8859-1. You can only move from one character set to a strict superset, which doesn't apply here. The supported way to change the character set here would be to create a new database with the ISO-8859-1 character set, export the existing data, and import it into the new system. That assumes, of course, that all the existing characters have an ISO-8859-1 representation (characters like the Euro symbol or Microsoft's curly quotes do not).
    By default, a VARCHAR2(3) allocates 3 bytes of space for data. That gets complicated when you use a multi-byte character set like UTF-8, where a character like 'ã' requires 2 bytes of storage. You can define the columns as VARCHAR2(3 CHAR) to allocate 3 characters of storage regardless of the character set. You can also set the parameter NLS_LENGTH_SEMANTICS to CHAR so that tables you create default to character rather than byte length semantics. Personally, if I'm creating a UTF8 database, I'd want to set NLS_LENGTH_SEMANTICS to CHAR.
    Justin
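    As a side illustration (a Java sketch of the byte-versus-character arithmetic, not part of Justin's answer): "não" is 3 characters but 4 bytes in UTF-8, which is exactly why it no longer fits a VARCHAR2(3) column under byte-length semantics.

    import java.nio.charset.StandardCharsets;

    public class ByteVsCharLength {
        public static void main(String[] args) {
            String word = "não";
            // 3 characters, but 4 bytes in UTF-8 because 'ã' needs two bytes,
            // so it overflows a 3-byte column in a UTF-8 database.
            System.out.println(word.length());                                      // 3
            System.out.println(word.getBytes(StandardCharsets.UTF_8).length);       // 4
            System.out.println(word.getBytes(StandardCharsets.ISO_8859_1).length);  // 3
        }
    }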

  • HTTP Test Tool Umlaut (Special Character) Problem iso-8859-1 utf-8

    Hi folks,
    I have a problem in an HTTP to IDoc scenario. The configuration works, but when I test it using the Test Message tool from the Runtime Workbench I get the following problem:
    I post an IDoc XML with charset ISO-8859-1; when it arrives as an IDoc in the business system, German umlauts are displayed very cryptically:
    ä = Ã¤
    ü = Ã¼
    and so on ....
    When I post the XML with UTF-8 charset it works. What can I do to handle this?
    Thank you

    Hi,
    maybe this document is helpful:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42
    and also this thread:
    Character translation error in Mapping Lookup API (RFC)
    Regards
    Patrick
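    For reference, a small Java sketch (my illustration, not from the linked SAP documents) of the mismatch that typically produces this symptom, i.e. UTF-8 bytes being decoded as ISO-8859-1:

    import java.nio.charset.StandardCharsets;

    public class UmlautMojibake {
        public static void main(String[] args) {
            String umlaut = "ä";  // U+00E4

            // UTF-8 encodes ä as the two bytes 0xC3 0xA4; decoding those bytes as
            // ISO-8859-1 produces the two characters "Ã¤" - the "cryptic" display.
            byte[] utf8Bytes = umlaut.getBytes(StandardCharsets.UTF_8);
            System.out.println(new String(utf8Bytes, StandardCharsets.ISO_8859_1)); // Ã¤

            // When sender and receiver agree on the charset, the character survives.
            byte[] latin1Bytes = umlaut.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(latin1Bytes, StandardCharsets.ISO_8859_1)); // ä
        }
    }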

  • Codepage Conversionerror UTF-8 from System-Codepage to Codepage iso-8859-1

    Hello,
    we have on SAP PI 7.1 the problem that we can't process an IDoc to Plain HTTP.
    The channel throws "Codepage Conversionerror UTF-8 from System-Codepage to Codepage iso-8859-1".
    The IDoc is 25 MB. Does anybody have an idea how we can find out what is wrong with the IDoc?
    Thanks in advance.

    In Java, strings are always Unicode, i.e. UTF-16. It's the byte arrays that are encoded. So use the following code:
    String iso, utf, temp = "����� � �����";
    try {
      byte b8859[] = temp.getBytes("ISO-8859-1");
      byte butf8[] = temp.getBytes("utf8");
      iso = new String(b8859, "ISO-8859-1");
      utf = new String(butf8, "UTF-8");
      System.out.println("ISO-8859-1:" + iso);
      System.out.println("UTF-8:" + utf);
      System.out.println("UTF to ISO-8859-1:" + new String(utf.getBytes("iso8859_1"), "ISO-8859-1"));
      System.out.println(utf);
      System.out.println(iso);
    } catch (Exception e) { }
    Also keep in mind that the DOS window does not support international characters, so write the output to a file.

  • Cant mount usb sticks with fat32 ISO-8859-1 2.6.31-ARCH

    When trying to mount my USB sticks from the xfce4 desktop (halmount),
    the following error occurs (as told by dmesg):
    FAT: IO charset ISO-8859-1 not found
    First I suspected hal being the culprit, but now I'm not so sure anymore.
    I'm using the stock 2.6.31-ARCH kernel.
    When I checked the .config for the stock kernel, CONFIG_NLS_ISO8859_1=y is enabled in the kernel config, and also everything below it to support FAT32:
    CONFIG_FAT_FS=m
    CONFIG_MSDOS_FS=m
    CONFIG_VFAT_FS=m
    CONFIG_FAT_DEFAULT_CODEPAGE=437
    CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
    CONFIG_NTFS_FS=m
    But when I list the modules, nls_iso8859-1.ko is missing:
    #~ls -a /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso*
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-13.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-14.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-15.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-2.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-3.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-4.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-5.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-6.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-7.ko
    /lib/modules/2.6.31-ARCH/kernel/fs/nls/nls_iso8859-9.ko
    So I suspect the newest kernel broke mounting with hal.

    Based on the information given in http://bbs.archlinux.org/viewtopic.php?id=82176, I changed LOCALE=en_GB.iso88591 to LOCALE=en_GB.utf8 in /etc/rc.conf, uncommented en_GB.UTF-8 UTF-8 in /etc/locale.gen, issued # locale-gen, and after a reboot I was able to mount my USB sticks again.
    However, now I get "FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!" in dmesg. Is this something I need to worry about...?

  • Java App on Linux : Unable to read iso-8859-1 encoded file correctly.

    I have a file which is encoded as ISO-8859-1 and contains characters such as ô.
    I am reading this file with Java code, something like:
    File in = new File("myfile.csv");
    InputStream fr = new FileInputStream(in);
    byte[] buffer = new byte[4096];
    while (true) {
        int byteCount = fr.read(buffer, 0, buffer.length);
        if (byteCount <= 0) {
            break;
        }
        String s = new String(buffer, 0, byteCount, "ISO-8859-1");
        System.out.println(s);
    }
    However, the ô character is always garbled, usually printing as a ?.
    I am running this on a Linux machine. It works fine on my XP machine.
    I have verified that I can see the correct characters when I cat the file on the terminal.
    (Interestingly, though I think maybe only by coincidence, it works when I run with the -Dfile.encoding=UTF16 option, but not with UTF8. This appears to be a hack rather than a fix, since this option was not intended for developer use by Sun - but I thought mentioning it might provide some clues as to what is going on.)

    I think your main problem is with the console. When you send text to the console, it's sent in the system default encoding. On an English-locale system that might be ASCII, ISO-8859-1, windows-1252, UTF-8, MacRoman, and probably several other possibilities. Then the console decodes the bytes using whatever encoding it feels like using--on my WinXP machine, it uses cp437 by default (just for laughs, as far as I can tell). If the text happens to be pure, seven-bit ASCII, there's no problem, since all those encodings are identical in that range.
    But if you need to output anything other than ASCII characters, avoid the console. Send the output to a file and specify an encoding that you know will be able to handle your characters--UTF-8 can handle anything. Then open the file with an editor that can read that encoding; most of them can handle UTF-8 these days, and many will even detect it automatically. You also need to be using a font that can display your characters.
    However, you're also going about the reading part wrong. Instead of reading the text in as bytes and passing them to a String constructor, you should use an InputStreamReader and read it as text from the beginning:
    BufferedReader br = new BufferedReader(
      new InputStreamReader(
        new FileInputStream("myfile.csv"), "ISO-8859-1"));
    I am curious about your statement that "it works" when you run with the -Dfile.encoding=UTF16 option. I wouldn't be surprised to see it output the correct characters (ASCII characters, anyway), but I would expect to see the characters interspersed with blank spaces or rectangles.
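    A short sketch of that advice (the file names are assumptions): read the file through an InputStreamReader with the correct charset, and write anything non-ASCII to a UTF-8 file rather than the console.

    import java.io.*;

    public class Iso88591ToUtf8File {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                     new FileInputStream("myfile.csv"), "ISO-8859-1"));
                 Writer out = new BufferedWriter(new OutputStreamWriter(
                     new FileOutputStream("out-utf8.txt"), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.write(line);   // characters such as ô survive the round trip
                    out.write('\n');
                }
            }
            // Inspect out-utf8.txt with a UTF-8-aware editor to verify the accented characters.
        }
    }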
