JDBC+8859-2 charset

Hi!
I want to use the ISO 8859-2 character set. Is there a property where I can change the default charset, or do I have to include it in the JDBC URL?
Can somebody send an example?
Many thanks: Zsolt
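Not an answer about a driver property (those vary between JDBC drivers and versions), but a common driver-independent workaround is to fetch the raw column bytes and decode them explicitly with the JDK's standard charset support. A minimal sketch; in real code the bytes would come from `rs.getBytes(1)`, the byte value here is just illustrative:

```java
import java.nio.charset.Charset;

public class Iso88592Decode {
    // Decode raw column bytes explicitly as ISO-8859-2,
    // bypassing whatever default charset the driver assumes.
    static String decode(byte[] raw) {
        return new String(raw, Charset.forName("ISO-8859-2"));
    }

    public static void main(String[] args) {
        // 0xB3 is 'ł' (U+0142) in ISO-8859-2.
        byte[] raw = { (byte) 0xB3 };
        System.out.println(decode(raw));
    }
}
```

In a JDBC context you would apply `decode(...)` to the result of `ResultSet.getBytes(...)` instead of calling `getString(...)`.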

This is a combination of the following three effects:
1. HTML browsers generally try to do their best to properly display websites created by people not following required standards. iso-8859-1 has been historically regarded as the character encoding of US and Western European Windows workstations. This has not been correct for a long time already, as Microsoft extended the ISO code page and created what is known as MS Code Page 1252 (windows-1252). Browsers try to correctly display pages that include Windows-specific characters by treating the pages as windows-1252 even if they are defined as iso-8859-1.
2. For performance reasons, Oracle JDBC Thin uses a simplified conversion from Unicode UTF-16 to US7ASCII, simply ignoring the upper byte of each two-byte code. This allows bytes such as 0x80 to go through to the database. In one of the new JDBC releases, we plan to introduce a flag to force the conversion to go through the standard path so that replacement characters are used, as is the case with OCI.
3. As the HTML pages are marked as iso-8859-1 and not windows-1252, the 0x80 code coming from the browser is not correctly converted to Java UTF-16. The code should be converted to U+20AC but it seems to be converted to U+0080.
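The mismatch in point 3 can be sketched with nothing Oracle-specific, just the JDK's charset decoders: the same 0x80 byte decodes to two different characters depending on which of the two encodings you assume.

```java
import java.nio.charset.Charset;

public class Byte80Demo {
    // Decode a single byte value under the given charset name.
    static String decode(int b, String charsetName) {
        return new String(new byte[]{ (byte) b }, Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        // Treated as windows-1252, 0x80 is the Euro sign U+20AC ...
        System.out.println((int) decode(0x80, "windows-1252").charAt(0)); // 8364
        // ... but treated as iso-8859-1 it is the C1 control U+0080,
        // the value described in point 3.
        System.out.println((int) decode(0x80, "ISO-8859-1").charAt(0)); // 128
    }
}
```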
-- Sergiusz

Similar Messages

  • Sun Java System Messaging Server doesn't support some ISO 8859 charsets

    Hello,
    I couldn't find any link to report bugs on the Sun Java System Messaging Server, so I guess I'll report it here:
    Sun Java System Messaging Server currently doesn't support the ISO 8859-13 charset, which is the standard charset in the Baltic states. When a person receives an email in that charset, every non-ASCII character in the message becomes absolutely unreadable when displayed in this webmail application.
    This has been tested with this version of the product:
    X-Mailer: Sun Java(tm) System Messenger Express 6.1 HotFix 0.01 (built Jun 24 2004)
    Furthermore, you can test the case by yourself using this link: http://www.lietuvybe.org/testas/. You can enter your e-mail address there to get a few sample messages in Baltic and Cyrillic charsets. You will see that the Sun Java System Messaging Server passes all those tests except this particular one. So, it's a very nice product. :)
    It would be very very very cool if Sun would fix this small, but very important issue.
    regards,
    Rimas Kudelis

    Hey! :)
    I am feeling some bad vibes here. Hey, sorry, I didn't want you to think it's personal. I didn't really mean to hurt you or whatever. You are trying to help and I do appreciate that. I really do.
    The reason I'm a bit sad is the policy of others:
    First, I am not a provider of the webmail service I'm talking about. A big Lithuanian company is. Personally, I rarely use webmail apps at all, and even when I do, I use an app I internationalized myself. :) Meanwhile, I install Horde IMP as the webmail application for my small servers, and I'm satisfied with it.
    The problem I described is literally not my problem. It's a problem of that big Lithuanian company mentioned above (and the users of its webmail system). I suppose this company already has such an account and, furthermore, they are the ones who should post to this forum or file a support request, and they are the ones who should be worried about that bug. However, reality differs. In reality, big companies in Lithuania don't care much about the correctness of their webmail apps. However, there are a few maniacs like me who do. We test their webmail apps, we contact them, describe their problems to them and ask them to fix those problems. Sometimes they do, but in most cases we either don't get a reply at all, or we get something like "we'll take a look at it later". For example, lately we had an issue with one of the most popular webmail providers in Lithuania skipping a MIME-Version header. Fixing this issue is just a single line of code. However, we had to nag them for quite a few months until they finally fixed it.
    That was the first aspect.
    The second is that I don't really like to create hundreds of accounts for myself just to occasionally report bugs like this one. If only Sun would let me simply file a bug and forget it, I would gladly do so. But no, I have to find a deeply hidden support page, then fill in a form, create myself an account and a password, then log on, then fill in some mysterious support request... Do I have to do that for a company that won't even consider thanking me? I think that's too much. Furthermore, every party on the net enforces its own username and password restrictions. That sucks too. I wish I could just log on as "rq" everywhere like on this forum, using the same password I could easily remember. However, I have to use "rq", "er-ku" or "erku" or "rq@something" as my username on different platforms, and sometimes even my (long enough) password is not accepted. It's hard to track such accounts, and in most cases like this one, I don't really want to have an account at all, as I'm just passing by.
    To summarize the post: I'm NOT a licensed user, and all I wanted was to file a bug which affects licensed users and ordinary people.
    How do i remove my account from this forum now? :)

  • Nerving Problem with UTF-8 and ISO-8859-1

    Hi,
    I've been looking for a solution to this problem for many hours now; maybe someone can help me:
    1.) We need to send our mails with the ISO-8859-1 charset because otherwise Windows users get the text in the message twice: once as plain text and, after a question mark, once formatted. So I changed the NSPreferredMailCharset in com.apple.mail.plist to ISO-8859-1:
    defaults write com.apple.mail NSPreferredMailCharset "ISO-8859-1"
    2.) So far so good. It works until I add an attachment to a message. Adding an attachment forces the mail to be sent as Unicode (UTF-8) again. I could change the encoding manually, but that's not a way we can work in our company.
    My question is: is there any way to force Mail to encode as ISO-8859-1? It can't be that we have to change the encoding for every message.
    Thanks a lot
    florian
    PS: I'm not sure if this is important: we use OS X in German.

    I was thinking that since he is from Austria & references a company, there is a very strong possibility that the character "€" (the Euro currency symbol, Unicode 20AC, UTF-8 E2 82 AC) would frequently appear in messages.
    Even if he sets a preference for ISO-8859-1 as the default with Terminal, or manually changes messages to ISO-8859-1, it would not be possible to include this symbol in such messages, since there is no "€" in ISO-8859-1.
    Similar problems would occur with other symbols sometimes used in business (for example "™"), in engineering ("Ω"), in mathematics ("∑"), or even with some general punctuation marks such as the dagger ("†").
    Other possible problems are the use of other currency symbols the Euro replaced (the franc's "₣" or the lira's "₤") or others still in use (the Israeli new sheqel's "₪" or the rupee's "₨"). Ligatures in an international environment would really complicate things as well, as this Wikipedia article about the Œthel illustrates.
    Note that in none of these cases would the presence or absence of an attachment matter -- ISO-8859-1 simply isn't up to the task.
    I suspect that in some cases, if it is possible, setting the default to Windows-1252 (Windows Latin 1 in Mail's list?) would help, since it does include at least the Euro & dagger. I haven't played around with this much, but I do note that in a new message window containing "€" in the body, if I set the text encoding to Windows Latin 1, Automatic, or UTF-8, Mail doesn't complain, but if I set it to ISO Latin 1, I get an error saying the message can't be saved & an "Invalid Text Encoding" alert if I try to send it.
    As for how messages are received at the other end, Windows apps (not just Outlook) are notorious for continuing to use non-Unicode APIs even after the OS itself has long since moved to Unicode as its internal standard. Some of them employ bass-ackwards fixes like deciding ISO-8859-1 declarations are supposed to be Windows-1252 ones. Worse, Windows itself sometimes seems to interpret a few Windows-1252 code positions as their ISO-8859-1 control equivalents!
    All this makes life that much more complicated for people trying to avoid problems like the above.
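The core point above -- that the Euro simply has no ISO-8859-1 code point -- is easy to verify in Java, whose encoders substitute '?' for characters a charset cannot represent:

```java
import java.nio.charset.Charset;

public class EuroEncode {
    // Encode a string under the given charset; characters the charset
    // cannot represent are replaced by its substitution byte ('?' here).
    static byte[] encode(String s, String charsetName) {
        return s.getBytes(Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        // "€" (U+20AC) has no ISO-8859-1 code point, so it degrades
        // to '?' (0x3F) ...
        System.out.println(encode("\u20AC", "ISO-8859-1")[0]); // 63
        // ... while windows-1252 does carry it, at 0x80.
        System.out.println(encode("\u20AC", "windows-1252")[0]); // -128
    }
}
```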

  • Reading a website in ISO-8859-1

    Hello
    I am trying to read a website using the ISO-8859-1 charset.
    I have searched a bit and found some different ways suggested for this. This is the one I think I want because it seems to be the simplest:
    byte[] iso88591Data = theString.getBytes("ISO-8859-1");
    But I don't understand the "flow" of the charsets:
    1. When I read an HTML page that has an &#<code>; entity on it, is my string in UTF-8 or ISO-8859-1?
    2. When the getBytes command is used, is the specified charset the one I want to convert to, or the one the string is already in?
    To understand this problem I made a separate class where I tried the following code.
    import java.io.UnsupportedEncodingException;
    public class charsetConversion {
         public static void main(String[] args) {
              String in = args[0];
              byte bytes[] = in.getBytes();
              try {
                   byte bytesISO[] = in.getBytes("ISO-8859-1");
                   String out1 = new String(bytes, "ISO-8859-1");
                   String out2 = new String(bytesISO, "ISO-8859-1");
                   String out3 = new String(bytes);
                   String out4 = new String(bytesISO);
                   System.out.println(out1);
                   System.out.println(out2);
                   System.out.println(out3);
                   System.out.println(out4);
              } catch (UnsupportedEncodingException e) {
                   e.printStackTrace();
              }
         }
    }
    I run it with the input Poole&#x27 ;s and always get Poole&#x27 ;s back. (There is actually no space between the 7 and the ;, but if I don't write it like that the forum always shows ' instead.)
    Don't really know what else to do.
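On the two questions above: getBytes(cs) encodes *to* the named charset, and new String(bytes, cs) decodes *from* it. And the entity "&#x27;" is six ordinary ASCII characters, so no charset round-trip will ever turn it into an apostrophe -- that requires an HTML entity unescaper, not an encoding change. A small sketch:

```java
import java.nio.charset.StandardCharsets;

public class GetBytesDirection {
    // getBytes(cs) encodes *to* the named charset;
    // new String(bytes, cs) decodes *from* it.
    static String roundTrip(String s) {
        byte[] iso = s.getBytes(StandardCharsets.ISO_8859_1);
        return new String(iso, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        String s = "Poole&#x27;s";
        // The round-trip is lossless for ASCII, and the entity comes back
        // verbatim: charset conversion never interprets HTML entities.
        System.out.println(roundTrip(s)); // Poole&#x27;s
    }
}
```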

    So here is the example of the code:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.UnsupportedEncodingException;
    public class charsetConversion {
         public static void main(String[] args) {
              FileReader fr = null;
              BufferedReader br = null;
              String in = null;
              try {
                   fr = new FileReader(args[0]);
                   br = new BufferedReader(fr);
                   in = br.readLine();
              } catch (Exception e) {
                   System.out.println(e.getMessage());
                   System.exit(0);
              }
              byte bytes[] = in.getBytes();
              try {
                   byte bytesISO[] = in.getBytes("ISO-8859-1");
                   String out1 = new String(bytes, "ISO-8859-1");
                   String out2 = new String(bytesISO, "ISO-8859-1");
                   String out3 = new String(bytes);
                   String out4 = new String(bytesISO);
                   System.out.println(out1);
                   System.out.println(out2);
                   System.out.println(out3);
                   System.out.println(out4);
              } catch (UnsupportedEncodingException e) {
                   e.printStackTrace();
              }
         }
    }
    As an argument I pass the file; I use this instead of typing the input. Inside the file I have
    Poole&#x27 ;s (without the space). Without attaching a file it's hard to use the HTML.

  • XMLReader throws "Invalid UTF8 encoding." - Need parser for ISO-8859-1 chrs

    Hi,
    We are facing an issue when we try to send data which is encoded in the "ISO-8859-1" charset (German chars) via the EMDClient (agent), which tries to parse it using oracle.xml.parser.v2.XMLParser. The parser, while trying to read it, is unable to determine the charset encoding of our data, assumes that the encoding is "UTF-8", and when it tries to read it, throws the
    "java.io.UTFDataFormatException: Invalid UTF8 encoding." exception.
    I looked at the XMLReader's code and found that it tries to read the first 4 bytes (Byte Order Mark - BOM) to determine the encoding. It probably expects us to send data whose first line is:
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    But, the data that our application sends is typically as below:
    ========================================================
    # listener.ora Network Configuration File: /ade/vivsharm_emsa2/oracle/work/listener.ora
    # Generated by Oracle configuration tools.
    SID_LIST_LISTENER =
    (SID_LIST =
    (SID_DESC =
    (SID_NAME = semsa2)
    (ORACLE_HOME = /ade/vivsharm_emsa2/oracle)
    LISTENER =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = tcp)(HOST = stadm18.us.oracle.com)(PORT = 15100))
    ========================================================
    the first 4 bytes in our case will be, int[] {35, 32, 108, 105} == chars {#, SPACE, l, i},
    which does not match any of the encodings predefined in oracle.xml.parser.v2.XMLReader.pushXMLReader() method.
    How do we ensure that the parser identifies the encoding properly and instantiates the correct parser for "ISO-8859-1"...
    Should we just add the line <?xml version="1.0" encoding="ISO-8859-1" ?> at the beginning of our data?
    We have tried constructing the inputstream (ByteArrayInputStream) by using String.getBytes("ISO-8859-1") and passing that to the parser, but that does not seem to work.
    Please suggest.
    Thanks & Regards,
    Vivek.
    PS: The exception we get is as below:
    java.io.UTFDataFormatException: Invalid UTF8 encoding.
    at oracle.xml.parser.v2.XMLUTF8Reader.checkUTF8Byte(XMLUTF8Reader.java:160)
    at oracle.xml.parser.v2.XMLUTF8Reader.readUTF8Char(XMLUTF8Reader.java:187)
    at oracle.xml.parser.v2.XMLUTF8Reader.fillBuffer(XMLUTF8Reader.java:120)
    at oracle.xml.parser.v2.XMLByteReader.saveBuffer(XMLByteReader.java:450)
    at oracle.xml.parser.v2.XMLReader.fillBuffer(XMLReader.java:2229)
    at oracle.xml.parser.v2.XMLReader.tryRead(XMLReader.java:994)
    at oracle.xml.parser.v2.XMLReader.scanXMLDecl(XMLReader.java:2788)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:502)
    at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:205)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:180)
    at org.xml.sax.helpers.ParserAdapter.parse(ParserAdapter.java:431)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.readXML(RemoteOperationInputStream.java:363)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.readHeader(RemoteOperationInputStream.java:195)
    at oracle.sysman.emSDK.emd.comm.RemoteOperationInputStream.read(RemoteOperationInputStream.java:151)
    at oracle.sysman.emSDK.emd.comm.EMDClient.remotePut(EMDClient.java:2075)
    at oracle.sysman.emo.net.util.agent.Operation.saveFile(Operation.java:758)
    at oracle.sysman.emo.net.common.WebIOHandler.saveFile(WebIOHandler.java:152)
    at oracle.sysman.emo.net.common.BaseWebConfigContext.saveConfig(BaseWebConfigContext.java:505)

    Vivek
    Your message is not XML. I believe that the XMLParser is going to have problems with that as well. Perhaps you could wrap the message in an XML tag set and begin the document as you suggested with <?xml version="1.0" encoding="ISO-8859-1"?>.
    You are correct that the parser uses only the first 4 bytes to detect the encoding of the document. It can only determine whether the document is ASCII- or EBCDIC-based. If it is ASCII-based, it can only distinguish between UTF-8 and UTF-16. It needs the encoding attribute to recognize the ISO-8859-1 encoding.
    hope this helps
    tom
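Another option, sketched here with the JDK's standard SAX API rather than the Oracle-specific classes from the thread: if you decode the bytes up front and hand the parser a character stream, it never has to sniff the encoding from the first bytes at all.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class ExplicitEncodingParse {
    // Parse ISO-8859-1 bytes by decoding them with a Reader first,
    // so no BOM/declaration sniffing is involved.
    static String textContent(byte[] doc) throws Exception {
        final StringBuilder text = new StringBuilder();
        InputSource src = new InputSource(
            new InputStreamReader(new ByteArrayInputStream(doc), "ISO-8859-1"));
        SAXParserFactory.newInstance().newSAXParser().parse(src,
            new DefaultHandler() {
                @Override public void characters(char[] ch, int start, int len) {
                    text.append(ch, start, len);
                }
            });
        return text.toString();
    }

    public static void main(String[] args) throws Exception {
        // 0xE4 ('ä' in ISO-8859-1) is an invalid lead byte in UTF-8 and
        // would trigger exactly the "Invalid UTF8 encoding" failure mode.
        byte[] doc = "<a>\u00E4</a>".getBytes("ISO-8859-1");
        System.out.println(textContent(doc));
    }
}
```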

  • Insert/Update VO with UTF-8 charset

    Hi,
    I worked so far only with the iso-8859-1 charset and everything went fine, but now with UTF-8 I am experiencing strange problems. Every Unicode char is converted to its HTML equivalent (e.g. &#xxxx;) and saved in that form to the DB. Is there any workaround for that issue?
    Thanks in advance.
    Alex


  • Charset encoding "on the fly"

    Hi all!
    I have a problem with charset encoding. Currently I'm working on Oracle 7.1. All data in the DB were inserted using the win1250 charset (via MS Access or something). Now I must present them on the Web using Linux+Apache+PHP, but encoded in the iso-8859-2 charset. And here is my problem.
    I don't know how to "translate" data from one charset to another. Does Oracle support a "native" charset translation between server (Windows 1250) and client (ISO-8859-2) in both directions (select, insert, update, delete)?
    I have tried translating the data on the fly using the iconv() function in PHP, but it is a "not so elegant" and too slow solution.
    I would be grateful for all your suggestions.

    Oracle hasn't supported Oracle 7.1 in many, many years, so I have no idea whether it works completely differently from current versions... I'll describe how this works in modern versions of Oracle.
    What are your client and database character sets? Assuming you're properly identifying the data coming in as Windows 1250, the Oracle database automatically converts the data to the database's encoding. When the data is retrieved, it is automatically converted to the character set of the client (assuming such a conversion is possible).
    I would note that Windows-1250 and ISO-8859-2 are not completely compatible character sets -- each contains characters that do not appear in the other -- so it may not be possible to convert the data in all cases.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
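The partial-compatibility point can be illustrated outside the database too. A minimal sketch in Java (the thread's web layer is PHP/iconv, but the mechanics are identical): 'Š' exists in both charsets at different byte values and converts cleanly, while the Euro sign exists only in windows-1250 and is lost.

```java
import java.nio.charset.Charset;

public class Cp1250ToLatin2 {
    // Re-encode windows-1250 bytes as ISO-8859-2 via a Unicode string.
    static byte[] convert(byte[] cp1250Bytes) {
        String s = new String(cp1250Bytes, Charset.forName("windows-1250"));
        return s.getBytes(Charset.forName("ISO-8859-2"));
    }

    public static void main(String[] args) {
        // 'Š' sits at 0x8A in windows-1250 but at 0xA9 in ISO-8859-2:
        // the conversion works, the byte value just changes.
        System.out.println(convert(new byte[]{ (byte) 0x8A })[0] & 0xFF); // 169
        // The Euro sign (0x80 in windows-1250) has no ISO-8859-2 code
        // point at all, so it degrades to '?' (0x3F).
        System.out.println(convert(new byte[]{ (byte) 0x80 })[0] & 0xFF); // 63
    }
}
```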

  • Problem converting certain extended ASCII characters

    I'm having problems with the extended ASCII characters in the range 128-159. I'm working in a SQL Server environment using Java. I originally had problems with characters in the range 128-159: when I did a 'select char_col from my_table' I always got junk when I tried to retrieve it from the ResultSet using the code 'String str = rs.getString(1)'. For example, char_col would have the ASCII character (in hex) '0x83', but when I retrieved it from the database, my str equaled '0x192'. I'm aware there is a gap in the range 128-159 in the ISO-8859-1 charset. I've tracked the problem down to a charset issue converting the extended ASCII characters in ISO-8859-1 into Java's Unicode charset.
    I looked on the forum and it said to specify the charset when retrieving from the ResultSet, so I did 'String str = new String(rs.getBytes(), "ISO-8859-1")' and it was able to read the characters 128-159 correctly except for five characters (129, 141, 143, 144, 157). These characters always returned the character 63, or 0x3f. Does anyone know what's happening here? How come these characters didn't work? Is there a workaround for this? I need to use only Java and its default charsets, and I don't want to switch to the Windows Cp1252 charset because I'm using the Java code in a Unix environment as well.
    thanks.
    -B

    Normally your JDBC driver should understand the charset used in the database, and it should use that charset to produce a correct value for the result of getString(). However it does sometimes happen that the database is created by programs in some other language that ignore the database's charset and do their own encoding, bypassing the database's facilities. It is often difficult to deal with that problem, because the custodians of those other programs don't have a problem, everything is consistent for them, and they will not allow you to "repair" the database.
    I don't mean to say that really is your problem, it is a possibility though. You are using an SQL Server JDBC driver, aren't you? Does its connection URL allow you to specify the charset? If so, try specifying that SQL-Latin1 thing and see if it works.
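A quick sketch of the decode-side mismatch described above: 0x83 is 'ƒ' (U+0192, the "0x192" the poster saw) under windows-1252 but an invisible C1 control under ISO-8859-1; and 0x81, 0x8D, 0x8F, 0x90 and 0x9D are undefined in the windows-1252 standard, which is why exactly those five positions misbehave.

```java
import java.nio.charset.Charset;

public class Range128Demo {
    // Decode a single byte value under the given charset name.
    static String decode(int b, String charsetName) {
        return new String(new byte[]{ (byte) b }, Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        // 0x83 as windows-1252 is 'ƒ' (U+0192) ...
        System.out.println((int) decode(0x83, "windows-1252").charAt(0)); // 402
        // ... but as ISO-8859-1 it is the C1 control U+0083.
        System.out.println((int) decode(0x83, "ISO-8859-1").charAt(0));   // 131
    }
}
```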

  • German Special Characters not displayed correctly in RTF using code

    Hi ,
    In my code we are using webdynpro method
    WDResourceFactory.createResource(
    byte[] data, String resourceName,WDWebResourceType
    Type)
    Here in our code we are implementing this as
    ITemplateElement templateEl = wdContext.currentTemplateElement();
    WDResourceFactory.createResource(
    templateEl.getReportData(),
    reportName.substring(0, reportName.lastIndexOf('.')),
    WDWebResourceType.RTF);
    Here templateEl.getReportData() returns a set of bytes which has some
    german special characters.
    We are generating the bytes using String.getBytes(), just before:
    String text = new String(in);
    collector.putBusinessObject(boName, bo);
    reportDocTemplateParser(collector, text);
    collector.removeBusinessObject(boName);
    String generatedText = collector.generateRTF();
    out = (null != generatedText) ? generatedText.getBytes() : null;
    The output is: if I give a word with German special characters, e.g.
    Betriebsübersichten, it first gets converted to bytes and then passes through the method WDResourceFactory.createResource(...) which creates an RTF file, and finally in the RTF file it appears as Betriebsbbersichten -- the special character is not displayed correctly.
    I came to know that while converting into bytes we have to use an RTF-supported encoding, i.e. for example generatedText.getBytes("cp1252"). I even tried other character sets like ISO-8859-1 and cp1253, but none of them worked.
    It would be really great if you could suggest the needful.
    Thanks and Regards
    Neeta

    I solved this by using the get_data function of the response object, then converting it into the ISO-8859-1 charset.
    See code below.
    DATA :  lv_encoding   TYPE abap_encoding,
              lv_conv       TYPE REF TO cl_abap_conv_in_ce,
              lv_x_string   type xstring.
      lv_x_string = pv_http_client->response->get_data( ).
        lv_encoding = '1100'.
        lv_conv = cl_abap_conv_in_ce=>create(
                              encoding = lv_encoding
                                 input = lv_x_string ).
        lv_conv->read( IMPORTING data = pv_result ).

  • [SOLVED] Cower fails to build

    Hello,
    I've recently enabled [testing] and received a pacman upgrade which installed libalpm.so.9, but cower on my system was built against libalpm.so.8. First thought: rebuild cower. Unfortunately I received the following error:
    bstaletic@arch cower % makepkg
    ==> Making package: cower 12-1 (Fri Dec 19 16:43:28 CET 2014)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving sources...
    -> Found cower-12.tar.gz
    -> Found cower-12.tar.gz.sig
    ==> Validating source files with md5sums...
    cower-12.tar.gz ... Passed
    cower-12.tar.gz.sig ... Skipped
    ==> Verifying source file signatures with gpg...
    cower-12.tar.gz ... FAILED (the public key 487EACC08557AD082088DABA1EB2638FF56C0C53 is not trusted)
    ==> ERROR: One or more PGP signatures could not be verified!
    How do I fix this? Is the key really not trusted or is there something wrong with my archbox?
    My ~/.gnupg/gpg.conf looks like this:
    # Options for GnuPG
    # Copyright 1998-2003, 2010 Free Software Foundation, Inc.
    # Copyright 1998-2003, 2010 Werner Koch
    # This file is free software; as a special exception the author gives
    # unlimited permission to copy and/or distribute it, with or without
    # modifications, as long as this notice is preserved.
    # This file is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY, to the extent permitted by law; without even the
    # implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    # Unless you specify which option file to use (with the command line
    # option "--options filename"), GnuPG uses the file ~/.gnupg/gpg.conf
    # by default.
    # An options file can contain any long options which are available in
    # GnuPG. If the first non white space character of a line is a '#',
    # this line is ignored. Empty lines are also ignored.
    # See the man page for a list of options.
    # Uncomment the following option to get rid of the copyright notice
    #no-greeting
    # If you have more than 1 secret key in your keyring, you may want to
    # uncomment the following option and set your preferred keyid.
    #default-key 621CC013
    # If you do not pass a recipient to gpg, it will ask for one. Using
    # this option you can encrypt to a default key. Key validation will
    # not be done in this case. The second form uses the default key as
    # default recipient.
    #default-recipient some-user-id
    #default-recipient-self
    # By default GnuPG creates version 4 signatures for data files as
    # specified by OpenPGP. Some earlier (PGP 6, PGP 7) versions of PGP
    # require the older version 3 signatures. Setting this option forces
    # GnuPG to create version 3 signatures.
    #force-v3-sigs
    # Because some mailers change lines starting with "From " to ">From "
    # it is good to handle such lines in a special way when creating
    # cleartext signatures; all other PGP versions do it this way too.
    # To enable full OpenPGP compliance you may want to use this option.
    #no-escape-from-lines
    # When verifying a signature made from a subkey, ensure that the cross
    # certification "back signature" on the subkey is present and valid.
    # This protects against a subtle attack against subkeys that can sign.
    # Defaults to --no-require-cross-certification. However for new
    # installations it should be enabled.
    require-cross-certification
    # If you do not use the Latin-1 (ISO-8859-1) charset, you should tell
    # GnuPG which is the native character set. Please check the man page
    # for supported character sets. This character set is only used for
    # metadata and not for the actual message which does not undergo any
    # translation. Note that future version of GnuPG will change to UTF-8
    # as default character set.
    #charset utf-8
    # Group names may be defined like this:
    # group mynames = paige 0x12345678 joe patti
    # Any time "mynames" is a recipient (-r or --recipient), it will be
    # expanded to the names "paige", "joe", and "patti", and the key ID
    # "0x12345678". Note there is only one level of expansion - you
    # cannot make an group that points to another group. Note also that
    # if there are spaces in the recipient name, this will appear as two
    # recipients. In these cases it is better to use the key ID.
    #group mynames = paige 0x12345678 joe patti
    # Some old Windows platforms require 8.3 filenames. If your system
    # can handle long filenames, uncomment this.
    #no-mangle-dos-filenames
    # Lock the file only once for the lifetime of a process. If you do
    # not define this, the lock will be obtained and released every time
    # it is needed - normally this is not needed.
    #lock-once
    # GnuPG can send and receive keys to and from a keyserver. These
    # servers can be HKP, email, or LDAP (if GnuPG is built with LDAP
    # support).
    # Example HKP keyservers:
    # hkp://keys.gnupg.net
    # Example LDAP keyservers:
    # ldap://pgp.surfnet.nl:11370
    # Regular URL syntax applies, and you can set an alternate port
    # through the usual method:
    # hkp://keyserver.example.net:22742
    # If you have problems connecting to a HKP server through a buggy http
    # proxy, you can use keyserver option broken-http-proxy (see below),
    # but first you should make sure that you have read the man page
    # regarding proxies (keyserver option honor-http-proxy)
    # Most users just set the name and type of their preferred keyserver.
    # Note that most servers (with the notable exception of
    # ldap://keyserver.pgp.com) synchronize changes with each other. Note
    # also that a single server name may actually point to multiple
    # servers via DNS round-robin. hkp://keys.gnupg.net is an example of
    # such a "server", which spreads the load over a number of physical
    # servers. To see the IP address of the server actually used, you may use
    # the "--keyserver-options debug".
    keyserver hkp://keys.gnupg.net
    #keyserver http://http-keys.gnupg.net
    #keyserver mailto:[email protected]
    # Common options for keyserver functions:
    # include-disabled = when searching, include keys marked as "disabled"
    # on the keyserver (not all keyservers support this).
    # no-include-revoked = when searching, do not include keys marked as
    # "revoked" on the keyserver.
    # verbose = show more information as the keys are fetched.
    # Can be used more than once to increase the amount
    # of information shown.
    # use-temp-files = use temporary files instead of a pipe to talk to the
    # keyserver. Some platforms (Win32 for one) always
    # have this on.
    # keep-temp-files = do not delete temporary files after using them
    # (really only useful for debugging)
    # honor-http-proxy = if the keyserver uses HTTP, honor the http_proxy
    # environment variable
    # broken-http-proxy = try to work around a buggy HTTP proxy
    # auto-key-retrieve = automatically fetch keys as needed from the keyserver
    # when verifying signatures or when importing keys that
    # have been revoked by a revocation key that is not
    # present on the keyring.
    # no-include-attributes = do not include attribute IDs (aka "photo IDs")
    # when sending keys to the keyserver.
    #keyserver-options auto-key-retrieve
    # Uncomment this line to display photo user IDs in key listings and
    # when a signature from a key with a photo is verified.
    #show-photos
    # Use this program to display photo user IDs
    # %i is expanded to a temporary file that contains the photo.
    # %I is the same as %i, but the file isn't deleted afterwards by GnuPG.
    # %k is expanded to the key ID of the key.
    # %K is expanded to the long OpenPGP key ID of the key.
    # %t is expanded to the extension of the image (e.g. "jpg").
    # %T is expanded to the MIME type of the image (e.g. "image/jpeg").
    # %f is expanded to the fingerprint of the key.
    # %% is %, of course.
    # If %i or %I are not present, then the photo is supplied to the
    # viewer on standard input. If your platform supports it, standard
    # input is the best way to do this as it avoids the time and effort in
    # generating and then cleaning up a secure temp file.
    # The default program is "xloadimage -fork -quiet -title 'KeyID 0x%k' stdin"
    # On Mac OS X and Windows, the default is to use your regular JPEG image
    # viewer.
    # Some other viewers:
    # photo-viewer "qiv %i"
    # photo-viewer "ee %i"
    # photo-viewer "display -title 'KeyID 0x%k'"
    # This one saves a copy of the photo ID in your home directory:
    # photo-viewer "cat > ~/photoid-for-key-%k.%t"
    # Use your MIME handler to view photos:
    # photo-viewer "metamail -q -d -b -c %T -s 'KeyID 0x%k' -f GnuPG"
    keyring /etc/pacman.d/gnupg/pubring.gpg
    EDIT:
    I tried moving the directory and using the default, which produced the following error:
    bstaletic@arch cower % makepkg
    ==> Making package: cower 12-1 (Fri Dec 19 17:47:35 CET 2014)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving sources...
    -> Found cower-12.tar.gz
    -> Found cower-12.tar.gz.sig
    ==> Validating source files with md5sums...
    cower-12.tar.gz ... Passed
    cower-12.tar.gz.sig ... Skipped
    ==> Verifying source file signatures with gpg...
    cower-12.tar.gz ... FAILED (unknown public key 1EB2638FF56C0C53)
    ==> ERROR: One or more PGP signatures could not be verified!
    If I add the last line to gpg.conf, it shows the first error posted.
    Last edited by bstaletic (2014-12-19 20:59:05)

    bstaletic wrote:
    I've read the news and did what was asked of me, yet the problem is still there.
    P.S. Just to be sure, I have just redone the procedure described in the news.
    EDIT:
    The maintainer of cower just updated the package and his signature, and now everything is working.
    As of today I'm having the same issue as you did. Did the maintainer change the gpg key back to the non-working one?
    I can't seem to get it added to my keyring for some reason.
    EDIT: when I add the key 1EB26.. to my pacman-key it shows that nothing changed and the key seems to be fine.
    But when I build the package it tells me that the key is an unknown public key.
    Last edited by SimbaClaws (2015-01-26 21:50:30)

  • Error "conversion of a text from codepage '4103' to codepage '1100' "

    Hello,
    Greetings!
    Please help me with this Error while trying to transfer data to application server.
    My data contains these characters: "*".
    OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE.
    TRANSFER <data> TO p_file.
    CLOSE DATASET p_file.
    <data> contains "*" characters.
    At the conversion of a text from codepage '4103' to codepage '1100':
    - a character was found that cannot be displayed in one of the two
    codepages;
    - or it was detected that this conversion is not supported
    The running ABAP program 'ZRPPO_TEST' had to be terminated, as the conversion would have produced incorrect data.
    The number of characters that could not be displayed (and therefore not
    be converted), is 2. If this number is 0, the second error case, as
    mentioned above, has occurred.
    Thanks.
    Regards,

    Hello,
    1100 is the ISO 8859-1 codepage.
    The only character there that looks like a double quote is U+0022 QUOTATION MARK, the normal ASCII double quote. This is definitely contained in both codepages.
    BUT: in Unicode (codepage 4103) there are many other, similar-looking variants of quotes and quotation marks which cannot be converted to 1100.
    What you can do to find out which exact character you have:
    In the Unicode system, copy the character from the GUI display (e.g. in the ABAP debugger) into Windows Wordpad, mark this char in Wordpad with the mouse and press ALT+x.
    This will convert the char to its Unicode codepoint (22 -> U+0022 for the ASCII double quote).
    Regards,
      Alex
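Alex's point about look-alike characters can be sketched outside ABAP. This is a hedged Python illustration (the strings are invented for the demo): the plain ASCII quote survives a Unicode-to-ISO-8859-1 conversion, while a typographic "smart" quote triggers exactly the kind of failure the dump describes.

```python
# The plain ASCII double quote (U+0022) exists in ISO 8859-1 (codepage 1100),
# so converting it from Unicode succeeds.
'He said "hello"'.encode('iso-8859-1')

# A typographic "smart" quote (U+201C/U+201D) has no ISO 8859-1 equivalent,
# so the same conversion fails -- the codepage 4103 -> 1100 situation above.
try:
    'He said \u201chello\u201d'.encode('iso-8859-1')
except UnicodeEncodeError as err:
    offender = err.object[err.start]
    print(f'U+{ord(offender):04X} cannot be converted to ISO 8859-1')
```

The exception reports the first offending character, which mirrors the "number of characters that could not be displayed" count in the ABAP short dump.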

  • Special characters change in e-mail body

    When I receive e-mail with the ISO 8859-2 charset, the special characters (áéőüú etc.) change into weird, unreadable characters (ĂĄ). However, mail preview lines show them correctly. I use English as the phone language (as Hungarian is not available); the keyboard is Hungarian.
    Anybody facing the same problem? Any solution?
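The ĂĄ garbling is the classic signature of UTF-8 bytes being decoded as ISO 8859-2. A minimal Python sketch (assuming that mismatch is what the mail client does) reproduces it:

```python
# 'á' encoded as UTF-8 is the two bytes C3 A1; read back as ISO 8859-2,
# 0xC3 is 'Ă' and 0xA1 is 'Ą' -- exactly the garbage described above.
wire = 'á'.encode('utf-8')           # b'\xc3\xa1'
garbled = wire.decode('iso-8859-2')
print(garbled)  # ĂĄ
```

Each accented letter becomes two Central European ones, which is why the body is unreadable while a preview that decodes correctly looks fine.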

    If you set the document to be an HTML type, then you can use HTML scripting to set
    the font.
    Create send request
       send_request = cl_bcs=>create_persistent( ).
    Create html content:
        mail_line = '.mail TBODY,{font-family:verdana;font-size:10pt;}'.
        append mail_line TO mail_text.
        ... (your text)
    Create document with type 'HTM' and provide your HTML scripting
          document = cl_document_bcs=>create_document( i_type    = 'HTM'
                                                       i_text    = mail_text
                                                       i_subject = 'Subject' ).
    Add document to send request
          CALL METHOD send_request->set_document( document ).
    Send document
          CALL METHOD send_request->send( ).

  • Question about Portal and BI Beans

    Hi,
    I am trying to create a Portlet that displays a Thin BI Beans crosstab. Using the "URL-Based Portlet (inline rendering)", I could display the Thin BI Beans crosstab inside the Portal. But when I try to drill down or change the page edge, it ends up with a "No Page Found" error. My questions are...
    1) Is it possible to embed a Thin BI Beans crosstab inside the Portal and manipulate it dynamically?
    2) If it is possible, how can I do that?
    I will attach my provider.xml and JSP file that creates the crosstab. Please let me know if you need more information. Thank you very much.
    Seiji Minabe
    Technical Director
    IAF Software, Inc.
    provider.xml
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <?providerDefinition version="3.1"?>
    <provider class="oracle.portal.provider.v2.http.URLProviderDefinition">
    <providerInstanceClass>oracle.portal.provider.v2.http.URLProviderInstance</providerInstanceClass>
    <session>true</session>
         <portlet class="oracle.portal.provider.v2.DefaultPortletDefinition">
              <id>1</id>
              <name>Sample Portal</name>
              <title>SamplePortal</title>
              <shortTitle>SamplePortal</shortTitle>
              <description>SamplePortal.</description>
              <timeout>10000</timeout>
              <timeoutMessage>SamplePortal portlet timed out</timeoutMessage>
              <acceptContentType>text/html</acceptContentType>
              <renderer class="oracle.portal.provider.v2.render.RenderManager">
                   <contentType>text/html</contentType>
                   <charSet>UTF-8</charSet>
                   <showPage>/sampleView.jsp</showPage>
              </renderer>
         </portlet>
         <portlet class="oracle.portal.provider.v2.http.URLPortletDefinition">
              <id>2</id>
              <name>Sample URL Based Portlet</name>
              <title>Sample URL Based Portlet</title>
              <description>Display Sample as a portlet.</description>
              <timeout>100</timeout>
              <timeoutMessage>Timed out waiting for Sample Portlet.</timeoutMessage>
              <acceptContentType>text/html</acceptContentType>
              <showEdit>false</showEdit>
              <showEditToPublic>false</showEditToPublic>
              <showEditDefault>false</showEditDefault>
              <showPreview>false</showPreview>
              <showDetails>false</showDetails>
              <hasHelp>false</hasHelp>
              <hasAbout>false</hasAbout>
              <renderer class="oracle.portal.provider.v2.render.RenderManager">
              <contentType>text/html</contentType>
              <showPage class="oracle.portal.provider.v2.render.http.URLRenderer">
              <contentType>text/html</contentType>
              <charSet>ISO-8859-1</charSet>
              <pageUrl>http://iafsoft06.iafsoft.com:7779/SamplePortal/sampleView.jsp</pageUrl>
              <filter class="oracle.portal.provider.v2.render.HtmlFilter">
              <headerTrimTag>&lt;body</headerTrimTag>
              <footerTrimTag>&lt;/body></footerTrimTag>
              <inlineRendering>true</inlineRendering>
              </filter>
              </showPage>
              </renderer>
         </portlet>
    </provider>
    sampleView.jsp
    <%@ taglib uri="http://xmlns.oracle.com/bibeans" prefix="orabi" %>
    <%@ page contentType="text/html;charset=windows-1252"%>
    <%@ page import="oracle.portal.provider.v2.render.*"%>
    <%@ page import="oracle.portal.provider.v2.http.*"%>
    <%-- Start synchronization of the BI tags --%>
    <% synchronized(session){ %>
    <orabi:BIThinSession id="BIThinSession1" configuration="/Project1BIConfig1.xml" >
    <orabi:Presentation id="sampleView_Presentation1" location="sampleCrosstab" />
    </orabi:BIThinSession>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
    <title>
    Sample View
    </title>
    </head>
    <body>
    <FORM name="BIForm">
    <!-- Insert your Business Intelligence tags here -->
    <orabi:Render targetId="sampleView_Presentation1" parentForm="BIForm" />
    <%-- The InsertHiddenFields tag adds state fields to the parent form tag --%>
    <orabi:InsertHiddenFields parentForm="BIForm" biThinSessionId="BIThinSession1" />
    </FORM>
    </body>
    </html>
    <% } %>
    <%-- End synchronization of the BI tags --%>

    The versions of products I use are...
    Oracle Database 9.2.0.2
    9iAS 9.0.2
    JDeveloper903
    bibeans903a

  • Sending £ signs with JavaMail

    Hi chaps,
    At our company we use the JavaMail API through JNI to send emails externally to customers.
    Recently the business has decided that it would be nice to send the pound sign (£) through; however, it's not going to plan.
    Normally the messages are encoded in 7-bit, which would seem to exclude the £ (163); however, it seems that it is first converted to =80 (a hex code with no ASCII correlation) and then to a Euro symbol.
    I've tried setting the "allow8bitmime" property in order to enable the SMTP service to support 8-bit encoding, but it's not working AND Microsoft say that our Exchange (v5.5) doesn't support 8-bit MIME types. The odd thing is that sending external emails via Exchange supports the £ with no problems, and it is read in the mail reader as 8-bit encoded (!).
    Has anybody had any experience with issues like this ? Any reply appreciated.
    Thanks
    Ammonite

    Ammonite,
    I am working on a similar problem. Did you say your email gets translated and prints the Euro symbol? I actually need our email to print both the pound and the Euro, and it does neither right now. The current implementation I am working on will display a Euro instead of a pound, although this isn't by design. The message is encoded using "Quoted-Printable" and substitutes the pound for a =80 character. How this maps to a Euro is unknown to me :(
    In standard ASCII the pound is outside the standard 7-bit encoding range (163) and featured in the Extended Set on the platform we use. I'm not sure about the Euro. The ISO-8859-1 charset which SH provided supports the pound in "Quoted-Printable" encoding, but this hasn't been well received by the JavaMail MIME objects that I have tried.
    I haven't managed any sort of 8-bit encoding with JavaMail as yet, due to supposed lack of support in Exchange 5.5 and lack of response to the "allow8bitmime" property.
    Does anyone know if using MimeMessage limits the message to ASCII characters only, and does that rule out the Euro and the pound symbol?
    It shouldn't rule them out, but I'm starting to believe that MimeMessages aren't necessarily helping things here :)
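The =80/Euro confusion above can be pinned down with a small sketch (Python here rather than JavaMail, as an illustration of the encodings, not of the API): £ is byte 0xA3 in ISO 8859-1, so quoted-printable should carry it as =A3, while the byte 0x80 only means € under windows-1252.

```python
import quopri

# £ is 0xA3 in ISO 8859-1, so a correctly labelled quoted-printable body
# carries it as =A3 ...
print(quopri.encodestring('£'.encode('iso-8859-1')))  # b'=A3'

# ... whereas the stray 0x80 byte that ends up in the message is the Euro
# sign only under windows-1252, which is why readers display =80 as €.
print(b'\x80'.decode('windows-1252'))  # €
```

So a message that arrives as =80 was most likely encoded against a windows-1252-flavoured table while being labelled as something else, which matches the symptom described in this thread.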

  • Webservice: Special characters not displayed correctly

    Hi,
    I'm facing a problem when retrieving information via a webservice. I'm able to use it and data is retrieved, but when there are special characters in the response they are not displayed correctly. The response of the webservice is XML formatted.
    It seems to be a character set problem. The strange thing is that the response is ISO-8859-1 formatted, and this charset should normally display special characters like (é à ...) correctly.
    In my code I simply use an if_http_client object to use the webservice.
    pv_result = pv_http_client->response->get_cdata( ).
    XML response:
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    <![CDATA[Eetcaf�/Steakhuis Baskent]]>
    Should be Eetcafé
    How can I specify a charset for the response object?
    I did the same in .NET, where I can bypass this issue specifying the charset like this.
    Dim reader As New StreamReader(oResponse.GetResponseStream(), System.Text.Encoding.Default)
    Thanks a lot for your help.
    Edited by: dom___35 on Dec 21, 2009 3:30 PM

    I solved this by using the get_data function of the response object, then converting the result from the ISO-8859-1 charset.
    See code below.
    DATA: lv_encoding TYPE abap_encoding,
          lv_conv     TYPE REF TO cl_abap_conv_in_ce,
          lv_x_string TYPE xstring.

    lv_x_string = pv_http_client->response->get_data( ).
    lv_encoding = '1100'.
    lv_conv = cl_abap_conv_in_ce=>create( encoding = lv_encoding
                                          input    = lv_x_string ).
    lv_conv->read( IMPORTING data = pv_result ).
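For comparison, the same fix sketched in Python (an illustration of the idea, not the ABAP API): take the raw response bytes and decode them explicitly with the charset the response declares, instead of trusting a default.

```python
# Raw bytes as they came off the wire; 0xE9 is 'é' in ISO 8859-1.
raw = b'Eetcaf\xe9/Steakhuis Baskent'

# Decode explicitly with the declared charset instead of a default.
result = raw.decode('iso-8859-1')
print(result)  # Eetcafé/Steakhuis Baskent
```

This is the byte-level equivalent of switching from get_cdata (which decodes for you) to get_data plus cl_abap_conv_in_ce with codepage 1100.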
