UTF-8 or Windows character set?

I'm putting several projects through RH 7 that were previously produced
in RH 5.x and RH 6. (Not the WebHelp output, but copies of the
original source projects, opened with RH 7.)
Special characters are a problem. I plan to do some testing.
Maybe it's not the fault of RH, and I need to update my source
files manually for these characters.
But meanwhile, I thought I'd ask these questions:
1. Is there a reason to choose UTF-8 for Windows users? Will
IE and other browsers display UTF-8 correctly in a Windows
environment?
2. If UTF-8 is OK, is there a reason not to choose UTF-8 for
a project previously worked in RH 6 or earlier? I expect RH 7 to
produce the correct output, but it is having a problem with older
special characters in the source files. The corollary question is
whether RH 7 recognizes and converts outdated code for special
characters when they occur in imported html (source project) files.
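If it comes to fixing the source files myself, I'm thinking of something along these lines - a rough sketch, assuming the old topics are in windows-1252 (I'd verify the actual legacy encoding first), with a made-up file path:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReencodeTopic {
    public static void main(String[] args) throws Exception {
        Path topic = Paths.get("C:\\projects\\help\\topic.htm"); // hypothetical topic file
        // Read the bytes using the legacy encoding, then write the text back out as UTF-8.
        byte[] raw = Files.readAllBytes(topic);
        String text = new String(raw, Charset.forName("windows-1252"));
        Files.write(topic, text.getBytes(StandardCharsets.UTF_8));
        // The <meta> charset declaration inside each topic would also need updating to UTF-8.
    }
}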
Thanks.
Harvey

It sounds like your problem is with fonts being replaced. You should start by opening your Font Book application and looking for one font to disable, which will more than likely solve your problem.
Disable this font:
Helvetica Fractions
Here's an article from Apple about Helvetica Fractions:
http://docs.info.apple.com/article.html?artnum=301245

Similar Messages

  • Conversion of byte array in UTF-8 to GSM character set

    I want to convert a byte array in UTF-8 to the GSM character set. Please advise how I can do that.

    String s = new String(byteArrayInUTF8, "utf-8");
    This will convert your byte array to a Java UNICODE UTF-16 encoded String, on the assumption that the byte array represents characters encoded as utf-8.
    I don't understand what GSM characters are so someone else will have to help you with the second part.
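    For what it's worth, a minimal sketch of that first step (the GSM 03.38 mapping itself is not shown; it typically needs a lookup table or a library that provides one, and the sample text here is made up):

    import java.io.UnsupportedEncodingException;

    public class Utf8ToStringDemo {
        public static void main(String[] args) throws UnsupportedEncodingException {
            byte[] byteArrayInUTF8 = "Grüße".getBytes("UTF-8"); // sample UTF-8 encoded bytes
            // Decode the UTF-8 bytes into a Java String (internally UTF-16).
            String s = new String(byteArrayInUTF8, "UTF-8");
            System.out.println(s);
            // Converting this String to the GSM 03.38 default alphabet is a separate,
            // table-driven step and is not part of the standard charset API.
        }
    }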

  • std::string and NLS_LANG character sets

    I'm an OCI user, but not a pure one. Because I use the freely available OTL (Oracle Template Library) from S. Kuchin, which is a wrapper around OCI, I hope this is not off topic here.
    The library offers the possibility to read database strings from VARCHAR2 fields into a std::string. I know that Oracle does character conversion on the client side, controlled via the NLS_LANG environment variable.
    It's clear to me that reading database strings into a std::string is no problem for one-byte character sets like ISO-8859-1. But what about when I point NLS_LANG at UTF-8 or a Chinese character set, e.g. ZHT16BIG5?
    Is it still safe to read the result into a std::string?
    For example, I have a database with the default character set AMERICAN_AMERICA.WE8ISO8859P15. I stored some German umlauts in a table. I wrote a small program using OTL and set NLS_LANG to UTF8 on the client side.
    I fetched the data from server to client, stored it in a std::string, pushed it to a file, and yes, the data was stored as UTF8.
    Is it really that simple, or is it dangerous to read the UTF8-converted data into a std::string? What is the common rule? When may I, and when may I not, read the data into a std::string?

    Hello,
    I think you'll have more accurate answer in the Globalization Support Forum:
    Globalization Support
    Best regards,
    Jean-Valentin

  • Font character set

    For years I've been using this in my head section:
    <!doctype html>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    even though I'm told that it is better to use this:
    <!doctype html>
    <html lang="en">
    <head>
    <meta charset="utf-8" />
    Both character sets look the same in browser preview (and presumably online).
    However, with the latter, "more correct" charset, the fonts in Design View look too large and also bold, so I have stayed with the older charset.
    Is this connected with the setting I use for fonts "Western European"?
    Should I be using "Unicode" in DW 5.5?

    Possibly.
    Two things to check first, run your pages through the validators here (especially the html validator) and clear up any errors. It may just be a small error in your html that makes DW's Design View go nutty...
    HTML Validator: http://validator.w3.org
    CSS Validator: http://jigsaw.w3.org/css-validator/
    The CSS validator is a little less useful because it doesn't understand all of the vendor prefixes so it will throw more errors when there really aren't any.
    If you clean your code completely and it's still happening, posting a link to a page on your server is preferred. As a distant second, the complete code of the page and css posted to the forum may help.

  • Finding all the characters in a character set

    How do I find out what all of the characters in a specific character set are? I'm trying to determine which Oracle character sets to recommend to customers using various Windows code pages, and it's easy enough to find a list of the characters in a code page, but it seems impossible to find a list of the characters in an Oracle character set. Can anyone tell me where to look?

    Hi.
    Oracle's ISO and Windows character sets are consistent with the code pages they support. If you need other character sets, or you just want to check for yourself, the 9.2 version of Locale Builder supports displaying and printing all non-Unicode code pages.

  • Invalid Characters shown in UTF-8 character set

    There is an XMLP report whose template output character set is ISO-8859-1. The character set ISO-8859-1 is required for this report by the Spanish authorities. When the report is run, the output is generated in the output directory of the application server. This output file doesn't contain any invalid characters.
    But when the output is opened from the SRS window, which opens it in a browser, invalid characters are shown for characters like Ñ, É, etc.
    Investigation done:
    Found that the output generated on the server has ISO encoding and hence doesn't contain any invalid characters. The output opened from the SRS window, however, is in UTF encoding, so it seems the invalid characters appear when the conversion from ISO to UTF-8 takes place.
    I created the eText output from the data XML and template using the BI Publisher tool; that output is in ISO encoding. If I then change the encoding to UTF-8 by opening it in a browser or Notepad++, invalid characters are shown for Ñ, É, etc.
    Is there a limitation that output from the SRS window will show only in UTF-8 encoding? If not, then please suggest.
    Thanks,
    Saket
    Edited by: 868054 on Aug 2, 2012 3:05 AM

    Hi Srini,
    When the customer views output from the SRS window, it contains invalid characters because it is in the UTF-8 character set. The customer is on Oracle OnDemand, so they cannot take the output generated on the server; every time, they have to raise a request to Oracle for the output file. So the concern here is: why doesn't the output from the SRS window show valid characters?
    The reason could be the conversion from ISO format to UTF-8. How can this be resolved? Can the SRS window not generate output in ISO format?
    A quick reply will be appreciated as customer is chasing for an update.
    Thanks,
    Saket
    Edited by: 868054 on Aug 7, 2012 11:08 PM
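    To illustrate the mismatch described above, here is a small, hypothetical Java sketch (class name and sample text are illustrative) showing how ISO-8859-1 bytes turn into invalid characters when they are reinterpreted as UTF-8:

    import java.nio.charset.StandardCharsets;

    public class EncodingMismatchDemo {
        public static void main(String[] args) {
            String original = "Ñ É"; // characters reported as invalid in the SRS output
            byte[] isoBytes = original.getBytes(StandardCharsets.ISO_8859_1);
            // Reinterpreting ISO-8859-1 bytes as UTF-8 yields replacement characters,
            // which is the kind of corruption seen when the browser assumes UTF-8.
            String misread = new String(isoBytes, StandardCharsets.UTF_8);
            System.out.println(misread);
        }
    }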

  • Unicode Character sets (e.g UTF-8)

    Hi,
    We are using some third party software which will connect to the Oracle database.
    One of the requirements it states is that both the database client and server must use the Unicode character set, e.g. UTF-8.
    How do we ensure this when installing the Oracle client software?
    Also, why, when installing the Oracle client software and selecting English as the language, does it set NLS_LANG to American by default?
    Is there an English U.K. language option? I couldn't see it.
    Many Thanks

    user5716448 wrote:
    "We are using some third party software which will connect to the Oracle database. One of the requirements it states is that both the database client and server must use the Unicode character set, e.g. UTF-8."
    Pl post details of OS and database and client versions being installed.
    "How do we ensure this when installing the Oracle client software?"
    For the client, set NLS_LANG appropriately when using the client software - there is no setup required during the install - http://www.oracle.com/technetwork/database/globalization/nls-lang-099431.html
    "Also, why, when installing the Oracle client software and selecting English as the language, does it set NLS_LANG to American by default? Is there an English U.K. language option - couldn't see it."
    Try "ENGLISH"
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch3globenv.htm
    HTH
    Srini

  • Database character set = UTF-8, but mismatch error on XML file upload

    Dear experts,
    I am having problems trying to upload an XML file into an XMLType table. The Database is 9.2.0.5.0, with the character set details:
    SELECT *
    FROM SYS.PROPS$
    WHERE name like '%CHA%';
    Query results:
    NLS_NCHAR_CHARACTERSET          UTF8     NCHAR Character set
    NLS_SAVED_NCHAR_CS          UTF8
    NLS_NUMERIC_CHARACTERS          .,     Numeric characters
    NLS_CHARACTERSET          UTF8     Character set
    NLS_NCHAR_CONV_EXCP          FALSE     NLS conversion exception
    To upload the XML file into the XMLType table, I am using the command:
    insert into XMLTABLE
    values(xmltype(getClobDocument('ServiceRequest.xml','UTF8')));
    However, I get the error:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00200: could not convert from encoding UTF-8 to UCS2
    Error at line 1
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    Why does it mention UCS2, as I can't see that in the database character set?
    Many thanks for your help,
    Mark

    UCS2 is known as AL16UTF16 (LE/BE) by Oracle...
    Try using AL32UTF8 as the character set name.
    AFAIK the main difference between Oracle's UTF8 and AL32UTF8 character sets is that the UTF8 character set does not support those UTF-8 characters that require 4 bytes.
    -Mark
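    For what it's worth, a quick Java illustration of the 4-byte case mentioned above (the sample character is arbitrary):

    import java.nio.charset.StandardCharsets;

    public class SupplementaryCharDemo {
        public static void main(String[] args) {
            // U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the Basic Multilingual Plane.
            String s = new String(Character.toChars(0x1D11E));
            // Real UTF-8 (Oracle's AL32UTF8) encodes it in 4 bytes; Oracle's older UTF8
            // character set cannot store such characters as a single 4-byte sequence.
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // prints 4
        }
    }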

  • UTF/Japanese character set and my application

    Blankfellaws...
    a simple query about the internationalization of an enterprise application..
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either Weblogic JMS here in which case it is an
    application server or I will have MQSeries in which case it will be a
    different machine all together
    4) adapter layer - something like a connector layer with some specific or
    rather customized modules which can talk to enterprise repositories
    The database has a few messages in UTF format, and they are Japanese characters.
    My requirement: I need those messages to be picked up from the database by the business layer and passed on to the client screen, which is a web browser, through the presentation layer.
    What are the various points to be noted to get this done?
    Where all do I need to set the character set, and what would be the ideal character set to use to support the maximum number of characters?
    Is there anything specific to be done in my application code regarding this?
    Is it just a matter of setting the character sets in the application servers / web servers / web browsers?
    Please enlighten me on these areas, as I'm into something similar to this and trying to figure out what's wrong in my current application. When the data comes to the screen through my application, it looks corrupted. But the same message, when read through a simple servlet, displays without a problem.
    I'm confused!!
    Thanks in advance
    Manesh

    Hello Manesh,
    For the database I would recommend using UTF-8.
    As for the character problems, could you elaborate which version of WebLogic
    are you using and what is the nature of the problem.
    If your problem is that of displaying the characters from the db and are
    using JSP, you could try putting
    <%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
    first line,
    or if a servlet .... response.setContentType("text/html; charset=UTF-8");
    Also to automatically select the correct charset by the browser, you will
    have to include
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the
    jsp.
    You could replace the "UTF-8" with other charsets you are using.
    I hope this helps...
    David.
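    Pulling those suggestions together, a minimal sketch of such a servlet might look like this (class name and markup are illustrative, not taken from the original application):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class JapaneseTextServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Must be set before getWriter() so the writer actually uses UTF-8.
            resp.setContentType("text/html; charset=UTF-8");
            PrintWriter out = resp.getWriter();
            out.println("<html><head>");
            out.println("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">");
            out.println("</head><body>");
            out.println("日本語のメッセージ"); // stand-in for the Japanese text fetched from the database
            out.println("</body></html>");
        }
    }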
    "m a n E s h" <[email protected]> wrote in message
    news:[email protected]...
    Blankfellaws...
    a simple query about the internationalization of an enterpriseapplication..
    >
    I have a considerably large application running as 4 layers.. namely..
    1) presentation layer - I have a servlet here
    2) business layer - I have an EJB container here with EJBs
    3) messaging layer - I have either Weblogic JMS here in which case it isan
    application server or I will have MQSeries in which case it will be a
    different machine all together
    4) adapter layer - something like a connector layer with some specific or
    rather customized modules which can talk to enterprise repositories
    The Database has few messages in UTF format.. and they are Japanese
    characters
    My requirement : I need thos messages to be picked up from the database by
    the business layer and passed on to the client screen which is a webbrowser
    through the presentation layer.
    What are the various points to be noted to get this done?
    Where and all I need to set the character set and what should be the ideal
    character set to be used to support maximum characters?
    Are there anything specifically to be done in my application coderegarding
    this?
    Are these just the matter of setting the character sets in the application
    servers / web servers / web browsers?
    Please enlighten me on these areas as am into something similar to thisand
    trying to figure out what's wrong in my current application. When the data
    comes to the screen through my application, it looks corrupted. But theasme
    message when read through a simple servlet, displays them without aproblem.
    Am confused!!
    Thanks in advance
    Manesh

  • Character set conversion UTF-8 -> ISO-8859-1 generates question mark (?)

    I'm trying to convert an XML-file in UTF-8 format to another file with character set ISO-8859-1.
    My problem is that the ISO-8859-1 file generates a question mark (?) and puts it as a prefix in the file.
    ?<?xml version="1.0" encoding="UTF-8"?>
    <ns0:messagetype xmlns:ns0="urn:olof">
    <underkat>testv���rde</underkat>
    </ns0:messagetype>
    Is there a way to do the conversion without getting the question mark?
    My code looks as follows:
    import java.io.*;

    public class ConvertEncoding {
         public static void main(String[] args) {
              String from = "UTF-8", to = "ISO-8859-1";
              String infile = "C:\\temp\\infile.xml", outfile = "C:\\temp\\outfile.xml";
              try {
                   convert(infile, outfile, from, to);
              } catch (Exception e) {
                   System.out.println(e.getMessage());
                   System.exit(1);
              }
         }

         private static void convert(String infile, String outfile,
                                     String from, String to)
                   throws IOException, UnsupportedEncodingException {
              // Set up byte streams
              InputStream in = null;
              OutputStream out = null;
              if (infile != null) {
                   in = new FileInputStream(infile);
              }
              if (outfile != null) {
                   out = new FileOutputStream(outfile);
              }
              // Set up character streams
              Reader r = new BufferedReader(new InputStreamReader(in, from));
              Writer w = new BufferedWriter(new OutputStreamWriter(out, to));
              /* Copy characters from input to output.
               * The InputStreamReader converts from the input encoding to Unicode,
               * and the OutputStreamWriter converts from Unicode to the output encoding.
               * Characters that cannot be represented in the output encoding
               * are written as '?'. */
              char[] buffer = new char[4096];
              int len;
              while ((len = r.read(buffer)) != -1) { // Read a block of input
                   w.write(buffer, 0, len);
              }
              r.close();
              w.flush();
              w.close();
         }
    }

    Yes the next character is the '<'
    The file that I read from is generated by an integration platform. I send a plain file to it (supposedly in UTF-8 encoding) and it returns another file (in between I call my Java class that converts the character set from UTF-8 to ISO-8859-1). The file that I get back contains the '���' if the conversion doesn't work and '?' if the conversion worked.
    My solution so far is to skip the first "junk-characters" when reading from the inputstream. Something like:
    private static final char UTF_BOM = '\uFEFF'; // the UTF byte-order mark

    String from = "UTF-8", to = "ISO-8859-1";
    if (from != null && from.toLowerCase().startsWith("utf-")) { // Are we reading a UTF-encoded file?
         /* Read the first character of the UTF-encoded file.
          * It will be the BOM if the file starts with one;
          * if so, skip it so it never reaches the output. */
         try {
              r.mark(1); // Only allow one char to be read back, so reset() works
              int i = r.read();
              char c = (char) i;
              if (String.valueOf(UTF_BOM).equalsIgnoreCase(String.valueOf(c))) {
                   r.reset(); // reset to the start position
                   r.skip(1); // skip the BOM when reading from the stream
              } else {
                   r.reset();
              }
         } catch (IOException e) {
              e.getMessage();
              // return null;
         }
    }
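    A more compact variant of the same idea, as a sketch using java.nio (it assumes the input really is UTF-8 with an optional BOM, and that '?' substitution for unmappable characters is acceptable):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class StripBomAndConvert {
        public static void main(String[] args) throws IOException {
            Path in = Paths.get("C:\\temp\\infile.xml");
            Path out = Paths.get("C:\\temp\\outfile.xml");
            String text = new String(Files.readAllBytes(in), StandardCharsets.UTF_8);
            if (!text.isEmpty() && text.charAt(0) == '\uFEFF') {
                text = text.substring(1); // drop the BOM so no '?' ends up in the output
            }
            // Characters with no ISO-8859-1 mapping are still replaced by '?'.
            Files.write(out, text.getBytes(StandardCharsets.ISO_8859_1));
        }
    }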

  • Windows "Regional Options" locale - JCE for 8bit vs 16bit character sets

    I have a Java application that reads in an Encrypted text file. The text file was Encrypted using JCE 1.2.1 and Encrypted on a Windows system with the locale set to English(US). The Encryption uses Sun's version of the DES encryption algorithm.
    This app reads in the encrypted text file, decrypts it, and processes its information.
    This works fine on Windows systems if the Regional Options control panel is set to a region that uses 8-bit character sets:
    - English
    - Italian
    - Spanish
    - French
    But, if the locale is set to 16-bit character set regions, the text file cannot be read and parsed. Such regions include:
    - Russian
    - Greek
    - English (Hong Kong)
    At this point, I think I have two options, but I would love to hear about more:
    - Edit the encrypting/decrypting code (or the parsing code, which parses through a comma-delimited file) so that the file that is encrypted and decrypted can handle either 8-bit or 16-bit character sets
    (Don't know how to do this)
    or
    - Programmatically change the locale of Windows machines to English(US) at application start-up and then change it back to the previous locale setting on application shut-down
    (Don't know how to do this either)
    I'd appreciate any help. I'm not sure if this is an internationalization issue or a JCE issue.
    Thanks in advance

    I found an answer to the problem I was having.
    The culprits were two special characters that the client was using in the encrypted text to distinguish between different fields and to mark carriage returns (� and �). The non-Latin-alphabet languages didn't know what to do with those characters, so they substituted their own characters, thus breaking the parsing logic, which was hard-coded to look for � and �.
    The problem was also related to the fact that the JCE works with byte[] arrays. FileInputStreams (which deal with byte[] arrays) seem to convert the special characters to new characters in non-Latin languages, to match what was going on in the JCE logic.
    The easiest fix I could come up with was to include a new properties file to be read by a separate FileInputStream. This properties text file contained just two characters (��). When I loaded in this new properties file via a FileInputStream, the two characters (��) in the properties file magically changed to match the currently active alphabet (or didn't change at all if the computer was using a Latin alphabet).
    By checking the new properties file to see what the characters had changed to (if they had), I was able to know what to use to parse the encrypted data. And so, regardless of what language the computer was set to, the encrypted data is now parsed correctly, since I took out the hard-coding that looked specifically for the characters � and � and instead rewrote the code so it now uses the characters from the properties file (or whatever characters they change to) for parsing the content data.
    I hope others find this useful.
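    A rough sketch of that workaround (names are illustrative; the point is that the delimiter characters are read through the same default-encoding path as the rest of the data):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.Reader;

    public class DelimiterLoader {
        // Reads the two delimiter characters from a small properties file using the
        // platform default encoding, so they undergo the same conversion as the payload.
        public static char[] loadDelimiters(String path) throws IOException {
            Reader r = new InputStreamReader(new FileInputStream(path)); // default charset on purpose
            try {
                char[] delims = new char[2];
                if (r.read(delims) != 2) {
                    throw new IOException("Expected exactly two delimiter characters in " + path);
                }
                return delims;
            } finally {
                r.close();
            }
        }
    }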


  • JSF and Character Sets (UTF-8)

    Hi all,
    This question might have been asked before, but I'm going to ask it anyway because I'm completely puzzled by how this works in JSF.
    Let's begin with the basics: I have an application running on an OC4J servlet container, and I am using JSF 1.1 (MyFaces). The problem I am having with this setup is that the character encodings I want the server/client to use are not coming across correctly. I'm trying to force the application to be UTF-8, but after the response is rendered to my client, I've magically been reverted to ISO-8859-1, which is the main character set for the Netherlands. However, I'm building the application to support proper internationalization, which means I NEED to use UTF-8.
    I've executed the following steps to reach this goal:
    - All JSP files contain page directives, noting the character set:
    <%@ page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8" %>
    I've checked the generated source that comes from the JSP's, it looks as expected.
    - I've created a servlet filter to set the character set directly on the request and response objects:
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
            // Set the characterencoding for the request and response streams.
            req.setCharacterEncoding("UTF-8");
            res.setContentType("text/html; charset=UTF-8");       
            // Complete (continue) the processing chain.
            chain.doFilter(req, res); 
        }
    I've debugged the code, and this works fine, except for where JSF comes in. If I use the above situation, without going through JSF, my pages come back UTF-8. When I go through JSF, my pages come back as ISO-8859-1. I'm baffled as to what is causing this. On several forums, writing a filter was proposed as the solution, however this doesn't do it for me.
    It looks like somewhere internally in JSF the character set is changed to ISO. I've been through the sources, and I've found several pieces of code that support that theory. I've seen portions of code where the character set for the response is set to that of the request. Which in my case coming from a dutch system, will be ISO.
    How can this be prevented? Can anyone give some good insight on the inner workings of JSF with regards to character sets in specific? Could this be a servlet container problem?
    Many thanks in advance for your assistance,
    Jarno

    Jarno,
    I've been investigating JSF and character encodings a bit this weekend. And I have to say it's more than a little confusing. But I may have a little insight as to what's going on here.
    I have a post here:
    http://forum.java.sun.com/thread.jspa?threadID=725929&tstart=45
    where I have a number of open questions regarding JSF 1.2's intended handling of character encodings. Please feel free to comment, as you're clearly struggling with some of the same questions I have.
    In MyFaces JSF 1.1 and JSF-RI 1.2 the handling appears to be dependent on the raw Content-Type header. Looking at the MyFaces implementation here -
    http://svn.apache.org/repos/asf/myfaces/legacy/tags/JSF_1_1_started/src/myfaces/org/apache/myfaces/application/jsp/JspViewHandlerImpl.java
    (which I'm not sure is the correct code, but it's the best I've found) it looks like the raw header Content-Type header is being parsed in handleCharacterEncoding. The resulting value (if not null) is used to set the request character encoding.
    The JSF-RI 1.2 code is similar - calculateCharacterEncoding(FacesContext) in ViewHandler appears to parse the raw header, as opposed to using the CharacterEncoding getter on ServletRequest. This is understandable, as this code should be able to handle PortletRequests as well as ServletRequests. And PortletRequests don't have set/getCharacterEncoding methods.
    My first thought is that calling setCharacterEncoding on the request in the filter may not update the raw Content-Type header. (I haven't checked if this is the case.) If it doesn't, then the raw header may be getting reparsed and the request encoding getting reset in the ViewHandler. I'd suggest that you check the state of the Content-Type header before and after your call to req.setCharacterEncoding("UTF-8"). If the header charset value is unset or unchanged after this call, you may want to update it manually in your Filter.
    If that doesn't work, I'd suggest writing a simple ViewHandler which prints out the request's character encoding and the value of the Content-Type header to your logs before and after the calls to the underlying ViewHandler for each major method (i.e. renderView, etc.)
    Not sure if that's helpful, but it's my best advice based on the understanding I've reached to date. And I definitely agree - documentation on this point appears to be lacking. Good luck
    Regards,
    Peter
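    As a starting point for the first suggestion, here is a minimal sketch of a filter that logs the raw header and the request encoding before and after forcing UTF-8 (class name and logging are illustrative):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    public class EncodingDebugFilter implements Filter {
        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            // Log the state before forcing the encoding ...
            System.out.println("Before: Content-Type=" + httpReq.getHeader("Content-Type")
                    + ", request encoding=" + httpReq.getCharacterEncoding());
            req.setCharacterEncoding("UTF-8");
            res.setContentType("text/html; charset=UTF-8");
            // ... and after, to see whether the raw header was actually updated.
            System.out.println("After:  Content-Type=" + httpReq.getHeader("Content-Type")
                    + ", request encoding=" + httpReq.getCharacterEncoding());
            chain.doFilter(req, res);
        }

        public void destroy() {
        }
    }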

  • Transportable tablespace Windows to Linux different character sets

    Hi,
    Is it absolutely necessary to have the same character set when doing a cross-platform transportable tablespace move from Windows to Linux? We are trying to do it on Oracle 10gR2, from Windows to Linux.
    Thank you.

    Hi, yes, it is necessary that your databases have the same character set; please review the limitations in Note 291024.1.
    Luck.
    Have a good day.
    Regards.

  • Character set on windows

    Hello All,
    Please, I need help on how to confirm the character set on Windows. I have tried going through the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\NLS\CodePage\ (nothing like ACP here).
    Under CodePage I have the following:
    EUDCCodeRange
    Language
    Language Groups
    From the command prompt, if I run:
    H:\>chcp
    Active code page: 437
    Please, what character set is ACP 437 equivalent to?
    Response required urgently.
    Thanks

    "We have noticed that the application is not interpreting the request strings in the correct character encoding."
    Can you be a bit more specific? What problem, exactly, are you having?
    "So please how do I confirm if I am on WE8ISO8859P1 or WE8MSWIN1252 on the client side (Windows)?"
    Are you asking what the NLS_LANG should be for Windows? Or for command-line DOS? Or something else?
    In the same NLS_LANG FAQ I linked to earlier, there is a section on [configuring NLS_LANG for Windows and DOS|http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm#_Toc110410552]. I don't believe that it is possible for a Windows GUI to be using the ISO 8859-1 character set.
    Browse the following registry entry:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\NLS\CodePage\
    There you have (all the way down) an entry with the name ACP. The value of ACP is your current GUI code page, for the mapping to the Oracle name. Since there are many registry entries with very similar names, please make sure that you are looking at the right place in the registry.
    Justin
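    If it helps, here is a small, hypothetical Java sketch that reads the ACP value by shelling out to reg.exe. Note that chcp reports the console (OEM) code page, 437 here, while GUI applications use the ANSI code page (ACP), which is what maps to an Oracle name such as WE8MSWIN1252 for ACP 1252:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ReadAnsiCodePage {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process p = new ProcessBuilder("reg", "query",
                    "HKLM\\SYSTEM\\CurrentControlSet\\Control\\Nls\\CodePage", "/v", "ACP")
                    .redirectErrorStream(true).start();
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                if (line.trim().startsWith("ACP")) {
                    System.out.println(line.trim()); // e.g. "ACP    REG_SZ    1252"
                }
            }
            p.waitFor();
        }
    }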
