SUPPORT FOR EURO WE8ISO8859P15 CHARACTER SET

Hi,
I cannot run the Portal home page after a 9iAS Release 2 installation on Sun Solaris.
On both nodes (infrastructure and middle tier), and in the Portal DAD, I have NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15.
Is it possible to run Portal with this character set?
This is the error message:
Error: The servlet produced the following error stack. java.io.IOException: Unsupported character encoding: "ISO-8859-15"
     at oracle.webdb.page.BaseContentRequest.getResponseJavaEncoding(Unknown Source)
     at oracle.webdb.page.BaseContentRequest.getReader(Unknown Source)
     at oracle.webdb.page.PageBuilder.getMetaData(Unknown Source)
     at oracle.webdb.page.PageBuilder.process(Unknown Source)
     at oracle.webdb.page.ParallelServlet.doGet(Unknown Source)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:244)
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:336)
     at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:59)
     at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:283)
     at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:523)
     at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:269)
     at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:735)
     at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:151)
     at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:64)
Best regards,
Zoran

Hi
You don't need anything special; however, the JVM you use must support the encoding you want (unless you go with a Unicode flavor like UTF-8, which all JVMs support). The rest of the i18n work, like ResourceBundles for messages and date/currency formats, is handled in your code anyway.
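For example, here is a quick way to check (a minimal standalone Java sketch, not from the original thread; the class name is made up) whether the running JVM recognizes a given encoding name before relying on it:

import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // "ISO-8859-15" is the IANA name that the WE8ISO8859P15 NLS setting maps to
        for (String name : new String[] {"ISO-8859-15", "UTF-8"}) {
            System.out.println(name + " supported: " + Charset.isSupported(name));
        }
    }
}

If this prints false for ISO-8859-15, it is the JVM (or its installed charset providers) that needs attention, not the Portal configuration.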
regards
deepak

Similar Messages

  • Cdrtools package, support for nls/utf8 character sets

    Hello ppl,
    I've been trying desperately to burn a CD/DVD (K3b) with Greek file names and directory names. I ended up with file names like "???????????????????? (invalid unicode)".
    After a lot of searching, I managed to isolate and solve the problem. There is a patch (http://bugs.gentoo.org/attachment.cgi?id=52097) for cdrtools to support nls/utf8 character sets.
    I guess that 90%+ of people using Arch and burning CDs/DVDs ignore the problem because they just burn CDs/DVDs using standard English characters.
    For all others, here it is:
    # Patched cdrtools to support nls/utf8 character sets
    # Contributor: Akis Maziotis <[email protected]>
    pkgname=cdrtools-utf8support
    pkgver=2.01.01
    pkgrel=3
    pkgdesc="Tools for recording CDs patched for nls/utf8 support!"
    depends=('glibc')
    conflicts=('cdrtools')
    source=(ftp://ftp.berlios.de/pub/cdrecord/alpha/cdrtools-2.01.01a01.tar.gz http://bugs.gentoo.org/attachment.cgi?id=52097)
    md5sums=('fc085b5d287355f59ef85b7a3ccbb298' '1a596f5cae257e97c559716336b30e5b')
    build() {
      cd $startdir/src/cdrtools-2.01.01
      msg "Patching cdrtools ..."
      patch -p1 -i ../attachment.cgi?id=52097
      msg "Patching done"
      make || return 1
      make INS_BASE=$startdir/pkg/usr install
    }
    It's a modified PKGBUILD of the official Arch cdrtools package (http://cvs.archlinux.org/cgi-bin/viewcv … cvs-markup) patched to support nls/utf8 character sets.
    Worked like a charm.
    If you want to install it, you should first uninstall the cdrtools package:
    pacman -Rd cdrtools
    P.S.: I've filed this as a bug at http://bugs.archlinux.org/task/3830 but nobody seemed to care... :cry:

    Hi Bharat,
    I have created an Oracle 8.1.7 database with the UTF8 character set on Windows 2000.
    Now I want to store and retrieve information in other languages, say Japanese or Hindi.
    I had set the NLS language and NLS territory to HINDI and INDIA in the SQL*Plus session but could not see the information.
    You cannot view Hindi using SQL*Plus; you need iSQL*Plus (available as a download from OTN, and requiring the Oracle HTTP Server).
    Then you need the fonts (either Mangal from Microsoft or
    Code2000).
    Set NLS_LANG in your registry to AMERICAN_AMERICA.UTF8. (I have not tried with HINDI etc., because I need my solution to work with 806, 817 and 901, and HINDI was not available with 806.)
    Install the language pack for Devanagari/Indic languages
    (c_iscii.dll) on Windows NT/2000/XP.
    How can I use Forms 6i to support these languages?
    I am not sure about that.
    Do write back if this does not solve your problem.
    --Shirish

  • Support for non-western character sets

    I've been reading docs for WL Portal and for WL Server, but basically I need to know: what needs to be set or installed for a WebLogic Portal 10.3 running on WebLogic Server 10.3 to have non-Western characters display in the content of any portlet we may have? For instance, Arabic, Japanese, Chinese... thanks!
    Sorry, I want to add: this is assuming the encoding of the content is correct (like an HTML document), the database that content may be retrieved from is set up correctly, etc.
    Just basically what to configure in WL Portal, WL Server, and Workshop too (if anything).

    Hi
    You don't need anything special; however, the JVM you use must support the encoding you want (unless you go with a Unicode flavor like UTF-8, which all JVMs support). The rest of the i18n work, like ResourceBundles for messages and date/currency formats, is handled in your code anyway.
    regards
    deepak

  • Server uses WE8ISO8859P15 character set (possible charset conversion)

    Hi,
    when running EXP from a 9i client, I receive:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in WE8PC850 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    What is the problem?
    Thank you.
    I exported just one table; how can I verify that it was exported?

    Dear user522961,
    You have either not defined, or have misdefined, the NLS_LANG environment variable before running the export command.
    Here is a little illustration:
    $ echo $NLS_LANG
    AMERICAN_AMERICA.WE8ISO8859P9
    $ exp system/password@opttest file=ogan.dmp owner=OGAN
    Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:10:47 2010
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user OGAN
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user OGAN
    About to export OGAN's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export OGAN's tables via Conventional Path ...
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    $ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
    $ exp system/password@opttest file=ogan.dmp owner=OGAN
    Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:12:41 2010
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P9 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user OGAN
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user OGAN
    About to export OGAN's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export OGAN's tables via Conventional Path ...
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    Hope it helps,
    Ogan

  • Problem: Adding support for non-English character sets in UCCX 8.0

    We have just moved from Windows-based UCCX 7.0 to UCCX 8.0. The upgrade process went successfully so far, but for some reason Cisco Agents are experiencing problems displaying non-English character sets; everything was working fine prior to upgrading to the new version.
    Is there a way to add support for these character sets?
    Thanks in advance.

    Hi Bala,
    Here is the command output; I believe that the space usage is normal.
    This command can take significantly long time,
    and can also effect the system wide IOWAIT on your system.
    Continue (y/n)?y
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda6              90G   46G   41G  54% /common
    8.0K    /var/log/inactive/
    admin:
    admin:
    admin:show diskusage activelog
    This command can take significantly long time,
    and can also effect the system wide IOWAIT on your system.
    Continue (y/n)?y
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda6              90G   46G   41G  54% /common
    8.0K    /var/log/active/mgetty
    0       /var/log/active/sa
    4.0K    /var/log/active/platform/snmp/sappagt/sappagt.index
    4.0K    /var/log/active/platform/snmp/sappagt/sappagt.log
    4.0K    /var/log/active/platform/snmp/sappagt/startup.txt
    16K     /var/log/active/platform/snmp/sappagt
    4.0K    /var/log/active/platform/snmp/hostagt/hostagt.index
    Thanks,
    Wilson

  • When will iCS 5.1 support the zh-TW (Big5) character set?

    If not, is there any way to tail it?

    One thing that looks odd is that you have specified both UTF-8 and Big5 as the encoding in two places; I'm not sure which one the browser will use. However, here is a JSP page that I've used before to verify what I'm doing for multiple-language support. You should be able to use it as is, I would think.
    <%@ page language="java" contentType="text/html; charset=UTF-8" %>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
         <title></title>
         <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head>
    <body bgcolor="#ffffff" background="" text="#000000" link="#ff0000" vlink="#800000" alink="#ff00ff">
    <%
    request.setCharacterEncoding("UTF-8");
    String str = "\u7528\u6237\u540d";
    String name = request.getParameter("name");
    if (name != null) {
         // instead of setCharacterEncoding...
         // name = new String(name.getBytes("ISO8859_1"), "UTF8");
         // (this re-decode trick compensates for containers that parse the
         // request bytes as ISO-8859-1 by default)
    }
    System.out.println(application.getRealPath("/"));
    System.out.println(application.getRealPath("/src"));
    %>
    req enc: <%= request.getCharacterEncoding() %><br />
    rsp enc: <%= response.getCharacterEncoding() %><br />
    str: <%= str %><br />
    name: <%= name %><br />
    <form method="GET" action="_lang.jsp" accept-charset="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="Submit" />
    </form>
    </body>
    </html>

  • Non supported character set error

    Hello, I have a j2ee application that uses OLAP API to make queries against cubes created with cwm2 packages, in a database (db 1).
    I've replicated the schema in another db (db 2), importing the user (with a dmp file) and running the object creation scripts. This works fine, and the objects in the catalog are all valid. But my J2EE application doesn't work against this new user; some errors appear.
    The two databases are the same version; the only difference is that db 2 was installed using the WE8ISO8859P15 character set. Is there any problem with this set and the OLAP API? The character set in db 1 is WE8ISO8859P1.
    The error is:
    "oracle.express.idl.util.OlapiException: Non supported character set: oracle-character-set-46".
    I'm using ojdbc14.zip and orai18n.zip in the WEB-INF/lib directory of my application. My database version is 10g Release 2 (10.2.0.1.0).
    Thanks for your reply.

    At what point does this error occur? Can you provide more of the stack trace?
    Geof

  • Use of UTF8 and AL32UTF8 for database character set

    I will be implementing Unicode on a 10g database and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize on storage for primarily English-based string data.
    Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
    Thanks in advance for any counsel.

    I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/ NVARCHAR2 columns do not handle Oracle NCHAR/ NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
    I've not run into any barriers, no. The two most common speedbumps I've seen are
    - I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy.
    - Making sure that the client NLS_LANG properly identifies the character set of the data going in to the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (i.e. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/ VARCHAR2 or NCHAR/ NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
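    To make that failure mode concrete, here is a small standalone Java sketch (an illustration, not from the original thread; ISO-8859-1 stands in for Windows-1252 here, since both encode 'é' as the single byte 0xE9): bytes written under one single-byte encoding and later decoded as UTF-8 come back garbled.
    import java.nio.charset.StandardCharsets;
    public class MojibakeDemo {
        public static void main(String[] args) {
            String original = "café";
            // Stored as single-byte Latin-1; 'é' becomes the lone byte 0xE9.
            byte[] stored = original.getBytes(StandardCharsets.ISO_8859_1);
            // 0xE9 is not a valid UTF-8 sequence, so decoding replaces it
            // with the U+FFFD replacement character instead of 'é'.
            String misread = new String(stored, StandardCharsets.UTF_8);
            System.out.println(misread); // prints "caf" + U+FFFD
        }
    }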
    Justin

  • Multiple Character set for NLS

    Hi,
    I'm using an Oracle 8i database. Is it possible to set different character sets for the database? The requirement is to support data in two different character sets: one (main) Japanese character set and the other Simplified Chinese. Or is there any other way in which I can store these data (Japanese & Chinese)?
    Thanks & Regards,
    Jayesh

    Please don't get me wrong. Currently it is set in the Windows database. I did not set NLS_LANG at the command prompt before importing into Windows. However, NLS_LANG is already set, and its character set is WE8ISO8859P1, the same value I specified in the creation script, besides the other two values AMERICAN and AMERICA. They are now the same on both Solaris and Windows; only the character sets differ, because I specified a different one. So is it OK, or do I now need another fresh import, this time with NLS_LANG set to AMERICAN_AMERICA.UTF8?

  • Does Netweaver 7.0 support the AL32UTF8 character set?

    We are running NetWeaver 7.0 for a Vendavo solution running on Oracle 10.2. We have created the Vendavo schema within the same database as NetWeaver. Vendavo documentation states the character set must be AL32UTF8. SAP sets the character set on NetWeaver install to just UTF8. Does anyone know if NetWeaver 7.0 supports changing the DB character set to AL32UTF8?
    Thanks.

    Hi,
    Yes, you can, as AL32UTF8 is a superset of UTF8; this is supported. Please check the following note.
    https://service.sap.com/sap/support/notes/456968
    The above note also discusses how to change database character sets.
    Also check the following FAQ SAP Note on Oracle Character Sets.
    https://service.sap.com/sap/support/notes/606359
    Moreover, the SAP NW 7 installation sets the Oracle NLS to UTF8, which is fine. Note that the NLS setting and the database character set are not the same thing; please check the following SAP note on this.
    https://service.sap.com/sap/support/notes/669902
    Hope this helps.
    - Regards, Dibya

  • Character Set issues.  Please advise

    I have a client who uses a 10gR2 database that stores both English and French data. From time to time they will send us a .dmp file, which we load into our database.
    2 questions.
    What would be the best character sets to use in this setup?
    I am assuming we would use
    NLS_CHARACTERSET = WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET = AL16UTF16
    Also, can someone confirm for me:
    NLS_CHARACTERSET = database character set?
    NLS_NCHAR_CHARACTERSET = national character set?

    So is it better to say that I should use AL32UTF8 instead of AL16UTF16?
    It's not an instead-of situation. AL32UTF8 is a valid setting for the database character set, which controls CHAR and VARCHAR2 columns. AL16UTF16 is a valid setting for the national character set, which controls NCHAR and NVARCHAR2 columns.
    Could you tell me the difference?
    The difference between the two encodings comes down to how many bytes are required to store a particular code point (character). AL32UTF8 is a variable-length character set, so 1 character will require between 1 and 3 bytes of storage (4 for the supplemental characters, but those are rather rare). AL16UTF16 is a fixed-width character set, so 1 character will require 2 bytes of storage (4 for the rare supplemental characters again).
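    To see those widths concretely, here is a minimal Java sketch (an illustration only; Java's UTF-8 and UTF-16 encoders mirror AL32UTF8 and AL16UTF16 byte counts for this purpose):
    import java.nio.charset.StandardCharsets;
    public class EncodingWidths {
        public static void main(String[] args) {
            // 1-, 2-, 3- and 4-byte UTF-8 cases: ASCII letter, accented Latin,
            // Euro sign, and a supplemental character (U+10437).
            for (String s : new String[] {"A", "é", "€", "\uD801\uDC37"}) {
                System.out.printf("U+%05X -> UTF-8: %d bytes, UTF-16: %d bytes%n",
                        s.codePointAt(0),
                        s.getBytes(StandardCharsets.UTF_8).length,
                        s.getBytes(StandardCharsets.UTF_16BE).length);
            }
        }
    }
    (UTF_16BE is used so the byte counts are not inflated by a byte-order mark.)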
    Also, could you tell me the difference between WE8ISO8859P15 and WE8ISO8859P1?
    There's a Wikipedia article that discusses the differences and has links to the two code tables.
    Werner's point is an excellent one as well. I was assuming that we were talking about how to set up both sides of this proposed system. If the source system already exists, there are additional considerations like ensuring that your target system supports a superset of the characters supported by the source system. Regardless, when doing imports & exports, as Werner points out, you need to ensure that NLS_LANG is set appropriately.
    Justin

  • Std::string NLS_LANG character sets

    I'm an OCI user, but not a pure one: I use the freely available OTL (Oracle Template Library) from S. Kuchin, which is a wrapper around OCI, so I hope this is not off-topic here.
    The library offers the possibility to read database strings from VARCHAR2 fields to
    a std::string. I know that oracle does character converting at client side controlled
    via NLS_LANG environment variable.
    It's clear to me that reading database strings into a std::string is no problem for one-byte character sets like ISO-8859-1. But what about when I point NLS_LANG to UTF-8 or to a Chinese character set, e.g. ZHT16BIG5?
    Is it still safe to read the result into a std::string?
    For example, I have a database with default character set AMERICAN_AMERICA.WE8ISO8859P15. I stored some German umlauts in a table. I wrote a small program using OTL and set NLS_LANG to UTF8 on the client side.
    I fetched the data from server to client, stored the data in a std::string, pushed it into a file, and yes, the data was stored as UTF-8.
    Is it really so simple, or is it dangerous to read the UTF-8-converted data into a std::string? What is the common rule? When may I, and when may I not, read the data into a std::string?

    Hello,
    I think you'll get a more accurate answer in the Globalization Support forum:
    Globalization Support
    Best regards,
    Jean-Valentin

  • Database support for unicode

    Hello, I am in the process of upgrading database installation scripts so they will support Unicode. I just want to confirm that by changing the character set to, say, AL32UTF8 and the national character set to UTF8, the database will then be able to support Unicode. Do I also need to change all the VARCHAR2 and CHAR data types to NVARCHAR2 and NCHAR? When changing the character sets, does the database then default to bytes instead of characters for multibyte character storage? Thank you.
    -- David

    You would not want a situation where some clients have a database character set of AL32UTF8 and are storing the data in CHAR/ VARCHAR2 columns and some clients have a non-Unicode database character set, a Unicode national character set, and store their Unicode data in NCHAR/ NVARCHAR2 columns (I'm assuming from the context that you are some sort of application vendor here so that different clients are trying to run the same application). That would massively increase the complexity of your application code and make testing & supporting the application substantially more difficult.
    If at all possible, it is preferable to change the database character set to Unicode for existing databases. This may involve exporting & importing some or all of the data or it may be possible online (there is a chapter in the Globalization Support document that covers character set migration and the various options you have).
    Storing data in NCHAR/ NVARCHAR2 columns should generally be a last resort (unless you really know what you are doing and want to leverage different Unicode encodings). You are likely to cause yourself all sorts of headaches trying to support national character set data types.
    Justin

  • Oracle support for multiple languages

    I have a VB and Oracle application. There is a requirement for Spanish language support.
    The current character set is US7ASCII and the national character set is AL16UTF16.
    After some analysis, I thought changing these character sets to UTF8 would be a solution for saving Spanish characters.
    Can anyone please confirm whether this is correct?
    Thanks in advance

    That's true.
    However, since you are moving from a single-byte character set to a multi-byte one, you should be aware of the data truncation problem. That's the danger I was talking about.
    Data Truncation
    When the database is created using byte semantics, the sizes of the CHAR and VARCHAR2 datatypes are specified in bytes, not characters. For example, the specification CHAR(20) in a table definition allows 20 bytes for storing character data. This is acceptable when the database character set uses a single-byte character encoding scheme because the number of characters is equivalent to the number of bytes. If the database character set uses a multibyte character encoding scheme, then the number of bytes no longer equals the number of characters because a character can consist of one or more bytes.
    During migration to a new character set, it is important to verify the column widths of existing CHAR and VARCHAR2 columns because they might need to be extended to support an encoding that requires multibyte storage. Truncation of data can occur if conversion causes expansion of data.
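    As a quick way to gauge that expansion for sample values, here is a minimal Java sketch (an illustration under the assumption that the target character set behaves like UTF-8, as UTF8/AL32UTF8 do for these characters):
    import java.nio.charset.StandardCharsets;
    public class TruncationCheck {
        public static void main(String[] args) {
            String spanish = "año de fundación"; // accented characters need 2 bytes each in UTF-8
            // With byte semantics, VARCHAR2(n) holds n bytes, so a column sized
            // for the character count may truncate this value after migration.
            System.out.printf("characters: %d, UTF-8 bytes: %d%n",
                    spanish.length(),
                    spanish.getBytes(StandardCharsets.UTF_8).length);
        }
    }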

  • LabVIEW support for ISO 8859-5

    Hi,
    does LabVIEW support ISO 8859-5 character sets?
    If yes, then how do I change the default character set in LabVIEW from ASCII to ISO 8859-5?
    If no, then what is the simplest way to display Russian characters in LabVIEW 6.1 (LabVIEW 6.1 on SuSE Linux)?
    Thanks and Regards,
    Pavitra

    Hi Pavitra,
    There is a very easy way to do that in Windows: Start > Settings > Regional and Language Options. Make sure Russian is included as an input language on the Languages tab, as well as on the Advanced tab, where you can specify the language for non-Unicode programs.
    I have attached a screenshot that demonstrates Cyrillic fonts in LabVIEW. The example is done with a Bulgarian font, but it should work the same way with Russian. As you can see, I have also specified Arial CYR as my font for the front panel and the block diagram.
    Try to use the same logic for a Linux system.
    Hope that helps!
    Kalin T.
    National Instruments
    Attachments:
    cyrillic.jpg 51 KB

    Hello, we are using time based maintenance plans to schedule our maintenance obligations towards customers. After creating the plan and scheduling (IP10) we get the scheduled calls proposed by the system. Now we want to use IP10 or IP19 to move those