Cdrtools package, support for nls/utf8 character sets

Hello ppl,
I've been trying desperately to burn a CD/DVD (with K3b) using Greek file and directory names. I ended up with file names like "???????????????????? (invalid unicode)".
After a lot of searching, I managed to isolate and solve the problem. There is a patch (http://bugs.gentoo.org/attachment.cgi?id=52097) for cdrtools that adds support for nls/utf8 character sets.
I guess that 90%+ of the people using Arch and burning CDs/DVDs never notice the problem, because they only burn discs with standard English characters.
For everyone else, here it is:
# Patched cdrtools to support nls/utf8 character sets
# Contributor: Akis Maziotis <[email protected]>
pkgname=cdrtools-utf8support
pkgver=2.01.01
pkgrel=3
pkgdesc="Tools for recording CDs patched for nls/utf8 support!"
depends=('glibc')
conflicts=('cdrtools')
source=(ftp://ftp.berlios.de/pub/cdrecord/alpha/cdrtools-2.01.01a01.tar.gz http://bugs.gentoo.org/attachment.cgi?id=52097)
md5sums=('fc085b5d287355f59ef85b7a3ccbb298' '1a596f5cae257e97c559716336b30e5b')
build() {
  cd $startdir/src/cdrtools-2.01.01
  msg "Patching cdrtools ..."
  patch -p1 -i ../attachment.cgi?id=52097
  msg "Patching done"
  make || return 1
  make INS_BASE=$startdir/pkg/usr install
}
It's a modified PKGBUILD of the official Arch cdrtools package (http://cvs.archlinux.org/cgi-bin/viewcv … cvs-markup) patched to support nls/utf8 character sets.
Worked like a charm. 
If you want to install it, you should first uninstall the cdrtools package:
pacman -Rd cdrtools
P.S.: I've filed this as a bug at http://bugs.archlinux.org/task/3830 but nobody seemed to care... :cry: :cry: :cry:

Hi Bharat,
I have created an Oracle 8.1.7 database with the UTF8 character set
on Windows 2000.
Now I want to store and retrieve information in other languages,
say Japanese or Hindi.
I had set the NLS Language and NLS Territory to HINDI and INDIA
in the SQL*Plus session but could not see the information.
You cannot view Hindi using SQL*Plus. You need iSQL*Plus.
(Available as a download from OTN, and requiring the Oracle HTTP
server).
Then you need the fonts (either Mangal from Microsoft or
Code2000).
Set NLS_LANG in your registry to
AMERICAN_AMERICA.UTF8. (I have not tried with HINDI etc., because
I need my solution to work with 806, 817 and 901, and HINDI was
not available with 806).
Install the language pack for Devanagari/Indic languages
(c_iscii.dll) on Windows NT/2000/XP.
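Once NLS_LANG is set, a minimal way to confirm what the session actually picked up is to query the standard NLS_SESSION_PARAMETERS view from SQL*Plus:
-- language/territory the current session resolved NLS_LANG to
-- (the character-set part of NLS_LANG is a client-side setting and is not shown here)
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');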
How can I use Forms 6i to support these languages?
I am not sure about that.
Do write back if this does not solve your problem.
--Shirish

Similar Messages

  • SUPPORT FOR euro WE8ISO8859P15 CHARACTER SET

    Hi,
    I cannot run Portal home page after 9iAS release 2 installation on Sun Solaris.
    On both nodes (infrastructure and middle tier) and in portal DAD I have NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15.
    Is it possible to run portal with this character set?
    This is the error message:
    Error: The servlet produced the following error stack. java.io.IOException: Unsupported character encoding: "ISO-8859-15"
         at oracle.webdb.page.BaseContentRequest.getResponseJavaEncoding(Unknown Source)
         at oracle.webdb.page.BaseContentRequest.getReader(Unknown Source)
         at oracle.webdb.page.PageBuilder.getMetaData(Unknown Source)
         at oracle.webdb.page.PageBuilder.process(Unknown Source)
         at oracle.webdb.page.ParallelServlet.doGet(Unknown Source)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:244)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:336)
         at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:59)
         at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:283)
         at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:523)
         at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:269)
         at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:735)
         at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:151)
         at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:64)
    Best regards,
    Zoran

  • Support for non-western character sets

    I've been reading docs for WL portal and for WL server, but basically I need to know...what needs to be set/installed for a Weblogic Portal 10.3 running on Weblogic Server 10.3 to have non-western characters display in the content of any portlet we may have? For instance, Arabic, Japanese, Chinese...thanks!
    Sorry, I want to add: this is assuming the encoding of the content is correct (like an html document), the database that content may be retrieved from is set up correctly, etc.
    Just basically what to configure in WL Portal, WL Server, oh and Workshop too (if anything).
    Edited by: user10697594 on Jan 5, 2010 2:56 PM

    Hi
    You don't need anything special; however, the JVM you use must support the encoding you want (unless you go with a Unicode flavor like UTF-8, which all JVMs support). The rest of the i18n work, like ResourceBundles for messages and date/currency formats, is handled in your code anyway.
    regards
    deepak

  • UTF8 character set conversion for chinese Language

    Hi friends,
    I would like some basic explanation of the UTF8 feature: how does it help when converting data from the Chinese language?
    I would also like to know which characters UTF8 will not support when converting from Chinese.
    Thanks & Regards
    Ramya Nomula

    Not exactly sure what you are looking for, but on MetaLink, there are numerous detailed papers on NLS character sets, conversions, etc.
    Bottom line is that traditional Chinese characters (since they are more complicated) can require up to 4 bytes each to store in Unicode character sets such as UTF-8 and AL32UTF8. Some Middle Eastern character sets also fall into this category.
    Do a google search on "utf8 al32utf8 difference", and you will get some good explanations.
    e.g., http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/
    Recently, one of our clients had a question on the differences between these two character sets since they were in the process of making their application global. In an upcoming whitepaper, we will discuss in detail what it takes (from a RDBMS perspective) to address localization and globalization issues. As far as these two character sets go in Oracle, the only difference between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in AL32UTF8 character set.
    You may also consider posting your question on the Globalization Support forum, which pertains more to these types of questions.
    Globalization Support
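    As a rough illustration of the storage involved, here is a minimal sketch (the Chinese literal is just sample data; the byte count depends on your database character set):
    -- LENGTH counts characters, LENGTHB counts bytes in the database character set.
    -- In AL32UTF8 a typical Chinese character occupies 3 bytes; supplementary
    -- characters beyond U+FFFF occupy 4 bytes, which UTF8 encodes as 6.
    SELECT LENGTH('中文')  AS char_count,
           LENGTHB('中文') AS byte_count
      FROM dual;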

  • Export using UTF8 character set

    Hi,
    My client has a production database with the default character set.
    When exporting, I need to create a dump with the UTF8 character set.
    Please let me know how to export with the UTF8 option.
    Thanks....

    Hi, I am not sure if I got you correctly. Here is what I think I have understood:
    - your client has a db which uses UTF8 as its character set.
    - you want to export and make sure that no conversion takes place.
    For this you must properly set the NLS_LANG variable in the shell from which you call exp or expdp.
    NLS_LANG is composite and consists of:
    <NLS_LANGUAGE>_<NLS_TERRITORY>.<NLS_CHARACTERSET>
    In a bash this would look like this:
    $ export NLS_LANG=american_america.UTF8
    In other shells you might need to first define the variable and then export it:
    $ NLS_LANG=american_america.UTF8
    $ export NLS_LANG
    What you also need to know is that by specifying NLS_TERRITORY you influence the decimal separator and group separator for numeric values, and a few more settings.
    For instance, the AMERICA territory uses "." as the decimal separator and "," as the group separator for numeric values.
    This can be a pitfall! If you have the wrong territory specified, it might destroy all your numerics!
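    If you are unsure which character sets the source database actually uses (so the right value can go into NLS_LANG), a minimal check from SQL*Plus would be:
    -- character sets the dump should be taken with to avoid any conversion
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');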
    Hope this helps,
    Lutz

  • Can I use a Dot Matrix Printer under UTF8 character set?

    Hi,
    We are using the UTF8 character set. Somebody told me that I can only use a PostScript printer. Is it necessary to set up a report server if I need to print on a dot-matrix printer? Is there any workaround?
    Thanks and regards
    Toby

    Your printer emulation and the SAP device type for the output device must match.
    For example, if you use IBM Proprinter emulation, then use device type IBM4232.
    If you use Epson emulation, then you may have a look at note 1135053.
    What are your emulation and device type?

  • Error while running package ORA-06553: PLS-553 character set name is not recognized

    Hi all.
    I have a problem with a package; when I run it, it returns this error:
    ORA-06552: PL/SQL: Compilation unit analysis terminated
    ORA-06553: PLS-553: character set name is not recognized
    The full context of the problem is this:
    Previously I had a development database, which was then migrated to a new server. After that I started to receive the error, so I began to look for a solution.
    My first move was to compare the “old database” with the “new database”, so I checked the NLS parameters, and this was the result:
    select * from nls_database_parameters;
    Result from the old
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    Result from the new
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    As the results were identical, I decided to look for more info in the log file of the new database. What I found was this:
    Database Characterset is US7ASCII
    Threshold validation cannot be done before catproc is loaded.
    Threshold validation cannot be done before catproc is loaded.
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Synchronizing connection with database character set information
    Refreshing type attributes with new character set information
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    Errors in file e:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_3132.trc:
    Regards

    Ohselotl wrote:
    Database Characterset is US7ASCII
    Threshold validation cannot be done before catproc is loaded.
    Threshold validation cannot be done before catproc is loaded.
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Synchronizing connection with database character set information
    Refreshing type attributes with new character set information
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    This is an unsupported method to change the characterset of a database - it has caused the corruption of your database beyond repair. Hopefully you have a backup you can recover from. Whoever did this did not know what they were doing.
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    Errors in file e:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_3132.trc:
    Regards
    The correct way to change the characterset of a database is documented - http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1476
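    Before attempting the documented migration, a minimal sanity check of what the data dictionary and the running instance report (an inconsistency there would fit the PLS-553 symptom) could look like this:
    -- character set recorded in the data dictionary
    SELECT property_value
      FROM database_properties
     WHERE property_name = 'NLS_CHARACTERSET';
    -- character set the running instance reports
    SELECT value
      FROM v$nls_parameters
     WHERE parameter = 'NLS_CHARACTERSET';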
    HTH
    Srini

  • Problem: Adding support for non-English character sets in UCCX 8.0

    We have just moved from the Windows-based UCCX 7.0 to UCCX 8.0. The upgrade process went successfully so far, but for some reason Cisco Agents are experiencing problems displaying non-English character sets; everything was working fine prior to upgrading to the new version.
    Is there a way to add support for these character sets?
    Thanks in advance.

    Hi Bala,
    Follow the command. I believe that the space is normal.
    This command can take significantly long time,
    and can also effect the system wide IOWAIT on your system.
    Continue (y/n)?y
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda6              90G   46G   41G  54% /common
    8.0K    /var/log/inactive/
    admin:
    admin:
    admin:show diskusage activelog
    This command can take significantly long time,
    and can also effect the system wide IOWAIT on your system.
    Continue (y/n)?y
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda6              90G   46G   41G  54% /common
    8.0K    /var/log/active/mgetty
    0       /var/log/active/sa
    4.0K    /var/log/active/platform/snmp/sappagt/sappagt.index
    4.0K    /var/log/active/platform/snmp/sappagt/sappagt.log
    4.0K    /var/log/active/platform/snmp/sappagt/startup.txt
    16K     /var/log/active/platform/snmp/sappagt
    4.0K    /var/log/active/platform/snmp/hostagt/hostagt.index
    Thanks,
    Wilson

  • What to set in order to select UTF8 character set in sqlplus?

    We are using 9.0.1.
    select * from nls_database_parameters;
    NLS_CHARACTERSET = UTF8
    LANGUAGE = AMERICAN
    TERRITORY = AMERICA
    Web page encoding is UTF8.
    We can see traditional and simplified Chinese characters on the web page without problems.
    The problem comes when we select through SQL*Plus.
    At first, I set the default Win2000 language to Traditional Chinese.
    The locale is Chinese (Hong Kong).
    The ANSI code page becomes 950. I rebooted the machine and
    set NLS_LANG=TRADITIONAL CHINESE_HONG KONG.ZHT16MSWIN950 in the registry.
    The web page can show traditional and simplified Chinese characters, but SQL*Plus fails.
    Then I set the default Win2000 language to Simplified Chinese and set the locale
    to Chinese (PRC). I rebooted the machine and
    set NLS_LANG to the Simplified Chinese character set.
    The web page can show traditional and simplified Chinese characters, but SQL*Plus still fails.
    I have even set NLS_LANG to _.UTF8; SQL*Plus still cannot show the characters properly. Only the web page is okay.
    I have referenced the Oracle globalization web pages and followed everything. What should I try?

    The good news is that, according to the iSQL*Plus FAQ on OTN, iSQL*Plus is available with 9.0.1.1 on Windows, and you can also download it directly from OTN. The bad news is that it does not support the SPOOL command.
    Since your requirement is only to generate text reports with both Traditional and Simplified Chinese characters, and you don't need these characters to display correctly inside the SQL*Plus session itself, you can set your NLS_LANG to TRADITIONAL CHINESE_HONG KONG.UTF8, launch SQL*Plus and spool the output of the SELECT statement into a text file. This file will be encoded in UTF-8, and your Chinese data should show up correctly in a Unicode UTF-8 viewer.
    I am not sure if Notepad in Windows 2000 can display UTF-8 text; if not, you can open the text file in a browser and set the page encoding to UTF-8 to view it.
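    A minimal sketch of that spool approach, assuming NLS_LANG already ends in .UTF8 in the environment before SQL*Plus is launched (the table and column names are hypothetical):
    -- run inside SQL*Plus; the spooled bytes follow the client character set
    SET PAGESIZE 0
    SET LINESIZE 200
    SET TRIMSPOOL ON
    SET FEEDBACK OFF
    SPOOL report_utf8.txt
    SELECT customer_name FROM customers;   -- hypothetical table and column
    SPOOL OFF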

  • Change to UTF8 character set

    Hi,
    Currently the character set is WE8MSWIN1252 and I need
    to change it to UTF8.
    When I tried to change it, an error was thrown saying that the new character set must be a superset of the current one.
    Please let me know how to resolve this issue.
    Thanks and Regards,
    A.Mohammed Rafi.

    This transformation is not possible using ALTER DATABASE ... CHARACTER SET ...; you have to use export/import. Here's a list of possible subset/superset combinations (up to 9.2);
    for example, AL32UTF8 is a superset of UTF8:
    8.1.6 Subset/Superset Pairs
    ===========================
    A. Current Char set B. New Char set (Superset of A.)
    US7ASCII WE8DEC
    US7ASCII US8PC437
    US7ASCII WE8PC850
    US7ASCII IN8ISCII
    US7ASCII WE8PC858
    US7ASCII WE8ISO8859P1
    US7ASCII EE8ISO8859P2
    US7ASCII SE8ISO8859P3
    US7ASCII NEE8ISO8859P4
    US7ASCII CL8ISO8859P5
    US7ASCII AR8ISO8859P6
    US7ASCII EL8ISO8859P7
    US7ASCII IW8ISO8859P8
    US7ASCII WE8ISO8859P9
    US7ASCII NE8ISO8859P10
    US7ASCII TH8TISASCII
    US7ASCII BN8BSCII
    US7ASCII VN8VN3
    US7ASCII VN8MSWIN1258
    US7ASCII WE8ISO8859P15
    US7ASCII WE8NEXTSTEP
    US7ASCII AR8ASMO708PLUS
    US7ASCII EL8DEC
    US7ASCII TR8DEC
    US7ASCII LA8PASSPORT
    US7ASCII BG8PC437S
    US7ASCII EE8PC852
    US7ASCII RU8PC866
    US7ASCII RU8BESTA
    US7ASCII IW8PC1507
    US7ASCII RU8PC855
    US7ASCII TR8PC857
    US7ASCII CL8MACCYRILLICS
    US7ASCII WE8PC860
    US7ASCII IS8PC861
    US7ASCII EE8MACCES
    US7ASCII EE8MACCROATIANS
    US7ASCII TR8MACTURKISHS
    US7ASCII EL8MACGREEKS
    US7ASCII IW8MACHEBREWS
    US7ASCII EE8MSWIN1250
    US7ASCII CL8MSWIN1251
    US7ASCII ET8MSWIN923
    US7ASCII BG8MSWIN
    US7ASCII EL8MSWIN1253
    US7ASCII IW8MSWIN1255
    US7ASCII LT8MSWIN921
    US7ASCII TR8MSWIN1254
    US7ASCII WE8MSWIN1252
    US7ASCII BLT8MSWIN1257
    US7ASCII N8PC865
    US7ASCII BLT8CP921
    US7ASCII LV8PC1117
    US7ASCII LV8PC8LR
    US7ASCII LV8RST104090
    US7ASCII CL8KOI8R
    US7ASCII BLT8PC775
    US7ASCII WE8DG
    US7ASCII WE8NCR4970
    US7ASCII WE8ROMAN8
    US7ASCII WE8MACROMAN8S
    US7ASCII TH8MACTHAIS
    US7ASCII HU8CWI2
    US7ASCII EL8PC437S
    US7ASCII LT8PC772
    US7ASCII LT8PC774
    US7ASCII EL8PC869
    US7ASCII EL8PC851
    US7ASCII CDN8PC863
    US7ASCII HU8ABMOD
    US7ASCII AR8ASMO8X
    US7ASCII AR8NAFITHA711T
    US7ASCII AR8SAKHR707T
    US7ASCII AR8MUSSAD768T
    US7ASCII AR8ADOS710T
    US7ASCII AR8ADOS720T
    US7ASCII AR8APTEC715T
    US7ASCII AR8NAFITHA721T
    US7ASCII AR8HPARABIC8T
    US7ASCII AR8NAFITHA711
    US7ASCII AR8SAKHR707
    US7ASCII AR8MUSSAD768
    US7ASCII AR8ADOS710
    US7ASCII AR8ADOS720
    US7ASCII AR8APTEC715
    US7ASCII AR8MSAWIN
    US7ASCII AR8NAFITHA721
    US7ASCII AR8SAKHR706
    US7ASCII AR8ARABICMACS
    US7ASCII LA8ISO6937
    US7ASCII JA16VMS
    US7ASCII JA16EUC
    US7ASCII JA16SJIS
    US7ASCII KO16KSC5601
    US7ASCII KO16KSCCS
    US7ASCII KO16MSWIN949
    US7ASCII ZHS16CGB231280
    US7ASCII ZHS16GBK
    US7ASCII ZHT32EUC
    US7ASCII ZHT32SOPS
    US7ASCII ZHT16DBT
    US7ASCII ZHT32TRIS
    US7ASCII ZHT16BIG5
    US7ASCII ZHT16CCDC
    US7ASCII ZHT16MSWIN950
    US7ASCII AL24UTFFSS
    US7ASCII UTF8
    US7ASCII JA16TSTSET2
    US7ASCII JA16TSTSET
    8.1.7 Additions
    ===============
    US7ASCII ZHT16HKSCS
    US7ASCII KO16TSTSET
    WE8DEC TR8DEC
    WE8DEC WE8NCR4970
    WE8PC850 WE8PC858
    D7DEC D7SIEMENS9780X
    I7DEC I7SIEMENS9780X
    WE8ISO8859P1 WE8MSWIN1252
    AR8ISO8859P6 AR8ASMO708PLUS
    AR8ISO8859P6 AR8ASMO8X
    IW8EBCDIC424 IW8EBCDIC1086
    IW8EBCDIC1086 IW8EBCDIC424
    LV8PC8LR LV8RST104090
    DK7SIEMENS9780X N7SIEMENS9780X
    N7SIEMENS9780X DK7SIEMENS9780X
    I7SIEMENS9780X I7DEC
    D7SIEMENS9780X D7DEC
    WE8NCR4970 WE8DEC
    WE8NCR4970 TR8DEC
    AR8SAKHR707T AR8SAKHR707
    AR8MUSSAD768T AR8MUSSAD768
    AR8ADOS720T AR8ADOS720
    AR8NAFITHA711 AR8NAFITHA711T
    AR8SAKHR707 AR8SAKHR707T
    AR8MUSSAD768 AR8MUSSAD768T
    AR8ADOS710 AR8ADOS710T
    AR8ADOS720 AR8ADOS720T
    AR8APTEC715 AR8APTEC715T
    AR8NAFITHA721 AR8NAFITHA721T
    AR8ARABICMAC AR8ARABICMACT
    AR8ARABICMACT AR8ARABICMAC
    KO16KSC5601 KO16MSWIN949
    WE16DECTST2 WE16DECTST
    WE16DECTST WE16DECTST2
    9.0.1 Additions
    ===============
    US7ASCII BLT8ISO8859P13
    US7ASCII CEL8ISO8859P14
    US7ASCII CL8ISOIR111
    US7ASCII CL8KOI8U
    US7ASCII AL32UTF8
    BLT8CP921 BLT8ISO8859P13
    US7ASCII AR8MSWIN1256
    UTF8 AL32UTF8 (added in patchset 9.0.1.2)
    Character Set Subset/Superset Pairs Obsolete from 9.0.1
    =======================================================
    US7ASCII AR8MSAWIN
    AR8ARABICMAC AR8ARABICMACT
    9.2.0 Additions
    ===============
    US7ASCII JA16EUCTILDE
    US7ASCII JA16SJISTILDE
    US7ASCII ZHS32GB18030
    US7ASCII ZHT32EUCTST
    WE8ISO8859P9 TR8MSWIN1254
    LT8MSWIN921 BLT8ISO8859P13
    LT8MSWIN921 BLT8CP921
    BLT8CP921 LT8MSWIN921
    AR8ARABICMAC AR8ARABICMACT
    ZHT32EUC ZHT32EUCTST
    UTF8 AL32UTF8
    Character Set Subset/Superset Pairs Obsolete from 9.2.0
    =======================================================
    LV8PC8LR LV8RST104090
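    Since WE8MSWIN1252 does not appear in this list as a subset of UTF8, export/import is the route here. As a rough pre-check, something like the following counts the rows containing non-ASCII characters, i.e. the values that grow from 1 to 2 or more bytes per character after conversion to UTF8 and so may no longer fit their columns (table_name and col are hypothetical placeholders):
    -- non-ASCII characters have no US7ASCII mapping, so converted values differ
    SELECT COUNT(*)
      FROM table_name
     WHERE col <> CONVERT(col, 'US7ASCII', 'WE8MSWIN1252');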

  • When will iCS 5.1 support zh-TW(big5) character set?

    If not, is there any way to tail it ?

    One thing that looks odd is that you have specified both UTF-8 and Big5 as the encoding in two places; I'm not sure which one the browser will use. However, here is my JSP page that I've used before to verify what I'm doing for multiple language support. You should be able to use it as is, I would think.
    <%@ page language="java" contentType="text/html; charset=UTF-8" %>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
         <title></title>
         <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head>
    <body bgcolor="#ffffff" background="" text="#000000" link="#ff0000" vlink="#800000" alink="#ff00ff">
    <%
    request.setCharacterEncoding("UTF-8");
    String str = "\u7528\u6237\u540d";
    String name = request.getParameter("name");
    if (name != null) {
         // instead of setCharacterEncoding...
         //name = new String(name.getBytes("ISO8859_1"), "UTF8");
    }
    System.out.println(application.getRealPath("/"));
    System.out.println(application.getRealPath("/src"));
    %>
    req enc: <%= request.getCharacterEncoding() %><br />
    rsp enc: <%= response.getCharacterEncoding() %><br />
    str: <%= str %><br />
    name: <%= name %><br />
    <form method="GET" action="_lang.jsp" accept-charset="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="Submit" />
    </form>
    </body>
    </html>

  • OEL5 minimum package support for DB

    I've been able to install OEL5 and Database 11g without too many issues and have no problems obtaining any missing packages. My concern is more about any Oracle recommendations for a minimum software install footprint. The media seems to ship with a HUGE amount of stuff (much of which shows up with the default install) that I think shouldn't always be part of a "server" distribution, e.g. games, CD authoring, etc. Is there a template, kickstart file, or some recommendation for a good "minimum" environment for production use that leaves out some of the more obvious fluff?

    Hi,
    Please look at the Note:376183.1 - Defining a "default RPMs" installation of the RHEL OS
    https://metalink.oracle.com/metalink/plsql/f?p=130:14:11914223751020402856::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,376183.1,1,1,1,helvetica
    Regards
    zhiqing

  • Use of UTF8 and AL32UTF8 for database character set

    I will be implementing Unicode on a 10g database, and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize storage requirements for primarily English-based string data.
    Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
    Thanks in advance for any counsel.

    I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/NVARCHAR columns do not handle Oracle NCHAR/NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
    I've not run into any barriers, no. The two most common speedbumps I've seen are
    - I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy (see the sketch after these two points).
    - Making sure that the client NLS_LANG properly identifies the character set of the data going in to the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (i.e. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/ VARCHAR2 or NCHAR/ NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
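    A minimal sketch of that length-semantics point (the table names are hypothetical):
    -- with byte semantics (the default), 100 means 100 bytes, so a string of
    -- 100 multi-byte UTF-8 characters may not fit
    CREATE TABLE t_byte_sem (name VARCHAR2(100));
    -- with explicit character semantics, 100 means 100 characters regardless
    -- of how many bytes each character needs
    CREATE TABLE t_char_sem (name VARCHAR2(100 CHAR));
    -- session-level default, as described above
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;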
    Justin

  • Multiple Character set for NLS

    Hi,
    I'm using an Oracle 8i database. Is it possible to set different character sets for the database? The requirement is to support data in two different character sets, one (main) Japanese character set and the other Simplified Chinese. Or is there any other way in which I can store this data (Japanese & Chinese)?
    Thanks & Regards,
    Jayesh

    Please don't get me wrong. Currently it is set in the Windows database. I did not set NLS_LANG at the command prompt before the import into Windows. However, NLS_LANG is already set, and its character set is WE8ISO8859P1, the same value I specified in the creation script, besides the other two values AMERICAN and AMERICA. They are now the same on both Solaris and Windows. Only the character sets are different, because I specified a different one. So, is it OK, or do I now need another fresh import, this time with NLS_LANG set to AMERICAN_AMERICA.UTF8?

  • Does Netweaver 7.0 support the AL32UTF8 character set?

    We are running Netweaver 7.0 for a Vendavo solution running on Oracle 10.2. We have created the Vendavo schema within the same database as Netweaver. The Vendavo documentation states that the character set must be AL32UTF8. SAP sets the character set on the Netweaver install to just UTF8. Does anyone know if Netweaver 7.0 supports changing the DB character set to AL32UTF8?
    Thanks.

    Hi,
    Yes, you can, as AL32UTF8 is a superset of UTF8 - this is supported. Please check the following note:
    https://service.sap.com/sap/support/notes/456968
    The above note also discusses how to change database character sets.
    Also check the following FAQ SAP Note on Oracle Character Sets.
    https://service.sap.com/sap/support/notes/606359
    Moreover, the SAP NW 7 installation sets the Oracle NLS setting to UTF8, which is fine. Note that the NLS setting and the database character set are not the same thing; please check the following SAP note on this.
    https://service.sap.com/sap/support/notes/669902
    Hope this helps.
    - Regards, Dibya
