Character Set issues.  Please advise

I have a client who uses a 10gR2 database that stores both English and French data. From time to time they send us a .dmp file, which we load into our database.
2 questions.
What would be the best character sets to use in this setup?
I am assuming we would use
NLS_CHARACTERSET = WE8ISO8859P1
NLS_NCHAR_CHARACTERSET = AL16UTF16
Also, can someone confirm for me:
NLS_CHARACTERSET = database character set?
NLS_NCHAR_CHARACTERSET = national character set?

So is it better to say that I should use AL32UTF8 instead of AL16UTF16?
It's not an instead-of situation. AL32UTF8 is a valid setting for the database character set, which controls CHAR and VARCHAR2 columns. AL16UTF16 is a valid setting for the national character set, which controls NCHAR and NVARCHAR2 columns.
Could you tell me the difference?
The difference between the two encodings comes down to how many bytes are required to store a particular code point (character). AL32UTF8 is a variable-length character set, so one character will require between 1 and 3 bytes of storage (4 for the supplemental characters, but those are rather rare). AL16UTF16 is a fixed-width character set, so one character will require 2 bytes of storage (4 for the rare supplemental characters again).
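As a rough illustration of fixed versus variable width (this assumes a database whose character sets are AL32UTF8 and AL16UTF16; DUMP format 1016 shows the character set name and hex bytes):
select dump('a', 1016) as char_dump,
       dump(n'a', 1016) as nchar_dump
from dual;
-- 'a' occupies 1 byte in AL32UTF8 but 2 bytes (0,61) in AL16UTF16.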
Also, could you tell me the difference between WE8ISO8859P15 and WE8ISO8859P1?
There's a Wikipedia article that discusses the differences and has links to the two code tables. In short, WE8ISO8859P15 (ISO 8859-15) replaces a handful of ISO 8859-1 characters, most visibly adding the euro sign.
Werner's point is an excellent one as well. I was assuming that we were talking about how to set up both sides of this proposed system. If the source system already exists, there are additional considerations like ensuring that your target system supports a superset of the characters supported by the source system. Regardless, when doing imports & exports, as Werner points out, you need to ensure that NLS_LANG is set appropriately.
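For reference, both settings can be confirmed on either database from the data dictionary:
select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');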
Justin

Similar Messages

  • German character set issues on Solaris

    Hi,
    I am facing an issue with German character settings with my Java application on a solaris box.
    When I run my application on the box, and I pass an input file with German special characters they get converted as ?. However, other normal English characters are formed properly.
    When I run the same application on another Solaris box with a different JRE, the German characters are formed properly.
I understand that there is a difference in the architecture between the 2 boxes, i.e.
    64 bit SPARC machine v/s 32 bit x86 machine
    the JRE
    1.4.2_03(64bit) v/s 1.4.1_01
I am trying to evaluate further differences between the 2 environments to pinpoint the issue, and get this resolved on the 1st box.
    Can anyone provide me any inputs?
    Lavin

When you read the file, specify explicitly which character set to use, rather than relying on the platform default (which differs between your two JREs). For example:
// Requires java.io.* and java.nio.charset.Charset imports.
FileInputStream fstream = new FileInputStream(url.getFile());
BufferedReader br = new BufferedReader(
        new InputStreamReader(fstream, Charset.forName("ISO-8859-1")));
br.readLine();
    This link possibly can help you.
    http://www.velocityreviews.com/forums/t126128-jdk-14-character-set-change.html

  • Urgent :SQL Loader Arabic Character Set Issue

    HI all,
I am loading Arabic characters into my database using SQL*Loader with a fixed-length data file. I have my character set and NLS_LANG set to UTF8. When I try to load the Arabic character for 'B', i.e. ' لا ', it gets loaded as junk in the table. All other characters are loaded correctly. Please help me with this issue; it's very urgent.
    Thanks,
    Karthik

    Hi,
    Thanks for the responses.
Even after setting the character set to Arabic, the problem continues to persist. This problem occurs only with the character 'b'.
    Please find my sample control file,input file and nls_parameters below:
    My control file
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
BYTEORDER LITTLE ENDIAN
INFILE 'C:\sample tape files\ARAB.txt'
REPLACE INTO TABLE user1
TRAILING NULLCOLS
(
name POSITION(1:2) CHAR(1),
id POSITION(3:3) CHAR(1),
salary POSITION(4:5) CHAR(2)
)
    My Input file - Fixed Format
    ?a01
    ??b02
    ?c03
The ? indicates Arabic characters. Arabic fonts must be installed to view them.
    NLS_PARAMETERS
    PARAMETER     VALUE
    NLS_LANGUAGE     ARABIC
    NLS_TERRITORY     UNITED ARAB EMIRATES
    NLS_CURRENCY     ?.?.
    NLS_ISO_CURRENCY     UNITED ARAB EMIRATES
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD/MM/RR
    NLS_DATE_LANGUAGE     ARABIC
    NLS_SORT     ARABIC
    NLS_TIME_FORMAT     HH12:MI:SSXFF PM
    NLS_TIMESTAMP_FORMAT     DD/MM/RR HH12:MI:SSXFF PM
    NLS_TIME_TZ_FORMAT     HH12:MI:SSXFF PM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD/MM/RR HH12:MI:SSXFF PM TZR
    NLS_DUAL_CURRENCY     ?.?.
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     CHAR
    NLS_NCHAR_CONV_EXCP     FALSE
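One hedged diagnostic for a case like this: DUMP with format 1016 shows the character set and the hex bytes actually stored, which tells you whether the bytes are wrong in the table or merely displayed wrong (user1 and name are the table and column from the control file above):
select name, dump(name, 1016) from user1;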

  • Oracle 10g db character set issue

I have a 10g database with the Western European database character set WE8ISO8859P1, and we are receiving data from a source database whose character set is UTF8. During the data load for one of the tables we receive the error "ORA-29275: partial multibyte character". I understand this might be due to the fact that the Western European character set is not a subset/superset of UTF8. Am I right? What would be the way around this?

    It is certainly possible that the issue is that your database characterset is a subset of UTF8.
    How are you getting the data? Are we talking about a flat file? A query over a database link? Something else?
Does the data you're getting contain characters that cannot be represented in the ISO-8859-1 character set? It is quite common to send UTF-8 encoded files even when the underlying data is representable in other 8-bit character sets (like ISO-8859-1).
    What are you trying to do with the data? Are you trying to load it into a CHAR/ VARCHAR2 column? A CLOB? A BLOB? An NCHAR/ NVARCHAR2? Something else?
    Justin
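For what it's worth, if the data arrives as a UTF-8 flat file and the targets are CHAR/VARCHAR2 columns, one approach is to declare the file's encoding in the SQL*Loader control file so the characters are converted (rather than passed through raw) on the way in. A minimal sketch with hypothetical file, table, and column names:
LOAD DATA
CHARACTERSET AL32UTF8
INFILE 'data.txt'
APPEND INTO TABLE target_table
FIELDS TERMINATED BY ','
(col1, col2)
Characters with no representation in WE8ISO8859P1 will still be replaced, but everything representable will load cleanly.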

  • Foreign character set issue

    Hi
    This might sound a bit silly, but stay with me.
There's a database that decidedly supports UTF-8. I checked using this query:
    select * from nls_database_parameters where parameter like '%CHARACTERSET'
    And got this result
    NLS_CHARACTERSET     UTF8
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    It's Oracle 10g.
    In a particular table, some text is stored in multiple languages. There are seven languages (English, Mandarin, Japanese, German...). Every language has 2-3 rows to itself. There's a column where I have to get rid of the trailing few characters, the number of which depends on the content of the string.
    But I cannot see any of the Eastern languages in TOAD. The column is in VARCHAR2.
    My problem is twofold.
    1. What functions do I use to ensure that only the last few bytes are truncated, and there's no data loss (which many websites gravely warn of when dealing with foreign language data) ?
    2. How can I see this foreign language text in TOAD/SQLPlus?
    (Yes, I'm kind of new to the whole multiple-language-game. Please let me know if I've left out any important detail!)

    Do you have metalink access?
    If so, please see the notes below, there's a lot of good information in them:
    158577.1 - NLS_LANG explained
    260893.1 - Unicode Character Sets in the Database
    788156.1 - UTF8 implications
    With any character set situation there are at least two and a bit sides to the equation.
    First is whether you are storing the correct data.
    You're best using the DUMP function to inspect the stored data, e.g.
SELECT DUMP(<column_name>) FROM <table_name> WHERE ...
This function may help you with your truncation of the last few bytes - not sure why you need to do this?
    The "second and a bit" bit is having the correct client settings - NLS_LANG - and using a client which supports the characters required.
SQL*Plus has its limitations here. Toad I don't know well enough, but it should support full UTF8 characters.
    SQL*Developer and iSQL*Plus both should support the full UTF8 - I tend to use the former, particularly for UTF8.
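On question 1, one point worth knowing: SUBSTR and LENGTH count characters, while SUBSTRB and LENGTHB count bytes, so trimming in character semantics cannot split a multibyte character. A minimal sketch with hypothetical names, dropping the last 3 characters:
select substr(txt_col, 1, length(txt_col) - 3) as trimmed
from my_multilang_table;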

  • 9i Character Set Issues

    Hi All,
I have done a search through the forums on this topic, which revealed many answers to my question; however, I need some more information to help me resolve an issue.
    Here goes...
    Our company has a French/Canadian client that has installed 9i with the characterset of WE8DEC. The software that we wrote and provided them with does not support this characterset, and in turn they are seeing inverted question marks in place of some characters (Usually accented characters).
    Basically, I want to understand the options that our client has in terms of possible solutions.
    I understand that the characterset can only be altered in the database to a superset of the current characterset, so this affords us no option in this regard.
My suggestion so far is to create a new database and transfer the data across. However, will this actually cure the issue? Will the data transfer across and convert into its correct characters, or will it just transfer as an inverted "?"?
    Would an upgrade to 10g help them? Or would the issue remain?
    Is there anything that can be done on the O/S or Client level to rectify this?
    All help greatly appreciated!
    Thanks.

Your issue has a client side component (the NLS_LANG setting already mentioned here) and a potential server side component (corrupted/invalid data in the database), depending on what happened so far.
    The client side issue could lead to corrupted data in the database, and there is no way of "correcting" this automatically, even if you move to a new database using a different character set on 9i, or upgrade to 10g.
    If a client that writes into the database uses a wrong NLS_LANG setting, you're potentially ending up with corrupted data in the database.
    If a client that reads data from the database uses a wrong NLS_LANG setting, you're potentially ending up with corrupted data on the front end application (the typical "?" question mark).
    All this has nothing to do yet if your database character set is actually capable of storing the characters you attempt to store from the client side.
    So I think you first need to determine if your database contains invalid data using the "CSSCAN" utility. For more information, check the manuals and MetaLink:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96529/ch11.htm
    Depending on your findings you'll find the possible actions on MetaLink.
Then you should determine if your client NLS_LANG is appropriate. There is a very good FAQ available here on how to check the current setting and determine the correct setting:
    http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
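For reference, a typical CSSCAN invocation to check data validity against the current character set looks roughly like this (run csminst.sql once beforehand; the placeholder password and log name are mine, and parameters are per the csscan documentation):
csscan "sys/<password> as sysdba" FULL=Y FROMCHAR=WE8DEC TOCHAR=WE8DEC LOG=dbcheck CAPTURE=Y
Rows flagged as lossy in the resulting report contain bytes that are not valid WE8DEC.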
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Keynote update to 6.1 Issue - Please Advise

    App Store shows Keynote is available to update to 6.1 (I am currently running 6.01). However, when I select update I keep getting a warning stating "An error has occurred". (An included explanation would be nice.)
    I did notice that it appears as if the update may be in process? (See pic below). Not sure. I do know I can open both versions of Keynote:  '09 v 5.3 & 6.01
    Not sure what to do.
    Please advise.
    This is on my Mac Mini running Mavericks 10.9.1

I also got an error updating to Keynote 6.1. The update tab shows the following; you can see that I have updated Numbers successfully.
When I press the update button, an error box eventually appears saying the following.
When I switch to the Purchases tab and repeat the update for Keynote, after clicking the update button the following happens.
What can I do?

  • PL/SQL XML generation: Character set issues

    Hi,
    I am using the PL/SQL DOM (wrappers to the Java DOM), to generate XML from bits of database information. (On Oracle 8i).
The output XML must be in UTF-8, while the database character set could be anything. So I call
    setCharset(doc, 'UTF8')
    at the beginning, and I call
    writeToClob(doc, xmllob, 'UTF8')
    at the end, just to cover all eventualities.
    However, any character outside ASCII gets
    replaced with the character string "\xBF\xBF", which is rather tedious.
    If, instead, we go via
    writeToBuffer(doc, xmlbuf, 'UTF8')
    and then dump the buffer contents into a clob, the UTF8 encoding is preserved, and everything works.
    (This latter method is not good enough for my needs; I need more than 32K of data...)
    So I was wondering if any kind soul could tell me what I am doing wrong.
    Thanks,
    << Mike Alexander >>

I have the same problem. Has any solution been found?
Only xslprocessor.valueOf returns values from the XML document without losing special symbols.

  • Character set issue after import?

    Hi,
    Source DB version:10.2.0.1
    OS:Red hat Linux
    Target DB version:10.2.0.1
    OS:Windows server
    source database character set:AL32UTF8
    Performed the export as below
    $export NLS_LANG=AMERICAN.AL32UTF8
Performed the full database export and it finished successfully without any warnings.
    Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    Now imported into the target database as below.
    target database character set:AL32UTF8
    c:\>set NLS_LANG=AMERICAN.AL32UTF8
Then I ran the import command, which completed successfully without any warnings.
    However I’m having problems with Greek characters. Most of them are shown as ?, while some of them are converted to Latin chars
    For example:
    This was supposed to be Αγγελική ???e????
    And this Κουκουτσάκη ??????ts???
    While this one should be Δήμητρα ??µ?t?a
    From the import log file I can see that ‘import done in AL32UTF8 character set and AL16UTF16 NCHAR character set’ which I believe is correct.
Can anyone tell me how I can overcome this problem with Greek characters?
    Thank you all.

PARAMETER                  VALUE
-------------------------  ----------------------------
NLS_LANGUAGE               AMERICAN
NLS_TERRITORY              AMERICA
NLS_CURRENCY               $
NLS_ISO_CURRENCY           AMERICA
NLS_NUMERIC_CHARACTERS
NLS_CHARACTERSET           AL32UTF8
NLS_CALENDAR               GREGORIAN
NLS_DATE_FORMAT            DD-MON-RR
NLS_DATE_LANGUAGE          AMERICAN
NLS_SORT                   BINARY
NLS_TIME_FORMAT            HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT       DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT         HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT    DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY          $
NLS_COMP                   BINARY
NLS_LENGTH_SEMANTICS       BYTE
NLS_NCHAR_CONV_EXCP        FALSE
NLS_NCHAR_CHARACTERSET     AL16UTF16
NLS_RDBMS_VERSION          10.2.0.1.0
20 rows selected.

  • Many Initial Issues - Please advise

    I got two identical new MBP on Wed at the Apple store. I had Time Machine backups from two other older MBP that I used. I ran each of those on the new laptops. Since then, one has been perfect. The other has had many issues...
    - Already one crash
    - Insanely long start up times (currently trouble shooting this)
    - The trackpad skips and freezes constantly
    - The fans are nearly running at 100% all the time now. I think this has to do with start-up, but not sure.
I have run disk permission repair. I have cleaned cache files. I have run and cleaned out tons of other stuff, with no luck. Shortly, I'm going to do the P, R, Alt, Apple reset in just a min to see if that does anything.
    Any other ideas? Anyone suggest I just take it back? I was told I have 3 days buyer's remorse with no fees and stuff. Is it possible I just have a lemon? I really do not want to spend the time on a clean install and testing for another day or two... that puts me over that 3 day limit, then I have to pay restocking fees, etc.

    Thank you all for your insights. I stayed up until 6am last night (this morning) trouble shooting. After a few clean installs, restores, etc. I did a clean install, then restore, then an archive & install. That seems to have fixed the problem. The problems only manifested after my restore. But the last time I did an archive/install so all the system files are fresh and not from my TM restore. When using a TM backup, there is an option to not copy over your system/library files, unfortunately this option was checked and grayed out, so I didn't have the option NOT to install those files.
I do still have one whole day. I'm putting it through its paces today, and if there are any problems at all, I will return it and hopefully just get a new one right there.
    Thank you all again. If anyone has specific insight into why this happens, I would still like your input.

  • Character set issues

    Hi,
    I was hoping somebody can help me out.....or at least direct me to the right forum...
Our application has a JBoss server running on a Unix platform.
    Apache is the webserver.
My requirement is that a client user should be able to input details onto a form, and on submit it will go to a third-party tool and generate a PDF.
    Now the catch here is that our clients want to input polish characters.
    I installed the corresponding central european font for PDF generation and tested it out individually and that is working fine!
I modified the JSP to include:
    <meta HTTP-EQUIV="Content-Type" content="text/html; charset=iso-8859-2">
I have also modified the Apache httpd.conf to add:
    AddDefaultCharset ISO-8859-2
All this allows me to display the following characters on the page "ę ą ż � ł" and to input them, but when submitting, the generated PDF seems to replace the ą with a plus-minus sign.
I can't seem to paste the characters here properly, but hopefully you get the idea.
    Thanks a million in advance...............

Use something like this and try (replace UTF-8 with the appropriate encoding, e.g. ISO-8859-2):
          String inputStr = request.getParameter(paramName);
          inputStr = (inputStr == null ? "" : inputStr);
          // The servlet container decodes parameters as ISO-8859-1 by default,
          // so recover the raw bytes and re-decode them with the right charset.
          // (StringBufferInputStream is deprecated and drops high bytes.)
          BufferedReader reader = new BufferedReader(new InputStreamReader(
                  new ByteArrayInputStream(inputStr.getBytes("ISO-8859-1")), "UTF-8"));
_boolee

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix  a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
    US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
However, it also has a handy feature: character fields under US7ASCII will accept characters with values greater than 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
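A one-line demonstration of that substitution, for the curious (run in a WE8ISO8859P1 database, since CHR() produces a byte in the database character set):
select dump(convert(chr(174), 'WE8ISO8859P1', 'US7ASCII')) from dual;
The result shows byte 191: the registered-trademark character, treated as US7ASCII input, comes out as the upside-down question mark.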
Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions with the Database Migration Assistant for Unicode (DMU), which is the new recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns /tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed  into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB ]
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on  RESTOREDB:
create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from my_forums;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there  because if the users have changed TITLE  -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, where DBMS_LOB offers less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
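On the dynamic "SELECT INTO" that gave trouble in the commented-out lines above: I believe native dynamic SQL would accept it in this form, though this is an untested sketch using the procedure's own variables:
new_sql := 'select '||fix_col||' from '||table_name||'_lob where pk1 = :b1 for update';
execute immediate new_sql into my_blob using my_indx_var;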
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
This should let you see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on prodclone
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as  WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changing the character set, switched the semantics to char, etc).
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
    Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
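For step 3 above, the per-column DDL is a one-liner of this shape (hypothetical table and column names, keeping the declared length):
ALTER TABLE my_table MODIFY (my_col VARCHAR2(100 CHAR));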
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

Character set conversion problem during upgrade

    Dear Friends,
I am trying to upgrade one of my Windows databases, version 9.2.0.5, to 10.2.0.4 on Unix. I am using exp/imp. During import I am seeing the following errors for a couple of tables:
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column
    IMP-00058: ORACLE error 1461 encountered
    ORA-01461: can bind a LONG value only for insert into a LONG column
This may be due to a character set issue, since the database on Windows has WE8MSWIN1252 and on Unix it has UTF8.
    Please let me know how I can resolve this issue.
    Regards.
    Mahdu

    Hello,
It's better that your target database is created with the same character set as the source one.
This is an option you can choose at database creation.
If you have to stay in UTF8 on your target database, then you'll have to extend the column size or use the CHAR option (as Unicode may use up to 4 bytes for one character instead of 1 byte for WE8MSWIN1252).
To use the CHAR option you may specify it on the column datatype, for instance:
col1 VARCHAR2(100 CHAR)
Otherwise, without this option, VARCHAR2(100) means 100 bytes (which may hold as few as 25 characters in Unicode).
You also have the parameter NLS_LENGTH_SEMANTICS that you can set to CHAR, but the export/import utility doesn't manage it well.
So, the safest way is to create your target database with the same character set as the source one, unless you want to migrate to Unicode.
Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Mar 3, 2010 10:11 PM
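To illustrate the CHAR-semantics point above with a self-contained sketch (table and column names are hypothetical):
CREATE TABLE t_demo (
  col1 VARCHAR2(100 CHAR)  -- 100 characters, i.e. up to 400 bytes in AL32UTF8
);
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;  -- make CHAR the default for subsequent DDL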

  • Change to UTF8 character set

    Hi,
Currently the character set is WE8MSWIN1252 and I need to change it to UTF8.
When I tried to change it, an error was thrown saying that the new character set should be a superset of the current one.
Please let me know how to resolve this issue.
    Thanks and Regards,
    A.Mohammed Rafi.

This transformation is not possible using ALTER ... CHARACTERSET ...; you have to use export/import (a sketch of that route follows the list below). Here's a list of possible combinations (up to 9.2);
for example AL32UTF8 is a superset of UTF8 :
    8.1.6 Subset/Superset Pairs
    ===========================
    A. Current Char set B. New Char set (Superset of A.)
    US7ASCII WE8DEC
    US7ASCII US8PC437
    US7ASCII WE8PC850
    US7ASCII IN8ISCII
    US7ASCII WE8PC858
    US7ASCII WE8ISO8859P1
    US7ASCII EE8ISO8859P2
    US7ASCII SE8ISO8859P3
    US7ASCII NEE8ISO8859P4
    US7ASCII CL8ISO8859P5
    US7ASCII AR8ISO8859P6
    US7ASCII EL8ISO8859P7
    US7ASCII IW8ISO8859P8
    US7ASCII WE8ISO8859P9
    US7ASCII NE8ISO8859P10
    US7ASCII TH8TISASCII
    US7ASCII BN8BSCII
    US7ASCII VN8VN3
    US7ASCII VN8MSWIN1258
    US7ASCII WE8ISO8859P15
    US7ASCII WE8NEXTSTEP
    US7ASCII AR8ASMO708PLUS
    US7ASCII EL8DEC
    US7ASCII TR8DEC
    US7ASCII LA8PASSPORT
    US7ASCII BG8PC437S
    US7ASCII EE8PC852
    US7ASCII RU8PC866
    US7ASCII RU8BESTA
    US7ASCII IW8PC1507
    US7ASCII RU8PC855
    US7ASCII TR8PC857
    US7ASCII CL8MACCYRILLICS
    US7ASCII WE8PC860
    US7ASCII IS8PC861
    US7ASCII EE8MACCES
    US7ASCII EE8MACCROATIANS
    US7ASCII TR8MACTURKISHS
    US7ASCII EL8MACGREEKS
    US7ASCII IW8MACHEBREWS
    US7ASCII EE8MSWIN1250
    US7ASCII CL8MSWIN1251
    US7ASCII ET8MSWIN923
    US7ASCII BG8MSWIN
    US7ASCII EL8MSWIN1253
    US7ASCII IW8MSWIN1255
    US7ASCII LT8MSWIN921
    US7ASCII TR8MSWIN1254
    US7ASCII WE8MSWIN1252
    US7ASCII BLT8MSWIN1257
    US7ASCII N8PC865
    US7ASCII BLT8CP921
    US7ASCII LV8PC1117
    US7ASCII LV8PC8LR
    US7ASCII LV8RST104090
    US7ASCII CL8KOI8R
    US7ASCII BLT8PC775
    US7ASCII WE8DG
    US7ASCII WE8NCR4970
    US7ASCII WE8ROMAN8
    US7ASCII WE8MACROMAN8S
    US7ASCII TH8MACTHAIS
    US7ASCII HU8CWI2
    US7ASCII EL8PC437S
    US7ASCII LT8PC772
    US7ASCII LT8PC774
    US7ASCII EL8PC869
    US7ASCII EL8PC851
    US7ASCII CDN8PC863
    US7ASCII HU8ABMOD
    US7ASCII AR8ASMO8X
    US7ASCII AR8NAFITHA711T
    US7ASCII AR8SAKHR707T
    US7ASCII AR8MUSSAD768T
    US7ASCII AR8ADOS710T
    US7ASCII AR8ADOS720T
    US7ASCII AR8APTEC715T
    US7ASCII AR8NAFITHA721T
    US7ASCII AR8HPARABIC8T
    US7ASCII AR8NAFITHA711
    US7ASCII AR8SAKHR707
    US7ASCII AR8MUSSAD768
    US7ASCII AR8ADOS710
    US7ASCII AR8ADOS720
    US7ASCII AR8APTEC715
    US7ASCII AR8MSAWIN
    US7ASCII AR8NAFITHA721
    US7ASCII AR8SAKHR706
    US7ASCII AR8ARABICMACS
    US7ASCII LA8ISO6937
    US7ASCII JA16VMS
    US7ASCII JA16EUC
    US7ASCII JA16SJIS
    US7ASCII KO16KSC5601
    US7ASCII KO16KSCCS
    US7ASCII KO16MSWIN949
    US7ASCII ZHS16CGB231280
    US7ASCII ZHS16GBK
    US7ASCII ZHT32EUC
    US7ASCII ZHT32SOPS
    US7ASCII ZHT16DBT
    US7ASCII ZHT32TRIS
    US7ASCII ZHT16BIG5
    US7ASCII ZHT16CCDC
    US7ASCII ZHT16MSWIN950
    US7ASCII AL24UTFFSS
    US7ASCII UTF8
    US7ASCII JA16TSTSET2
    US7ASCII JA16TSTSET
    8.1.7 Additions
    ===============
    US7ASCII ZHT16HKSCS
    US7ASCII KO16TSTSET
    WE8DEC TR8DEC
    WE8DEC WE8NCR4970
    WE8PC850 WE8PC858
    D7DEC D7SIEMENS9780X
    I7DEC I7SIEMENS9780X
    WE8ISO8859P1 WE8MSWIN1252
    AR8ISO8859P6 AR8ASMO708PLUS
    AR8ISO8859P6 AR8ASMO8X
    IW8EBCDIC424 IW8EBCDIC1086
    IW8EBCDIC1086 IW8EBCDIC424
    LV8PC8LR LV8RST104090
    DK7SIEMENS9780X N7SIEMENS9780X
    N7SIEMENS9780X DK7SIEMENS9780X
    I7SIEMENS9780X I7DEC
    D7SIEMENS9780X D7DEC
    WE8NCR4970 WE8DEC
    WE8NCR4970 TR8DEC
    AR8SAKHR707T AR8SAKHR707
    AR8MUSSAD768T AR8MUSSAD768
    AR8ADOS720T AR8ADOS720
    AR8NAFITHA711 AR8NAFITHA711T
    AR8SAKHR707 AR8SAKHR707T
    AR8MUSSAD768 AR8MUSSAD768T
    AR8ADOS710 AR8ADOS710T
    AR8ADOS720 AR8ADOS720T
    AR8APTEC715 AR8APTEC715T
    AR8NAFITHA721 AR8NAFITHA721T
    AR8ARABICMAC AR8ARABICMACT
    AR8ARABICMACT AR8ARABICMAC
    KO16KSC5601 KO16MSWIN949
    WE16DECTST2 WE16DECTST
    WE16DECTST WE16DECTST2
    9.0.1 Additions
    ===============
    US7ASCII BLT8ISO8859P13
    US7ASCII CEL8ISO8859P14
    US7ASCII CL8ISOIR111
    US7ASCII CL8KOI8U
    US7ASCII AL32UTF8
    BLT8CP921 BLT8ISO8859P13
    US7ASCII AR8MSWIN1256
    UTF8 AL32UTF8 (added in patchset 9.0.1.2)
    Character Set Subset/Superset Pairs Obsolete from 9.0.1
    =======================================================
    US7ASCII AR8MSAWIN
    AR8ARABICMAC AR8ARABICMACT
    9.2.0 Additions
    ===============
    US7ASCII JA16EUCTILDE
    US7ASCII JA16SJISTILDE
    US7ASCII ZHS32GB18030
    US7ASCII ZHT32EUCTST
    WE8ISO8859P9 TR8MSWIN1254
    LT8MSWIN921 BLT8ISO8859P13
    LT8MSWIN921 BLT8CP921
    BLT8CP921 LT8MSWIN921
    AR8ARABICMAC AR8ARABICMACT
    ZHT32EUC ZHT32EUCTST
    UTF8 AL32UTF8
    Character Set Subset/Superset Pairs Obsolete from 9.2.0
    =======================================================
    LV8PC8LR LV8RST104090
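As mentioned above, the route is export, recreate, import. A minimal hedged sketch (user names, passwords, and file names are hypothetical; keep NLS_LANG's character set equal to the source database's so no conversion happens on export):
export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
exp userid=system/xxxx full=y file=src.dmp
Then create the new database with NLS_CHARACTERSET = UTF8 (or AL32UTF8) and, with the same NLS_LANG, import:
imp userid=system/xxxx full=y file=src.dmp
The conversion to the new character set happens during the import.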

  • JNLS - National Character Sets

If I open the Oracle Database Configuration Assistant, the following error is returned:
JNLS Exception: oracle.ntpg.jnls.JNLSException
Unable to find any National Character Sets. Please check your Oracle installation.
How can I fix this problem under the following configuration: Linux Slackware 7.0, KDE, Oracle 8i EE?

    Hi,
I am having the same problem you had two years ago. Could you please let me know if you found a solution to it, and if so, how.
Thank you very much.
    Sincerely,
    Simon.
