Converting UTF8 to US7ASCII

Hello,
We have a database which is 9i and has a NLS_CHARACTERSET set to US7ASCII.
We created a new database (version 10g 10.2.0.2.0) on a new server which has a NLS_CHARACTERSET set to UTF 8. When we exported the database from the 9i database to 10g database, obviously because of the NLS_CHARACTERSET there was an issue of data corruption (Columns width increasing by 3 times, understandable). Is there a way to convert the UTF8 character set on the 10g database to US7ASCII character set, and then re-importing and exporting the 9i database to 10g database. I know we can convert a subset to superset. I want to find out if there is a way to convert a superset to a subset.
Or do I have to re-create the whole database again.
Thanks,
Kalyan

You shouldn't have problem migrate a US7ASCII to UTF8. UTF8 is superset of US7ASCII.
The problem you are facing is when your schema has column defined as CHAR(20) for example, a single byte character becomes a two-byte character in UTF8, you now have Data Truncation issue. Same problem can also happens in varchar type.
You can't change from superset to subset for obvious reason but you can convert your 9i from US7ASCII to UTF8 if you like.
Also run this character set scanner before you make conversion.
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96529/ch11.htm#1005049

Similar Messages

  • Convert UTF8 to WE8ISO8859P1 or WE8ISO8859P15

    Hi!
    If got the following situation:
    I've got a database with the character set WE8ISO8859P1.
    I've got a second database with UTF8.
    I think all databases are 10.2.xx
    There is a pl/sql interface on the WE8ISO8859P1 Database which reads data from the UTF8 database via database link.
    But after inserting UTF8-data into the WE8ISO8859P1 database, the are not converted correctly automaticaly.
    How can i convert UTF8-data within my WE8ISO8859P1 database to WE8ISO8859P1-data?
    Is there a standard function within the WE8ISO8859P1 database?
    e.g. Select standard_convert_func(my_col, 'UTF8', 'WE8ISO8859P1') from myTable@db_link
    Or is the better way to convert this utf8-data to WE8ISO8859P1 within the UTF-8 database?
    insert into my_interface_tabele(my_col) select standard_convert_func(my_col, 'UTF8', 'WE8ISO8859P1') from my_utf8_base_tabel;
    Thank you for your help!
    Best regards,
    Thomas

    Hi!
    Within my ISO-DB i receive the following results in SQL*Plus:
    SQL> select convert(DN_DIENSTTITEL, 'WE8ISO8859P1', 'UTF8')
    2 from dn_stammtest@lsal_n_pep_test_link;
    CONVERT(DN_DIENSTTITEL,'WE8ISO8859P1','UTF8')
    A K B A R I A N `ag6 ¿ N a t a l
    SQL> ed
    Datei afiedt.buf wurde geschrieben
    1 select dump(DN_DIENSTTITEL)
    2* from dn_stammtest@lsal_n_pep_test_link
    SQL> r
    1 select dump(DN_DIENSTTITEL)
    2* from dn_stammtest@lsal_n_pep_test_link
    DUMP(DN_DIENSTTITEL)
    Typ=1 Len=40: 0,65,0,75,0,66,0,65,0,82,0,73,0,65,0,78,0,32,1,96,1,97,1,103,4,54,32,172,0,32,0,78,0,9
    I will contact the DB-Admin to do this select within the UTF8 DB.
    Best regards.
    Thomas

  • Converting to UTF8 from US7ASCII

    This is what I did to change the characterset of my PeopleSoft HRMS database to change the character set from US7ASCII to UTF8.Everything seems to have gone well but there is one value that is still not right.I am not sure how to change this or if I let it remain as it is, how is it going to affect the data.
    Stop all queues with dbms_aqadm.stop_queue( queue_name => '<queue name>');
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    TRUNCATE TABLE SYS.METASTYLESHEET; ---to get rid of ORA-12716 per Metalink Note - 213015.1
    ALTER DATABASE CHARACTER SET UTF8;
    --ALTER DATABASE CHARACTER SET INTERNAL_USE UTF8;
    SHUTDOWN IMMEDIATE;
    STARTUP;
    @$ORACLE_HOME/rdbms/admin/catmet.sql
    SHUTDOWN IMMEDIATE;
    STARTUP;
    Start all queues with dbms_aqadm.start_queue( queue_name => '<queue name>');
    Now when I query for the nls parameters in the database, this is what I get:
    DICT.BASE     2
    DBTIMEZONE     0:00
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     UTF8
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_NCHAR_CHARACTERSET     UTF8
    GLOBAL_DB_NAME     HRDEV
    EXPORT_VIEWS_VERSION     8
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_RDBMS_VERSION     9.2.0.4.0
    DEFAULT_TEMP_TABLESPACE     SYSTEM
    NLS_SAVED_NCHAR_CS     US7ASCII
    I am concerned about this value NLS_SAVED_NCHAR_CS which still is US7ASCII.Will this affect the application data anyway.

    I would be too.
    But given that this is PeopleSoft this is not the kind of thing where you should be taking advice from strangers at a website.
    Open an SR with Oracle metalink and get advice supported by Oracle Corp.

  • Convert WE8MSWIN1252 to US7ASCII - Urgent

    Could some one help me to convert the character setting of my Oracle 8i database from WE8MSWIN1252 to US7ASCII.
    Is it possible in the first place as it tells me that the WE8MSWIN1252 is a super set... !
    Cheers - Aravind.

    Actually that's not possible (the opposite could be possible).

  • Convert utf8 char in a NSString

    Hi to all! I recieve from my server a XML file. I use NSXMLParser to retrieve attributes and values. The problem is this: i've many utf8 encoded char in many attributes, and i recieve them in NSStrings.
    //in - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict DELEGATE
    NSString *myAttribute = [[NSString alloc] initWithString:[attributeDict valueForKey:@"attribute"]];
    The NSString at key "attribute" can contain UTF8 char as ò and similar.
    How can i decode them to obtain a NSString with ò instead of ò and so on?

    I understood that you helped me, I will need to use
    parseInt to convert the string to number...No.
    Assuming you've got the "15:00" part, say in a String called timeStr, you'd just pass that to dateFormat.parse(timeStr)
    but I
    need to know where in the String are my numbers...
    isn't it? How may I do this? Well, that depends.
    What's your logic for finding it manually? Is it the first occurrence of any numerical digit in the string? Is it at a known, fixed character index? Some other logic?

  • Convert UTF8 to 8-bit intellegently

    I need to convert a UTF8 file into 8-bit.
    Of coarse, there is no way to do this perfectly, but it would be nice if there was a function that would make an intellegent substitution for characters that aren't in ASCII. For instance, the curly double quotes character would become a normal double quotes.
    I tried using String.getBytes("ISO-8859-1"), but it just subsitutes a question mark for all characters that aren't in ASCII.
    Anyone know of a function that can do this?

    just wrote Very long winded method for most of the common utf-8
    //list of unexcepted frequent chars: �����������������������������������������������������������
    StrAllData = StrAllData.replaceAll("�", "\"");
              StrAllData = StrAllData.replaceAll("�", "\"");
              StrAllData = StrAllData.replaceAll("�", "\'");
              StrAllData = StrAllData.replaceAll("�", "\'");      
              StrAllData = StrAllData.replaceAll("�", "A");                StrAllData = StrAllData.replaceAll("�", "A");
              StrAllData = StrAllData.replaceAll("�", "A");                StrAllData = StrAllData.replaceAll("�", "A");
              StrAllData = StrAllData.replaceAll("�", "E");                StrAllData = StrAllData.replaceAll("�", "E");
              StrAllData = StrAllData.replaceAll("�", "E");                StrAllData = StrAllData.replaceAll("�", "E");
              StrAllData = StrAllData.replaceAll("�", "I");                StrAllData = StrAllData.replaceAll("�", "I");
              StrAllData = StrAllData.replaceAll("�", "I");                StrAllData = StrAllData.replaceAll("�", "I");
              StrAllData = StrAllData.replaceAll("�", "N");                StrAllData = StrAllData.replaceAll("�", "O");
              StrAllData = StrAllData.replaceAll("�", "O");                StrAllData = StrAllData.replaceAll("�", "O");
              StrAllData = StrAllData.replaceAll("�", "A");                StrAllData = StrAllData.replaceAll("�", "E");      
              StrAllData = StrAllData.replaceAll("�", "E");                StrAllData = StrAllData.replaceAll("�", "E");      
              StrAllData = StrAllData.replaceAll("�", "E");               StrAllData = StrAllData.replaceAll("�", "I");      
              StrAllData = StrAllData.replaceAll("�", "I");                StrAllData = StrAllData.replaceAll("�", "B");      
              StrAllData = StrAllData.replaceAll("�", "I");                StrAllData = StrAllData.replaceAll("�", "N");      
              StrAllData = StrAllData.replaceAll("�", "a");                StrAllData = StrAllData.replaceAll("�", "O");      
              StrAllData = StrAllData.replaceAll("�", "a");               StrAllData = StrAllData.replaceAll("�", "a");
              StrAllData = StrAllData.replaceAll("�", "a");               StrAllData = StrAllData.replaceAll("�", "a");
         StrAllData = StrAllData.replaceAll("�", "a");               StrAllData = StrAllData.replaceAll("�", "c");               
         StrAllData = StrAllData.replaceAll("�", "e"); StrAllData = StrAllData.replaceAll("�", "e");
         StrAllData = StrAllData.replaceAll("�", "e");               StrAllData = StrAllData.replaceAll("�", "e");
              StrAllData = StrAllData.replaceAll("�", "e");               StrAllData = StrAllData.replaceAll("�", "i");
              StrAllData = StrAllData.replaceAll("�", "i");               StrAllData = StrAllData.replaceAll("�", "i");
              StrAllData = StrAllData.replaceAll("�", "i");               StrAllData = StrAllData.replaceAll("�", "n");               
              StrAllData = StrAllData.replaceAll("�", "o");               StrAllData = StrAllData.replaceAll("�", "o");
              StrAllData = StrAllData.replaceAll("�", "u");               StrAllData = StrAllData.replaceAll("�", "e");
              StrAllData = StrAllData.replaceAll("�", "o");               StrAllData = StrAllData.replaceAll("�", "u");
              StrAllData = StrAllData.replaceAll("�", "o");               StrAllData = StrAllData.replaceAll("�", "u");
              StrAllData = StrAllData.replaceAll("�", "o");               StrAllData = StrAllData.replaceAll("�", "u");
              StrAllData = StrAllData.replaceAll("�", "y");               StrAllData = StrAllData.replaceAll("�", "Z");               
              StrAllData = StrAllData.replaceAll("�", "o");               StrAllData = StrAllData.replaceAll("�", "y");
              StrAllData = StrAllData.replaceAll("�", "z");               StrAllData = StrAllData.replaceAll("�", "e");
              StrAllData = StrAllData.replaceAll("�", "S");               StrAllData = StrAllData.replaceAll("�", "o");
              StrAllData = StrAllData.replaceAll("�", "s");               StrAllData = StrAllData.replaceAll("�", "Y");

  • Convert UTF8 clob charset

    Thanks for your advice.
    Actually NLS support may not apply to my case because I need to specify different charset when I need the CLOB UTF8 data from my database. Maybe this time I need Big5, next time I need GBK. The thing is, I know there is a function to convert varchar2 from UTF8 to different charset.
    Question is: can I convert CLOB as well?
    Thanks for any advice!

    There are no SQL convert functions to handle CLOB conversion in Oracle8i . In 9i all SQL functions for VARCHAR2 will work with CLOBs too.
    Why do you need to have do the conversion explicitly, if you set your client NLS_LANG character set to ZHT16BIG5 or ZHT16GBK , they these CLOBs should be converted to the client Character set automatically.

  • Converting "UTF8" files to other encodings in Text Wrangler

    Hi and thanks for your help
    I have several text files I can easily use in different ways using Text Wrangler.
    I need to convert text files originally written with different enchodings and I get errors.
    I can overcome this using this script:
    tell application "TextWrangler"
       tell document 1
           set line breaks to Unix
           set encoding to "Cyrillic (Windows)"
       end tell
    end tell
    However my attempts to create a loop always return this error
    TextWrangler got an error: An unexpected error occurred while processing an Apple Event (MacOS Error code: -10000)
    What is wrong with my script?
    set inputfolder to (choose folder)
    set theFiles to list folder inputfolder without invisibles
    tell application "TextWrangler"
    repeat with x from 1 to count of theFiles
              set thefile to item x of theFiles
              set inputfile to quoted form of (POSIX path of inputfolder & thefile)
      set line breaks to Unix
           set encoding to "Cyrillic (Windows)"
       end tell
    end repeat

    If its not typo you have the end tell inside the end repeat. Should be the other way.
          set encoding to "Cyrillic (Windows)"
       end tell
    end repeat
    end
    That shouldn't even compile.

  • Converting UTF8 to WE8MSWIN1252

    I understand the WE8MSWI1252 is not a subset of UTF8 and that to use "ALTER DATABASE CHARACTER SET ....", the new characterset has to be a superset of the current.
    Thou i ran the database scanner utility (csscan.exe) specifying the target character set to be WE8MSWIN1252 and it detected the current to be UTF8. This utility reported only very few/minor dataconverstion issues, which can be easily ignored in my case.
    My question is can I still execute "ALTER DATABASE CHARACTER SET..." command???
    Any help is greatly appreciated!
    Thanks,
    -B

    This is the appropiate forum for your question. Post it there as well.
    Forums Home » Oracle Technology Network (OTN) » Products » Database » Globalization and NLS
    Globalization Support
    Joel Pérez

  • How to convert back from UTF8 to ISO-8859-1 encoding?

    hi,
    I have a bunch of XML files which were wrongly encoded, and we lost all our accent characters.
    ie: é become é
    so how can I recover my XML files using powershell?
    so I want to change all the UTF8 ecoded characters back to the original ISO accent character
    é -> é
    I try this:
    1")
    $utf8 = [System.text.Encoding]::UTF8
    $utfBytes = $utf8.GetBytes("é")
    $isoBytes = [System.text.Encoding]::Convert($utf8, $iso, $utfBytes)
    $iso.GetString($isoBytes)
    but doesnt works.
    so is there a way to do this in powershell?
    I have to scan hundreds of files...
    thanks.

    You can't.  UTF-8 strips all of the information from the characters so you cannot know which characters are which.  If you know which characters you need to fix (requires knowing the spelling of the words) you could possible develop an matrix of
    replacements. There is no simple one line method.
    ¯\_(ツ)_/¯

  • 9.2 convert ASCII to UTF8 welsh language

    hello
    I have a 9.2 ascii database that i cant convert to UTF8 yet
    1 for an output (util file) i need to convert an ascii text string to utf-8 on export
    2 i have two characters that are not supported by ascii, ŵŷ the users will represent these by typing w^y^
    I tryed using UNISTR but non of the characters below are corectly converted
    SELECT UNISTR(ASCIISTR( '剔搙)) FROM DUAL ;
    how would you recomend converting a ascii latin 1 extended string to UTF-8 for export?
    is it sencible to use the character replacement plan above for ŵŷ?
    thanks
    james

    Probably the unconverted characters are not contained in the first charset.
    If this is right.
    http://en.wikipedia.org/wiki/Windows-1252
    ...there is no conversion for values outside the first charset.
    But I may made a mistake.
    Are you sure Â, â, î, Ê, ê and ô are in the 1252 charset?
    I am not able to see if there is a difference between the similar chars in the table on wikipedia and the ones you posted, that is why I asked.
    Anyway this output seems to verify my indication.
    Processing ...
    SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','WE8MSWIN1252','UTF8')
    ¨¨¨¨                                    
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','UTF8')
    ¨âêô                         
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','WE8MSWIN1252')
    ¶¨Ç?¶îÇ?¶¨Ç?¶ô                          
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','WE8PC858','UTF8')
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','WE8PC858')
    ƒ??Ç?¶¯Ç?ƒ??ƒ??Ç?                   
    1 row(s) retrievedSome characters are not supported on my DB so try these queries on yours to prove it.
    SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL;
    SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL;Bye Alessandro

  • UTF8 conversion of wwv_mig_acc_load

    We are converting our database to UTF8 from US7ASCII and csscan reports lossy data in the package body of wwv_mig_acc_load. Does anyone know of any reason any reason why I can not try to re-install wwv_mig_acc_load.plb
    Thanks
    Wayne
    Additional information: Tried re-installing wwv_mig_acc_load.plb (in a test enviroment). Still getting lossy data and truncation errors in csscan. Downloaded 3.1 again and tried from fresh download - still get error. I guess I'll have to open a SR with metalink.
    Message was edited by:
    wcoleku

    Hi
    Well, I just finished converting and had some issues, nothing major.
    Followed Joel's steps, except for dropping the package. Only dropped the package body since the the package was ok. Re-installed package body (wwv_mig_acc_load.plb) and ran the validate. Got the following error. (Guessing it did not get imported in, but couldn't find an error in the import log - it exists in production)
    SQL> exec sys.validate_apex;
    BEGIN sys.validate_apex; END;
    ERROR at line 1:
    ORA-06550: line 1, column 7:
    PLS-00201: identifier 'SYS.VALIDATE_APEX' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    Had lots of invalid objects after import due to grants - after granting (followed note 467593.1) and re-installing wwv_dbms_sql was able to clear up all invalid objects
    One note of difference. In our production env. wwv_mig_acc_load is owned by flows_030100 not sys.

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix  a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII to AL32UTF16.
    US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values > 128. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions with the Database MIgration Assistant for Unicode (DMU), which is the new recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
    We will need to separate approaches -- one to fix the VARCHAR2 & CHAR fields,  and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because oracle on PRODCLONE never knew that our TITLE string on RESTORE was originally in  US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns /tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed  into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing
    We create our views on  RESTOREDB:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there  because if the users have changed TITLE  -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, whose functions have less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on prodclone
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the piece de' resistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as  WE8ISO8850P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

    We converted using the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changing the character set, switched the semantics to char, etc).
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
    Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Using convert function on nvarchar2

    Hi experts,
    I am having a bit of a problem with the convert function. We use convert to compare street- and citynames and ignore any special characters, such as é ç etc.
    Something like:
    select ...
    from   ...
    where  convert(new_street_name, 'US7ASCII') = convert(existing_street_name, 'US7ASCII')
    This works fine if the datatype is varchar2, for instance:
    SQL> select convert('äàáâçëèéêïìíîöòóôüùúûÿ','US7ASCII') text from dual;
    TEXT
    aaaaceeeeiiiioooouuuuy
    If the datatype if nvarchar2 however, the result is not as expected:
    SQL> select convert(cast('äàáâçëèéêïìíîöòóôüùúûÿ'as nvarchar2(64)),'US7ASCII') text from dual;
    TEXT
    慡慡捥敥敩楩楯潯潵畵
    The NLS character settings on our database (10.2.0.4) are:
    NLS_CHARACTERSET       AL32UTF8  Character set
    NLS_NCHAR_CONV_EXCP    FALSE     NLS conversion exception
    NLS_NCHAR_CHARACTERSET AL16UTF16 NCHAR Character set
    I have tried several combinations... but no luck so far. Is it possible to use convert on an nvarchar2 to go from é to e?
    Maybe it is better just to use the translate function and define each conversion explicitly. Convert seemed a nice option because it works without any additional parameters... on a varchar2 at least
    Thanks!

    The usage of convert is not encouraged by the docs and in my opinion it's rather by accident that this works in your specific case than something other.
    What's going on?
    Convert returns the char-datatype of the input.
    We can use simple to_char to use the convert funtion in the way you intend.
    (You shoud take care when handling n_char data in sql statements, especially in 10 g enviromnments. You have to set the environment parameter ORA_NCHAR_LITERAL_REPLACE=TRUE and use the n-prefix. Take a look at the globalization guide for more details.)
    CREATE TABLE  "TESTNCHAR"
       ( "ID" NUMBER,
    "STR" NVARCHAR2(30),
    "STR2" VARCHAR2(300)
    insert into testnchar values (1, n'ßäàáâçëèéêïìíîöòóôüùúûÿ','ßäàáâçëèéêïìíîöòóôüùúûÿ')
    select
    id
    ,str,str2
    ,dump(str,1010) dmp
    ,dump(str2,1010) dmp2
    ,dump(convert(str,'US7ASCII')) dc
    ,dump(convert(str2,'US7ASCII')) dc2
    ,convert(to_char(str),'US7ASCII') c
    ,convert(str2,'US7ASCII') c2
    from testnchar
    ID
    STR
    STR2
    DMP
    DMP2
    DC
    DC2
    C
    C2
    1
    ßäàáâçëèéêïìíîöòóôüùúûÿ
    ßäàáâçëèéêïìíîöòóôüùúûÿ
    Typ=1 Len=46 CharacterSet=AL16UTF16: 0,223,0,228,0,224,0,225,0,226,0,231,0,235,0,232,0,233,0,234,0,239,0,236,0,237,0,238,0,246,0,242,0,243,0,244,0,252,0,249,0,250,0,251,0,255
    Typ=1 Len=46 CharacterSet=AL32UTF8: 195,159,195,164,195,160,195,161,195,162,195,167,195,171,195,168,195,169,195,170,195,175,195,172,195,173,195,174,195,182,195,178,195,179,195,180,195,188,195,185,195,186,195,187,195,191
    Typ=1 Len=23: 63,97,97,97,97,99,101,101,101,101,105,105,105,105,111,111,111,111,117,117,117,117,121
    Typ=1 Len=23: 63,97,97,97,97,99,101,101,101,101,105,105,105,105,111,111,111,111,117,117,117,117,121
    ?aaaaceeeeiiiioooouuuuy
    ?aaaaceeeeiiiioooouuuuy
    We can see that this already fails for ß.
    To give you an idea of alternative approaches:
    create table test_comp (nid number, str1 varchar2(300), str2 varchar2(300))
    insert into test_comp values (1, 'ßäàáâçëèéêïìíîöòóôüùúûÿ','ssäàáâçëèéêïìíîöòóôüùúûÿ')
    insert into test_comp values (2, 'ßäàáâçëèéêïìíîöòóôüùúûÿ','säàáâçëèéêïìíîöòóôüùúûÿ')
    select
    from test_comp
    where
    str2 like str1
    no data found
    select
    from test_comp
    where
    upper(str2) like NLS_UPPER(str1, 'NLS_SORT = XGERMAN')
    NID
    STR1
    STR2
    1
    ßäàáâçëèéêïìíîöòóôüùúûÿ
    ssäàáâçëèéêïìíîöòóôüùúûÿ

  • How to setup NLS_LANG on Windows XP

    Hi,
    We have an oracle database with character set AMERICAN_AMERICA.US7ASCII setting. and our
    production application inserts different language characters directly to the database without any
    UTF8 conversion. Another function has been coded which can convert the different character set
    to UTF8 via using convert() function before the client side can be used. it has been proved it works
    fine with a PHP application which can show UTF8 on the webpage, based on a unix-box server.
    Now I have installed a oracle application express server with character set AMERICAN_AMERICA.WE8MSWIN1252
    (found from registry) on my windows XP. After reading the article from metalink
    https://metalink.oracle.com/metalink/plsql/f?p=130:14:1393556852351104955::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,158577.1,1,1,1,helvetica
    , I understood that oracle server side would convert the character to the client side if it found
    their character set are difference, which is not what I want in this case, so I should set my client side as same as
    server side so that no any conversion therefore the converted UTF8 code can directly delivered to front application.
    I have try to set the oracle client side as AMERICAN_AMERICA.US7ASCII, WE8ISO8859P1 or others, none of them works.
    Give you a simple instance here is that
    1.Insert ZHS16CGB231280(chinese code) to Production database(US7ASCII)
    2. retreive it by convert('string', 'UTF8', 'ZHS16CGB231280')
    3. send to client side(characterset US7ASCII)
    doesn't work
    Does anyone has experience and point me out what wrong it is. it lasts me several weeks trying to
    figure out, but failure.
    Thanks for your help
    Eric
    [email protected]

    Generally speaking, if you want to store chinese characters, your database character set must be able to store it correctly without using CONVERT function: you could use one of the
    universal character set. I'm surprised that you are able to store chinese characters in the US7ASCII character set but I'm not surprised that it does not work in other cases.
    The OTN NLS_LANG FAQ give some common values under Windows, assuming that each Windows node will only be used in one given national environment.

Maybe you are looking for

  • After updating my iPod touch 3rd generation to IOS5 some default apps are not working (anymore)

    Hi, I updated my iPod touch 3g from IOS 4.x to IOS 5 last week and since then my iPod makes troublle! First of all, during the update I did not see any Problems. After the update, I mean the step where the data will be restored to the iPod, I synchro

  • Windows 7 Home Share Not Working Unless Logged In to Computer

    So my old computer with Windows Home Server blew up last week.  I bought a new computer with Windows 7 running and have everything set up. Here's the rub...  I cannot access my itunes home share unless I am logged in or remotely logged int to the com

  • Displaying in the GUI from another called class

    Hi, New to creating GUIs in java and I'm having two problems: 1- How do I display several images in my GUI in different areas of the GUI window? 2- For these images and my text boxes, how do I display things when it's not the class the GUI was create

  • Archlinux not cooperating with cable modem?

    I dual-boot Windows 7 Pro and Archlinux, both in 64-bit versions. For whatever reason, my Archlinux install does not cooperate very well with my cable internet connection. I have a basic 3/4 M connection through Time Warner, and the Motorola Surfboar

  • Web Form Data

    This issue has come up a few times before on the forums, but I can't find quite what I'm looking for. Goals: 1) To log in to a website using a simple text name and password. 2) Once logged in, modify HTML forms, etc Rather than using Datasocket read/