Convert UTF8 to WE8ISO8859P1 or WE8ISO8859P15

Hi!
I've got the following situation:
I've got a database with the character set WE8ISO8859P1.
I've got a second database with UTF8.
I think all databases are 10.2.xx
There is a pl/sql interface on the WE8ISO8859P1 Database which reads data from the UTF8 database via database link.
But after inserting the UTF8 data into the WE8ISO8859P1 database, the characters are not converted correctly automatically.
How can I convert UTF8 data within my WE8ISO8859P1 database to WE8ISO8859P1 data?
Is there a standard function within the WE8ISO8859P1 database?
e.g. Select standard_convert_func(my_col, 'UTF8', 'WE8ISO8859P1') from myTable@db_link
Or is it better to convert this UTF8 data to WE8ISO8859P1 within the UTF-8 database?
insert into my_interface_tabele(my_col) select standard_convert_func(my_col, 'UTF8', 'WE8ISO8859P1') from my_utf8_base_tabel;
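For what it's worth, the standard function is CONVERT, which takes the destination character set first and the source second; a minimal sketch using the names from the question above (untested, shown only to illustrate the call):
select convert(my_col, 'WE8ISO8859P1', 'UTF8') from myTable@db_link;
Note that CONVERT can only help if the bytes it receives really are valid UTF8; data that already arrived corrupted will not be repaired by converting it again.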
Thank you for your help!
Best regards,
Thomas

Hi!
Within my ISO DB I receive the following results in SQL*Plus:
SQL> select convert(DN_DIENSTTITEL, 'WE8ISO8859P1', 'UTF8')
2 from dn_stammtest@lsal_n_pep_test_link;
CONVERT(DN_DIENSTTITEL,'WE8ISO8859P1','UTF8')
A K B A R I A N `ag6 ¿ N a t a l
SQL> ed
Wrote file afiedt.buf
1 select dump(DN_DIENSTTITEL)
2* from dn_stammtest@lsal_n_pep_test_link
SQL> r
1 select dump(DN_DIENSTTITEL)
2* from dn_stammtest@lsal_n_pep_test_link
DUMP(DN_DIENSTTITEL)
Typ=1 Len=40: 0,65,0,75,0,66,0,65,0,82,0,73,0,65,0,78,0,32,1,96,1,97,1,103,4,54,32,172,0,32,0,78,0,9
I will contact the DB-Admin to do this select within the UTF8 DB.
Best regards.
Thomas

Similar Messages

  • Converting db from WE8ISO8859P1 to UTF-8

    I am running an Oracle 10g database with approximately 60 tables. We are going global, so we need to convert to Unicode. Currently our database doesn't contain any special characters (all English ASCII data).
    I have read through the "Character Set Migration" book from Oracle (http://www.cs.umb.edu/cs634/ora9idocs/server.920/a96529/ch10.htm) but I'm still not clear on whether I will need to manually alter all my tables to expand the size of my fields.
    I'm hoping someone who has done this conversion can advise me. If my db doesn't currently store any special characters then I shouldn't lose data in the conversion, correct? Do I still need to expand my fields in case special characters are entered into the db in the future? For example, a name field that is VARCHAR2(10): if Japanese characters are entered then the field can only hold 5 characters???
    If I do need to expand all my fields, is there any slick Oracle utility to expand my fields by 3 times, or do I have to manually run alters on my tables?
    Also, can I run the conversion by issuing the ALTER DATABASE command or do I have to do the full export/import? Oracle states "if, and only if, the new character set is a strict superset of the current character set, it is possible to use the ALTER DATABASE CHARACTER SET statement to expedite the change in the database character set." Isn't UTF-8 a superset of WE8ISO8859P1?
    Any feedback or other gotchas would be greatly appreciated.

    A few suggestions:
    1. If your database is 10g and your client software is only 9i and 10g based, migrate to AL32UTF8, not to UTF8.
    2. Do not use ALTER DATABASE with 10g. Use csscan to scan your database to confirm that you really have only ASCII. Your scan should show only Changeless data, no Convertible or Exceptional data. Convertible data is allowed in Data Dictionary columns of type CLOB only.
    3. Then, you should be able to change the character set using the script
    ?/rdbms/admin/csalter.plb
    It will change the database character set to the last TOCHAR value
    from the csscan run. Do not specify the FROMCHAR parameter.
    DO NOT play with csalter.plb without a BACKUP!!!
    4. VARCHAR2(10) in [AL32]UTF8 will be able to hold only 3 Japanese characters.
    5. I am afraid, the trick with NLS_LENGTH_SEMANTICS and import will not work,
    because Export emits the BYTE keyword in CREATE TABLE.
    You would have to use the impdp sqlfile and run the statements from there.
    -- Sergiusz
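    On points 4 and 5: a hedged sketch of what switching one column to CHAR length semantics looks like (table and column names are made up for illustration):
    ALTER TABLE customers MODIFY (name VARCHAR2(10 CHAR));
    With CHAR semantics the column again holds 10 characters rather than 10 bytes, whatever the byte length of each character in AL32UTF8 (still subject to the 4000-byte VARCHAR2 limit).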

  • Is there a benefit of characterset WE8ISO8859P1 vs WE8ISO8859P15? Differences?

    Hello all,
    I'm getting ready to clone a database, and I'm on the new server running dbca.
    I'm at the characterset section, and on the original database, I see the characterset is WE8ISO8859P1. However, this isn't "found" when the Show Recommended character sets only box is checked. It shows WE8ISO8859P15 only until the box is un-checked.
    They both say they are West European.
    Is there a real difference between the two? Should I change from the WE8ISO8859P1 to the WE8ISO8859P15 for any good reason? I'm planning to export out of the older database and import into the newer one, will changing the characterset here cause any problems?
    So far, I've not been able to find much on the web about the differences between the two, and wondered if anyone here had experience with this.
    Thanks in advance,
    cayenne

    There is a great ML document
    Choosing between WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252 as NLS_CHARACTERSET [ID 264294.1]
    which is in line with what others have already suggested. I agree that you should stick to AL32UTF8 if flexible globalization support in your application is important and you want to avoid headaches in the future with an export/import to a new database when you suddenly need to support a new region.
    8859p1 is a binary subset of win1252 (no recoding is needed), so in this case the choice is simple.
    8859p15 is a logical subset of win1252 (the latter has all characters that p15 has, but some of them have different codes).
    Bottom line: AL32UTF8 whenever you can. If you have to use a single-byte character set and sacrifice the flexibility of Unicode, use WE8MSWIN1252 for Western European languages.
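    If in doubt about what a given database actually uses before deciding, a quick check against the standard dictionary view:
    SELECT parameter, value FROM nls_database_parameters WHERE parameter LIKE '%CHARACTERSET';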

  • Converting UTF8 to US7ASCII

    Hello,
    We have a database which is 9i and has a NLS_CHARACTERSET set to US7ASCII.
    We created a new database (version 10g, 10.2.0.2.0) on a new server which has NLS_CHARACTERSET set to UTF8. When we exported the database from the 9i database to the 10g database, obviously because of the NLS_CHARACTERSET there was an issue of data corruption (column widths increasing by 3 times, understandable). Is there a way to convert the UTF8 character set on the 10g database to the US7ASCII character set, and then re-export and re-import the 9i database into the 10g database? I know we can convert a subset to a superset. I want to find out if there is a way to convert a superset to a subset.
    Or do I have to re-create the whole database again.
    Thanks,
    Kalyan

    You shouldn't have a problem migrating US7ASCII to UTF8; UTF8 is a superset of US7ASCII.
    The problem you are facing is that when your schema has a column defined as, for example, CHAR(20), a single-byte character can become a two-byte character in UTF8, and you then have a data truncation issue. The same problem can also happen with VARCHAR2 columns.
    You can't change from a superset to a subset for obvious reasons, but you can convert your 9i database from US7ASCII to UTF8 if you like.
    Also run the character set scanner before you do the conversion.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96529/ch11.htm#1005049
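    As a rough, hedged way of spotting columns that are still declared with BYTE semantics (and are therefore candidates for the truncation problem described above), something like the following against the data dictionary can help; the view and CHAR_USED column are standard, the OWNER filter is just an example:
    SELECT owner, table_name, column_name, data_type, data_length
    FROM dba_tab_columns
    WHERE data_type IN ('CHAR', 'VARCHAR2')
    AND char_used = 'B'
    AND owner NOT IN ('SYS', 'SYSTEM');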

  • Convert utf8 char in a NSString

    Hi to all! I receive an XML file from my server. I use NSXMLParser to retrieve attributes and values. The problem is this: I've got many UTF8-encoded chars in many attributes, and I receive them in NSStrings.
    //in - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict DELEGATE
    NSString *myAttribute = [[NSString alloc] initWithString:[attributeDict valueForKey:@"attribute"]];
    The NSString at key "attribute" can contain UTF8 chars such as ò and similar, but they arrive in an escaped/mangled form.
    How can I decode them to obtain an NSString with the real ò and so on?

    "I understood that you helped me, I will need to use parseInt to convert the string to number..."
    No. Assuming you've got the "15:00" part, say in a String called timeStr, you'd just pass that to dateFormat.parse(timeStr).
    "but I need to know where in the String are my numbers... isn't it? How may I do this?"
    Well, that depends. What's your logic for finding it manually? Is it the first occurrence of any numerical digit in the string? Is it at a known, fixed character index? Some other logic?

  • Convert UTF8 to 8-bit intellegently

    I need to convert a UTF8 file into 8-bit.
    Of course, there is no way to do this perfectly, but it would be nice if there were a function that would make an intelligent substitution for characters that aren't in ASCII. For instance, the curly double quote character would become a normal double quote.
    I tried using String.getBytes("ISO-8859-1"), but it just substitutes a question mark for all characters that aren't in ASCII.
    Anyone know of a function that can do this?

    I just wrote a very long-winded method for most of the common UTF-8 characters:
    //list of unexcepted frequent chars: �����������������������������������������������������������
    StrAllData = StrAllData.replaceAll("�", "\"");
    StrAllData = StrAllData.replaceAll("�", "\"");
    StrAllData = StrAllData.replaceAll("�", "\'");
    StrAllData = StrAllData.replaceAll("�", "\'");
    StrAllData = StrAllData.replaceAll("�", "A");
    StrAllData = StrAllData.replaceAll("�", "A");
    StrAllData = StrAllData.replaceAll("�", "A");
    StrAllData = StrAllData.replaceAll("�", "A");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "N");
    StrAllData = StrAllData.replaceAll("�", "O");
    StrAllData = StrAllData.replaceAll("�", "O");
    StrAllData = StrAllData.replaceAll("�", "O");
    StrAllData = StrAllData.replaceAll("�", "A");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "E");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "B");
    StrAllData = StrAllData.replaceAll("�", "I");
    StrAllData = StrAllData.replaceAll("�", "N");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "O");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "a");
    StrAllData = StrAllData.replaceAll("�", "c");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "i");
    StrAllData = StrAllData.replaceAll("�", "i");
    StrAllData = StrAllData.replaceAll("�", "i");
    StrAllData = StrAllData.replaceAll("�", "i");
    StrAllData = StrAllData.replaceAll("�", "n");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "u");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "u");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "u");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "u");
    StrAllData = StrAllData.replaceAll("�", "y");
    StrAllData = StrAllData.replaceAll("�", "Z");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "y");
    StrAllData = StrAllData.replaceAll("�", "z");
    StrAllData = StrAllData.replaceAll("�", "e");
    StrAllData = StrAllData.replaceAll("�", "S");
    StrAllData = StrAllData.replaceAll("�", "o");
    StrAllData = StrAllData.replaceAll("�", "s");
    StrAllData = StrAllData.replaceAll("�", "Y");

  • Convert UTF8 clob charset

    Thanks for your advice.
    Actually NLS support may not apply to my case because I need to specify different charset when I need the CLOB UTF8 data from my database. Maybe this time I need Big5, next time I need GBK. The thing is, I know there is a function to convert varchar2 from UTF8 to different charset.
    Question is: can I convert CLOB as well?
    Thanks for any advice!

    There are no SQL convert functions to handle CLOB conversion in Oracle8i. In 9i all SQL functions for VARCHAR2 will work with CLOBs too.
    Why do you need to do the conversion explicitly? If you set your client NLS_LANG character set to ZHT16BIG5 or ZHT16GBK, then these CLOBs should be converted to the client character set automatically.
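    Building on the statement that 9i SQL functions work on CLOBs too, a hedged sketch of an explicit CLOB conversion (column and table names are hypothetical; the first character-set argument is the destination, the second the source):
    SELECT CONVERT(doc_body, 'ZHT16BIG5', 'UTF8') FROM documents;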

  • Converting "UTF8" files to other encodings in Text Wrangler

    Hi and thanks for your help
    I have several text files I can easily use in different ways using Text Wrangler.
    I need to convert text files originally written with different encodings and I get errors.
    I can overcome this using this script:
    tell application "TextWrangler"
       tell document 1
           set line breaks to Unix
           set encoding to "Cyrillic (Windows)"
       end tell
    end tell
    However my attempts to create a loop always return this error
    TextWrangler got an error: An unexpected error occurred while processing an Apple Event (MacOS Error code: -10000)
    What is wrong with my script?
    set inputfolder to (choose folder)
    set theFiles to list folder inputfolder without invisibles
    tell application "TextWrangler"
    repeat with x from 1 to count of theFiles
              set thefile to item x of theFiles
              set inputfile to quoted form of (POSIX path of inputfolder & thefile)
      set line breaks to Unix
           set encoding to "Cyrillic (Windows)"
       end tell
    end repeat

    If it's not a typo, you have the end tell inside the end repeat. It should be the other way around.
          set encoding to "Cyrillic (Windows)"
       end tell
    end repeat
    end
    That shouldn't even compile.

  • Converting UTF8 to WE8MSWIN1252

    I understand that WE8MSWIN1252 is not a superset of UTF8 and that to use "ALTER DATABASE CHARACTER SET ...", the new character set has to be a superset of the current one.
    Though I ran the database scanner utility (csscan.exe) specifying the target character set to be WE8MSWIN1252, and it detected the current one to be UTF8. This utility reported only very few/minor data conversion issues, which can be easily ignored in my case.
    My question is: can I still execute the "ALTER DATABASE CHARACTER SET ..." command?
    Any help is greatly appreciated!
    Thanks,
    -B

    The appropriate forum for your question is the one below. Post it there as well:
    Forums Home » Oracle Technology Network (OTN) » Products » Database » Globalization and NLS
    Globalization Support
    Joel Pérez

  • Convert characterset WE8MSWIN1252 to UTF8

    Hi all
    I am using an Oracle 10g database. The character set is currently WE8MSWIN1252. I want to change my character set to UTF8. Is it possible?
    Can anyone please post me the steps involved.
    Very Urgent !!!!!!!
    Regds
    Nirmal

    Subject: Changing WE8ISO8859P1/ WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8
    Doc ID: Note:260192.1 Type: BULLETIN
    Last Revision Date: 24-JUL-2007 Status: PUBLISHED
    Changing the database character set to (AL32)UTF8
    =================================================
    When changing an Oracle Applications database:
    please see the following note for Oracle Applications databases:
    Note 124721.1 Migrating an Applications Installation to a New Character Set
    If you have any doubt, log an Oracle Applications TAR for assistance.
    It might be useful to read this note even when using Oracle Applications,
    since it explains what to do with "lossy" and "truncation" entries in the csscan output.
    Scope:
    You can't simply use "ALTER DATABASE CHARACTER SET" to go from WE8ISO8859P1 or
    WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8 because (AL32)UTF8 is not a
    binary superset of any of these character sets.
    You will run into ORA-12712 or ORA-12710 because the code points for the
    "extended ASCII" characters are different between these 3 character sets
    and (AL32)UTF8.
    This note will describe a method of still using a
    "ALTER DATABASE CHARACTER SET" in a limited way.
    Note that we strongly recommend using the SAME flow when doing a full
    export / import.
    The choice between using a FULL exp/imp and a PARTIAL exp/imp is made in point
    7)
    DO NOT USE THIS NOTE WITH ANY OTHER CHARACTERSETS
    WITHOUT CHECKING THIS WITH ORACLE SUPPORT
    THIS NOTE IS SPECIFIC TO CHANGING:
    FROM: WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252
    TO: AL32UTF8 or UTF8
    AL32UTF8 and UTF8 are both Unicode character sets in the oracle database.
    UTF8 encodes Unicode version 3.0 and will remain like that.
    AL32UTF8 is kept up to date with the Unicode standard and encodes the Unicode
    standards 3.0 (in database 9.0), 3.1 (database 9.2) or 3.2 (database 10g).
    For the purposes of this note we shall only use AL32UTF8 from here on forward,
    you can substitute that for UTF8 without any modifications.
    If you use 8i or lower clients please have a look at
    Note 237593.1 Problems connecting to AL32UTF8 databases from older versions (8i and lower)
    WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252 are the 3 main character sets that
    are used to store Western European or English/American data in.
    All standard ASCII characters that are used for English/American do not have to
    be converted into AL32UTF8 - they are the same in AL32UTF8. However, all other
    characters, like accented characters, the Euro sign, MS "smart quotes", etc.
    etc., have a different code point in AL32UTF8.
    That means that if you make extensive use of these types of characters the
    preferred way of changing to AL32UTF8 would be to export the entire database and
    import the data into a new AL32UTF8 database.
    However, if you mainly use standard ASCII characters and not a lot else (for
    example if you only store English text, maybe with some Euro signs or smart
    quotes here and there), then it could be a lot quicker to proceed with this
    method.
    Please DO read in any case before going to UTF8 this note:
    Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
    and consider to use CHAR semantics if on 9i or higher:
    Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
    It's best to change the tables and so on to CHAR semantics before the change
    to UTF8.
    This procedure is valid for Oracle 8i, 9i and 10g.
    Note:
    * If you are on 9i please make sure you are at least on Patch 9204, see
    Note 250802.1 Changing character set takes a very long time and uses lots of rollback space
    * if you have any function-based indexes on columns using CHAR length semantics
    then these have to be removed and re-created after the character set has
    been changed. Failure to do so will result in ORA-604 / ORA-2262 /ORA-904
    when the "alter database character set" statement is used in step 4.
    Actions to take:
    1) install the csscan tool.
    1A)For 10g use the csscan 2.x found in /bin, no need to install a newer version
    Goto 1C)
    1B)For 9.2 and lower:
    Please DO install version 1.2 or higher from TechNet for your version:
    http://technet.oracle.com/software/tech/globalization/content.html
    and install this.
    copy all scripts and executables found in the zip file you downloaded
    to your oracle_home overwriting the old versions.
    goto 1C).
    Note: do NOT use the CSSCAN of a 10g installation for 9i/8i!
    1C)Run csminst.sql using these commands and SQL statements:
    cd $ORACLE_HOME/rdbms/admin
    set oracle_sid=<your SID>
    sqlplus "sys as sysdba"
    SQL>set TERMOUT ON
    SQL>set ECHO ON
    SQL>spool csminst.log
    SQL> START csminst.sql
    Check the csminst.log for errors.
    If, when running CSSCAN, you get the error
    "Character set migrate utility schema not compatible."
    then either:
    1ca) you are starting the old executable; please overwrite all old files with the files
    from the newer version from TechNet (1.2 has more files than some older versions, that's normal);
    1cb) your PATH is wrong and you are not starting csscan from this ORACLE_HOME; or
    1cc) you have not run the csminst.sql from the newer version from TechNet.
    More info is in Note 123670.1 Use Scanner Utility before Altering the Database Character Set
    Please, make sure you use/install csscan version 1.2 .
    2) Check if you have no invalid code points in the current character set:
    Run csscan with the following syntax:
    csscan FULL=Y FROMCHAR=<existing database character set> TOCHAR=<existing database character set> LOG=WE8check CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    This will create 3 files :
    WE8check.out a log of the output of csscan
    WE8check.txt a Database Scan Summary Report
    WE8check.err contains the rowid's of the rows reported in WE8check.txt
    At this moment we are just checking that all data is stored correctly in the
    current character set. Because you've entered the TO and FROM character sets as
    the same you will not have any "Convertible" or "Truncation" data.
    If all the data in the database is stored correctly at the moment then there
    should only be "Changeless" data.
    If there is any "Lossy" data then those rows contain code points that are not
    currently stored correctly and they should be cleared up before you can continue
    with the steps in this note. Please see the following note for clearing up any
    "Lossy" data:
    Note 225938.1 Database Character Set Healthcheck
    Only if ALL data in WE8check.txt is reported as "Changeless" it is safe to
    proceed to point 3)
    NOTE:
    if you have a WE8ISO8859P1 database and Lossy data, then changing your WE8ISO8859P1 to
    WE8MSWIN1252 will most likely resolve the Lossy data.
    Why ? this is explained in
    Note 252352.1 Euro Symbol Turns up as Upside-Down Questionmark
    Do first a
    csscan FULL=Y FROMCHAR=WE8MSWIN1252 TOCHAR=WE8MSWIN1252 LOG=1252check CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    For 9i, 8i:
    Only if ALL data in 1252check.txt is reported as "Changeless" it is safe to
    proceed to the next point. If not, log a tar and provide the 3 generated files.
    Shutdown the listener and any application that connects locally to the database.
    There should be only ONE connection to the database during the WHOLE time and that's
    the sqlplus session where you do the change.
    2.1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
    If you are using RAC see
    Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
    2.2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
    SPOOL Nswitch.log
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    ALTER DATABASE CHARACTER SET WE8MSWIN1252;
    SHUTDOWN IMMEDIATE;
    STARTUP RESTRICT;
    SHUTDOWN;
    The extra restart/shutdown is necessary in Oracle8(i) because of an SGA
    initialization bug which is fixed in Oracle9i.
    -- an alter database typically takes only a few minutes or less;
    -- it depends on the number of columns in the database, not the amount of data
    2.3. Restore the parallel_server parameter in INIT.ORA, if necessary.
    2.4. STARTUP;
    Now go to point 3) of this note; of course your database is then WE8MSWIN1252, so
    you need to replace <existing database character set> with WE8MSWIN1252 from now on.
    For 10g and up:
    When using CSSCAN 2.x (10g database) you should see in 1252check.txt this:
    All character type data in the data dictionary remain the same in the new character set
    All character type application data remain the same in the new character set
    and
    The data dictionary can be safely migrated using the CSALTER script
    IF you see this then you need first to go to WE8MSWIN1252
    If not, log a tar and provide all 3 generated files.
    Shutdown the listener and any application that connects locally to the database.
    There should be only ONE connection to the database during the WHOLE time and that's
    the sqlplus session where you do the change.
    Then you do in sqlplus connected as "/ AS SYSDBA":
    -- check if you are using spfile
    sho parameter pfile
    -- if this "spfile" then you are using spfile
    -- in that case note the
    sho parameter job_queue_processes
    sho parameter aq_tm_processes
    -- (this is Bug 6005344 fixed in 11g )
    -- then do
    shutdown immediate
    startup restrict
    SPOOL Nswitch.log
    @@?\rdbms\admin\csalter.plb
    -- csalter will ask for confirmation - do not copy-paste all the actions in one go
    -- sample Csalter output:
    -- 3 rows created.
    -- This script will update the content of the Oracle Data Dictionary.
    -- Please ensure you have a full backup before initiating this procedure.
    -- Would you like to proceed (Y/N)?y
    -- old 6: if (UPPER('&conf') <> 'Y') then
    -- New 6: if (UPPER('y') <> 'Y') then
    -- Checking data validility...
    -- begin converting system objects
    -- PL/SQL procedure successfully completed.
    -- Alter the database character set...
    -- CSALTER operation completed, please restart database
    -- PL/SQL procedure successfully completed.
    -- Procedure dropped.
    -- if you are using spfile then you need to also
    -- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
    -- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
    shutdown
    startup
    and the 10g database will be WE8MSWIN1252
    Now go to point 3) of this note; of course your database is then WE8MSWIN1252, so
    you need to replace <existing database character set> with WE8MSWIN1252 from now on.
    3) Check which rows contain data for which the code point will change
    Run csscan with the following syntax:
    csscan FULL=Y FROMCHAR=<your database character set> TOCHAR=AL32UTF8 LOG=WE8TOUTF8 CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    This will create 3 files :
    WE8TOUTF8.out a log of the output of csscan
    WE8TOUTF8.txt a Database Scan Summary Report
    WE8TOUTF8.err contains the rowids of the rows reported in WE8TOUTF8.txt
    + You should have NO entries under Lossy, because they should have been filtered
    out in step 2), if you have data under Lossy then please redo step 2).
    + If you have any entries under Truncation then go to step 4)
    + If you only have entries for Convertible (and Changeless) then solve those in
    step 5).
    + If you have NO entries under Convertible, Truncation or Lossy,
    and all data is reported as "Changeless" then proceed to step 6).
    4) If you have Truncation entries.
    Whichever way you migrate from WE8(...) to AL32UTF8, you will always have to
    solve the entries under Truncation.
    Standard ASCII characters require 1 byte of storage space in WE8(...) and
    in AL32UTF8; however, other characters (like accented characters and the Euro
    sign) require only 1 byte of storage space in WE8(...), but they require 2 or
    more bytes of space in AL32UTF8.
    That means that the total amount of space needed to store a string can exceed
    the defined column size.
    For more information about this see:
    Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
    and
    "Truncation" data is always also "Convertible" data, which means that whatever
    else you do, these rows have to be exported before the character set is changed
    and re-imported after the character set has changed. If you proceed with that
    without dealing with the truncation issue then the import will fail on these
    columns because the size of the data exceeds the maximum size of the column.
    So these truncation issues will always require some work, there are a number of
    ways to deal with them:
    A) Update these rows in the source database so that they contain less data
    B) Update the table definition in the source database so that it can contain
    longer data. You can do this by either making the column larger, or by using
    CHAR length semantics instead of BYTE length semantics (only possible in
    Oracle9i).
    C) Pre-create the table before the import so that it can contain 'longer' data.
    Again you have a choice between simply making it larger, or switching from BYTE
    to CHAR length semantics.
    If you've chosen option A or B then please rerun csscan to make sure there is no
    Truncation data left. If that also means there is no Convertible data left then
    proceed to step 6), otherwise proceed to step 5).
    To know how much the data expands, simply check the csscan output.
    You can find that in the .err file as "Max Post Conversion Data Size".
    For example, check in the .txt file which table has "Truncation";
    let's assume you have there a row that says
    -- snip from WE8TOUTF8.txt
    [Distribution of Convertible, Truncated and Lossy Data by Table]
    USER.TABLE Convertible Truncation Lossy
    SCOTT.TESTUTF8 69 6 0
    -- snip from WE8TOUTF8.txt
    then look in the .err file for "TESTUTF8" until the
    "Max Post Conversion Data Size" is bigger then the column size for that table.
    User : SCOTT
    Table : TESTUTF8
    Column: ITEM_NAME
    Type : VARCHAR2(80)
    Number of Exceptions : 6
    Max Post Conversion Data Size: 81
    -> the max size after going to UTF8 will be 81 bytes for this column.
    5) If you have Convertible entries.
    This is where you have to make a choice whether or not you want to continue
    on this path or if it's simpler to do a complete export/import in the
    traditional way of changing character sets.
    All the data that is marked as Convertible needs to be exported and then
    re-imported after the character set has changed.
    6) check if you have functional indexes on CHAR based columns and purge the RECYCLEBIN.
    select OWNER, INDEX_NAME , INDEX_TYPE, TABLE_OWNER, TABLE_NAME, STATUS,
    FUNCIDX_STATUS from ALL_INDEXES where INDEX_TYPE not in
    ('NORMAL', 'BITMAP','IOT - TOP') and TABLE_NAME in (select unique
    (table_name) from dba_tab_columns where char_used ='C');
    If this returns rows then the change will fail with
    ORA-30556: functional index is defined on the column to be modified
    If you have functional indexes on CHAR-based columns you need to drop the
    indexes and recreate them after the change; note that a disable will not be enough.
    On 10g check, while connected as sysdba, whether there are objects in the recycle bin:
    SQL> show recyclebin
    If so, also do a PURGE DBA_RECYCLEBIN; otherwise you will receive an ORA-38301 during CSALTER.
    7) Choose on how to do the actual change
    you have 2 choices now:
    Option 1 - exp/imp the entire database and stop using the rest of this note.
    a. Export the current entire database (with NLS_LANG set to <your old
    database character set>)
    b. Create a new database in the AL32UTF8 character set
    c. Import all data into the new database (with NLS_LANG set to <your old database character set>)
    d. The conversion is complete, do not continue with this note.
    note that you do need to deal with truncation issues described in step 4), even
    if you use the export/import method.
    Option 2 - export only the convertible data and continue using this note.
    For 9i and lower:
    a. If you have "convertible" data for the sys objects SYS.METASTYLESHEET,
    SYS.RULE$ or SYS.JOB$ then follow the following note for those objects:
    Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
    make sure to combine the next steps in the example script given in that note.
    b. Export all the tables that csscan shows have convertible data
    (make sure that the character set part of the NLS_LANG is set to the current
    database character set during the export session)
    c. Truncate those tables
    d. Run csscan again to verify you only have "changeless" application data left
    e. If this now reports only Changeless data then proceed to step 8), otherwise
    do the same again for the rows you've missed out.
    For 10g and up:
    a. Export all the USER tables that csscan shows have convertible data
    (make sure that the character set part of the NLS_LANG is set to the current
    database character set during the export session)
    b. Fix any "convertible" in the SYS schema, note that the 10g way to change
    the characterset (= the CSALTER script) will deal with any CLOB data in the
    sys schema. All "no 9i only" fixes in
    Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
    should NOT be done in 10g
    c. Truncate the exported user tables.
    d. Run csscan again to verify you only have "changeless" application data left
    e. If this now reports only Changeless data then proceed to step 8), otherwise
    do the same again for the rows you've missed out.
    When using CSSCAN 2.x (10g database) you should see in WE8TOUTF8.txt this:
    The data dictionary can be safely migrated using the CSALTER script
    If you do NOT have this when working on a 10g system CSALTER will NOT work and this
    means you have missed something or not followed all steps in this note.
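    To make step 7, option 2 concrete, a hedged sketch for a single convertible table, reusing the SCOTT.TESTUTF8 example from step 4 (adjust user, password, file names and the NLS_LANG value to your environment):
    -- from the operating system, with NLS_LANG still set to the CURRENT database character set
    export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
    exp system/<password> tables=SCOTT.TESTUTF8 file=conv_testutf8.dmp log=conv_testutf8_exp.log
    -- then in sqlplus, empty the table so csscan no longer reports it as Convertible
    TRUNCATE TABLE scott.testutf8;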
    8) Perform the character set change:
    Perform a backup of the database.
    Check the backup.
    Double-check the backup.
    For 9i and below:
    Then use the "alter database" command, this changes the current database
    character set definition WITHOUT changing the actual stored data.
    Shutdown the listener and any application that connects locally to the database.
    There should be only ONE connection to the database during the WHOLE time and that's
    the sqlplus session where you do the change.
    1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
    If you are using RAC see
    Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
    2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
    SPOOL Nswitch.log
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8;
    SHUTDOWN IMMEDIATE;
    -- an alter database typically takes only a few minutes or less;
    -- it depends on the number of columns in the database, not the amount of data
    3. Restore the parallel_server parameter in INIT.ORA, if necessary.
    4. STARTUP;
    Without the INTERNAL_USE clause you get an ORA-12712: new character set must be a superset of old character set
    WARNING WARNING WARNING
    NEVER use "INTERNAL_USE" unless you have followed the guidelines in this note STEP BY STEP
    and you have a good idea of what you are doing.
    NEVER use "INTERNAL_USE" to "fix" display problems; follow Note 225938.1 instead.
    If you use the INTERNAL_USE clause on a database where there is data listed
    as convertible without exporting that data, then that data will be corrupted by
    changing the database character set!
    For 10g and up:
    Shutdown the listener and any application that connects locally to the database.
    There should be only ONE connection to the database during the WHOLE time and that's
    the sqlplus session where you do the change.
    Then you do in sqlplus connected as "/ AS SYSDBA":
    -- check if you are using spfile
    sho parameter pfile
    -- if this "spfile" then you are using spfile
    -- in that case note the
    sho parameter job_queue_processes
    sho parameter aq_tm_processes
    -- (this is Bug 6005344 fixed in 11g )
    -- then do
    shutdown
    startup restrict
    SPOOL Nswitch.log
    @@?\rdbms\admin\csalter.plb
    -- csalter will ask for confirmation - do not copy-paste all the actions in one go
    -- sample Csalter output:
    -- 3 rows created.
    -- This script will update the content of the Oracle Data Dictionary.
    -- Please ensure you have a full backup before initiating this procedure.
    -- Would you like to proceed (Y/N)?y
    -- old 6: if (UPPER('&conf') <> 'Y') then
    -- New 6: if (UPPER('y') <> 'Y') then
    -- Checking data validility...
    -- begin converting system objects
    -- PL/SQL procedure successfully completed.
    -- Alter the database character set...
    -- CSALTER operation completed, please restart database
    -- PL/SQL procedure successfully completed.
    -- Procedure dropped.
    -- if you are using spfile then you need to also
    -- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
    -- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
    shutdown
    startup
    and the 10g database will be AL32UTF8
    9) Reload the data pump packages after a change to AL32UTF8 / UTF8 in Oracle10
    If you use Oracle10 then the datapump packages need to be reloaded after
    a conversion to UTF8/AL32UTF8. In order to do this run the following 3
    scripts from $ORACLE_HOME/rdbms/admin in sqlplus connected as "/ AS SYSDBA":
    For 10.2.X:
    catnodp.sql
    catdph.sql
    catdpb.sql
    For 10.1.X:
    catnodp.sql
    catdp.sql
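    In sqlplus, connected as "/ AS SYSDBA", that boils down to (10.2 shown; use the 10.1 list above instead on that release):
    SQL> @?/rdbms/admin/catnodp.sql
    SQL> @?/rdbms/admin/catdph.sql
    SQL> @?/rdbms/admin/catdpb.sql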
    10) Reimporting the exported data:
    If you exported any data in step 5) then you now need to reimport that data.
    Make sure that the character set part of the NLS_LANG is still set to the
    original database character set during the import session (just as it was during
    the export session).
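    Continuing the hedged step 7 example: the database is now AL32UTF8, but NLS_LANG stays on the OLD character set so that imp converts the data on the way in:
    export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
    imp system/<password> file=conv_testutf8.dmp full=y log=conv_testutf8_imp.log
    Add ignore=y only if you pre-created the tables yourself, for example to switch them to CHAR semantics as discussed in step 4.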
    11) Verify the clients NLS_LANG:
    Make sure your clients are using the correct NLS_LANG setting:
    Regards,
    Chotu,
    Bangalore

  • Difference between WE8ISO8859P1, WE8ISO8859P15 on Oracle 9i and 10 Vs 11g

    Below is the SQL that lists the code positions where the WE8ISO8859P1 and WE8ISO8859P15 character sets differ.
    set serveroutput on
    declare
    i number;
    begin
    for i in 0..255 loop
    declare
    ch varchar2(1);
    begin
    ch := chr(i);
    if convert( ch, 'WE8ISO8859P1', 'WE8ISO8859P15') != ch
    then
    dbms_output.put_line('Difference- Decimal:'|| i ||' Hexa:'|| to_char(i,'XXXX'));
    end if;
    end;
    end loop;
    end;
    When I run this on Oracle 9i and 10g I am getting 40 characters, and on 11g I am only getting 7.
    Result on 9i/10g database
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.5.0 - Production
    SQL> set serveroutput on
    SQL> declare
    2 i number;
    3 begin
    4 for i in 0..255 loop
    5 declare
    6 ch varchar2(1);
    7 begin
    8 ch := chr(i);
    9 if convert( ch, 'WE8ISO8859P1', 'WE8ISO8859P15') != ch
    10 then
    11 dbms_output.put_line('Difference- Decimal:'|| i ||' Hexa:'|| to_cha
    r(i,'XXXX'));
    12 end if;
    13 end;
    14 end loop;
    15 end;
    16 /
    Difference- Decimal:128 Hexa: 80
    Difference- Decimal:129 Hexa: 81
    Difference- Decimal:130 Hexa: 82
    Difference- Decimal:131 Hexa: 83
    Difference- Decimal:132 Hexa: 84
    Difference- Decimal:133 Hexa: 85
    Difference- Decimal:134 Hexa: 86
    Difference- Decimal:135 Hexa: 87
    Difference- Decimal:136 Hexa: 88
    Difference- Decimal:137 Hexa: 89
    Difference- Decimal:138 Hexa: 8A
    Difference- Decimal:139 Hexa: 8B
    Difference- Decimal:140 Hexa: 8C
    Difference- Decimal:141 Hexa: 8D
    Difference- Decimal:142 Hexa: 8E
    Difference- Decimal:143 Hexa: 8F
    Difference- Decimal:144 Hexa: 90
    Difference- Decimal:145 Hexa: 91
    Difference- Decimal:146 Hexa: 92
    Difference- Decimal:147 Hexa: 93
    Difference- Decimal:148 Hexa: 94
    Difference- Decimal:149 Hexa: 95
    Difference- Decimal:150 Hexa: 96
    Difference- Decimal:151 Hexa: 97
    Difference- Decimal:152 Hexa: 98
    Difference- Decimal:153 Hexa: 99
    Difference- Decimal:154 Hexa: 9A
    Difference- Decimal:155 Hexa: 9B
    Difference- Decimal:156 Hexa: 9C
    Difference- Decimal:157 Hexa: 9D
    Difference- Decimal:158 Hexa: 9E
    Difference- Decimal:159 Hexa: 9F
    Difference- Decimal:164 Hexa: A4
    Difference- Decimal:166 Hexa: A6
    Difference- Decimal:168 Hexa: A8
    Difference- Decimal:180 Hexa: B4
    Difference- Decimal:184 Hexa: B8
    Difference- Decimal:188 Hexa: BC
    Difference- Decimal:189 Hexa: BD
    Difference- Decimal:190 Hexa: BE
    PL/SQL procedure successfully completed.
    Results on 11G database.
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 21 12:07:40 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Enter password:
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from nls_database_parameters
    2 where parameter like '%CHARACTERSET';
    PARAMETER VALUE
    NLS_CHARACTERSET WE8ISO8859P15
    NLS_NCHAR_CHARACTERSET AL16UTF16
    SQL> set serveroutput on
    SQL> declare
    2 i number;
    3 begin
    4 for i in 0..255 loop
    5 declare
    6 ch varchar2(1);
    7 begin
    8 ch := chr(i);
    9 if convert( ch, 'WE8ISO8859P1', 'WE8ISO8859P15') != ch
    10 then
    11 dbms_output.put_line('Difference- Decimal:'|| i ||' Hexa:'|| to_cha
    r(i,'XXXX'));
    12 end if;
    13 end;
    14 end loop;
    15 end;
    16 /
    Difference- Decimal:164 Hexa: A4
    Difference- Decimal:166 Hexa: A6
    Difference- Decimal:168 Hexa: A8
    Difference- Decimal:180 Hexa: B4
    Difference- Decimal:184 Hexa: B8
    Difference- Decimal:188 Hexa: BC
    Difference- Decimal:189 Hexa: BD
    Difference- Decimal:190 Hexa: BE
    PL/SQL procedure successfully completed.
    SQL>

    There was a change in the definition for codes 0x80-0x9f in WE8ISO8859P1 and WE8ISO8859P15. In 9i those codes map to the Unicode default replacement character U+FFFD, which means they are considered undefined. In 11g, they are mapped to the corresponding Unicode control codes U+0081 - U+009F. CONVERT in your test changes each undefined code into the reversed question mark, which differs from the original and gets reported.
    -- Sergiusz

  • How to convert back from UTF8 to ISO-8859-1 encoding?

    hi,
    I have a bunch of XML files which were wrongly encoded, and we lost all our accent characters.
    i.e. é became Ã©
    so how can I recover my XML files using PowerShell?
    So I want to change all the UTF8-encoded characters back to the original ISO accent characters:
    Ã© -> é
    I tried this:
    $iso = [System.Text.Encoding]::GetEncoding("iso-8859-1")
    $utf8 = [System.text.Encoding]::UTF8
    $utfBytes = $utf8.GetBytes("é")
    $isoBytes = [System.text.Encoding]::Convert($utf8, $iso, $utfBytes)
    $iso.GetString($isoBytes)
    but it doesn't work.
    so is there a way to do this in powershell?
    I have to scan hundreds of files...
    thanks.

    You can't. UTF-8 strips all of the information from the characters, so you cannot know which characters are which. If you know which characters you need to fix (it requires knowing the spelling of the words) you could possibly develop a matrix of
    replacements. There is no simple one-line method.
    ¯\_(ツ)_/¯

  • 9.2 convert ASCII to UTF8 welsh language

    hello
    I have a 9.2 ASCII database that I can't convert to UTF8 yet.
    1. For an output (util file) I need to convert an ASCII text string to UTF-8 on export.
    2. I have two characters that are not supported by ASCII, ŵ and ŷ; the users will represent these by typing w^ and y^.
    I tried using UNISTR but none of the characters below are correctly converted:
    SELECT UNISTR(ASCIISTR('剔搙')) FROM DUAL;
    How would you recommend converting an ASCII / Latin-1 extended string to UTF-8 for export?
    Is it sensible to use the character replacement plan above for ŵ and ŷ?
    thanks
    james
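    For the two Welsh characters themselves, a small hedged sketch: they are U+0175 (ŵ) and U+0177 (ŷ), so UNISTR can produce them directly, and the typed markers could be swapped in with REPLACE (column and table names are hypothetical; the result is NCHAR data, so it only survives if it ends up in a Unicode column or database):
    SELECT UNISTR('\0175') AS w_circumflex, UNISTR('\0177') AS y_circumflex FROM dual;
    SELECT REPLACE(REPLACE(TO_NCHAR(txt), 'w^', UNISTR('\0175')), 'y^', UNISTR('\0177')) FROM my_table;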

    Probably the unconverted characters are not contained in the first charset.
    If this is right
    http://en.wikipedia.org/wiki/Windows-1252
    ...there is no conversion for values outside the first charset.
    But I may have made a mistake.
    Are you sure Â, â, î, Ê, ê and ô are in the 1252 charset?
    I am not able to see whether there is a difference between the similar chars in the table on Wikipedia and the ones you posted; that is why I asked.
    Anyway, this output seems to confirm my indication.
    Processing ...
    SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','WE8MSWIN1252','UTF8')
    ¨¨¨¨                                    
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','UTF8')
    ¨âêô                         
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','WE8MSWIN1252')
    ¶¨Ç?¶îÇ?¶¨Ç?¶ô                          
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','WE8PC858','UTF8')
    1 row(s) retrieved
    Processing ...
    SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL
    Query finished, retrieving results...
    CONVERT('¨âêô','UTF8','WE8PC858')
    ƒ??Ç?¶¯Ç?ƒ??ƒ??Ç?                   
    1 row(s) retrieved
    Some characters are not supported on my DB, so try these queries on yours to prove it.
    SELECT convert ('Ââî€Êêô','WE8MSWIN1252','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','WE8MSWIN1252') FROM DUAL;
    SELECT convert ('Ââî€Êêô','WE8PC858','UTF8') FROM DUAL;
    SELECT convert ('Ââî€Êêô','UTF8','WE8PC858') FROM DUAL;
    Bye Alessandro

  • Converting charaterset.

    Hi All,
    My DB is in 11.1.0.7.0 and NLS_Characterset is AL32UTF8.
    One of my table columns is a VARCHAR2 and it contains data in a different character set (� ). My BO reports are failing because of this character set (the BO character set is AL32UTF8).
    Using the convert function I converted one set/pattern of data to the original character set. [ Select column1, convert(column2, 'AL32UTF8', 'WE8ISO8859P1') FROM xyz ]
    I also created a new table with the same field as NVARCHAR2 and inserted the data from the converted records. From the new table the BO reports are fine.
    Now I want to move the complete table data (around 20 GB; not sure how many rows, or which character sets the non-UTF8 characters come from) to the new table with NVARCHAR2. The problem is that since it is a worldwide application there are many character sets involved in the database. Converting each one like the previous query won't be possible or accurate.
    Is there any way I can perform this task?
    1. Identify all the rows that are not in UTF8 format.
    2. Convert them to the correct character set and move them to a new table with NVARCHAR2, which by default will take care of this (I hope so). Please note that I can't remove the characters that are in a non-UTF8 format.
    So it looks like data from different character sets is stored in UTF8 format and we can't really see the exact character set since it is in AL32UTF8. We need to convert back to the original character set and save it in NVARCHAR2 format.
    Please share your suggestions. Thanks in advance.
    Regards,
    Anto.

    Hi Anto,
    I would doubt that, as NVARCHAR2 columns are in the NCHAR characterset.
    The original idea was: you can use characters of the database character set in both column values and identifiers, so you would use a more or less restrictive character set for that.
    For the NCHAR character set, used in column values only, you would typically use a Unicode character set. You had to designate a literal as NCHAR by prefixing the literal with N.
    Now that Oracle allows AL32UTF8 for the normal character set, about the only NCHAR character set is AL16UTF16. The N prefix requirement is gone. If the change to NVARCHAR2 would force you to change anything in your code, I wouldn't do it. You should be able to resolve this using AL32UTF8.
    I once, several years ago, read that Oracle was going to desupport single-byte character sets in a future major release. You are best off following Oracle's recommendation, which is to use AL32UTF8 as the database character set when you need Unicode.
    Sybrand Bakker
    Senior Oracle DBA

  • Euro and Poundsterling symbol problem even with WE8ISO8859P15

    Dear all,
    We are using Primavera Enterprise (project management software) running on Oracle 8.1.7.0.0, and the euro and pound symbols are not displayed correctly (# and ¤ instead).
    Our configuration is :
    AIX 4.3.3.0
    NLS_LANG= WE8ISO8859P1 (configured in .profile for oracle user account)
    Originally the database was created with the NLS_CHARACTERSET WE8ISO8859P1. As we discovered that this character set does not support the euro, what we have done then is:
    - export it (.dmp WE8ISO8859P1)
    - create a new database instance with WE8ISO8859P15 (hosted in the above AIX server)
    - import the .dmp above to a new created database
    - the log file generated saying that :
    1. import done using WE8ISO8859P1 and WE8ISO8859P15
    2. the server characterset is WE8ISO8859P15, character conversion possible
    3. imported from export file with the WE8ISO8859P1
    - we queried the database with select * from sys.props$ where name like 'NLS%'; and we are sure that the database nls_characterset is WE8ISO8859P15
    - we then updated the euro and pound symbols from the Primavera application
    - we close the Primavera application and try to reopen it
    - the euro and pounds symbol are back to as before (ie. # and ¤)
    Note : as the above server is a production server, we can not modify the NLS_LANG of the machine to WE8ISO8859P15
    Please advice.

    Barry & all, sorry, I forgot to mention the client/server architecture used with Primavera Enterprise.
    The primavera application is installed in windows 2000/xp
    - we tried several NLS in windows regedit of Oracle home e.g. NLS_LANG=FRENCH_FRANCE.WE8MSWIN1252 and NLS_LANG=FRENCH_FRANCE.WE8ISO8859P15)
    - once we imported the database, we updated the euro and pound symbols from the Primavera application; the euro symbol is displayed correctly in the application
    - we close the Primavera application and try to reopen it
    - the euro and pounds symbol are back to as before (ie. # and ¤)
    - the same thing when we query the database directly, it shows the same signs (ie. # and ¤)
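    One hedged way to tell whether the euro is stored wrongly or merely displayed wrongly is to look at the raw bytes instead of the rendered characters (table and column names are hypothetical):
    SELECT DUMP(cost_label, 1016) FROM project_costs;
    DUMP with format 1016 shows the bytes in hex together with the column's character set; in a WE8ISO8859P15 database a correctly stored euro sign appears as byte a4. If the byte is right but the screen still shows ¤ or #, the problem is on the client (NLS_LANG) side rather than in the database.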
