AL32UTF8
CMS install docs (http://download-west.oracle.com/docs/html/B13614_01/database.htm#i634543) say:
Select Unicode (UTF8) as the database character set to enable full multi-language functionality in Oracle CM SDK. Specifying a different database character set can limit Oracle CM SDK functionality.
Our corporate standard is to use AL32UTF8, since it supports Unicode 3.2, whereas UTF8 only supports Unicode 3.0. This is important for some of our customers.
Will this "limit Oracle CM SDK functionality" or is this part of the docs incomplete and AL32UTF8 should be fine as well?
Oracle strongly recommends using this character set; otherwise you will get a warning during the RCU run.
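The practical difference shows up with supplementary characters (those added in Unicode 3.1 and later, beyond U+FFFF). As a rough sketch, not Oracle code: Oracle's legacy UTF8 character set behaves like CESU-8, encoding each UTF-16 surrogate half separately (6 bytes per supplementary character), while AL32UTF8 is standard UTF-8 (4 bytes). A minimal Python illustration, where `cesu8_encode` is a hypothetical helper written here only to mimic the legacy behavior:

```python
def cesu8_encode(text: str) -> bytes:
    """Encode roughly like Oracle's legacy UTF8 (CESU-8): supplementary
    characters are split into UTF-16 surrogates, and each surrogate is
    encoded as its own 3-byte sequence."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp <= 0xFFFF:
            out += ch.encode("utf-8")       # BMP chars: same as UTF-8
        else:
            cp -= 0x10000
            hi = 0xD800 + (cp >> 10)        # high surrogate
            lo = 0xDC00 + (cp & 0x3FF)      # low surrogate
            for s in (hi, lo):
                out += bytes([0xE0 | (s >> 12),
                              0x80 | ((s >> 6) & 0x3F),
                              0x80 | (s & 0x3F)])
    return bytes(out)

ch = "\U00010400"                 # a supplementary-plane character
print(len(ch.encode("utf-8")))    # 4  (AL32UTF8-style)
print(len(cesu8_encode(ch)))      # 6  (legacy UTF8 / CESU-8-style)
```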
Similar Messages
-
ORA-12712 error while changing nls character set to AL32UTF8
Hi,
It is strongly recommended to use the database character set AL32UTF8 whenever a database is going to be used with our XML capabilities. The database character set of the installed DB is WE8MSWIN1252. To make use of XML DB features, I need to change it to AL32UTF8. But when I try to do this, I get ORA-12712: new character set must be a superset of old character set. Is there a way to solve this issue?
Thanks in advance,
Divya.
Hi,
A change from WE8MSWIN1252 to AL32UTF8 is not directly possible, because AL32UTF8 is not a binary superset of WE8MSWIN1252.
There are two options:
- use a full export and import
- use ALTER DATABASE in a restricted way
Which method you can choose depends on the characters in the database: if they are only ASCII, the second option can work; in other cases the first one is needed.
It is all described in the Support Note 260192.1, "Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)". Get it from the support/metalink site.
You can also read the chapter about this in the Globalization Guide: http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430
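The ASCII-only condition Herald mentions can be sketched as a quick client-side check. This is an illustrative Python fragment, not an Oracle tool; `needs_conversion` and `sample_rows` are hypothetical names standing in for data fetched from the database, and `cp1252` is Python's closest match for WE8MSWIN1252:

```python
def needs_conversion(values) -> bool:
    """True if any value contains a non-ASCII byte under WE8MSWIN1252.
    Pure 7-bit ASCII is byte-identical in AL32UTF8, so only such data
    can take the restricted ALTER/CSALTER route."""
    return any(any(b >= 0x80 for b in v.encode("cp1252")) for v in values)

sample_rows = ["plain ascii", "café"]        # hypothetical fetched data
print(needs_conversion(["plain ascii"]))     # False: bytes unchanged
print(needs_conversion(sample_rows))         # True: 'é' is 0xE9 in CP1252
```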
Herald ten Dam
http://htendam.wordpress.com -
CSSCAN in 11g - Characterset not changing from WE8MSWIN1252 to AL32UTF8
All,
We have installed an 11g database on a Linux box, and after that we wanted to change the character set from the default WE8MSWIN1252 to AL32UTF8.
We took the CSALTER approach and ran the CSSCAN utility. Going through the csscan.txt file generated by CSSCAN, we found that there is no lossy data, but convertible data was found in the data dictionary. Below is the output from csscan.txt:
[Scan Summary]
All character type data in the data dictionary are convertible to the new character set
All character type application data are convertible to the new character set
Database Scan Summary Report
Time Started : 2012-10-17 21:42:17
Time Completed: 2012-10-17 21:42:47
Process ID Time Started Time Completed
1 2012-10-17 21:42:18 2012-10-17 21:42:46
2 2012-10-17 21:42:18 2012-10-17 21:42:46
3 2012-10-17 21:42:18 2012-10-17 21:42:46
[Database Size]
Tablespace Used Free Total Expansion
SYSTEM 709.75M 256.00K 710.00M 2.42M
SYSAUX 645.63M 34.38M 680.00M 12.52M
UNDOTBS1 13.13M 16.88M 30.00M .00K
TEMP .00K .00K .00K .00K
USERS 1.31M 3.69M 5.00M .00K
HYPE_DATA 1,024.00K 19,999.00M 20,000.00M .00K
HYPE_INDX 1,024.00K 19,999.00M 20,000.00M .00K
Total 1,371.81M 40,053.19M 41,425.00M 14.94M
The size of the largest CLOB is 1625114 bytes
[Database Scan Parameters]
Parameter Value
CSSCAN Version v2.1
Instance Name dvhp081
Database Version 11.2.0.3.0
Scan type Full database
Scan CHAR data? YES
Database character set WE8MSWIN1252
FROMCHAR WE8MSWIN1252
TOCHAR al32utf8
Scan NCHAR data? NO
Array fetch buffer size 10240
Number of processes 3
Capture convertible data? NO
[Scan Summary]
All character type data in the data dictionary are convertible to the new character set
All character type application data are convertible to the new character set
[Data Dictionary Conversion Summary]
Data Dictionary Tables:
Datatype Changeless Convertible Truncation Lossy
VARCHAR2 5,408,302 0 0 0
CHAR 4,261 0 0 0
LONG 249,018 0 0 0
CLOB 67,652 3,794 0 0
VARRAY 49,807 0 0 0
Total 5,779,040 3,794 0 0
Total in percentage 99.934% 0.066% 0.000% 0.000%
The data dictionary can be safely migrated using the CSALTER script
XML CSX Dictionary Tables:
Datatype Changeless Convertible Truncation Lossy
VARCHAR2 702 0 0 0
CHAR 0 0 0 0
LONG 0 0 0 0
CLOB 0 0 0 0
VARRAY 0 0 0 0
Total 702 0 0 0
Total in percentage 100.000% 0.000% 0.000% 0.000%
[Application Data Conversion Summary]
Datatype Changeless Convertible Truncation Lossy
VARCHAR2 2,550,581 0 0 0
CHAR 0 0 0 0
LONG 0 0 0 0
CLOB 22,187 8,287 0 0
VARRAY 0 0 0 0
Total 2,572,768 8,287 0 0
Total in percentage 99.679% 0.321% 0.000% 0.000%
[Distribution of Convertible, Truncated and Lossy Data by Table]
Data Dictionary Tables:
USER.TABLE Convertible Truncation Lossy
MDSYS.SDO_COORD_OP_PARAM_VALS 200 0 0
MDSYS.SDO_GEOR_XMLSCHEMA_TABLE 1 0 0
MDSYS.SDO_STYLES_TABLE 78 0 0
MDSYS.SDO_XML_SCHEMAS 5 0 0
SYS.METASTYLESHEET 179 0 0
SYS.RULE$ 1 0 0
SYS.SCHEDULER$_EVENT_LOG 356 0 0
SYS.WRH$_SQLTEXT 537 0 0
SYS.WRH$_SQL_PLAN 514 0 0
SYS.WRI$_ADV_DIRECTIVE_META 5 0 0
SYS.WRI$_ADV_OBJECTS 28 0 0
SYS.WRI$_ADV_SQLT_PLANS 2 0 0
SYS.WRI$_ADV_SQLT_PLAN_STATS 2 0 0
SYS.WRI$_DBU_FEATURE_METADATA 193 0 0
SYS.WRI$_DBU_FEATURE_USAGE 9 0 0
SYS.WRI$_DBU_HWM_METADATA 21 0 0
SYS.WRI$_REPT_FILES 27 0 0
SYSMAN.MGMT_IP_ELEM_DEFAULT_PARAMS 130 0 0
SYSMAN.MGMT_IP_REPORT_ELEM_PARAMS 1,475 0 0
SYSMAN.MGMT_IP_SQL_STATEMENTS 31 0 0
XML CSX Dictionary Tables:
USER.TABLE Convertible Truncation Lossy
Application Data:
USER.TABLE Convertible Truncation Lossy
APEX_030200.WWV_FLOW_BANNER 10 0 0
APEX_030200.WWV_FLOW_BUTTON_TEMPLATES 12 0 0
APEX_030200.WWV_FLOW_CUSTOM_AUTH_SETUPS 19 0 0
APEX_030200.WWV_FLOW_FLASH_CHART_SERIES 5 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES 298 0 0
APEX_030200.WWV_FLOW_PAGE_GENERIC_ATTR 44 0 0
APEX_030200.WWV_FLOW_PAGE_PLUGS 3,240 0 0
APEX_030200.WWV_FLOW_PAGE_PLUG_TEMPLATES 254 0 0
APEX_030200.WWV_FLOW_PROCESSING 45 0 0
APEX_030200.WWV_FLOW_ROW_TEMPLATES 66 0 0
APEX_030200.WWV_FLOW_SHORTCUTS 39 0 0
APEX_030200.WWV_FLOW_STEPS 1,795 0 0
APEX_030200.WWV_FLOW_STEP_PROCESSING 2,238 0 0
APEX_030200.WWV_FLOW_TEMPLATES 192 0 0
APEX_030200.WWV_FLOW_WORKSHEETS 30 0 0
[Distribution of Convertible, Truncated and Lossy Data by Column]
Data Dictionary Tables:
USER.TABLE|COLUMN Convertible Truncation Lossy
MDSYS.SDO_COORD_OP_PARAM_VALS|PARAM_VALUE_FILE 200 0 0
MDSYS.SDO_GEOR_XMLSCHEMA_TABLE|XMLSCHEMA 1 0 0
MDSYS.SDO_STYLES_TABLE|DEFINITION 78 0 0
MDSYS.SDO_XML_SCHEMAS|XMLSCHEMA 5 0 0
SYS.METASTYLESHEET|STYLESHEET 179 0 0
SYS.RULE$|CONDITION 1 0 0
SYS.SCHEDULER$_EVENT_LOG|ADDITIONAL_INFO 356 0 0
SYS.WRH$_SQLTEXT|SQL_TEXT 537 0 0
SYS.WRH$_SQL_PLAN|OTHER_XML 514 0 0
SYS.WRI$_ADV_DIRECTIVE_META|DATA 5 0 0
SYS.WRI$_ADV_OBJECTS|ATTR4 28 0 0
SYS.WRI$_ADV_SQLT_PLANS|OTHER_XML 2 0 0
SYS.WRI$_ADV_SQLT_PLAN_STATS|OTHER 2 0 0
SYS.WRI$_DBU_FEATURE_METADATA|INST_CHK_LOGIC 22 0 0
SYS.WRI$_DBU_FEATURE_METADATA|USG_DET_LOGIC 171 0 0
SYS.WRI$_DBU_FEATURE_USAGE|FEATURE_INFO 9 0 0
SYS.WRI$_DBU_HWM_METADATA|LOGIC 21 0 0
SYS.WRI$_REPT_FILES|SYS_NC00005$ 27 0 0
SYSMAN.MGMT_IP_ELEM_DEFAULT_PARAMS|VALUE 130 0 0
SYSMAN.MGMT_IP_REPORT_ELEM_PARAMS|VALUE 1,475 0 0
SYSMAN.MGMT_IP_SQL_STATEMENTS|SQL_STATEMENT 31 0 0
XML CSX Dictionary Tables:
USER.TABLE|COLUMN Convertible Truncation Lossy
Application Data:
USER.TABLE|COLUMN Convertible Truncation Lossy
APEX_030200.WWV_FLOW_BANNER|BANNER 10 0 0
APEX_030200.WWV_FLOW_BUTTON_TEMPLATES|TEMPLATE 12 0 0
APEX_030200.WWV_FLOW_CUSTOM_AUTH_SETUPS|AUTH_FUNC 8 0 0
APEX_030200.WWV_FLOW_CUSTOM_AUTH_SETUPS|PAGE_SENT 10 0 0
APEX_030200.WWV_FLOW_CUSTOM_AUTH_SETUPS|POST_AUTH 1 0 0
APEX_030200.WWV_FLOW_FLASH_CHART_SERIES|SERIES_QU 5 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|ITEM_TEMPLATE 20 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|ITEM_TEMPLATE 20 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|LIST_TEMPLATE 105 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|LIST_TEMPLATE 105 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|SUB_LIST_ITEM 12 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|SUB_LIST_ITEM 12 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|SUB_TEMPLATE_ 12 0 0
APEX_030200.WWV_FLOW_LIST_TEMPLATES|SUB_TEMPLATE_ 12 0 0
APEX_030200.WWV_FLOW_PAGE_GENERIC_ATTR|ATTRIBUTE_ 44 0 0
APEX_030200.WWV_FLOW_PAGE_PLUGS|PLUG_SOURCE 3,240 0 0
APEX_030200.WWV_FLOW_PAGE_PLUG_TEMPLATES|TEMPLATE 166 0 0
APEX_030200.WWV_FLOW_PAGE_PLUG_TEMPLATES|TEMPLATE 88 0 0
APEX_030200.WWV_FLOW_PROCESSING|PROCESS_SQL_CLOB 45 0 0
APEX_030200.WWV_FLOW_ROW_TEMPLATES|ROW_TEMPLATE1 54 0 0
APEX_030200.WWV_FLOW_ROW_TEMPLATES|ROW_TEMPLATE2 10 0 0
APEX_030200.WWV_FLOW_ROW_TEMPLATES|ROW_TEMPLATE3 2 0 0
APEX_030200.WWV_FLOW_SHORTCUTS|SHORTCUT 39 0 0
APEX_030200.WWV_FLOW_STEPS|HELP_TEXT 1,513 0 0
APEX_030200.WWV_FLOW_STEPS|HTML_PAGE_HEADER 282 0 0
APEX_030200.WWV_FLOW_STEP_PROCESSING|PROCESS_SQL_ 2,238 0 0
APEX_030200.WWV_FLOW_TEMPLATES|BOX 64 0 0
APEX_030200.WWV_FLOW_TEMPLATES|FOOTER_TEMPLATE 64 0 0
APEX_030200.WWV_FLOW_TEMPLATES|HEADER_TEMPLATE 64 0 0
APEX_030200.WWV_FLOW_WORKSHEETS|SQL_QUERY 30 0 0
[Indexes to be Rebuilt]
USER.INDEX on USER.TABLE(COLUMN)
APEX_030200.WWV_FLOW_WORKSHEETS_UNQ_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(SYS_NC00078$)
APEX_030200.WWV_FLOW_WORKSHEETS_UNQ_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(SYS_NC00079$)
APEX_030200.WWV_FLOW_WORKSHEETS_UNQ_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(SYS_NC00080$)
APEX_030200.WWV_FLOW_WORKSHEETS_UNQ_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(SYS_NC00081$)
APEX_030200.WWV_FLOW_WS_UNQ_ALIAS_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(SYS_NC00082$)
APEX_030200.WWV_FLOW_WS_UNQ_ALIAS_IDX on APEX_030200.WWV_FLOW_WORKSHEETS(ALIAS)
----------------------------------------------------------------------------------
We followed the Metalink document "Solving Convertible or Lossy data in Data Dictionary objects reported by Csscan when changing the NLS_CHARACTERSET [ID 258904.1]" and concluded we were good to go, since convertible data was found only in the data dictionary, and only in CLOB data. But while running csalter.plb, CSALTER exited without changing the character set. We ran the following query given in that document and it returned no rows, which again suggested there was no problem and we could go ahead with running CSALTER.
SELECT DISTINCT z.owner_name
|| '.'
|| z.table_name
|| '('
|| z.column_name
|| ') - '
|| z.column_type
|| ' - '
|| z.error_type
|| ' ' NotHandledDataDictColumns
FROM csmig.csmv$errors z
WHERE z.owner_name IN
(SELECT DISTINCT username FROM csmig.csm$dictusers
) minus
SELECT DISTINCT z.owner_name
|| '.'
|| z.table_name
|| '('
|| z.column_name
|| ') - '
|| z.column_type
|| ' - '
|| z.error_type
|| ' ' DataDictConvCLob
FROM csmig.csmv$errors z
WHERE z.error_type ='CONVERTIBLE'
AND z.column_type = 'CLOB'
AND z.owner_name IN
(SELECT DISTINCT username FROM csmig.csm$dictusers
)
ORDER BY NotHandledDataDictColumns
/
Sorry to have made the thread so big, but to give a complete picture of the issue I pasted the csscan contents. I request the pros to help us with this issue.
You have convertible data in the application tables. CLOB or not, such data prevents csalter.plb from changing the character set.
You are on 11.2.0.3, so use the DMU (http://www.oracle.com/technetwork/products/globalization/dmu/overview/index.html). It can cope with such data.
-- Sergiusz -
Field length misreported in application with AL32UTF8 database
Hi all,
I have the following problem with two different applications running against an Oracle 10.2 database with the AL32UTF8 character set. The tables in the database are created using NLS_LENGTH_SEMANTICS=CHAR. The problem is that in the applications, the lengths of the fields are misreported as being 3 or 4 times longer, so users can input too much data which then cannot be stored. For example, a field created as VARCHAR2(20 CHAR) is reported in the application as either 60 or 80 characters, depending on where it is run from.
I suspect this is because both applications don't implement proper support for length semantics. One is a Win32 application written in Delphi and using the SqlDirect interface to OCI. The other is a Microsoft .NET framework 1.1 application using the Microsoft System.Data.OracleClient interface to OCI. The application can read and write data including international characters but the lengths are incorrect.
I don't know much about the low-level OCI details but when scanning the documentation I found some information about OCI_ATTR_CHAR_SIZE which is what I suppose the application should use to get the character length of a field.
I made a quick test program using the Oracle Data Provider for .NET and with that it seems to report the column lengths correctly, but changing to that for the .NET application is not something that can be done quickly.
I would be very interested to know if anyone else has encountered similar issues, and if there was a solution that did not involve changing the application, i.e. is there some parameter or something that could be changed so that the application reports the size in characters?
If this is indeed an application/driver bug as I suspect, then I will take the question to the corresponding supplier.
Thank you very much in advance for any input you can give on this issue
Robert
The third-party drivers, like those you mentioned, are usually built on OCI and, if they are not aware of character semantics, they always use the OCI_ATTR_DATA_SIZE attribute to retrieve the column length. This attribute is always in bytes and there is no option to change this.
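A back-of-the-envelope sketch of the 60/80 numbers the poster saw, assuming the byte limit is the character length times the maximum bytes per character of the charset (3 for legacy UTF8, 4 for AL32UTF8); `reported_length` is a hypothetical helper for illustration, not an OCI call:

```python
# Maximum bytes per character for the two Unicode charsets discussed here.
MAX_BYTES_PER_CHAR = {"UTF8": 3, "AL32UTF8": 4}

def reported_length(char_length: int, charset: str) -> int:
    """Length a driver using only byte semantics (OCI_ATTR_DATA_SIZE-style)
    would report for a VARCHAR2(n CHAR) column."""
    return char_length * MAX_BYTES_PER_CHAR[charset]

print(reported_length(20, "UTF8"))      # 60 - one tool's report
print(reported_length(20, "AL32UTF8"))  # 80 - the other tool's report
```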
-- Sergiusz -
Hello All,
I have Oracle 10gR2 and I want to get support for the Portuguese character set. For that, I suppose AL32UTF8 is recommended, but when I try to modify the character set I get this error.
SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
VALUE
WE8ISO8859P1
SQL> alter database character set AL32UTF8;
alter database character set AL32UTF8
ERROR at line 1:
ORA-12712: new character set must be a superset of old character set
what should i do now? any idea?
thanks
Here it is:
SQL> select news_detail from news_tbl;
NEWS_DETAIL
<p>A Directora nacional-Adjunta do Patrim?nio ligado ao Minist?rio das Finan?as
de Mo?ambique, Albertina Fruquia, esteve nesta quarta-feira, do dia 22 de Novemb
ro, na Secretaria de Log?stica e Tecnologia de Informa??o ( SLTI )para conhecer
o sistema de compras do Governo Federal Brasileiro e verificar a possibilidade d
e estabelecer acordos de coopera??o nessa ?rea.</p>
<p> </p>
<p>Na opini?o da Directora Mo?ambicana, o Brasil tem uma experi?ncia importante
em compras p?blicas que pode colaborar com Mo?ambique nesta ?rea. Ela lembrou qu
e o pa?s Africano est? imlpementando um novo regulamento de compras que constitu
i o preg?o presencial, modalidade regulamentada no Brasil desde 2000.</p>
<p> </p>
You can see that even in SQL*Plus the data does not show the Portuguese characters. Now what do you think: is this data stored wrong, or can I recover it by changing the character set or language?
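One hedged way to see why the answer is likely "stored wrong, not recoverable": if the client replaced the Portuguese letters with literal '?' bytes at insert time (a common outcome of an NLS_LANG mismatch against a WE8ISO8859P1 database), the original letters are gone, and no later ALTER DATABASE CHARACTER SET can restore them. A Python sketch of that lossy replacement:

```python
text = "Ministério das Finanças"
# A client that cannot map the characters stores '?' instead:
stored = text.encode("ascii", errors="replace")
recovered = stored.decode("ascii")

print(recovered)            # Minist?rio das Finan?as
print(recovered == text)    # False: the replacement is irreversible
```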
nayyares -
Character set Conversion (US7ASCII to AL32UTF8) -- ORA-31011 problem
Hello,
We've run into some problems as part of our character set conversion from US7ASCII to AL32UTF8. The latest problem is that we have a query that works in US7ASCII, but after converting to AL32UTF8 it no longer works and generates an ORA-31011 error. This is very concerning to us as this error indicates an XML parsing problem and we are doing no XML whatsoever in our DB. We do not have XML columns (nor even CLOBs or BLOBs) nor XML tables and it's not XMLDB.
For reference, we're running 11.2.0.2.0 over Solaris.
Has anyone seen this kind of problem before?
If need be, I'll find a way to post table definitions. However, it's safe to assume that we are only using DATE, VARCHAR2 and NUMBER column types in these tables. All of the tables are local to the DB.
Thanks
We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than to say that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
A summary:
1) We replaced the lossy characters by parsing a csscan output file
2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to CHAR, etc.)
3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
Our actual error message:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00210: expected '<' instead of '�'
Error at line 1
31011. 00000 - "XML parsing failed"
*Cause: XML parser returned an error while trying to parse the document.
*Action: Check if the document to be parsed is valid.
Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
Please advise if more information is needed from my end. -
Changing database character set from US7ASCII to AL32UTF8
Our database is running on Oracle database 10.1.0.4.0 (AIX) The following are its parameters:
SQL> select value from NLS_DATABASE_PARAMETERS where parameter='NLS_CHARACTERSET';
VALUE
US7ASCII
We would like to change the database character set to AL32UTF8. After following Metalink note 260192.1 (which helped us resolve "Lossy" and "Truncated" data), the final output of the CSSCAN utility is:
[Scan Summary]
All character type data in the data dictionary are convertible to the new character set
All character type application data are convertible to the new character set
[Data Dictionary Conversion Summary]
The data dictionary can be safely migrated using the CSALTER script
We have no (0) Truncation or Lossy entries in the .txt file; we only have Changeless and Convertible. Now, according to the documentation, we can do a FULL EXP and FULL IMP, but it does not detail how to do the conversion on the same database. The discussion in the document explains how to do it from one database to another database, but what about on the same database?
We cannot use CSALTER as stated on the document.
(Step 6
Step 12
12.c) When using Csalter/Alter database to go to AL32UTF8 and there was NO "Truncation" data, only "Convertible" and "Changeless" in the csscan done in point 4:)
After performing a FULL export of the database, how can we change its character set? What do we need to do to the existing database to change its character set to AL32UTF8 before we import our dump file back into the same database?
Please help.
There you are! Thanks! Seems like I am right in my understanding of the official Oracle documentation. Thanks!
Hmmmmm...when you say:
*"you can do selective export of only convertible tables, truncate the tables, use CSALTER, and re-import."*
This means that:
1. After running csscan on database PROD, i will take note of the convertible tables in the .txt output file.
2. Perform selective EXPORT on PROD (EXP the convertible tables)
3. Truncate the convertible tables on PROD database
4. Use CSALTER on PROD database
5. Re-import the tables into PROD database
6. Housekeeping.
Will you tell me if these steps are correct? Based on our scenario, this is what I have understood from the official doc.
Am I correct?
I really appreciate your help Sergiusz. -
Oracle11g: how I change character set to AL32UTF8?
Hi, a piece of software requires a database with the AL32UTF8 character set.
From what I understand, I have a DB instance with
nls_language=american
I tried:
SQL> alter database character set AL32UTF8;
alter database character set AL32UTF8
ERROR at line 1:
ORA-12712: new character set must be a superset of old character set
what's wrong? How can I achieve this?
Thanks a lot.
Warning: you are talking with a non-expert. :)
802428 wrote:
Hi schavali,
I am new bee to oracle and BPM so i am unable to get which database you are talking about to drop & recreate and also how to do so.
Any help over this will be highly appreciable.
Regards,
ITM Crazy
We are referring to the OP's database (where the character set is set to WE8MSWIN1252)
Srini -
NLS character set non AL32UTF8 versus AL32UTF8
Good morning Gurus,
The RCU utility strongly recommends having this parameter set to AL32UTF8.
My database by default was set as WE8MSWIN1252.
I have read a lot about these settings and would like the exact steps to accomplish this. I posted this problem in "Problem with RCU utility" but did not get any response.
My question is why Oracle makes things difficult. I only use the American language; if both character sets are good for that, then why does RCU require changing it?
I have seen people who ignored this message have trouble later in the installation process.
I would appreciate it if someone could explain the implications of changing versus not changing.
Does that mean my database has to keep the AL32UTF8 parameter all the time after the installation is done?
Also another question
I have 3 GB of RAM on my laptop and Oracle requires 4; will this cause a problem during installation?
Thank you
j
Hi,
A change from WE8MSWIN1252 to AL32UTF8 is not directly possible, because AL32UTF8 is not a binary superset of WE8MSWIN1252.
There are two options:
- use a full export and import
- use ALTER DATABASE in a restricted way
Which method you can choose depends on the characters in the database: if they are only ASCII, the second option can work; in other cases the first one is needed.
It is all described in the Support Note 260192.1, "Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)". Get it from the support/metalink site.
You can also read the chapter about this in the Globalization Guide: http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430
Herald ten Dam
http://htendam.wordpress.com -
Unable to migrate table, character set from WE8MSWIN1252 to AL32UTF8
Hi,
On our source db the character set is AL32UTF8
On our own db, we used the default character set of WE8MSWIN1252.
When migrating one of the tables, we get this error: ORA-29275: partial multibyte character
So when we try to alter our character set from WE8MSWIN1252 to AL32UTF8, we get this error:
ALTER DATABASE CHARACTER SET AL32UTF8
ERROR at line 1:
ORA-12712: new character set must be a superset of old character set
I would sure not like to reinstall the db and migrate the tables again. Thanks.
See this related thread: Re: Want to change characterset of DB
You can use the ALTER DATABASE CHARACTER SET command in very few cases. You will most likely have to recreate the database and re-migrate the data.
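For intuition, ORA-29275 typically means stored bytes that do not form a complete valid character in the character set being assumed, e.g. AL32UTF8 source bytes sitting unconverted in a WE8MSWIN1252 database. A hedged Python analogue of a cut-off multibyte sequence:

```python
data = "ção".encode("utf-8")      # b'\xc3\xa7\xc3\xa3o' - two 2-byte chars
truncated = data[:-2]             # cut mid-character: b'\xc3\xa7\xc3'
try:
    truncated.decode("utf-8")
except UnicodeDecodeError as e:
    # Python's analogue of "partial multibyte character"
    print("partial multibyte character:", e.reason)
```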
HTH
Srini -
Data not viewable in table using Control file CharacterSet AL32UTF8
I have a flat file which contains Chinese and English characters.
I have to create a control file to insert the data from this flat file into the table.
The character set I am using in my control file is AL32UTF8.
When I use this character set the data gets loaded into the table, but I am not able to view the Chinese characters.
I only see upside-down question mark symbols.
Please help me with how to view the Chinese characters in the table.
Is there any other character set I have to set in the control file?
NLS_LANG is an environment variable. I'm assuming you're on Windows, so it's probably set under something like
Control Panel | System | Advanced | Environment Variables
Given that you're using Toad, though, there may be someplace there where you set the character set. There is a discussion on Eddie Awad's blog on how various tools can be made to display Unicode data
http://awads.net/wp/2006/07/06/sql-developer-and-utf8/
Some of the comments discuss Toad, though that's not a tool I'm familiar with.
If you happen to be able to use iSQL*Plus, since that's browser based, it supports unicode natively. That's often the easiest solution.
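A small sketch of why the loaded data is probably fine and only the client-side decoding is wrong (Python stands in for the client here, and `cp1252` plays the role of a Western NLS_LANG setting):

```python
chinese = "中文"
raw = chinese.encode("utf-8")            # bytes as stored under AL32UTF8

garbled = raw.decode("cp1252")           # Western client: mojibake glyphs
print(garbled)
print(raw.decode("utf-8") == chinese)    # True: the stored data is intact
```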
Justin -
I am not able to parse my XML in Oracle 11g with parameter value NLS_CHARACTERSET=AL32UTF8,
but I am able to execute the same XML and the same procedure in a DB with NLS_CHARACTERSET=WE8MSWIN1252.
How do I solve this?
For parsing the XML I am using dbms_lob and dbms_xmlparser.parseClob.
I got my solution.
-
Can a db with character set UTF8 be restored to AL32UTF8?
Hello Everyone,
Good Day.
Our present production and non-production databases are configured with NLS_CHARACTERSET as UTF8. However, as we are in the process of migrating to a new server, we intend to configure the new databases with NLS_CHARACTERSET as AL32UTF8 (which is the recommended option as per our research; moreover, we learned that for WebLogic schemas and repositories to work, NLS_CHARACTERSET must be AL32UTF8).
As we would be restoring from a backup to the new instance created on the new server, kindly help us understand whether any issues might arise while restoring due to the two character sets being different.
Warm Regards,
Vikram.
Hi Robin,
Thank you for the update. Our DB is huge and contains too many schemas to try Data Pump. Hence we planned a restore, which might be a simpler task with less downtime.
Perhaps one option would be to create the instances with the UTF8 character set itself and then change it once the migration activity has been completed.
Also, could you please throw some light on the two character sets as to which one is better and why?
Warm Regards,
Vikram. -
NLS support problems when using AL32UTF8 in dads.conf
Hello,
Following a post by Joel Kallman, in one of the forum threads, about the mandatory use of AL32UTF8 in dads.conf, when running HTML DB v2.0, I changed my PlsqlNLSLanguage parameter accordingly.
Prior to the change, I experienced some problems when using non-English characters: some application items appeared as gibberish when they contained non-English characters, and the LIKE operator didn't perform as expected. After the change, it all seems to work OK, but now I have a different problem.
All the non-English characters in my HTML page source code appear as gibberish. On screen, at run time, everything displays correctly, but the source code seems to be corrupted. It is very difficult, and very annoying, to debug the pages that way. Is there a way to enjoy both worlds: using AL32UTF8 in dads.conf, as required, and still getting coherent HTML source code containing non-English characters?
Thanks,
Arie.
Joel,
I use the following settings and they work fine for me:
Operating system:
LANG=de_DE
LANGVAR=de_DE.UTF-8
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
daust:oracle[o1020]> uname -a
Linux daust.opal-consulting.de 2.4.21-37.EL #1 Wed Sep 7 13:35:21 EDT 2005 i686 i686 i386 GNU/Linux
daust:oracle[o1020]> cat /etc/redhat-release
Red Hat Enterprise Linux ES release 3 (Taroon Update 6)
daust:oracle[o1020]>
marvel.conf:
<Location /pls/htmldb>
Order deny,allow
PlsqlDocumentPath docs
AllowOverride None
PlsqlDocumentProcedure wwv_flow_file_manager.process_download
PlsqlDatabaseConnectString localhost:1521:o1020
PlsqlNLSLanguage AMERICAN_AMERICA.WE8ISO8859P1
PlsqlAuthenticationMode Basic
SetHandler pls_handler
PlsqlDocumentTablename wwv_flow_file_objects$
PlsqlDatabaseUsername HTMLDB_PUBLIC_USER
PlsqlDefaultPage htmldb
PlsqlDatabasePassword @BZvJYqadreElOqj5poCB5gE=
Allow from all
</Location>
Database:
daust:oracle[o1020]> sqlplus "/ as sysdba"
SQL> select * from nls_database_parameters;
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
PARAMETER VALUE
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 10.2.0.1.0
Using AL32UTF8 resulted in the same problem as described (and fixed) here: Re: Strange - HTML not written correctly
So what is the proper configuration of the DAD? Perhaps there are different ones for Unicode instances and non-Unicode instances.
~Dietmar. -
Conversion of character sets (UCS2 of MSSQL to AL32UTF8 of Oracle Warehouse)
Hi all,
I installed my enviroment as below.
Server:
Windows 7 Professional
SQL Server 2012
Character Set: UCS2
Client:
Linux REDHAT 5
Oracle Warehouse 11gR2 (11.2.0.1)
Oracle Database Gateway for MSSQL Server 11.2.0.1
Character Set:AL32UTF8
I installed the gateway to connect Oracle Warehouse with the MSSQL DB. However, the characters are not readable, because the MSSQL DB characters are encoded differently (UCS2) from Oracle's multibyte character encoding.
Input:
Table name on the MSSQL DB: Table_Name
Output:
Table name in the Oracle Warehouse Import Wizard: ◊T◊a◊b◊le◊_◊N◊a◊m◊e
This causes an error when importing the table, because OWB doesn't allow unreadable characters in the table name.
Do you have an idea how to resolve it?
Thanks and kind regards,
Hip
OK, I have changed it as you wrote:
HS_FDS_CONNECT_INFO=100.30.4.157:1433//bob
HS_FDS_TRACE_LEVEL=255
HS_FDS_RECOVERY_ACCOUNT=RECOVER
HS_FDS_RECOVERY_PWD=RECOVER
HS_TRANSACTION_MODEL=READ_ONLY
HS_LANGUAGE=american_america.we8mswin1252
HS_NLS_NCHAR=UCS2
HS_NLS_LENGTH_SEMANTICS=CHAR
and I also added this to the listener.ora:
(SID_DESC =
(SID_NAME=bob)
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2)
(ENVS=LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_2/dg4msql/driver/lib;/u01/app/oracle/product/11.2.0/dbhome_2/lib)
(PROGRAM=dg4msql)
)
tnsnames.ora:
bob =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA = (SID = bob))
(HS = OK)
)
SQL> select * from all_users@bob;
USERNAME USER_ID CREATED
public 0 08-APR-03
dbo 1 08-APR-03
guest 2 08-APR-03
INFORMATION_SCHEMA 3 13-APR-09
sys 4 13-APR-09
db_owner 16384 08-APR-03
db_accessadmin 16385 08-APR-03
db_securityadmin 16386 08-APR-03
db_ddladmin 16387 08-APR-03
db_backupoperator 16389 08-APR-03
db_datareader 16390 08-APR-03
db_datawriter 16391 08-APR-03
db_denydatareader 16392 08-APR-03
db_denydatawriter 16393 08-APR-03
14 rows selected.
The Oracle Warehouse Builder still has the same character set conversion error.
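For what it's worth, the ◊T◊a◊b◊le◊_◊N◊a◊m◊e symptom matches UCS-2/UTF-16 bytes being passed through as single-byte data: every other byte is a NUL, which the tool renders as a placeholder glyph. A Python sketch of that misreading, with `latin-1` standing in for the single-byte interpretation:

```python
name = "Table_Name"
raw = name.encode("utf-16-le")              # how UCS-2 stores it: 2 bytes/char

misread = raw.decode("latin-1")             # treated as one-byte characters
print(repr(misread))                        # 'T\x00a\x00b\x00l\x00e\x00...'
print(misread.replace("\x00", "") == name)  # True: name interleaved with NULs
```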
Hip -
Oracle Character sets with PeopleSoft - AL32UTF8 vs. UTF8
We currently have PeopleSoft FInancials v8.8 with PeopleTools 8.45 running on Oracle 9.2.0.8 with the UTF8 character set.
We plan to upgrade to Oracle 10.2, and want to know if we can and should also convert the character set to AL32UTF8.
Any issues?
(A couple of years ago, we were told that AL32UTF8 was not yet supported in PeopleSoft.)
Right now, somewhat strangely, Oracle recommends no longer using UTF8, while PeopleSoft recommends not using AL32UTF8 yet.
You can read solution ID #719906, but anyway, AL32UTF8 on PT8.4x should work fine.
Nicolas.