Regarding CharacterSet
Hi,
In our code base we are using CharacterSet.DEFAULT_CHARSET and it returns the default value oracle-characterset-31. The CharacterSet class is in the oracle.sql package. I would like to understand what the character set oracle-characterset-31 means and what it supports.
Where is this value 'oracle-characterset-31' coming from? Is it set somewhere?
I have a piece of code which states,
charset = CharacterSet.make(CharacterSet.DEFAULT_CHARSET);
return new CHAR((String) o, charset);
The String contains numerals and Russian characters, so new CHAR returns junk characters in place of the Russian ones. I want to change the value of DEFAULT_CHARSET so that it recognizes Russian characters too and does not return junk. How do I override the value of DEFAULT_CHARSET? My application is a Java application with an Oracle database. We use OC4J AS.
This piece of code is in DAO layer.
Thanks.!!
I would like to add more to this. oracle-character-set-31 is ISO_LATIN_1_CHARSET, and I have found the characters that are supported in ISO_LATIN_1_CHARSET.
But I want to know where this default value (oracle-character-set-31) is coming from. Can I change the default from ISO_LATIN_1 to UTF8_CHARSET? I do not understand whose default value this ISO_LATIN_1 is; once I know, I will modify it.
Thanks.!!
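For anyone hitting the same symptom: the root cause is that ISO_LATIN_1_CHARSET (oracle-character-set-31) simply has no code points for Cyrillic, so any conversion through it is lossy. The effect can be reproduced with the standard JDK java.nio.charset API alone, independent of the Oracle driver (class and variable names below are illustrative):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetCheck {
    public static void main(String[] args) {
        String mixed = "Invoice 42 \u041f\u0440\u0438\u0432\u0435\u0442"; // "Привет" = Russian "hello"

        // ISO-8859-1 (the JDK equivalent of Oracle's ISO_LATIN_1_CHARSET)
        // has no Cyrillic code points, so encoding this string through it is lossy.
        Charset latin1 = StandardCharsets.ISO_8859_1;
        boolean latin1Ok = latin1.newEncoder().canEncode(mixed);

        // UTF-8 covers all of Unicode, so the same string survives a round trip.
        Charset utf8 = StandardCharsets.UTF_8;
        String roundTrip = new String(mixed.getBytes(utf8), utf8);

        System.out.println("ISO-8859-1 can encode: " + latin1Ok);                  // false
        System.out.println("UTF-8 round trip ok:   " + mixed.equals(roundTrip));   // true
    }
}
```

In the DAO snippet above, the usual fix is therefore to construct the CHAR with a Unicode characterset, e.g. CharacterSet.make(CharacterSet.UTF8_CHARSET) as mentioned in the follow-up, rather than relying on DEFAULT_CHARSET; check the Javadoc of your driver version for the exact constant.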
Similar Messages
-
Unicode Migration using National Characterset data types - Best Practice ?
I know that Oracle discourages the use of the national characterset and national characterset data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know the best practice regarding this, specifically in relation to stored procedures.
The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
Specific questions that I have are :
1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
in variable assignments - v_module_name := N'ABCD'
in variable comparisons - IF v_sp_access_mode = N'DL'
in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
Database details are as follows and the application is written in COBOL and this is also being changed to be Unicode compliant:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
NLS_CHARACTERSET = WE8MSWIN1252
NLS_NCHAR_CHARACTERSET = AL16UTF16
##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2; the same goes for VARCHAR variables.
VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
##3. Not sure I understand, are you saying that unicode columns(NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters ?
No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
## to keep the code/schemas consistent between the two databases
First, you have to keep two sets of scripts anyway because syntax of DDL is different between SQL Server and Oracle. There is therefore little benefit of just keeping the data type names the same while so many things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types or at least I would use some placeholder syntax to replace placeholders with appropriate data types per target system in the application installer.
## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
-- Sergiusz -
Problem in JMS-Adapter with CharacterSet Websphere MQ
Hi,
we have the following scenario:
JMS -> PI -> File
We have a local WebSphere MQ queue manager and the following configuration in our sender adapter:
Transport-Protocol: WebSphere MQ (non JMS)
Message-Protocol: JMS 1.x
ConnectionFactory: com.ibm.mq.jms.MQQueueConnectionFactory
Java-class Queue: com.ibm.mq.jms.MQQueue
CCSID: 819
Transport: TCP/IP
JMS-conform: WebSphere MQ (non JMS)
In the local queue manager the messages (XML-Messages with header <?xml version="1.0" encoding="ISO-8859-1"?>) have characterSet 819 (ISO-8859-1). That's correct. You can open the files with XMLSpy and it works.
When we receive the messages via our JMS sender adapter, all the characters seem to be in UTF-8 and I don't know why. All the special characters are wrong because the header of the XML message says ISO-8859-1, but the characters are encoded in UTF-8.
In the other direction (JMS receiver adapter, File -> PI -> JMS) we have the same problem.
We create an ISO-8859-1 message in the mapping (and it really is ISO-8859-1) and send it via the JMS receiver adapter to the local message queue. But the message arrives there in UTF-8 encoding. I don't understand this.
Does anybody know what could be the cause for this?
Does the JMS adapter convert the messages from ISO-8859-1 into UTF-8?
Are there any parameters we have to set?
I hope anybody has an idea what's wrong.
Regards
Thorsten
Edited by: Thorsten Hautz on Oct 12, 2010 5:42 PM
Hi,
thanks a lot for your replies.
our driver settings are correct (as I can see).
I removed value 819 from CCSID, but we have the same effect.
The messages in the local queue manager are TextMessages in XML.
Does anybody know, if we need the standard modules (ConvertJMSMessageToBinary and ConvertBinaryToXMBMessage) in this case?
Is it possible to set the CCSID for the message payload anywhere in the configuration?
The CCSID on the Source tab doesn't have any influence on the encoding of the payload message, only on the header data.
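For what it's worth, the symptom (ISO-8859-1 declared in the header, UTF-8 bytes on the queue) is easy to picture at the byte level: every character above U+007F has a different byte sequence in the two encodings, so a payload written in one and labelled as the other shows mojibake. A minimal JDK sketch (names illustrative):

```java
import java.nio.charset.StandardCharsets;

public class CcsidBytes {
    public static void main(String[] args) {
        String s = "Gr\u00fcn"; // "Grün" - typical Latin-1 payload content

        byte[] iso = s.getBytes(StandardCharsets.ISO_8859_1); // 1 byte per char
        byte[] utf = s.getBytes(StandardCharsets.UTF_8);      // 2 bytes for the ü

        System.out.println("ISO-8859-1 bytes: " + iso.length); // 4
        System.out.println("UTF-8 bytes:      " + utf.length); // 5

        // Reading UTF-8 bytes as if they were ISO-8859-1 yields mojibake,
        // which is exactly what a mismatched CCSID produces on the queue.
        String misread = new String(utf, StandardCharsets.ISO_8859_1);
        System.out.println("Misread payload:  " + misread);    // GrÃ¼n
    }
}
```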
Regards
Thorsten -
SOAP receiver adapter for ASCII-7 characterset???
Hi,
Our scenario is ABAP Proxy -> XI -> Web Services (SOAP adapter). The receiver web service accepts only the ASCII-7 characterset, but the ABAP proxy sends only the Unicode characterset (the default).
Any workaround for receiver SOAP adapter to accept ASCII-7 characterset?
Regards,
Prasad U
Hi -
You can set a specific encoding in the soap receiver channel module configuration. From the SOAP Adapter FAQ (Note 856597):
<i>Q: What character encoding is supported by the SOAP receiver adapter?
A: The SOAP receiver adapter can use any character encoding supported by the local JDK. The request message from the SOAP receiver is normally encoded in UTF-8. If you want to change this encoding, for instance to iso-8859-1, you can set parameter XMBWS.XMLEncoding to iso-8859-1 in the module configuration for the SOAP adapter module. This setting is for the outgoing SOAP message and has no effect on the incoming SOAP message. For the incoming SOAP message, any code page supported by the local JDK is accepted.</i>
Check the <a href="http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/d23cbe11-0d01-0010-5287-873a22024f79">How to Use the XI 3.0 SOAP Adapter</a> document for an example.
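As the FAQ note says, the adapter honours whatever encoding the local JDK supports; the remaining question is whether the payload actually fits a 7-bit target at all. That is easy to verify up front with the JDK (class name and sample strings are illustrative):

```java
import java.nio.charset.StandardCharsets;

public class AsciiCheck {
    public static void main(String[] args) {
        // US-ASCII is the 7-bit set the receiving web service expects.
        String plain = "ORDER-4711 OK";
        String accented = "N\u00famero 4711"; // "Número" - contains a non-ASCII char

        // canEncode reports whether every character fits the target encoding.
        System.out.println(StandardCharsets.US_ASCII.newEncoder().canEncode(plain));    // true
        System.out.println(StandardCharsets.US_ASCII.newEncoder().canEncode(accented)); // false
    }
}
```

If the payload contains characters outside 7-bit ASCII, no encoding setting in the channel can make them survive; they have to be removed or transliterated in the mapping first.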
Regards,
Jin -
Problem finding docs using content index in DB with different charactersets
Sorry for duplicating thread from [url http://forums.oracle.com/forums/thread.jspa?threadID=653067&tstart=0]this thread in Oracle Text forum, but it seems quite slow compared to this one, so probably someone has some suggestion.
Problem explanation:
DB version 10.2.0.1 SE on windows
database characterset AL32UTF8
I am creating following context index:
create index myindex on g(Content)
indextype is ctxsys.context
parameters('filter ctxsys.auto_filter
section group ctxsys.null_SECTION_GROUP');
With the following query:
SELECT distinct filename FROM g f
WHERE contains(F.Content, 'latiinju') > 0;
I can find Latin symbols in ANSI and UTF-8 encoded text documents and in MS Word and MS Excel documents.
With following query:
SELECT distinct filename FROM g f
WHERE contains(F.Content, 'latviešu') > 0;
I can find Latvian symbols in UTF-8 encoded text documents and in MS Word and MS Excel documents, which basically is OK.
However there is another unfortunately already production database
10.2.0.3.0 SE on windows
with characterset BLT8CP921
and with index as defined above queries find absolutely nothing for both latin and latvian texts.
As soon as I've added another column "cset" in the table and filled it up with AL32UTF8 I can find latin characters for the same cases as in db above.
Index in this case is as follows:
create index myindex on g(Content)
indextype is ctxsys.context
parameters('filter ctxsys.auto_filter
section group ctxsys.null_SECTION_GROUP
charset column cset');
However, the problem is that for Latvian characters it finds only UTF-8 encoded text files, not doc and xls files, and that's absolutely not OK.
I've also tried other charactersets in the cset column, but without any success.
So the question is - is there any possibility somehow to create the content index that it is possible to find also latvian specific symbols in doc and xls files in DB with characterset BLT8CP921?
Of course the ultimate solution would be to recreate db with AL32UTF8 characterset but I'd like to avoid that if possible.
TIA
Gints Plivna
I have applied performance tuning steps, such as increasing JAVA_POOL_SIZE and SGA_MAX_SIZE, increasing tablespace sizes, and changing some other DB parameters. After applying them, the problematic index was created without error and the application is running fine. Many thanks for your prompt suggestion, which helped me narrow down my problem. As my problem has been resolved, please close this thread.
Regards,
Farhan Mazhar -
Folks, Here is the situation:
I have this 8.1.7 db in Western European characterset.
--I need to upgrade this db to 10g (latest version),
--I need to make it UTF in 10g.
Is the characterset change possible during the db upgrade, or should I change the characterset after the db is upgraded to 10g?
Can anyone shed light on this?
Thanks in advance.
regards,
Lily.
Unfortunately there are some pitfalls when changing a database characterset from single-byte to Unicode, especially if the source database is 8i. You should start reading here (Upgrade Guide) and follow the links:
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14238/upgrade.htm#sthref142 -
CharacterSet Property for FTP adapter in SOA 10.1.3.5
Hi All,
I am facing an issue while setting a JCA property for the FTP adapter. I am setting the property so that SynchRead handles files with the UTF-8 characterset. Hence in the JCA properties I am setting the following values:
<jca:operation
FileType="ascii"
CharacterSet="UTF-8"
LogicalDirectory="directory"
InteractionSpec="oracle.tip.adapter.ftp.outbound.FTPReadInteractionSpec"
LogicalArchiveDirectory="FileArchiveDirectory"
DeleteFile="true"
FileName="fileName"
OpaqueSchema="true"
UseRemoteArchive="true">
I want these changes because the FTP adapter is not able to read files whose file names contain UTF-8 special characters.
When I execute this I am getting the following error:
[ SynchRead_ptt::SynchRead(Empty,opaque) ] - WSIF JCA Execute of operation 'SynchRead' failed due to: Could not instantiate InteractionSpec oracle.tip.adapter.ftp.outbound.FTPReadInteractionSpec due to: Error while setting JCA WSDL Property.
Property setCharacterSet is not defined for oracle.tip.adapter.ftp.outbound.FTPReadInteractionSpec
Please verify the spelling of the property.
I am trying this in SOA 10.1.3.5. Can you please provide me a solution for this issue?
Hi,
I am facing the same scenario.
The FTP adapter is picking the file up as opaque and sending it as an email attachment, but the adapter is not able to resolve special characters like Á. In the received payload of the adapter, Á is replaced by ?.
Can anyone please help me out or suggest a workaround? Thanks in advance.
Thanks & Regards,
Subho -
Changing the Database characterset
Hi,
My oracle 9i database characterset is AL32UTF8.
I want to change this to UTF8 for testing purposes.
After a few days I need to reset this characterset back to AL32UTF8.
How should I proceed?
Please advise.
Regards,
Ashok Kumar.G
Thanks Pierre Forstmann,
In Metalink I found the following:
In 9i you can't simply use "ALTER DATABASE CHARACTERSET" to go from
AL32UTF8 to UTF8 because UTF8 is a SUB-set of AL32UTF8
(some codepoints which are correct in AL32UTF8 are not known in UTF8)
So you will run into ORA-12712 if you try alter database ....
I hope this makes sense when the database contains data.
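The subset relationship in the quote can be made concrete: Oracle's UTF8 encodes Unicode 3.0 and stores supplementary (beyond-BMP) characters as two 3-byte surrogate encodings (6 bytes), while AL32UTF8 uses the real 4-byte UTF-8 form. A small JDK sketch of such a code point — Java's getBytes always produces the AL32UTF8-style form:

```java
import java.nio.charset.StandardCharsets;

public class SupplementaryDemo {
    public static void main(String[] args) {
        // U+1F600 lies outside the BMP, i.e. outside what Oracle's UTF8
        // (a Unicode 3.0 era character set) encodes natively.
        String s = new String(Character.toChars(0x1F600));

        System.out.println(s.length());                                // 2 (a surrogate pair in Java)
        System.out.println(s.codePointCount(0, s.length()));           // 1 code point
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 bytes in real UTF-8 (AL32UTF8)
        // In Oracle's UTF8 this same character occupies 6 bytes (two 3-byte
        // surrogate encodings), which is why AL32UTF8 data containing such
        // code points cannot simply be re-labelled as UTF8.
    }
}
```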
Suppose I drop my schema (so the database has no user data) and try to change the database characterset from AL32UTF8 to UTF8; will it impact anything else?
And after changing to UTF8, can I import the user data again (say from an export file taken from a UTF8 database)?
Regards,
Ashok Kumar.G -
SQLLOAD with various Charactersets
Hello,
I am working on an Oracle EBS project (version 11i) with Oracle DB 11G hosted on Oracle Linux.
Character set is UTF8 on the database since we have some Polish users.
Our ERP is logically interfaced with many systems from which we receive ASCII datafiles that we need to upload in our DB using SQLLOAD utility.
The problem we have is that depending on the sending system, the characterset of a datafile can vary among following values :
- EE8MSWIN1250 => files sent by our Polish Subsidiary
- WE8MSWIN1252 => files sent by WINDOWS systems
- WE8ISO8859P1 => files sent by some Unix Systems
We have developed a specific Linux shell script that submits SQLLOAD for datafiles with the appropriate control file "CHARACTERSET" option.
The problem is that until now I was not able to detect precisely the character set of a given datafile.
- the Linux command "file -i" returns "text/plain; charset=iso-8859-1" even for a windows file encoded with WINDOWS-1252 or WINDOWS-1250
- I also tried the Linux command iconv to convert the file to UTF-8, but this command succeeds whatever "from" characterset we specify (ISO-8859-1 / WINDOWS-1252 / WINDOWS-1250)
My Question :
How can I determine precisely the characterset of a given ASCII datafile in order to set correctly the CHARACTERSET option of SQLLOAD control file ?
(in batch mode on Linux)
Browsers such as IE, Chrome or Firefox are able to do that (detect the character set of a web page to display it correctly), so I suppose a tool or command should exist for that purpose.
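One check that can be scripted is a strict UTF-8 validity probe. It cannot tell WINDOWS-1250 from WINDOWS-1252 or ISO-8859-1 (every byte sequence is valid in all three single-byte sets, which is why iconv never fails), but it reliably separates UTF-8 files from the single-byte family, narrowing the candidates before picking a CHARACTERSET. A sketch using only the JDK (class name and sample bytes are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Probe {
    // Returns true when the byte stream is well-formed UTF-8. Single-byte
    // Windows/ISO files containing accented characters will typically fail.
    static boolean isValidUtf8(byte[] data) {
        try {
            StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(data));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        byte[] utf8File = "za\u017c\u00f3\u0142\u0107".getBytes(StandardCharsets.UTF_8);
        byte[] win1250File = {(byte) 0x7A, (byte) 0x61, (byte) 0xBF}; // 'z','a', Polish ż in CP1250

        System.out.println(isValidUtf8(utf8File));    // true
        System.out.println(isValidUtf8(win1250File)); // false: 0xBF is a stray continuation byte
    }
}
```

Distinguishing between the remaining single-byte candidates still needs a statistical detector (lcsscan, or a library such as ICU's charset detector), as discussed below.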
Thanks in advance for helping and sharing experience.
Karim Helali
Toshiba France
Thank you Sergiusz: the lcsscan tool gives quite good results and may be a solution for us.
The only issue is that lcsscan is only available in recent Oracle DB releases (10 and 11).
Although our database server is on release 11G, the EBS applications server is on oracle 8i due to Oracle Forms restrictions.
As the SQLLOAD is run from the applications server, I have to run the lcsscan tool by SSH on the DB server
So I will leave the question open for a few days in case someone knows a Linux command or tool that performs the same check as lcsscan.
Note: We are also considering the other solution you mention, i.e. assigning an agreed characterset to each sending system.
Thank you again and best regards
Karim Helali -
I have a database that has been installed with the UTF8 characterset.
Therefore the £ sign is stored as code 49827 (two bytes) in this characterset, not as code 163 as in ISO8859P1. I believe my problem stems from using UTF8-compliant clients to enter the £ sign into the database: because the clients are UTF8 compliant, they pass the two-byte value 49827 for the £ sign. However, when non-UTF8-compliant clients try to view or manipulate the data from this database, instead of seeing the £ sign they see Â£.
I presume the solution should be to ensure all viewing clients are UTF8 compliant ?
My problem is that I have a 3rd-party program that takes the output from the database in the form of a text file and then converts it into a PDF. Hence in the PDF the £ sign appears as Â£, since this is what is effectively stored in the database / text file output.
I presume it never makes sense to try and convert the database using CSALTER from UTF8 to WE8ISO8859P1? (as WE8ISO8859P1 is a subset of UTF8?)
The routine that creates the text file from the database is an E-Business Suite routine, so I cannot interfere with it and put in a translate command.
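The Â£ symptom follows directly from the byte layout of £ in UTF-8, as a minimal JDK demonstration shows (class name illustrative):

```java
import java.nio.charset.StandardCharsets;

public class PoundSign {
    public static void main(String[] args) {
        String pound = "\u00a3"; // £

        byte[] utf8 = pound.getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length);    // 2 bytes: 0xC2 0xA3
        // Read as one 16-bit value, 0xC2A3 is the 49827 mentioned above.
        System.out.println(((utf8[0] & 0xFF) << 8) | (utf8[1] & 0xFF)); // 49827

        // A client that treats those two bytes as ISO-8859-1 renders them
        // as two characters: Â (0xC2) followed by £ (0xA3).
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1)); // Â£
    }
}
```

So the data in the database is correct; it is the non-UTF8 clients (and the text-file-to-PDF step) decoding the bytes as a single-byte characterset that produce Â£.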
Any suggestions ?
thanks,
Jim
What is the NLS_CHARACTERSET value of your database?
I always use AR8MISO8859P6 and it's perfect for Arabic language.
For your Application Server, in the registry, check
NLS_LANG = AMERICAN_AMERICA.AR8MSWIN1256
REPORT_ARABIC_NUMERAL = ARABIC
REPORTS_PATH = (Make sure your font path is included here)
Add these entries in your uifont.ali file under [ PDF:Subset]
[ PDF:Subset ]
Arial..Italic.Bold.. = "Arialbi.ttf"
Arial..Italic... = "Ariali.ttf"
Arial...Bold.. = "Arialbd.ttf"
Arial..... = "Arial.ttf"
"Arabic Transparent"..Italic.Bold.. = "ARIALBI.TTF".....
"Arabic Transparent"..Italic... = "ARIALI.TTF".....
"Arabic Transparent"...Bold.. = "Arialbd.ttf"
"Arabic Transparent"..... = "artro.ttf"
Tahoma = "tahoma.ttf".....ar8mswin1256
Arial = "Tahoma"
"Tahoma Bold" = "tahomabd.ttf"
Make sure Regional and Language option in control panel is properly configured for Arabic.
restart the Windows Server 2003 and arabic should work properly on Server 2003 also.
I'm using Oracle Application Server 10g on Windows 2003 Server and Arabic Language is working perfectly for forms as well as PDF Reports.
I'm using Tahoma font for Arabic.
Hope this will help and solve your problem.
Note: Make sure that the font which you are using for Arabic in your reports is included in the uifont.ali
regards,
Saadat Ahmad
Message was edited by:
saadatahmad -
Characterset conversion from WE8MSWIN1252 to AL32UTF8
DB Version:11g
Is it possible to convert a DB of WE8MSWIN1252 characterset to AL32UTF8?
Yes, it can be done.
Please refer below metalink notes:
Changing the Database Character Set ( NLS_CHARACTERSET ) [ID 225912.1]
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications [ID 788156.1]
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) [ID 260192.1]
Regards
Rajesh -
Convert Characterset from WE8MSWIN1252 to AL32UTF8
Dear Friends,
How do I convert the characterset from WE8MSWIN1252 on an Oracle 10.2.0.2 database (32-bit Ent Edition) to AL32UTF8 on 11.2.0.2 (64-bit Std Edition) during the 11g DB upgrade?
How to check the Limitations of Characterset conversions and the objects which will be affected during the conversion.
Regards,
DB
If you want help from this forum, I recommend:
1) Search before posting
2) Close your threads when they are answered:
839396
Handle: 839396
Status Level: Newbie
Registered: Feb 23, 2011
Total Posts: 21
Total Questions: 14 (14 unresolved) -
Database Characterset Conversion from AMERICAN to JAPANESE
Hello
We are creating a test environment from a database with NLS_LANG = AMERICAN_AMERICA.WE8ISO8859P1. This needs to be converted into Japanese characterset with NLS_LANG = JAPANESE_JAPAN.JA16EUC
As per my understanding, I need to first export the database, then create a new database with Japanese NLS_LANG, and then re-import it
Could someone please confirm the steps?
1. Do I need to change NLS_LANG before exporting the database?
2. Can I run a database characterset conversion check before exporting db?
3. Are there any other precautions that I need to take?
Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
OS Version: HP-UX B.11.11
Regards
Sudhanshu
Hello,
I have used the Character Set Scan utility (csscan) Version 1.2. I have attached the report results below.
To convert the character set instead of export/import, can I simply do
ALTER DATABASE CHARACTER SET ja16euc;
I would appreciate feedback on this if anyone has worked on this conversion before.
Parameter Value
CSSCAN Version v1.2
Database Version 9.2.0.6.0
Scan type Full database
Scan CHAR data? YES
Database character set WE8ISO8859P1
FROMCHAR WE8ISO8859P1
TOCHAR JA16EUC
Scan NCHAR data? NO
Array fetch buffer size 10240
Number of processes 20
Capture convertible data? NO
[Scan Summary]
Some character type data in the data dictionary are not convertible to the new character set
Some character type application data are not convertible to the new character set
[Data Dictionary Conversion Summary]
Datatype Changeless Convertible Truncation Lossy
VARCHAR2 3,303,684 386 0 8
CHAR 5 0 0 0
LONG 219,988 1 0 0
CLOB 0 58 0 0
Total 3,523,677 445 0 8
Total in percentage 100% 0% 0% 0%
[Application Data Conversion Summary]
Datatype Changeless Convertible Truncation Lossy
VARCHAR2 3,316,273,097 8,517 56 2,675
CHAR 6,954 0 0 0
LONG 197,690 0 0 0
CLOB 0 0 0 0
Total 3,316,477,741 8,517 56 2,675
Total in percentage 100% 0% 0% 0%
Convertible Truncation Lossy
(two per-object breakdowns followed here, listing Convertible / Truncation / Lossy counts for each object; the object-name column was lost when the post was archived. The rows show a handful of objects with truncation and several objects with lossy data, the largest at 3,115 convertible / 1,689 lossy.)
-
Hi,
I tried to install Oracle Database version 11.2.0.1.0 on Windows 7. At the time of selecting the characterset I don't find WE8ISO8859P1; it gives me only two options, WE8MSWIN1252 and Unicode. I want to install the WE8ISO8859P1 characterset because I am using Translation Hub.
Thanks in Advance.
Regards
NomanHaq
I have tried to run it through the following emca command:
C:\app\Window7\product\11.2.0\dbhome_1\BIN>emca -config dbcontrol db -repos create
STARTED EMCA at Jun 21, 2012 10:22:34 AM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Enter the following information:
Database SID: orcl
Listener port number: 1521
Listener ORACLE_HOME [ C:\app\Window7\product\11.2.0\dbhome_1 ]:
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
You have specified the following settings
Database ORACLE_HOME ................ C:\app\Window7\product\11.2.0\dbhome_1
Local hostname ................ 192.168.127.138
Listener ORACLE_HOME ................ C:\app\Window7\product\11.2.0\dbhome_1
Listener port number ................ 1521
Database SID ................ orcl
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............
Do you wish to continue? [yes(Y)/no(N)]: y
Jun 21, 2012 10:23:53 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at C:\app\Window7\cfgtoollogs\emca\orcl\emc
a_2012_06_21_10_22_33.log.
Jun 21, 2012 10:24:05 AM oracle.sysman.emcp.util.FileUtil backupFile
WARNING: Could not backup file C:\app\Window7\product\11.2.0\dbhome_1\sysman\con
fig\emd.properties
Jun 21, 2012 10:24:05 AM oracle.sysman.emcp.util.FileUtil backupFile
WARNING: Could not backup file C:\app\Window7\product\11.2.0\dbhome_1\sysman\con
fig\emoms.properties
Jun 21, 2012 10:24:05 AM oracle.sysman.emcp.util.FileUtil backupFile
WARNING: Could not backup file C:\app\Window7\product\11.2.0\dbhome_1\sysman\emd
\targets.xml
Jun 21, 2012 10:24:05 AM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Jun 21, 2012 10:24:06 AM oracle.sysman.emcp.EMReposConfig invoke
SEVERE: Error creating the repository
Jun 21, 2012 10:24:06 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Refer to the log file at C:\app\Window7\cfgtoollogs\emca\orcl\emca_repos_c
reate_<date>.log for more details.
Jun 21, 2012 10:24:06 AM oracle.sysman.emcp.EMConfig perform
SEVERE: Error creating the repository
Refer to the log file at C:\app\Window7\cfgtoollogs\emca\orcl\emca_2012_06_21_10
_22_33.log for more details.
Could not complete the configuration. Refer to the log file at C:\app\Window7\cf
gtoollogs\emca\orcl\emca_2012_06_21_10_22_33.log for more details.
But at the end it gave me this error and produced the following log:
Could not complete the configuration. Refer to the log file at C:\app\Window7\cfgtoollogs\emca\orcl\emca_2012_06_21_10_22_33.log for more details.
Log File:
Jun 21, 2012 10:10:15 AM oracle.sysman.emcp.util.OUIInventoryUtil setOUILoc
CONFIG: Setting oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:15 AM oracle.sysman.emcp.util.ClusterUtil isHASInstalled
CONFIG: isHAInstalled: false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag '-migrate' set to false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag 'migrateFromDBControl' set to false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag 'migrateToCentralAgent' set to false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag 'migrateFromCentralAgent' set to false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag 'migrateToDBControl' set to false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setFlag
CONFIG: Flag 'db' set to true
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setParam
CONFIG: Setting param: ORACLE_HOME value: C:\app\Window7\product\11.2.0\dbhome_1
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.EMConfig isEMConfigured
CONFIG: isEMConfigured for DB: orcl
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.util.PlatformInterface isPre112Home
CONFIG: oracleHome: C:\app\Window7\product\11.2.0\dbhome_1 isPre112Home: false
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager setParam
CONFIG: Setting param: DB_UNIQUE_NAME value: orcl
Jun 21, 2012 10:10:16 AM oracle.sysman.emcp.ParamsManager getParam
CONFIG: No value was set for the parameter ORACLE_HOSTNAME.
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.ClusterUtil isCRSInstalled
CONFIG: isCRSInstalled: false
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.ClusterUtil getLocalNode
CONFIG: Cluster.isCluster: false. Skip call to getLocalNode
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.ClusterUtil getLocalNode
CONFIG: isLocalNodeDone: true localNode: null
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.DBControlUtil isDBConsoleConfigured
CONFIG: Sid: orcl Host: 192.168.127.138 Node: null OH: C:\app\Window7\product\11.2.0\dbhome_1 isDBC: false
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.ParamsManager setParam
CONFIG: Setting param: DB_UNIQUE_NAME value: orcl
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.PlatformInterface isPre112Home
CONFIG: oracleHome: C:\app\Window7\product\11.2.0\dbhome_1 isPre112Home: false
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.ParamsManager getParam
CONFIG: No value was set for the parameter ORACLE_HOSTNAME.
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.ClusterUtil getLocalNode
CONFIG: isLocalNodeDone: true localNode: null
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.CentralAgentUtil isCentralAgentConfigured
CONFIG: Sid: orcl Host: 192.168.127.138 Node: null OH: C:\app\Window7\product\11.2.0\dbhome_1 agentHome: null isCentral: false
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil setOUILoc
CONFIG: Setting oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil isValidOH
CONFIG: Invalid oracleHome: C:\OraHome_2
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil isValidOH
CONFIG: Invalid oracleHome: C:\app\Window7\product\11.2.0\dbhome_2
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.CentralAgentUtil getCentralAgentHomeAndURL
CONFIG: Central Agent home and URL: {}
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil setOUILoc
CONFIG: Setting oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil isValidOH
CONFIG: Invalid oracleHome: C:\OraHome_2
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.OUIInventoryUtil isValidOH
CONFIG: Invalid oracleHome: C:\app\Window7\product\11.2.0\dbhome_2
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.util.CentralAgentUtil getURLAndCentralAgentHome
CONFIG: URL and Central Agent home : {}
Jun 21, 2012 10:10:28 AM oracle.sysman.emcp.EMConfig restoreOuiLoc
CONFIG: Restoring oracle.installer.oui_loc to C:\app\Window7\product\11.2.0\dbhome_1\oui
Jun 21, 2012 10:11:12 AM oracle.sysman.emcp.EMConfig finalize
CONFIG: finalize() called for EMConfig -
Convert characterset WE8MSWIN1252 to UTF8
Hi all
I am using Oracle 10g Database. Now the Characterset as WE8MSWIN1252. I want to change my CharacterSet to UTF8. It is possible.
Can anyone please post me the steps involved.
Very Urgent !!!!!!!
Regds
Nirmal
Subject: Changing WE8ISO8859P1/ WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8
Doc ID: Note:260192.1 Type: BULLETIN
Last Revision Date: 24-JUL-2007 Status: PUBLISHED
Changing the database character set to (AL32)UTF8
=================================================
When changing an Oracle Applications Database:
Please see the following note for Oracle Applications database
Note 124721.1 Migrating an Applications Installation to a New Character Set
If you have any doubt log an Oracle Applications TAR for assistance.
It might be useful to read this note even when using Oracle Applications,
since it explains what to do with "lossy" and "truncation" in the csscan output.
Scope:
You can't simply use "ALTER DATABASE CHARACTER SET" to go from WE8ISO8859P1 or
WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8 because (AL32)UTF8 is not a
binary superset of any of these character sets.
You will run into ORA-12712 or ORA-12710 because the code points for the
"extended ASCII" characters are different between these 3 character sets
and (AL32)UTF8.
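To see this concretely, the differing byte values can be inspected with DUMP and CONVERT (a sketch; the Euro sign is just one example, and the literal below assumes the client session can enter it correctly):

```sql
-- Sketch: DUMP shows the raw bytes a character set uses for the Euro sign.
-- WE8MSWIN1252 stores it as the single byte 0x80, while AL32UTF8 stores it
-- as 0xE2 0x82 0xAC - which is why a plain ALTER DATABASE CHARACTER SET
-- cannot convert between them.
SELECT DUMP(CONVERT('€', 'WE8MSWIN1252'), 16) AS win1252_bytes,
       DUMP(CONVERT('€', 'AL32UTF8'), 16)     AS utf8_bytes
FROM dual;
```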
This note will describe a method of still using a
"ALTER DATABASE CHARACTER SET" in a limited way.
Note that we strongly recommend to use the SAME flow when doing a full
export / import.
The choice between using a FULL exp/imp and a PARTIAL exp/imp is made in point 7)
DO NOT USE THIS NOTE WITH ANY OTHER CHARACTERSETS
WITHOUT CHECKING THIS WITH ORACLE SUPPORT
THIS NOTE IS SPECIFIC TO CHANGING:
FROM: WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252
TO: AL32UTF8 or UTF8
AL32UTF8 and UTF8 are both Unicode character sets in the Oracle database.
UTF8 encodes Unicode version 3.0 and will remain like that.
AL32UTF8 is kept up to date with the Unicode standard and encodes the Unicode
standards 3.0 (in database 9.0), 3.1 (database 9.2) or 3.2 (database 10g).
For the purposes of this note we shall only use AL32UTF8 from here on forward,
you can substitute that for UTF8 without any modifications.
If you use 8i or lower clients please have a look at
Note 237593.1 Problems connecting to AL32UTF8 databases from older versions (8i and lower)
WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252 are the 3 main character sets that
are used to store Western European or English/American data in.
All standard ASCII characters that are used for English/American do not have to
be converted into AL32UTF8 - they are the same in AL32UTF8. However, all other
characters, like accented characters, the Euro sign, MS "smart quotes", etc.
etc., have a different code point in AL32UTF8.
That means that if you make extensive use of these types of characters the
preferred way of changing to AL32UTF8 would be to export the entire database and
import the data into a new AL32UTF8 database.
However, if you mainly use standard ASCII characters and not a lot else (for
example if you only store English text, maybe with some Euro signs or smart
quotes here and there), then it could be a lot quicker to proceed with this
method.
Please DO read in any case before going to UTF8 this note:
Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
and consider to use CHAR semantics if on 9i or higher:
Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
It's best to change the tables to CHAR semantics before the change
to UTF8.
This procedure is valid for Oracle 8i, 9i and 10g.
Note:
* If you are on 9i please make sure you are at least on Patch 9204, see
Note 250802.1 Changing character set takes a very long time and uses lots of rollback space
* if you have any function-based indexes on columns using CHAR length semantics
then these have to be removed and re-created after the character set has
been changed. Failure to do so will result in ORA-604 / ORA-2262 /ORA-904
when the "alter database character set" statement is used in step 4.
Actions to take:
1) install the csscan tool.
1A)For 10g use the csscan 2.x found in $ORACLE_HOME/bin; no need to install a newer version.
Go to 1C)
1B)For 9.2 and lower:
Please DO install version 1.2 or higher from TechNet for your version:
http://technet.oracle.com/software/tech/globalization/content.html
Copy all scripts and executables found in the zip file you downloaded
to your ORACLE_HOME, overwriting the old versions.
Go to 1C).
Note: do NOT use the CSSCAN of a 10g installation for 9i/8i!
1C)Run csminst.sql using these commands and SQL statements:
cd $ORACLE_HOME/rdbms/admin
set oracle_sid=<your SID>
sqlplus "sys as sysdba"
SQL>set TERMOUT ON
SQL>set ECHO ON
SQL>spool csminst.log
SQL> START csminst.sql
Check the csminst.log for errors.
If you get when running CSSCAN the error
"Character set migrate utility schema not compatible."
then
1ca) either you are starting the old executable; please overwrite all old files with the files
from the newer version from TechNet (1.2 has more files than some older versions; that's normal),
1cb) or check your PATH; you are not starting csscan from this ORACLE_HOME,
1cc) or you have not run the csminst.sql from the newer version from TechNet.
More info is in Note 123670.1 Use Scanner Utility before Altering the Database Character Set
Please, make sure you use/install csscan version 1.2 .
2) Check if you have no invalid code points in the current character set:
Run csscan with the following syntax:
csscan FULL=Y FROMCHAR=<existing database character set> TOCHAR=<existing database character set> LOG=WE8check CAPTURE=Y ARRAY=1000000 PROCESS=2
Always run CSSCAN with 'sys as sysdba'
This will create 3 files :
WE8check.out a log of the output of csscan
WE8check.txt a Database Scan Summary Report
WE8check.err contains the rowid's of the rows reported in WE8check.txt
At this moment we are just checking that all data is stored correctly in the
current character set. Because you've entered the TO and FROM character sets as
the same you will not have any "Convertible" or "Truncation" data.
If all the data in the database is stored correctly at the moment then there
should only be "Changeless" data.
If there is any "Lossy" data then those rows contain code points that are not
currently stored correctly and they should be cleared up before you can continue
with the steps in this note. Please see the following note for clearing up any
"Lossy" data:
Note 225938.1 Database Character Set Healthcheck
Only if ALL data in WE8check.txt is reported as "Changeless" is it safe to
proceed to point 3)
NOTE:
if you have a WE8ISO8859P1 database and lossy data, then changing WE8ISO8859P1 to
WE8MSWIN1252 will most likely solve the lossy data.
Why ? this is explained in
Note 252352.1 Euro Symbol Turns up as Upside-Down Questionmark
Do first a
csscan FULL=Y FROMCHAR=WE8MSWIN1252 TOCHAR=WE8MSWIN1252 LOG=1252check CAPTURE=Y ARRAY=1000000 PROCESS=2
Always run CSSCAN with 'sys as sysdba'
For 9i, 8i:
Only if ALL data in 1252check.txt is reported as "Changeless" is it safe to
proceed to the next point. If not, log a tar and provide the 3 generated files.
Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
2.1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
If you are using RAC see
Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
2.2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
SPOOL Nswitch.log
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
ALTER DATABASE CHARACTER SET WE8MSWIN1252;
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
SHUTDOWN;
The extra restart/shutdown is necessary in Oracle8(i) because of a SGA
initialization bug which is fixed in Oracle9i.
-- an ALTER DATABASE typically takes only a few minutes or less;
-- it depends on the number of columns in the database, not the amount of data
2.3. Restore the parallel_server parameter in INIT.ORA, if necessary.
2.4. STARTUP;
Now go to point 3) of this note. Of course your database is then WE8MSWIN1252, so
you need to replace <existing database character set> with WE8MSWIN1252 from now on.
For 10g and up:
When using CSSCAN 2.x (10g database) you should see in 1252check.txt this:
All character type data in the data dictionary remain the same in the new character set
All character type application data remain the same in the new character set
and
The data dictionary can be safely migrated using the CSALTER script
IF you see this then you need first to go to WE8MSWIN1252
If not, log a tar and provide all 3 generated files.
Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
Then you do in sqlplus connected as "/ AS SYSDBA":
-- check if you are using spfile
sho parameter pfile
-- if this shows "spfile" then you are using an spfile
-- in that case note the current values of
sho parameter job_queue_processes
sho parameter aq_tm_processes
-- (this is Bug 6005344 fixed in 11g )
-- then do
shutdown immediate
startup restrict
SPOOL Nswitch.log
@@?\rdbms\admin\csalter.plb
-- CSALTER will ask for confirmation - do not copy/paste all the actions at one time
-- sample Csalter output:
-- 3 rows created.
-- This script will update the content of the Oracle Data Dictionary.
-- Please ensure you have a full backup before initiating this procedure.
-- Would you like to proceed (Y/N)?y
-- old 6: if (UPPER('&conf') <> 'Y') then
-- New 6: if (UPPER('y') <> 'Y') then
-- Checking data validility...
-- begin converting system objects
-- PL/SQL procedure successfully completed.
-- Alter the database character set...
-- CSALTER operation completed, please restart database
-- PL/SQL procedure successfully completed.
-- Procedure dropped.
-- if you are using spfile then you need to also
-- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
-- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
shutdown
startup
and the 10g database will be WE8MSWIN1252
Now go to point 3) of this note. Of course your database is then WE8MSWIN1252, so
you need to replace <existing database character set> with WE8MSWIN1252 from now on.
3) Check which rows contain data for which the code point will change
Run csscan with the following syntax:
csscan FULL=Y FROMCHAR=<your database character set> TOCHAR=AL32UTF8 LOG=WE8TOUTF8 CAPTURE=Y ARRAY=1000000 PROCESS=2
Always run CSSCAN with 'sys as sysdba'
This will create 3 files :
WE8TOUTF8.out a log of the output of csscan
WE8TOUTF8.txt a Database Scan Summary Report
WE8TOUTF8.err contains the rowid's of the rows reported in WE8TOUTF8.txt
+ You should have NO entries under Lossy, because they should have been filtered
out in step 2), if you have data under Lossy then please redo step 2).
+ If you have any entries under Truncation then go to step 4)
+ If you only have entries for Convertible (and Changeless) then solve those in
step 5).
+ If you have NO entries under Convertible, Truncation or Lossy,
and all data is reported as "Changeless" then proceed to step 6).
4) If you have Truncation entries.
Whichever way you migrate from WE8(...) to AL32UTF8, you will always have to
solve the entries under Truncation.
Standard ASCII characters require 1 byte of storage space in WE8(...) and
in AL32UTF8, however, other characters (like accented characters and the Euro
sign) require only 1 byte of storage space in WE8(...), but they require 2 or
more bytes of space in AL32UTF8.
That means that the total amount of space needed to store a string can exceed
the defined column size.
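The expansion can be seen with a quick query (a sketch, assuming it is run in an AL32UTF8 database; the string is just an example):

```sql
-- LENGTH counts characters, LENGTHB counts bytes.
-- In an AL32UTF8 database the accented character occupies 2 bytes, so the
-- byte count exceeds the character count and can overflow a column defined
-- with BYTE length semantics.
SELECT LENGTH('Münze')  AS char_count,
       LENGTHB('Münze') AS byte_count
FROM dual;
```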
For more information about this see:
Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
"Truncation" data is always also "Convertible" data, which means that whatever
else you do, these rows have to be exported before the character set is changed
and re-imported after the character set has changed. If you proceed with that
without dealing with the truncation issue then the import will fail on these
columns because the size of the data exceeds the maximum size of the column.
So these truncation issues will always require some work, there are a number of
ways to deal with them:
A) Update these rows in the source database so that they contain less data
B) Update the table definition in the source database so that it can contain
longer data. You can do this by either making the column larger, or by using
CHAR length semantics instead of BYTE length semantics (only possible in
Oracle9i).
C) Pre-create the table before the import so that it can contain 'longer' data.
Again you have a choice between simply making it larger, or switching from BYTE
to CHAR length semantics.
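Option B could look like this (a sketch; the table and column names are hypothetical, borrowed from the SCOTT.TESTUTF8 example below):

```sql
-- Either widen the column in bytes ...
ALTER TABLE scott.testutf8 MODIFY (item_name VARCHAR2(160));
-- ... or (Oracle9i and up) switch it to CHAR length semantics, so 80 means
-- 80 characters regardless of how many bytes each character needs:
ALTER TABLE scott.testutf8 MODIFY (item_name VARCHAR2(80 CHAR));
```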
If you've chosen option A or B then please rerun csscan to make sure there is no
Truncation data left. If that also means there is no Convertible data left then
proceed to step 6), otherwise proceed to step 5).
To know how much the data expands simply check the csscan output.
You can find that in the .err file as "Max Post Conversion Data Size".
For example, check in the .txt file which table has "Truncation";
let's assume you have a row there that says
-- snip from WE8TOUTF8.txt
[Distribution of Convertible, Truncated and Lossy Data by Table]
USER.TABLE Convertible Truncation Lossy
SCOTT.TESTUTF8 69 6 0
-- snip from WE8TOUTF8.txt
then look in the .err file for "TESTUTF8" until the
"Max Post Conversion Data Size" is bigger than the column size for that table.
User : SCOTT
Table : TESTUTF8
Column: ITEM_NAME
Type : VARCHAR2(80)
Number of Exceptions : 6
Max Post Conversion Data Size: 81
-> the max size after going to UTF8 will be 81 bytes for this column.
5) If you have Convertible entries.
This is where you have to make a choice whether or not you want to continue
on this path or if it's simpler to do a complete export/import in the
traditional way of changing character sets.
All the data that is marked as Convertible needs to be exported and then
re-imported after the character set has changed.
6) check if you have functional indexes on CHAR based columns and purge the RECYCLEBIN.
select OWNER, INDEX_NAME , INDEX_TYPE, TABLE_OWNER, TABLE_NAME, STATUS,
FUNCIDX_STATUS from ALL_INDEXES where INDEX_TYPE not in
('NORMAL', 'BITMAP','IOT - TOP') and TABLE_NAME in (select unique
(table_name) from dba_tab_columns where char_used ='C');
if this gives rows back then the change will fail with
ORA-30556: functional index is defined on the column to be modified
if you have functional indexes on CHAR based columns you need to drop the
index and recreate it after the change; note that a disable will not be enough.
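For example (a sketch; the index name and definition are hypothetical):

```sql
-- Drop the functional index before the character set change ...
DROP INDEX scott.idx_item_name_upper;
-- ... and recreate it afterwards. ALTER INDEX ... DISABLE is not sufficient.
CREATE INDEX scott.idx_item_name_upper
    ON scott.testutf8 (UPPER(item_name));
```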
On 10g check ,while connected as sysdba, if there are objects in the recyclebin
SQL> show recyclebin
If so, also do a PURGE DBA_RECYCLEBIN; otherwise you will receive an ORA-38301 during CSALTER.
7) Choose on how to do the actual change
you have 2 choices now:
Option 1 - exp/imp the entire database and stop using the rest of this note.
a. Export the current entire database (with NLS_LANG set to <your old
database character set>)
b. Create a new database in the AL32UTF8 character set
c. Import all data into the new database (with NLS_LANG set to <your old database character set>)
d. The conversion is complete, do not continue with this note.
Note that you do need to deal with the truncation issues described in step 4), even
if you use the export/import method.
Option 2 - export only the convertible data and continue using this note.
For 9i and lower:
a. If you have "convertible" data for the sys objects SYS.METASTYLESHEET,
SYS.RULE$ or SYS.JOB$ then follow the following note for those objects:
Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
make sure to combine the next steps in the example script given in that note.
b. Export all the tables that csscan shows have convertible data
(make sure that the character set part of the NLS_LANG is set to the current
database character set during the export session)
c. Truncate those tables
d. Run csscan again to verify you only have "changeless" application data left
e. If this now reports only Changeless data then proceed to step 8), otherwise
do the same again for the rows you've missed out.
For 10g and up:
a. Export all the USER tables that csscan shows have convertible data
(make sure that the character set part of the NLS_LANG is set to the current
database character set during the export session)
b. Fix any "convertible" data in the SYS schema. Note that the 10g way to change
the character set (= the CSALTER script) will deal with any CLOB data in the
SYS schema. All "no 9i only" fixes in
Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
should NOT be done in 10g
c. Truncate the exported user tables.
d. Run csscan again to verify you only have "changeless" application data left
e. If this now reports only Changeless data then proceed to step 8), otherwise
do the same again for the rows you've missed out.
When using CSSCAN 2.x (10g database) you should see in WE8TOUTF8.txt this:
The data dictionary can be safely migrated using the CSALTER script
If you do NOT have this when working on a 10g system, CSALTER will NOT work; this
means you have missed something or not followed all steps in this note.
8) Perform the character set change:
Perform a backup of the database.
Check the backup.
Double-check the backup.
For 9i and below:
Then use the "alter database" command, this changes the current database
character set definition WITHOUT changing the actual stored data.
Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
If you are using RAC see
Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
SPOOL Nswitch.log
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8;
SHUTDOWN IMMEDIATE;
-- an ALTER DATABASE typically takes only a few minutes or less;
-- it depends on the number of columns in the database, not the amount of data
3. Restore the parallel_server parameter in INIT.ORA, if necessary.
4. STARTUP;
Without the INTERNAL_USE clause you get an ORA-12712: new character set must be a superset of old character set
WARNING WARNING WARNING
NEVER use "INTERNAL_USE" unless you did follow the guidelines STEP BY STEP
in this note and you have a good idea what you are doing.
NEVER use "INTERNAL_USE" to "fix" display problems; instead follow Note 225938.1
If you use the INTERNAL_USE clause on a database where there is data listed
as convertible without exporting that data then the data will be corrupted by
changing the database character set !
For 10g and up:
Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
Then you do in sqlplus connected as "/ AS SYSDBA":
-- check if you are using spfile
sho parameter pfile
-- if this "spfile" then you are using spfile
-- in that case note the
sho parameter job_queue_processes
sho parameter aq_tm_processes
-- (this is Bug 6005344 fixed in 11g )
-- then do
shutdown
startup restrict
SPOOL Nswitch.log
@@?\rdbms\admin\csalter.plb
-- CSALTER will ask for confirmation - do not copy/paste all the actions at one time
-- sample Csalter output:
-- 3 rows created.
-- This script will update the content of the Oracle Data Dictionary.
-- Please ensure you have a full backup before initiating this procedure.
-- Would you like to proceed (Y/N)?y
-- old 6: if (UPPER('&conf') <> 'Y') then
-- New 6: if (UPPER('y') <> 'Y') then
-- Checking data validility...
-- begin converting system objects
-- PL/SQL procedure successfully completed.
-- Alter the database character set...
-- CSALTER operation completed, please restart database
-- PL/SQL procedure successfully completed.
-- Procedure dropped.
-- if you are using spfile then you need to also
-- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
-- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
shutdown
startup
and the 10g database will be AL32UTF8
9) Reload the data pump packages after a change to AL32UTF8 / UTF8 in Oracle10
If you use Oracle10 then the datapump packages need to be reloaded after
a conversion to UTF8/AL32UTF8. In order to do this run the following 3
scripts from $ORACLE_HOME/rdbms/admin in sqlplus connected as "/ AS SYSDBA":
For 10.2.X:
catnodp.sql
catdph.sql
catdpb.sql
For 10.1.X:
catnodp.sql
catdp.sql
10) Reimporting the exported data:
If you exported any data in step 5) then you now need to reimport that data.
Make sure that the character set part of the NLS_LANG is still set to the
original database character set during the import session (just as it was during
the export session).
11) Verify the clients' NLS_LANG:
Make sure your clients are using the correct NLS_LANG setting.
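On the database side, the result of the migration can be verified with (a sketch):

```sql
-- Confirm the database and national character sets after the migration;
-- NLS_CHARACTERSET should now report AL32UTF8 (or UTF8).
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```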
Regards,
Chotu,
Bangalore