Characterset issues

Hello all,
I'm having some persistent character set issues. I'm a PHP programmer, not a ColdFusion programmer, and I've been working on a site that uses MySQL 5, Fusebox 3 as the framework, and a lot of tailor-made ColdFusion scripts, but I can't get this problem under control. I've tried specifying ISO-8859-1 on all pages using cfprocessingdirective and <cfcontent type="text/html;charset=ISO-8859-1"> in Application.cfm, but no result.
I understand that ColdFusion uses UTF-8 by default, so I've also tried getting rid of all the ISO-8859-1 statements to see if that would solve my problems, but again no result.
I've fiddled with some database collation and character set parameters, but to no avail.
You can see the result of rendering 'Iñtërnâtiônàlizætiøn' here.
The weird thing is, when I build my own ColdFusion page with nothing more than a query, a cfoutput and optionally a character set, it displays correctly.
Could anyone point me in the right direction?
Thanks!

> in that case, look to the db and/or db driver.
Well, I've installed ColdFusion on a local box and tried to emulate my web host's environment. I first installed that JDBC driver you mentioned, which worked like a charm (even without the connection string). My web host says they use the 'standard' driver, which I guess is the 'mysql (3.x)' option. I've done some reading on why you get errors when trying to use this driver with newer MySQL databases (it has to do with an authentication incompatibility: you have to run an ALTER statement on the database user and convert the password using OLD_PASSWORD('password')) and managed to make a working DSN using this 'standard' driver, and guess what? The same character set issues appeared. After putting in the connection string, the problems disappeared, as you said.
Now the only problem that remains is my web host, who claims that they haven't got a connection string parameter in ColdFusion Administrator....
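For reference, the character-set parameters that typically go into that connection string field for the MySQL JDBC driver look something like the line below. This is only a sketch based on common Connector/J settings, not something the host confirmed, and the exact parameter names depend on the driver version:
useUnicode=true&characterEncoding=UTF-8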
Wish I could use my local box, which is running perfectly now, as a web server!
Thanks for your help!

Similar Messages

  • Migration Characterset Issue

    Hi,
    I am involved in a Data Migration project.
    The database version I am using is Oracle 10.2.0.5 and the NLS_CHARACTERSET is set to UTF8 on the database.
    The requirement is to provide the data extract in Unicode for the new database.
    I have set the NLS_LANG variable to AL32UTF8 on my machine.
    For extracting the data from all the tables except one, I did not face any problems.
    But for one table, while exporting the data, I got the errors ORA-29275 and ORA-12703 while trying to convert the strings.
    I was able to address this issue by changing the character set on my machine to UTF8.
    My questions are:
    1) Will the extract files pulled out in the AL32UTF8 character set pose a problem to the new database (if it is in UTF8) where the data has to be migrated?
    2) I got to know that Oracle's UTF8 is not exactly "Unicode" as per the standards, so how do I handle the data set in UTF8 if it is to be provided in Unicode?
    thanks in advance
    deb

    It would seem this post should have been posted in {forum:id=50}.
    1. How exactly are you "extracting the data"?
    In general, when using the export tool, you should use a client character set that matches the database, so as to avoid or delay conversion.
    (The Oracle character sets UTF8 and AL32UTF8 are not the same.)
    But ORA-29275 possibly indicates some problem with the source data, so it is best to investigate the issue.
    Why does the new database use UTF8 rather than the more current AL32UTF8? The former, UTF8, should be used only when (very old) client applications require it.
    2. Where did you read this?
    See Globalization Support guide about Unicode versions and Oracle character sets.
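    One way to investigate before exporting, as a minimal sketch (the table and column names here are hypothetical), is to look at the raw bytes stored in the suspect column:
    -- 1016 = hexadecimal byte codes plus the character set name
    SELECT suspect_column,
           DUMP(suspect_column, 1016) AS stored_bytes
    FROM   suspect_table
    WHERE  ROWNUM <= 10;
    Bytes that do not form valid sequences in the database character set point to data that was loaded incorrectly in the first place, which is what ORA-29275 usually signals.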

  • Forms and reports server characterset aix

    My database is in UTF8 mode and is displaying all characters correctly.
    However, when I run a report using the Forms and Reports server, it uses the Western European character set.
    I have tried re-installing the Forms and Reports server in order to fix this character set issue, but I am not prompted for a character set at installation time, so the server is still set to the Western European character set and some characters are not displayed (they show as inverted question marks).
    I have tried specifying UTF8 for NLS_LANG in reports.sh, but I only get garbage displayed.
    Can someone point me in the right direction, please?

    This issue has now been resolved. Changing the NLS_LANG setting in reports.sh to POLISH_POLAND.UTF8 was correct. The issue then was that Greek symbols were displayed. This was a font issue and was resolved by amending the file
    $ORACLE_HOME/guicommon/tk/admin/AFM/fontname (where fontname is the font you are using, e.g. Courier in my case).
    The change was to amend the line:
    EncodingSchemeProperty AdobeStandardEncoding
    to read
    EncodingSchemeProperty FontSpecific
    This fixed the problem for me, but Oracle also advises amending the file $ORACLE_HOME/guicommon/tk/admin/Tk2Motif.rgb:
    e.g. change the line below from:
    !Tk2Motif*fontMapCs: iso8859-2=EE8ISO8859P2
    to:
    Tk2Motif*fontMapCs: iso8859-2=UTF8 (or whatever character set you are using).
    Note that the exclamation mark has been removed.
    Read Oracle Note 300416.1.

  • Oracle Reports 10.1.2.3 output shows question marks, parameter form shows correct Arabic

    Hi
    Following is our environment:
    Oracle Application Server 10.1.2.3.0
    Microsoft Windows 2003 Enterprise Edition Service Pack 2
    Oracle Database 11.2.0.2.0
    NLS_LANG AMERICAN_AMERICA.AR8MSWIN1256
    When the client runs the report in the browser (IE), the parameter form appears fine with Arabic characters.
    Entering the parameters in Arabic also looks fine and the characters display in Arabic, but when the report is generated, the output shows the inserted parameters as ????
    On the server, the regional settings are all Arabic. The registry also has an NLS_LANG setting of AMERICAN_AMERICA.AR8MSWIN1256.
    The report output is the same for both HTML and PDF formats. For PDF subsetting, the uifont.ali file has been modified, but the issue does not seem to be PDF-related, since the HTML-format reports also show ???
    It is a 3-tier setup, hence there is no question of needing NLS_LANG registry settings on the client PCs, which are running Windows with Arabic regional settings that match the server.
    Interestingly, the same report runs fine from a different application server with the same architecture. The other application server has the exact same version of OAS. The database supports Arabic and is configured as such. I tried comparing the various configuration files on both servers, but I cannot find where I should change something for the reports to show correct Arabic.
    SELECT * FROM NLS_PARAMETERS
    WHERE parameter IN ('NLS_CHARACTERSET','NLS_LANGUAGE');
    NLS_LANGUAGE        AMERICAN
    NLS_CHARACTERSET    AR8MSWIN1256
    I am writing this after doing quite some research but seems like I am unable to find a solution to this.
    Any help will be appreciated.

    Thank you Paul for your reply.
    I think you misunderstood me. I understand that it is all volunteer work and I never said I was looking for "quick" or "sure" response. I did not even use the words IMMEDIATE or URGENT in my post. I found it strange because I personally believe that there are many volunteers providing great support to others and yet my post somehow went unnoticed. In my personal opinion, however, if you do not wish to reply then you should just ignore rather than being rude. No one can force anyone to do the good work that people are doing here at oracle forums and they are all doing this out of choice. Let me also assure you that me and people like myself really appreciate their efforts.
    Having said that, I wish you had taken the time to read about my issue. There is in fact a registry setting that specifies NLS_LANG, and it is already set under both the Infrastructure and Middle Tier homes in the registry. I mentioned it earlier. The value is AMERICAN_AMERICA.AR8MSWIN1256, and I believe this is correct for Arabic characters.
    What is confusing for me is the fact that if browser was not capable of showing Arabic characters then the static Arabic words in the reports layout (field name for instance) would also show as garbage or question marks. They appear to be fine. Since we are just passing some parameters in Arabic language and not saving anything in the database, the DB characterset does not come into play for now. Although the DB characterset is also set to store Arabic data. We are displaying the parameters that we are passing through the parameter form in the output of the report and this is where we see question marks (????).
    Finally, if it was a browser issue or a DB characterset issue, then in my limited knowledge, I believe that the report that we are running using the other Application Server that is pointing to the same database should also show the same behaviour. That is not the case as it displays the same report perfectly. Client machine is the same in both cases using Windows 7 and IE as browser.
    http://appserver1/reports/rwservlet?admin/myreport           (connecting to db1)    works fine
    http://appserver2/reports/rwservlet?admin/myreport          (connecting to db1)     show question marks
    The above URLs are examples. The point I am trying to make is that the issue has to be with the new application server, and it cannot be the registry settings, as I have double-checked the entry and it exists in all Oracle homes, i.e. Infrastructure and Middle Tier. So maybe it is some configuration file setting that I am missing.
    Any help will be appreciated.

  • Java.util.zip.ZipFile.entries() shows only 99 files but zip file is fine.

    Hi,
    I have a weird issue with java.util.zip.ZipFile.
    Code as simple as:
    ZipFile file = new ZipFile("my.zip");
    System.out.println(file.size());   // size() returns the number of entries in the archive
    For this particular zip file, it says 99 files found, but the zip contains more than 60,000 files. I tried the zip with the unzip and zip utilities and the zip file checks out fine. I even unzipped the contents and zipped them up all over again, just to eliminate the chance of corruption while the zip was being transferred over the network.
    The same program works fine with another zip containing more or less the same number of files and prints 63730.
    Any ideas? This can't possibly be related to the type of files the zips contain, right? In any case, the contents of both zips are text/XML files.
    Any help would be greatly appreciated.
    Regards,
    ZiroFrequency

    I know it's a problem with this particular zip. But what's interesting is that "unzip" can easily open and verify the zip and claims that it is valid.
    As I wrote earlier, I unzipped the file and zipped the contents up again into a new zip, but Java still can't count the contents correctly.
    So I am thinking it has something to do with the "contents" of the XMLs inside the zip? (character set issues?)
    There are no exceptions thrown and no errors anywhere :(
    I basically need to pinpoint the issue so that I can have it corrected upstream, as this zip file processing is an ongoing process and I need to resolve it not just once but for the periodic executions.
    Hope this helps explain the issue.

  • UTF8 support in Oracle Lite

    It looks to me like Oracle Lite doesn't support the UTF8 character set for storing data. Is this correct?
    I have an Oracle 8.1.7 database using UTF8 to store English, Thai, Chinese and Filipino data, and I want to synchronise that with Oracle Lite on Windows 2000 clients. The sync seems to work OK, but any non-English characters are lost in a character set conversion (they display as "?").
    Regards
    Steve

    I'm a bit confused by your reply. Can Oracle Lite store data in a unicode characterset such as UTF8? It looks to me like the "UTF-8 support" is limited to the drivers, so that it can load and extract utf-8 data, but not store it. From section 2.2.1 of the release notes:
    "Oracle Lite Database is NOT a NLS component. In order to reduce the kernel size, it is built for each language which supports native character sets for Windows. Which means, each language has each kernel. Here are the character sets supported by this release:
    - Chinese: MS936 CodePage (Simplified Chinese GBK, ZHS)
    - Taiwanese: MS950 CodePage (Traditional Chinese BIG5, ZHT)
    - Japanese: MS932 CodePage (Japanese Shift-JIS, JA)
    - Korean: MS949 CodePage (Korean, Ko)
    The database kernel for each language in this list only supports its corresponding character set. Other multibyte character sets are not supported."
    Also the documentation on the DBCharEncoding parameter you mention suggests that it only affects the UTF translation for java programs. Section A.2.3:
    "... Specifies the UTF translation performed by Oracle Lite. If set to NATIVE, no UTF translation is performed. If set to UTF8, UTF translation is performed. If this parameter is not specified, the default is UTF8. This applies to Java programs only."
    I've tried playing with these parameters, as well as changing the NLS_LANG parameter on the client, for the mobile server and for the Oracle Lite home, all to no avail. I'm still losing the non-English data during synchronisation, and it does look like it's being lost in a character set conversion rather than just being garbled, as each Thai character is being replaced by the correct number of "?"s. As an example, the Thai string "บริษัท บราเดอร์ คอมเมอร์เชี่ยล (ประเทศไทย) จำกัด" on the 8.1.7 database server appears as "?????? ???????? ?????????????? (?????????) ?????" in the Oracle Lite database.
    Am I missing something here? If I can get this data synchronising correctly, then Oracle Lite looks like it will support all our requirements, so any assistance would be greatly appreciated. (Should I post this to the globalization forum, or does that focus only on Oracle's enterprise editions?)
    BTW, thanks for the info on the sorting. Obviously the character set issue is the more fundamental problem at this stage, but if we can fix it, it's good to know about the sorting abilities.

  • XML parsing failed - Works in Oracle Developer fails in SQLplus

    Hello,
    I am trying to figure out what could possibly be causing my XML parsing to fail from SQL*Plus. Basically, the exact same SQL script fails with the following error when run from SQL*Plus but works fine in Oracle Developer.
    Now, I have read a number of posts talking about NLS_LANG and character set issues, but these are all running on the same machine.
    Error Message Received:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00222: error received from SAX callback function
    ORA-06512: at "SYS.DBMS_XMLSTORE", line 78
    ORA-06512: at line 53
    Exact SQL Script
    DECLARE
    insCtx DBMS_XMLStore.ctxType;
    rows NUMBER;
    xmldoc CLOB :=
    '<DepotData DepotCode="B106">
    <tblPoint_of_Call>
    <DEPOTCODE>B106</DEPOTCODE>
    <MASTERKEY>2077</MASTERKEY>
    <ADDR_ID>45286159</ADDR_ID>
    <ADDR_MAIL_ID>45286160</ADDR_MAIL_ID>
    <STREETBLOCK_KEY>45</STREETBLOCK_KEY>
    <PC_ID>19603417</PC_ID>
    <ST_ID>40034414</ST_ID>
    <BUILDING_KEY>0</BUILDING_KEY>
    <ADDR_NUM>505</ADDR_NUM>
    <POCTYPE>154</POCTYPE>
    <RECEPTACLETYPE>964</RECEPTACLETYPE>
    <TOTALPOCS>1</TOTALPOCS>
    <TOTALPOCSOCCUPIED>1</TOTALPOCSOCCUPIED>
    <TOTALADMAIL>1</TOTALADMAIL>
    <OLD038POC>0</OLD038POC>
    <OCCUPIED>1</OCCUPIED>
    <ADMAIL>1</ADMAIL>
    <BAGGER>0</BAGGER>
    <SORTED>1</SORTED>
    <DELIVERED>1</DELIVERED>
    <AMOBLIGATORY>0</AMOBLIGATORY>
    <POCICANDIDATE>0</POCICANDIDATE>
    <DELIVERYSEQUENCE>10</DELIVERYSEQUENCE>
    <NUMSEPARATIONS>0.50</NUMSEPARATIONS>
    <SEPSGROUPID>0</SEPSGROUPID>
    <DELETED>0</DELETED>
    <CARDID>0</CARDID>
    <CARD>0</CARD>
    <FORCECARD>0</FORCECARD>
    <DNC>0</DNC>
    <PRINTED>0</PRINTED>
    <A12CARD>0</A12CARD>
    <EXTRACARDS>0</EXTRACARDS>
    <DSOUPDATECODE>0</DSOUPDATECODE>
    <TRANSACTION_TYPE>0</TRANSACTION_TYPE>
    <UPDATE_TYPE>0</UPDATE_TYPE>
    <SORTSEQUENCE>10</SORTSEQUENCE>
    <BUILDINGBASE>0</BUILDINGBASE>
    </tblPoint_of_Call>
    </DepotData>';
    BEGIN
    insCtx := DBMS_XMLStore.newContext('AIMPRIME.TBLPOINT_OF_CALL'); -- context for the target table
    -- each <tblPoint_of_Call> element is treated as one row to insert
    DBMS_XMLStore.setRowTag(insCtx,'tblPoint_of_Call');
    rows := DBMS_XMLStore.insertXML(insCtx, xmlDoc);
    -- Close the context
    DBMS_XMLStore.closeContext(insCtx);
    END;
    System Information:
    Oracle XE on Windows XP. Everything is running on the same machine: SQL*Plus, Oracle Developer and the database are all on the same machine.
    If you need any more information, please let me know, along with how to get the information you want. I am not a DBA, so I don't know how to do a lot of things with Oracle, but I do know that this SQL should be working.

    NLS settings... read up on them.
    "On the same machine" — yes, and...?
    (1) c:\> set NLS_LANG=.....
    (2) c:\> sqlplus
    -- sqlplus now has the active NLS settings from (1)
    (3) you now connect to a database
    - this database has NLS settings that were set via an spfile or pfile
    - or, as in the case of the character set, were defined and set once, during the creation of the database.
    So...
    Your NLS settings in (1) can be different from the ones that are active in the database (3).
    Your NLS settings can be set on Windows at 5 levels.
    In the registry under /software/oracle:
    1) on HKEY_LOCAL_MACHINE
    2) on HKEY_CURRENT_USER
    In the environment settings in Windows under Control Panel, System, Advanced, Environment Variables:
    3) at system level
    4) at user level
    5) at command level, by explicitly setting the parameter before starting the actual program
    So, in all, you can have the case where the NLS settings of sqlplus, Oracle Developer and the database are all different.
    http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
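    To see the mismatch concretely, a minimal sketch (standard Oracle dictionary views, nothing specific to this poster's setup) is to compare what the session inherited from NLS_LANG with what the database itself uses:
    -- what the current client session is using (driven by NLS_LANG)
    SELECT parameter, value FROM nls_session_parameters
    WHERE  parameter IN ('NLS_LANGUAGE', 'NLS_TERRITORY');
    -- what the database was created with (fixed at creation time)
    SELECT parameter, value FROM nls_database_parameters
    WHERE  parameter = 'NLS_CHARACTERSET';
    Running these from both SQL*Plus and Oracle Developer shows whether the two tools really are connecting with different NLS settings.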

  • Changes in application after import- Urgent help!!

    Hi All,
    We have a QA HTML DB instance and a production HTML DB instance. When we export applications from the QA instance and import them onto the production instance, we are seeing some changes in the application. Some of the changes are:
    1. A popup LOV is not getting initialized; we get the error "Unable to initialize query". The same popup LOV works in the QA instance. Also, if I run the popup LOV's query in the SQL Command processor of the production instance, I am able to see the records.
    2. A "blank" entry is getting inserted in a select list. I have not enabled showing a NULL value in the attributes of the select list, and I cannot see this "blank" entry in QA.
    3. Checkboxes in a region are realigned.
    The only difference between the QA and production instances is the character set. QA is on the WE8ISO8859P1 NLS_CHARACTERSET and production is on the AL32UTF8 NLS_CHARACTERSET.
    When we export the application we see that it is exported in WE8ISO8859P1, so when we import we have tried both WE8ISO8859P1 and AL32UTF8, but we are still seeing these changes.
    Is this related to the character set difference, or something else? We cannot change the character set of the QA instance to AL32UTF8 as of now, and if it is a character set issue, is there a workaround?
    Any help is appreciated.
    Thanks,
    Swaroop

    Thanks for your reply.
    Yes, both databases are running the same version of HTML DB, i.e. 2.0.0.00.49.
    The Oracle versions are different: QA - 10.1.0.3.0, production - 10.2.0.1.0.
    The database objects are similar; the schema was exported from the QA database and imported into the production database. The data is also the same.
    -Swaroop

  • Getting ¿¿¿¿¿¿ while converting Raw data to Varchar2.

    Hi All,
    We are fetching data from an AS/400 database into Oracle using a dblink.
    It gives us RAW data, but when we try converting the RAW data to VARCHAR2 on the Oracle side, it returns ¿¿¿¿¿¿ instead of the actual data.
    We are using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production.
    We used the following SQL:
    select UTL_RAW.CAST_TO_VARCHAR2(ordno) from amflib6.pomast@as400_pub
    OUTPUT: ¿¿¿¿¿¿
    We also tried to run it without the dblink, as follows, but still did not get the required data.
    select UTL_RAW.CAST_TO_VARCHAR2('4DAC') from dual
    OUTPUT: M¿
    Is it something to do with installed fonts, or something else? Please help us out.
    Any help would be appreciated.
    Thanks and Regards
    Indu

    Hi,
    Probably a multibyte data vs. characterset issue.
    What is the outcome of:
    select * from nls_database_parameters where parameter like '%CHAR%';?
    Perhaps you'll find some similar problem here:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5783936214008#12758388562377
    And check the Globalisation Support Guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/toc.htm
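    Since AS/400 data is typically EBCDIC-encoded, the RAW bytes usually need a character set conversion before being cast to VARCHAR2. A minimal sketch, assuming the source column uses the common EBCDIC code page WE8EBCDIC37 and that the database character set is WE8MSWIN1252 (both are assumptions; verify the actual code pages on each side):
    select UTL_RAW.CAST_TO_VARCHAR2(
             UTL_RAW.CONVERT(ordno,
                             'AMERICAN_AMERICA.WE8MSWIN1252',   -- target: the Oracle database character set (assumed)
                             'AMERICAN_AMERICA.WE8EBCDIC37'))   -- source: the AS/400 EBCDIC code page (assumed)
    from amflib6.pomast@as400_pub;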

  • ORA-31011 Error. Help!

    Guys,
    I am running this simple query:
    SELECT XMLELEMENT("mydate", sysdate) FROM dual;
    Getting the below error:
    ERROR:
    ORA-31011: XML parsing failed
    Database version:
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
    PL/SQL Release 9.2.0.5.0 - Production
    CORE 9.2.0.6.0 Production
    TNS for HPUX: Version 9.2.0.5.0 - Production
    NLSRTL Version 9.2.0.5.0 - Production
    Is this because of encoding/character set issues? Any help to pinpoint what the problem is would be appreciated. Thank you!

    SQL> SELECT XMLELEMENT("mydate", sysdate) FROM dual;
    XMLELEMENT("MYDATE",SYSDATE)
    <mydate>02-JUL-07</mydate>
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE    9.2.0.8.0       Production
    TNS for HPUX: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production

  • OCL 4.5.1:  Mass Changes; error received when creating CDS

    I receive the following error when attempting to create a Response Mass Change with the following criteria:
    QG Name: XX
    Question Name: YY (Single CHAR variable with default text, only one question specified)
    Criteria:
    Response, (<XX.YY$F> like '%±%')
    Error:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "RXC.RXCMCMCS", line 30
    ORA-06512: at "RXC.RXCMCCRTCDS, line 1107
    ORA-06512: at "RXC.RXCMCCRTCDS, line 1477
    Any help is greatly appreciated.

    Thanks Satish,
    It seemed that this problem was going to go away on its own as we ceased to need the mass change. However, it has arisen again.
    I have no problem creating a similar set of criteria using responses. It is only when the special character is included in the filter that the error appears. It is somewhat odd, as the character renders correctly in other areas of the interface; however, I begin to wonder if it isn't still fundamentally a character set issue.
    Despite the efficiency of using the special character in the SQL statement, I think I will need to simply define the change in another, probably more laborious, manner to correct the problem.
    Any further ideas are welcome.
    Thanks again,
    JK

  • Issue with characterset setting in OWB flat file target

    An OWB mapping reads from the database source and writes to a flat file target on a Unix OS, but junk characters are displayed for non-English characters in the target file. The database table contains French, Spanish, German and Arabic characters. The NLS database parameter setting is AL32UTF8. The same setting has been applied to the OWB target as well, but junk values still appear for the non-English characters. Different character sets such as AL32UTF8, UTF8, UTF16 and US7ASCII have been tried in the OWB target setting, but nothing is working to remove the junk characters. Please suggest.
    Edited by: 943807 on 30 Jun, 2012 10:43 PM

    Please provide some input on the issue

  • Latin-1 Characterset Translation Issues

    I have an Oracle 9.2.0.5 database on OpenVMS 7.3-2. Currently, there are 101 incorrect Latin-1 to Latin-1 character set translations being loaded into my Oracle database (incorrect OS conversion tables when data is transferred from the source system).
    NLS DB parameters (nls parameters not listed are default values):
    nls_language string AMERICAN
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_territory string AMERICA
    example:
    Source Data : Résine de PolyPropylène
    Loaded in my database after OS translation: R©sine de PolyPropyl¬ne
    The invalid translations are happening externally to the Oracle database, at the OS level. My problem is that I need to correct all the invalid characters that are in my database. The database is currently 3.5 TB in size, so I have to do this in an efficient manner. I know what the before (invalid translation values in hex) and after (correct translation values in hex) values are.
    Is there a PL/SQL program or Oracle tool that can help me correct these values across millions of rows of data in Oracle (basically a character set translation program)?
    I have a C program that converts the characters if they are in a CSV file. The problem is that it takes too long to extract the data from Oracle into CSV files for tables with multiple millions of rows.
    Any help is appreciated.
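    One in-database approach to the bulk fix being asked about, sketched here under the assumption that the corrupt-to-correct byte mapping is known (the table, column and byte values below are hypothetical examples based on the "Résine" sample):
    -- TRANSLATE swaps each corrupt single-byte code for its correct Latin-1
    -- counterpart in one pass, so the fix runs as a set-based UPDATE instead of
    -- extracting to CSV. Extend the CHR() lists to cover all 101 mappings.
    UPDATE product_descriptions
    SET    description = TRANSLATE(description,
                                   CHR(169) || CHR(172),   -- bad bytes: '©', '¬'
                                   CHR(233) || CHR(232))   -- good bytes: 'é', 'è'
    WHERE  description LIKE '%' || CHR(169) || '%'
       OR  description LIKE '%' || CHR(172) || '%';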

    It looks like, during the insertion from ASP, the Latin-1 string has not been converted to UTF-8. Hence you are storing Latin-1 encoded data inside a UTF-8 database.
    "I thought it would automatically be handled by OO4O." True. Did you set the character set part of the NLS_LANG environment variable for the OO4O client to WE8ISO8859P1? If it was set to UTF8, then Oracle will assume that the data coming through the ASP page is already UTF-8, hence no conversion takes place.
    Also, maybe you should check the CODEPAGE directive and Charset property in your ASP?

  • Issue with database characterset

    Hi All,
    Database Version:11gR2
    A developer complained that they are trying to insert Japanese characters; the insert works, but on display it is not actually showing Japanese characters. So I checked the database character set:
    SELECT * FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
    PARAMETER          VALUE
    NLS_CHARACTERSET   WE8ISO8859P15
    For 'Japanese' I know that the required character set is AL32UTF8. The above one supports French, I believe.
    So here are my questions:
    1. Is there any way I can insert and view Japanese characters properly in a database with the above character set?
    2. Is there any way I can change the character set of the database (I believe we cannot, but just confirming it)?
    3. If I create an AL32UTF8 database and import into it from the above character set, will that work?
    Thanks,
    Arun

    Arun wrote:
    1. Is there any way I can insert and view Japanese characters properly in the above characterset database?
    Not in CHAR, VARCHAR2 or CLOB data types. This should be possible with NCHAR, NVARCHAR2 or NCLOB data types, which use the national character set.
    2. Is there any way I can change the characterset of the database (I believe we cannot, but just confirming it)?
    Yes, but this may require a lot of work;
    see the approaches in http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#CEGDHJFF.
    3. If I create AL32UTF8 and import the database from the above characterset, will that work?
    Yes: this should be easy with Data Pump.
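    A minimal sketch of the NCHAR-based workaround from point 1, using a hypothetical table and assuming the national character set is the default AL16UTF16:
    -- Japanese text goes into an NVARCHAR2 column, which uses the national
    -- character set rather than the WE8ISO8859P15 database character set.
    CREATE TABLE customer_names (
      id    NUMBER PRIMARY KEY,
      name  NVARCHAR2(100)
    );
    -- UNISTR builds the value from Unicode code points, so the script works
    -- even when the client or script file is not itself UTF-8 encoded.
    INSERT INTO customer_names (id, name)
    VALUES (1, UNISTR('\65E5\672C\8A9E'));   -- 日本語 ("Japanese")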

  • Characterset Al32UTF8 Issue

    Hi,
    Our DB is set with the character set "AL32UTF8" and we are storing multi-language data. Taking Greek data as a sample, the data is stored as below in the DB:
    First Name    Last Name
    ?S?????S      G?O?G??S
    Can anybody guide me on how to store this data properly?
    Thanks,
    Parthy.

    The correct forum for Globalization Support (NLS) issues is:
    Globalization Support
    Hint: use the dump() function to verify what's actually stored in the database.
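    That hint could look like this minimal sketch (the table and column names are hypothetical):
    -- 1016 = hexadecimal byte codes plus the character set name. If the Greek
    -- names were already corrupted on insert, the dump will show byte 3f ('?')
    -- instead of multi-byte UTF-8 sequences.
    SELECT first_name,
           DUMP(first_name, 1016) AS stored_bytes
    FROM   customers
    WHERE  ROWNUM <= 5;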
