Unicode data in non-UTF8 Oracle 8.1.7

Hi,
I have to migrate Unicode data from a UTF-8 Oracle 9.0.2 database to a non-UTF8 Oracle 8.1.7 database. The tables are small, and I am reading and writing the data using Java code. The columns that contain the Unicode data have been made NCHAR in Oracle 8.1.7.
When I try to insert the data, I get the error:
java.sql.SQLException: ORA-12704: character set mismatch
Can I have Unicode data stored in NCHAR columns in a non-UTF8 database?
Is there any documentation available on this?
Thanks,
Shipra

Check out the Oracle Unicode Database Support paper on OTN - http://technet.oracle.com/tech/globalization/content.html
Basically, NCHAR prior to Oracle9i cannot be Unicode. If you need to store Unicode data in 8.1.7, you need to use UTF8 as the database character set.
Nat
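A quick way to check which character set an existing instance uses (works on both 8.1.7 and 9i):
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
If NLS_CHARACTERSET does not come back as UTF8, the 8.1.7 database cannot hold Unicode data in CHAR or (pre-9i) NCHAR columns.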

Similar Messages

  • Move data from Non Partitioned Table to Partitioned Table

    Hi Friends,
    I am using Oracle 11.2.0.1 DB
    Please let me know how I can copy/move the data from a non-partitioned Oracle table to the newly created partitioned table.
    Regards,
    DB

    839396 wrote:
    Hi All,
    Created a partitioned table but am unable to copy the data from the non-partitioned table:
    SQL> select * from sales;
    SNO YEAR NAME
    1 01-JAN-11 jan2011
    1 01-FEB-11 feb2011
    1 01-JAN-12 jan2012
    1 01-FEB-12 feb2012
    1 01-JAN-13 jan2013
    1 01-FEB-13 feb2013
    Into which partition should the row immediately above ("01-FEB-13") be deposited?
    [oracle@localhost ~]$ oerr  ora 14400
    14400, 00000, "inserted partition key does not map to any partition"
    // *Cause:  An attempt was made to insert a record into, a Range or Composite
    //          Range object, with a concatenated partition key that is beyond
    //          the concatenated partition bound list of the last partition -OR-
    //          An attempt was made to insert a record into a List object with
    //          a partition key that did not match the literal values specified
    //          for any of the partitions.
    // *Action: Do not insert the key. Or, add a partition capable of accepting
    //          the key, Or add values matching the key to a partition specification>
    6 rows selected.
    SQL>
    SQL> create table sales_part(sno number(3),year date,name varchar2(10))
    2 partition by range(year)
    3 (
    4 partition p11 values less than (TO_DATE('01/JAN/2012','DD/MON/YYYY')),
    5 partition p12 values less than (TO_DATE('01/JAN/2013','DD/MON/YYYY'))
    6 );
    Table created.
    SQL> SELECT table_name,partition_name, num_rows FROM user_tab_partitions;
    TABLE_NAME PARTITION_NAME NUM_ROWS
    SALES_PART P11
    SALES_PART P12
    UNPAR_TABLE UNPAR_TABLE_12 776000
    UNPAR_TABLE UNPAR_TABLE_15 5000
    UNPAR_TABLE UNPAR_TABLE_MX 220000
    SQL>
    SQL> insert into sales_part select * from sales;
    insert into sales_part select * from sales
    ERROR at line 1:
    ORA-14400: inserted partition key does not map to any partition
    Regards,
    DB
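    The rows dated in 2013 fall beyond the bound of the last partition (P12), hence ORA-14400. One fix (a sketch, not taken from the thread) is to add a catch-all partition before re-running the insert:
    SQL> alter table sales_part add partition pmax values less than (maxvalue);
    SQL> insert into sales_part select * from sales;
    Alternatively, add a partition whose bound lies above the highest YEAR value, or use interval partitioning (available in your 11.2) so partitions are created automatically.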

  • Unable to show Unicode Data in Oracle RESTful Service JSON

    Hi Everyone.
    I have stored Unicode data in an Oracle database, and when I retrieve it with a SQL query it displays correctly. But when I retrieve the data as JSON using an Oracle RESTful web service (GET), it comes back with unknown characters, as shown below.
    next: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/?user=SURESH&pwd=123&page=1"
    items: [
    uri: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/41"
    stud_id: 41,
    stud_code: "1001",
    stud_name: "அப்துல் ஜப்பார்"
    My Database Setup as below:
    SQL> SELECT name,value$ FROM sys.props$;
    NAME                                                          VALUE$
    DICT.BASE                                                  2
    DEFAULT_TEMP_TABLESPACE               TEMP
    DEFAULT_PERMANENT_TABLESPACE     USERS
    DEFAULT_EDITION                                   ORA$BASE
    Flashback Timestamp TimeZone                    GMT
    TDE_MASTER_KEY_ID
    DBTIMEZONE                                        -07:00
    DST_UPGRADE_STATE                         NONE
    DST_PRIMARY_TT_VERSION               11
    DST_SECONDARY_TT_VERSION          0
    DEFAULT_TBS_TYPE                              SMALLFILE
    NLS_LANGUAGE                              AMERICAN
    NLS_TERRITORY                                   AMERICA
    NLS_CURRENCY                                   $
    NLS_ISO_CURRENCY                         AMERICA
    NLS_NUMERIC_CHARACTERS               .,
    NLS_CHARACTERSET                         AL32UTF8
    NLS_CALENDAR                                   GREGORIAN
    NLS_DATE_FORMAT                              DD-MON-RR
    NLS_DATE_LANGUAGE                         AMERICAN
    NLS_SORT                                        BINARY
    NLS_TIME_FORMAT                         HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT               DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT               HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT          DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY                    $
    NLS_COMP                                   BINARY
    NLS_LENGTH_SEMANTICS          BYTE
    NLS_NCHAR_CONV_EXCP          FALSE
    NLS_NCHAR_CHARACTERSET          AL16UTF16
    NLS_RDBMS_VERSION               11.2.0.1.0
    GLOBAL_DB_NAME                    MOBILE
    EXPORT_VIEWS_VERSION               8
    WORKLOAD_CAPTURE_MODE
    WORKLOAD_REPLAY_MODE
    SQL> select DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
      2  'NLS_LANGUAGE', 'LANGUAGE',
      3  'NLS_TERRITORY', 'TERRITORY') name,
      4  value from v$nls_parameters
      5  WHERE parameter IN ( 'NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');
    NAME          VALUE
    LANGUAGE      AMERICAN
    TERRITORY     AMERICA
    CHARACTER SET AL32UTF8
    Awaiting your solution.
    -- Abdul Jabbar

    Kumar,
    FTPing the PG.xml to the MDS folder will not move the page into the MDS directory.
    You have to import the file using XMLImporter.
    I understand you have done the import, but it was not successful.
    Could you please post the script you used to import the PG.xml,
    and the output you got when you ran it.
    You can refer to this URL for the scripts:
    http://apps2fusion.com/at/61-kv/331-oa-framework-scripts
    With regards,
    Kali.
    OSSI.

  • SAP XI 2, JDBC Inbound Adapter. Non Unicode data problem.

    Hi All,
    As far as I understand, the data from the inbound JDBC adapter in XI 2 should be in Unicode format. Is it possible to accept non-Unicode data using this adapter? If the answer is yes, how do I specify the correct data encoding?
    Best Regards.
    Victor.

    Hi,
    Yes, technically XMB is aware of the encoding, not the adapter, but when you read data from a file you can tell the adapter that the file is in a specific encoding, e.g. file.encoding = "ISO-8859-5".
    This value is used by the adapter to create the XML file which is passed to XMB. By default the passed XML is UTF-8 encoded.
    So I need a similar setting for the JDBC adapter, and my question is whether that is possible.
    Best Regards.
    Victor.

  • Cannot run a UNICODE kernel against a non-UTF8 database

    Hi,
    I am trying to install SAP ECC 6.0 SR2. I am using Windows 2003 Server and an Oracle 10g DB.
    Please help me resolve this...
    sapparam: sapargv( argc, argv) has not been called.
    sapparam(1c): No Profile used.
    sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: START OF LOG: 20100717144107
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#13 $ SAP
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: version R7.00/V1.4 [UNICODE]
    Compiled Jul 17 2007 01:28:45
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe -testconnect
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    DbSl Trace: Cannot run a UNICODE kernel against a non-UTF8 database (charset = AL32UTF8)
    (DB) ERROR: db_connect rc = 256
    DbSl Trace: Default conn.: already connected to DEV
    (DB) ERROR: DbSlErrorMsg rc = 29
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: job finished with 1 error(s)
    D:\usr\sap\DEV\SYS\exe\uc\NTI386\R3load.exe: END OF LOG: 20100717144108

    Reinstalled the server. Problem solved. Thanks to everyone.

  • Importing non-unicode data into unicode 10gR2 database

    Hi:
    I will have to import non-Unicode data into a Unicode 10gR2 database. The systems the data is coming from are the following: CODA, Timberline, COMMS, CMS, LIMS. These are all RDBMS, SQL-enabled systems. We are talking about pretty big amounts of data (a couple hundred GB combined).
    Did anybody go through a similar exercise?
    I know I'll have to set nls_length_semantics to CHAR.
    What other recommendations could you guys give?
    TIA,
    Greg

    I think "nls_length_semantics" isn't mandatory at this point, and you must extract a little quantity of information from every source and do some probes injecting them into the Oracle10g database.

  • Encoding Problem: non-Unicode Data to Unicode format of XI

    Hi SDN,
    I have a JDBC sender to SAP BW scenario. The database is MS SQL Server.
    The code page of the DB is CP1CIAS
    (Description: SQL Server Sort Order 52 on Code Page 1252 for non-Unicode data).
    Some fields with values like ZAK&#x0;ADY TWORZYW SZTUCZNYCH are failing in XI mapping with the error
    Fatal Error: com.sap.engine.lib.xml.parser.Parser~
    XMLParser: #0 not allowed in Character data sections
    in the trace.
    Please help: how should I get past these code page errors? Would installing this code page on the XI server help?

    There is no such global setting; this is, I trust, because your source contains Unicode. The only other thing to try would be this:
    Arthur My Blog
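    The parser error means a NUL (#x0) character reached the XML layer: the byte left behind where a character (presumably the Polish Ł) had no mapping in code page 1252. A minimal Java sketch, not from the thread, of scrubbing such values before the XML is built (class name and sample value are illustrative):
    import java.util.regex.Pattern;

    public class XmlNulScrubber {
        // NUL is never legal in XML 1.0 character data, so drop it outright.
        private static final Pattern NUL = Pattern.compile("\u0000");

        public static String scrub(String value) {
            return NUL.matcher(value).replaceAll("");
        }

        public static void main(String[] args) {
            String raw = "ZAK\u0000ADY TWORZYW SZTUCZNYCH"; // the failing value from the post
            System.out.println(scrub(raw)); // prints "ZAKADY TWORZYW SZTUCZNYCH"
        }
    }
    The cleaner long-term fix is to extract the source data as Unicode so the character survives instead of being dropped.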

  • How to load unicode data files with fixed records lengths?

    Hi!
    To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
    Alternative 1: one record per row
    SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode.dat
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001111112234444
    01NormalDExZWEI
    02ÄÜÖßêÊûÛxöööö
    03ÄÜÖßêÊûÛxöööö
    04üüüüüüÖÄxµôÔµ
    Alternative 2: variable-length records
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode_var.dat "VAR 4"
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
    Problems
    Implementing these two alternatives in OWB, I encounter the following problems:
    * How to specify LENGTH SEMANTICS CHAR?
    * How to suppress the POSITION definition?
    * How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
    Or is there another way that can be implemented using OWB?
    Any help is appreciated!
    Thanks,
    Carsten.

    Hi Carsten
    If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
    Cheers
    David
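    For reference, a sketch of what the manually captured access parameters might look like for the fixed-length case (directory and file names are assumptions):
    CREATE TABLE stg_unicode_ext (
      a CHAR(2), b CHAR(6), c CHAR(2), d CHAR(1), e CHAR(4)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        CHARACTERSET UTF8
        STRING SIZES ARE IN CHARACTERS
        FIELDS (a CHAR(2), b CHAR(6), c CHAR(2), d CHAR(1), e CHAR(4))
      )
      LOCATION ('unicode.dat')
    );
    The STRING SIZES ARE IN CHARACTERS clause is the external-table counterpart of SQL*Loader's LENGTH SEMANTICS CHAR.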

  • Non-UTF8 DADs and PlsqlTransferMode RAW

    Hey, so here's our situation. Oracle DB 10g and OAS 10g 9.0.4.2. We have a lot of DAD-based PL/SQL apps, with DADs set up per language. Some time ago, we converted the databases to UTF8. Everything worked pretty much fine, except for all the errors about language conversion in our logs. We've been getting our app developers/site managers to slowly move over to using new UTF8 DADs, but many of the old language-specific DADs are in use.
    Well, some of the pages had the problem with mismatched content length (see Note:244544.1) but no one saw that as a big deal - until this weekend, when we moved over to a new load balancer, and the LB is waiting for the page to "finish" before passing it on to the user, making all our non-UTF8 DAD apps hang when using non-English languages. Big production problem.
    So the docs seem to say that the magic solution for me is to set all my non-UTF8 DADs to PlsqlTransferMode RAW. I've tested this and it resolves the problem at hand, but it scares all the application managers here who worry that it'll have some kind of side effects.
    So, a question to all: in a reasonably complex environment, with a heavily international Internet user base (wide variety of browsers, character settings, etc.), could there be any unforeseen impact from changing from CHAR to RAW for the transfer mode? Or should it be utterly transparent to the end user, given that all the data is stored in the DB in UTF8?
    Thanks,
    Ernest

    There was something similar before:
    "wwv_flow.accept was not found on this server"
    which led to some DAD changes. Does reverting those help?
    Regards,
    Damir
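    For context, the change under discussion is a single DAD parameter; a sketch of a dads.conf entry (the location, DAD name, and connect string are assumptions):
    <Location /pls/mydad>
      SetHandler pls_handler
      PlsqlDatabaseConnectString mydb
      PlsqlNLSLanguage AMERICAN_AMERICA.WE8ISO8859P1
      PlsqlTransferMode RAW
    </Location>
    With RAW, mod_plsql passes the page bytes through without character-set conversion, which is why the content-length mismatch disappears; the browser then relies on the charset declared in the page's Content-Type header.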

  • Using updatable ResultSet to update Unicode data ?

    Is it possible to use Unicode data in updateString() methods for an updatable ResultSet? Or can I only do it using OraclePreparedStatement? I've opened an updatable ResultSet and am trying to update an NCHAR column using the code below:
    rset.updateString(6, "\uFF23\uFF23");
    which fails with "java.sql.SQLException: Cannot map Unicode to Oracle character".
    An attempt to do the same thing with regular English data
    rset.updateString(6, "CC");
    causes "java.sql.SQLException: ORA-12704: character set mismatch" on insertRow(). The server primary encoding is UTF8 and I have AMERICAN_AMERICA.UTF8 in NLS_LANG in the registry. I would really appreciate any help on this ...

    Thanks ! I guess I was looking in the wrong jdbc directory (the one that comes with JDev) ...
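    For anyone hitting the same ORA-12704 from JDBC, a minimal sketch of the OraclePreparedStatement route (connection details, table, and column names are invented for illustration):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import oracle.jdbc.OraclePreparedStatement;

    public class NCharUpdate {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
            OraclePreparedStatement ps = (OraclePreparedStatement) conn.prepareStatement(
                    "UPDATE test SET nchar_col = ? WHERE id = ?");
            // Tell the driver this bind is national-character data; without this,
            // the driver binds in the database character set and ORA-12704 follows.
            ps.setFormOfUse(1, OraclePreparedStatement.FORM_NCHAR);
            ps.setString(1, "\uFF23\uFF23");
            ps.setInt(2, 6);
            ps.executeUpdate();
            conn.close();
        }
    }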

  • UNICODE data files with SQLLDR

    How can I load Unicode data files with SQL*Loader?
    My Oracle instance is on UNIX with NLS_CHARACTERSET WE8ISO8859P1.
    I have .dat files extracted from SQL Server using the bcp utility with the -w option.
    When I use the -c option I'm not getting the European characters correctly, like the a and e with two dots on top...
    When I load the Unicode (-w) file with CHARACTERSET UTF8 in my control file, it doesn't go through. Any solution for this? Thanks!

    I just created a Unicode text file on Windows with some Western European characters and imported it into a WE8ISO8859P1 database on Linux using the control file parameter CHARACTERSET UTF16.
    They all got converted properly.
    As Justin mentioned, unicode on windows means generally UTF16 Little Endian.
    Best regards
    Maxim
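    A sketch of such a control file (table, file name, and delimiter are assumptions; bcp -w writes little-endian UTF-16 with tab-separated fields):
    LOAD DATA
    CHARACTERSET UTF16
    BYTEORDER LITTLE
    INFILE 'extract.dat'
    TRUNCATE
    INTO TABLE stg_import
    FIELDS TERMINATED BY X'0009' -- tab; the hex form may need adjusting for byte order
    (COL1 CHAR(20),
    COL2 CHAR(40))
    If the file carries a byte-order mark, SQL*Loader detects the endianness itself and BYTEORDER can be omitted.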

  • Query Unicode Data

    Hi,
    I have a UTF8 database and the data in the database is stored in Unicode too.
    If I run the query:
    select dump(a, 1016) from test;
    results:
    DUMP(A,1016)
    Typ=1 Len=1 CharacterSet=UTF8: cd
    If I subsequently run select a from test from SQL*Plus, it doesn't display the character.
    I am on a Unix Solaris 9 platform, and the database is on this server too. NLS_LANG is set to AMERICAN_AMERICA.UTF8, and the server locale is set to en_US.UTF-8 via the LC_ALL=en_US.UTF-8 environment setting.
    Am I missing something here? The problem is that there is a requirement for us to spool the data out of the database as Unicode and forward it on to third parties. Any help would be appreciated.
    Many Thanks, Chris

    How did you load the Unicode data into the database in the first place? Note that your dump shows Len=1 with the single byte 0xCD, which by itself is not a valid UTF8 sequence (0xCD is the lead byte of a two-byte character), so the data may already have been corrupted on the way in. Can you do a hex dump of some non-ASCII characters to see what the values are coming back as?
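    If the stored bytes turn out to be valid, spooling the data out as Unicode is mostly a client configuration matter; a sketch (shell commands and file names are assumptions):
    $ export NLS_LANG=AMERICAN_AMERICA.UTF8
    $ export LC_ALL=en_US.UTF-8
    $ sqlplus user/pass @spool_unicode.sql
    where spool_unicode.sql just runs SPOOL data.txt, then SELECT a FROM test;, then SPOOL OFF. When the NLS_LANG character set matches the database character set, no client-side conversion occurs and the UTF8 bytes reach the spool file unchanged.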

  • Reading Unicode data from a file...

    I am writing an application that needs to read some configuration data from a file. An end user edits the configuration file to provide the configuration data. The Java code reads this file and uses the configuration data supplied by the user.
    The user can also save non-ASCII characters as part of the configuration data; hence, I do not want to use Java properties files. What other options are available that would let me read Unicode data into my Java code and also allow the user to save the configuration file as Unicode?

    Java characters are Unicode characters. Read file data that consists of Unicode characters as Java characters or strings.
    You can read the data as primitive char values using the DataInputStream class. The InputStreamReader class can also read Unicode (UTF-16) data.
    Data can be written using the OutputStreamWriter class.
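    For example, a minimal sketch of reading such a file (the file name and UTF-8 encoding are assumptions; use StandardCharsets.UTF_16 instead if the user saves the file that way):
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class ConfigReader {
        public static void main(String[] args) throws IOException {
            // Decode the file's bytes into Java's internal Unicode strings.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("config.txt"),
                            StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // use the configuration line here
                }
            }
        }
    }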

  • Unable to insert and retrieve Unicode data using Microsoft OLE DB Provider

    Hi,
    I have an ASP.NET web application that uses OLEDB connection to Oracle database.
    Database: Oracle 11g
    Provider: MSDAORA
    ConnectionString: "Provider=MSDAORA;Data Source=localhost;User ID=system; Password=oracle;convertNcharLiterals=true;"
    When I use the SQL Developer client and add convertNcharLiterals=true in sqldeveloper.conf, I am able to store and retrieve Unicode data.
    The character sets are as follows:
    Database character set is: WE8MSWIN1252
    National Language character set: AL16UTF16
    Select * from nls_database_parameters where parameter in ('NLS_CHARACTERSET','NLS_LENGTH_SEMANTICS','NLS_NCHAR_CHARACTERSET');
    PARAMETER                  VALUE
    -------------------------- ------------
    NLS_CHARACTERSET           WE8MSWIN1252
    NLS_LENGTH_SEMANTICS       BYTE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    I have a test table:
    desc TestingUni
    Name Null Type
    UNI1 VARCHAR2(20)
    UNI2 VARCHAR2(20)
    UNI3 NVARCHAR2(20)
    I execute the below mentioned query from a System.OleDb.OleDbCommand object.
    Insert into TestingUni(UNI3 ) values(N'汉语漢語');
    BUT when retrieving the same I get question marks (¿¿¿¿) instead of the Chinese characters (汉语漢語)
    Is there any way to add the above property(convertNcharLiterals) when querying the Oracle database from OLEDB connection?
    OR is there any other provider for Oracle which would help me solve my problem?
    OR any other help regarding this?
    Thanks

    Use the OraOLEDB provider and set the environment variable ORA_NCHAR_LITERAL_REPLACE to TRUE. Doing so transparently replaces the N' internally and preserves the text literal for SQL processing.
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements003.htm#i42617
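    A sketch of the change (Windows syntax; the connection string reuses the poster's own values):
    rem Set system-wide, or at least in the environment the web application inherits:
    set ORA_NCHAR_LITERAL_REPLACE=TRUE
    Provider=OraOLEDB.Oracle;Data Source=localhost;User ID=system;Password=oracle;
    MSDAORA is deprecated and has no equivalent of SQL Developer's convertNcharLiterals setting; OraOLEDB is Oracle's own provider and honors the environment variable above.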

  • Error: columns are not equal when writeback data from Essbase to Oracle

    I got an error when writing Budget data back from Essbase to Oracle: "The number of columns returned by script [14] is less than the source data columns exposed [15]", even though my report script returns 15 columns.
    My report script:
    <Sym
    {MISSINGTEXT ""}
    { SUPMISSINGROWS }
    //{SUPPAGEHEADING}
    {SUPBRACKETS}
    {SUPFEED}
    {SUPCOMMAS}
    { TABDELIMIT }
    { NAMESON }
    { ROWREPEAT }
    { NOINDENTGEN }
    {DECIMAL 0}
    //<COLUMN ("Version")
    <ROW ("Account","Sector","Resident / Non-Resident","HSP_Rates","Year","Profit_Center","Period","SubAccount","Currency","Branch","Scenario","Elements","Spare","Version")
    <IDESCENDANTS "Account"
    "S_NA"
    "R_0"
    "HSP_InputValue"
    "FY11"
    "Jan" "Feb" "Mar" "Apr" "May""Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec"
    <IDESCENDANTS "Profit_Center"
    "T_00"
    "Local"
    "P_682"
    "B_01"
    "Budget"
    "Amount"
    "E_0000"
    "Approved"
    *

    The solution: uncomment the //{SUPPAGEHEADING} line so that {SUPPAGEHEADING} takes effect.
