NLS_CHARACTERSET and nls_length_semantics

Hi there,
I keep finding links stating a recommendation that, if you have a database installed with NLS_CHARACTERSET = AL32UTF8, you should change NLS_LENGTH_SEMANTICS from BYTE to CHAR after the database install; however, both can be used.
Example below:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:127933400346891255
Is there a link on the Oracle website that confirms this, please?
Thanks

There is a good discussion in the thread Re: Multiple Language Support.
Sergiusz is as authoritative a source as you're going to get from Oracle and his suggestion is to leave NLS_LENGTH_SEMANTICS as BYTE and to explicitly specify CHAR length semantics on every column you declare.
Personally, assuming you're building the system yourself rather than using a packaged one (in which case the vendor should tell you what setting to use), I'd rather set NLS_LENGTH_SEMANTICS to CHAR: in my experience it's far more likely that someone will forget to specify character length semantics on a column than that some script will break because it doesn't handle character length semantics correctly.
Justin
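For illustration, here is a minimal sketch of the two approaches (table and column names are made up):
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
CREATE TABLE t_session_char (name VARCHAR2(30));        -- 30 characters, via the session setting
-- or leave NLS_LENGTH_SEMANTICS at BYTE and be explicit on every column:
CREATE TABLE t_explicit_char (name VARCHAR2(30 CHAR));  -- 30 characters regardless of the setting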

Similar Messages

  • NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET

    Hi,
    I am trying to insert more than 1000 characters into an NVARCHAR2(2000) field in my database with
    NLS_CHARACTERSET = AL32UTF8 and NLS_NCHAR_CHARACTERSET = AL16UTF16...
    I get a "value too long" error.

    By default, column lengths are in bytes, not characters. NVARCHAR2( 2000 ) allocates 2000 bytes of storage. Since the AL16UTF16 character set requires 2 bytes per character, you can only store 1000 characters.
    You can either increase the size of the field or change the field definition to use the CHAR qualifier. An NVARCHAR2( 2000 CHAR ) field allows you to store 2000 characters, not 2000 bytes.
    Justin
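    As a minimal sketch of the byte math (UNISTR returns data in the national character set; \6C49 and \8BED are the two characters of N'汉语'):
    select dump(unistr('\6C49\8BED')) from dual;
    -- Typ=1 Len=4: two characters stored as four bytes under AL16UTF16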

  • Local Character Conversion using ODI

    Hi ODI gurus,
    I need info on how to interface local-character data to SQL Server.
    Requirement background:
    Source (Oracle): Oracle stores the data in UTF8 format
    Target (SQL Server 2005): data is stored in UTF16 format in SQL Server
    We have multi-country language data in the source table in Oracle. While interfacing, we need to convert the data to UTF16 and then push it to SQL Server in ODI.
    Please share some ideas on this.
    I have tried the CONVERT and UNISTR functions to send the data to SQL Server in UTF16 format, but nothing worked out. The data always arrives as (????) or inverted question marks, which looks like junk data to me.
    Any useful info is highly appreciated.
    Regards,
    Anil

    Hi Anil,
    I think the best way is to set NLS_LANGUAGE, NLS_CHARACTERSET and NLS_LENGTH_SEMANTICS in the source (Oracle) to match the target, i.e. UTF-16.
    But we cannot do anything on the target SQL Server, because:
    - UTF-16 cannot be used as a char/varchar/text encoding, even in the most recent versions of MS SQL Server.
    - JDBC connections to SQL Server use the JDBC-ODBC driver. This driver cannot tell the difference between n-types and "regular" types, and thus cannot retrieve Unicode string values.
    Another way: use flat files as an intermediate between Oracle and SQL Server; in the flat-file technology the encoding can be defined in the JDBC URL.
    Thanks,
    katukota

  • Storing Chinese Characters

    What are the NLS_CHARACTERSET and NLS_LENGTH_SEMANTICS settings required to store chinese characters?

    AL32UTF8 and AL16UTF16 support exactly the same Unicode standard in a given Oracle version. Both support supplementary characters: AL32UTF8 as single 4-byte codes, AL16UTF16 as surrogate pairs (two codes 2 bytes each). Therefore, AL32UTF8 support for supplementary characters is actually more transparent.
    Also, AL32UTF8 is only valid as the database character set (VARCHAR2, CHAR, LONG, CLOB), while AL16UTF16 is only valid as the national character set (NVARCHAR2, NCHAR, NCLOB).
    UTF8 is the character set that should be avoided as it supports Unicode 3.0 only.
    -- Sergiusz
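    As a hedged illustration of the supplementary-character point, assuming an AL32UTF8 database (UNISTR takes the UTF-16 surrogate pair for U+1F600):
    select dump(to_char(unistr('\D83D\DE00')), 1016) as db_charset_bytes,    -- one 4-byte AL32UTF8 code
           dump(unistr('\D83D\DE00'), 1016)          as nchar_charset_bytes -- two 2-byte AL16UTF16 codes
    from dual;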

  • NLS_LENGTH_SEMANTICS and work tables in KMs

    Hi everybody,
    I'm working on an interface that uses an 11g-based work area. A quick query on v$nls_parameters returns:
    NLS_CHARACTERSET: AL32UTF8
    NLS_LENGTH_SEMANTICS: BYTE
    Because of this, every time a KM creates a "$" work table, CHAR and VARCHAR2 column lengths are implicitly defined in bytes.
    Obviously, when a 30-character string (the source datastores are on DB2) exceeds that 30-byte target length, the execution fails.
    Is there any way to avoid this? I've tried adding an "ALTER SESSION ..." step to the "LKM SQL to Oracle" KM I'm using, but from what I understand every action it performs on the database lives in its own session.
    I'd prefer to avoid an ALTER SYSTEM, as the 11g instance has other apps/products installed.
    Sorry for my bad English, and thanks in advance for any suggestions...

    Sutirtha, your post was sort of inspiring!
    I've found a partial solution to my problem by changing the CHAR and VARCHAR2 datatype implementations for the Oracle technology in Topology Manager:
    (screenshot: http://www.abload.de/img/odi-datatype-implementu55n.png)
    Now only a little annoyance remains, caused by a known bug (4485954) in the Oracle JDBC driver: when I reverse-engineer tables from a UTF8 physical schema, ALL_TAB_COLUMNS.COLUMN_LENGTH is used instead of ALL_TAB_COLUMNS.CHAR_LENGTH.
    I should try a newer JDBC .jar, but IIRC that bug is only fixed in the drivers shipped with DB 11g for JDK 1.5/1.6, which ODI (1.4-based) cannot use...
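    To make the failure mode concrete, a sketch with a made-up work table, assuming an AL32UTF8 database with BYTE semantics ('À' occupies 2 bytes there):
    create table c$_demo (c1 varchar2(30));                 -- 30 bytes under BYTE semantics
    insert into c$_demo
      values (rpad(unistr('\00C0'), 30, unistr('\00C0')));  -- 30 accented chars = 60 bytes: ORA-12899
    create table c$_demo2 (c1 varchar2(30 char));           -- 30 characters: the same insert succeeds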

  • OWB-Import does not distinguish between nls_length_semantics char and byte.

    We have a 10g DB with the UTF8 character set and nls_length_semantics=char, and we've created an OWB warehouse module for 10g with several tables in it. After deploying a table, the column length attributes look fine. But if we re-import the table, the varchar fields have three times the length!
    This makes life hard for us, because sometimes it's necessary to reconcile discrepancies in the development cycle, and some tables have about 150 columns.
    Does anybody have suggestions?
    The OWB version is 10.1.0.3.0.
    Regards
    Ralf

    Ralf,
    Oracle stores the length of VARCHAR columns in two ways in its metadata: one is the length as required by the application, and the other is the number of bytes required to store the UTF data.
    One UTF8 character can take up to 3 bytes of storage. When OWB reads the definition back from the data dictionary, it uses the actual bytes allocated to the column, and thus you see a three-fold increase in the VARCHAR columns.
    Try this for your table
    select column_name, data_length, char_length
    from user_tab_columns
    where table_name = 'NAMEOFTABLE';  -- dictionary table names are stored in uppercase
    And that is how Oracle designed the database; unfortunately, OWB does not have that twin way of seeing things.
    - Jojo
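    A small sketch of what the dictionary shows, assuming a UTF8 database (at most 3 bytes per character) and a made-up table:
    create table demo_tab (c varchar2(30 char));
    select column_name, data_length, char_length, char_used
    from user_tab_columns
    where table_name = 'DEMO_TAB';
    -- DATA_LENGTH = 90 (30 x 3-byte maximum), CHAR_LENGTH = 30, CHAR_USED = 'C'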

  • Unable to insert and retrieve Unicode data using Microsoft OLE DB Provider

    Hi,
    I have an ASP.NET web application that uses OLEDB connection to Oracle database.
    Database: Oracle 11g
    Provider: MSDAORA
    ConnectionString: "Provider=MSDAORA;Data Source=localhost;User ID=system;Password=oracle;convertNcharLiterals=true;"
    When I use the SQL Developer client and add convertNcharLiterals=true in sqldeveloper.conf, I am able to store and retrieve Unicode data.
    The character sets are as follows:
    Database character set is: WE8MSWIN1252
    National Language character set: AL16UTF16
    Select * from nls_database_parameters where parameter in ('NLS_CHARACTERSET','NLS_LENGTH_SEMANTICS','NLS_NCHAR_CHARACTERSET');
    PARAMETER                VALUE
    ------------------------ ------------
    NLS_CHARACTERSET         WE8MSWIN1252
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CHARACTERSET   AL16UTF16
    I have a test table:
    desc TestingUni
    Name Null Type
    UNI1 VARCHAR2(20)
    UNI2 VARCHAR2(20)
    UNI3 NVARCHAR2(20)
    I execute the below mentioned query from a System.OleDb.OleDbCommand object.
    Insert into TestingUni(UNI3 ) values(N'汉语漢語');
    BUT when retrieving the same I get question marks (¿¿¿¿) instead of the Chinese characters (汉语漢語)
    Is there any way to add the above property(convertNcharLiterals) when querying the Oracle database from OLEDB connection?
    OR is there any other provider for Oracle which would help me solve my problem?
    OR any other help regarding this?
    Thanks

    Use the OraOLEDB provider and set the environment variable ORA_NCHAR_LITERAL_REPLACE to TRUE. Doing so transparently replaces the n' internally and preserves the text literal for SQL processing.
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements003.htm#i42617
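    A hedged sketch of the change, reusing the values from the question (OraOLEDB's provider name is OraOLEDB.Oracle; set the variable before the application/IIS process starts):
    set ORA_NCHAR_LITERAL_REPLACE=TRUE
    Provider=OraOLEDB.Oracle;Data Source=localhost;User ID=system;Password=oracle;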

  • Issue with Characters (tilde, accent and more)

    Hello all,
    I'm trying to do an export/import.
    Export system:
    AIX 4.x, Oracle 8.1.7
    Import system:
    Red Hat 5, Oracle 10.0.2
    After trying a lot with different character sets, and setting and unsetting NLS_LANG on the target system, I am still having problems.
    The point is that an accented letter (á), for example, once imported is translated to a box (FFFD) or ASCII code 191.
    We have tried changing NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET and setting NLS_LANG as an environment variable.
    The source system has:
    env | grep -i nls
    NLS_LANG=american_america.US7ASCII
    LOCPATH=/usr/lib/nls/loc
    NLS_DATE_FORMAT=DD-MON-RR
    ORA_NLS33=/u01/app/oracle/product/8.1.7/ocommon/nls/admin/data
    NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
    SQL> select name,value$ from sys.props$;
    NAME VALUE$
    DICT.BASE 2
    DBTIMEZONE 0:00
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET US7ASCII
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_NCHAR_CHARACTERSET US7ASCII
    NLS_RDBMS_VERSION 8.1.7.4.0
    GLOBAL_DB_NAME PUB.xxx.yyy
    EXPORT_VIEWS_VERSION 8
    The target:
    env | grep -i nls
    NLS_LANG=american_america.US7ASCII
    NAME VALUE$
    DICT.BASE 2
    DEFAULT_TEMP_TABLESPACE TEMP
    DEFAULT_PERMANENT_TABLESPACE SYSTEM
    DBTIMEZONE -03:00
    DEFAULT_TBS_TYPE SMALLFILE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET US7ASCII
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET US7ASCII
    NLS_RDBMS_VERSION 10.2.0.1.0
    GLOBAL_DB_NAME SP9
    EXPORT_VIEWS_VERSION 8
    Can anybody give me a clue?
    Thanks in advance.
    Leonardo

    Hi,
    Does your indexed data (which you hope to match) contain "sofá" or "sofa" (no diacritic)? If the latter, and in general, you may benefit from the dgidx flag --diacritic-folding, as described in the documentation under "Mapping accented characters to unaccented characters". If you are running the latest version, this is all that should be required to generate a match.
    Best
    Brett
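    Back to the original export/import question: US7ASCII is a 7-bit character set, so it cannot legally hold 'á' at all, and any conversion step is then entitled to replace it. A diagnostic sketch with made-up table/column names:
    select dump(some_col, 1016) from some_table where rownum <= 10;
    -- any byte above 0x7F is outside US7ASCII and is fair game for replacement during conversion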

  • Is it possible to change NLS_CHARACTERSET after installing 10gR2 database ?

    Hi, all.
    I need to change NLS_CHARACTERSET because of the ORA-29275 error.
    SELECT * FROM NLS_DATABASE_PARAMETERS
    WHERE PARAMETER LIKE '%CHARACTER%';
    Server Side: HP-UX, Oracle 8i 8.1.7.4
    PARAMETER     VALUE
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     KO16KSC5601
    NLS_NCHAR_CHARACTERSET     KO16KSC5601
    Client Side: Windows XP, Oracle 10gR2
    PARAMETER     VALUE
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     KO16MSWIN949
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    Thus, I would like to change NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET parameters.
    The above is the default value.
    Is it possible to change those parameters after installing 10gR2 on Windows XP?
    Thanks in advance.

    Hi,
    It is possible to migrate from certain character sets to others, but not always. However, if it's a fresh installation, I recommend creating a new database and deleting the one you just created.
    You don't need to reinstall Oracle, just create a new database. The character set is a property of the database, not of the software installation.
    Regards,
    Mario Alcaide
    http://marioalcaide.wordpress.com
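    For reference, the character set is fixed when the database is created; only the relevant clauses of the CREATE DATABASE statement (or the equivalent DBCA choice) are sketched here:
    CREATE DATABASE newdb
      CHARACTER SET AL32UTF8
      NATIONAL CHARACTER SET AL16UTF16
      ...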

  • ORA-00604 and ORA-12705 when adding datasource in Mapviewer

    Hi,
    I am trying to add a datasource using the standalone OC4J MapViewer, and I'm having some problems.
    I'm using Oracle8i on Windows 2000 Server. Windows 2000 is in Spanish.
    When I add the datasource I get the following XML error:
    <?xml version="1.0" encoding="UTF-8" ?>
    <oms_error>Message:[MapperConfig] no se puede agregar el origen de datos de mapa. Wed Oct 12 21:29:58 ART 2005 Severity: 0 Description: at oracle.lbs.mapserver.core.MapperConfig.addMapDataSource(MapperConfig.java:528) at oracle.lbs.mapserver.MapServerImpl.addMapDataSource(MapServerImpl.java:308) at oracle.lbs.mapserver.oms.addDataSource(oms.java:937) at oracle.lbs.mapserver.oms.doPost(oms.java:329) at javax.servlet.http.HttpServlet.service(HttpServlet.java:760) at javax.servlet.http.HttpServlet.service(HttpServlet.java:853) at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:810) at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:322) at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790) at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270) at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112) at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186) at java.lang.Thread.run(Thread.java:534)</oms_error>
    The part in Spanish means: the map datasource cannot be added.
    In the console I get ORA-00604 and ORA-12705.
    I know that ORA-12705 has something to do with NLS_LANG.
    I ran the following query in SQL*Plus:
    select * from v$nls_parameters
    where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET');
    And got AMERICAN, AMERICA, WE8ISO8859P1.
    I also changed the NLS_LANG variables in the registry to the same character set. (One of them was a Spanish character set.)
    I don't know what else to do! Please help!

    Yes, you are in the wrong forum, but it sounds like you have your environment variable (or registry entry) for ORA_NLS33 pointing at the wrong directory.
    Steve

  • Oracle 9i, OCI, and GCC on SPARC

    Does anyone have any successful experience compiling 9i OCI apps on Solaris using gcc? I've tried a couple of the demos with no success. I included the $ORACLE_HOME/rdbms/demo/demo_rdbms.mk file, altered a couple of variables (CCFLAGS, NOKPIC...), and got it to compile, but it segfaults on olog().
    thanks.

    Thank you for your reply.
    I ran the query:
    select * from nls_database_parameters
    where parameter in ('NLS_LANGUAGE','NLS_TERRITORY','NLS_CHARACTERSET');
    And it returned:
    PARAMETER     VALUE
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CHARACTERSET     CL8MSWIN1251
    Then I set the variable NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; export NLS_LANG
    and tried to run ./frmcmp /oracle/formsp/CORE/FORM_NAME.FMB userid=XXX/XXX@XXX compile_all=yes batch=yes window_state=minimize
    but received FRM-91500: Unable to start/complete the build.
    Strangely, when the NLS_LANG environment variable is set, frmcmp cannot run; if I unset NLS_LANG, frmcmp works properly.
    Maybe it has something to do with the configuration of locale and fonts in Solaris?
    I also cannot enter Russian, either in graphical mode (when working through VNC) or in console mode (when connecting via PuTTY)...
    It is worth mentioning that when I run the form, all the data from the database appears in the fields correctly, in Russian. Only the characters that belong to the form itself (such as form titles, button labels, etc.) display incorrectly.
    Please help me find a solution.
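    A hedged guess rather than a confirmed fix: since the database character set is CL8MSWIN1251, a WE8MSWIN1252 client setting cannot represent Russian at all. Assuming the client environment (fonts, form files) is actually CP1251, this may be worth trying:
    NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251; export NLS_LANG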

  • About Import and Export question!

    I'm from China and a new DBA with a little Oracle administration experience. I have met some problems I cannot solve.
    I have a database server (with Oracle Server 7.3.2 for NT). I exported the data as follows from a workstation which is an Oracle client with the administrator tools:
    exp73 system/manager@nts-com full=Y file=c:\czdata.dmp
    Enter array fetch buffer size:30720
    Export grants :Y
    Export table data :Y
    Compress extents:Y
    The export terminated successfully without warnings.
    Next, I imported it into the new database server (with Oracle Server 7.3.2 for NT) from a workstation which is an Oracle client with the administrator tools, too:
    imp73 system/manager@nts-com1 file=c:\czdata.dmp
    The error message appeared:
    IMP-00016:
    IMP-00000:
    In both databases, I had run this SQL before the database was set up:
    UPDATE PROPS$ SET VALUE$='CHINA' WHERE NAME='NLS_TERRITORY';
    UPDATE PROPS$ SET VALUE$='CHINESE' WHERE NAME='NLS_LANGUAGE';
    UPDATE PROPS$ SET VALUE$='ZHS16CGB231280' WHERE NAME='NLS_CHARACTERSET';
    And the same tablespaces and users exist in the database. I couldn't find the cause.
    What can I do? Please describe a case which was resolved successfully, in as much detail as possible. Thanks very much!

    hello,
    I don't know if my question is clear enough, but what I mean is: is there a method to export the model as DDL code?
    Thanks
    AL
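    On the import question above: IMP-00016 is the classic "required character set conversion not supported" error. A common remedy on these old releases was to set the client NLS_LANG to the export character set before re-running the import, e.g.:
    set NLS_LANG=CHINESE_CHINA.ZHS16CGB231280
    imp73 system/manager@nts-com1 file=c:\czdata.dmp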

  • Database import errors

    Hello everyone!
    I have some problems with importing a database, I can't find the answer so I turn to you people.
    I've tried to import into an empty database (with the necessary tables, of course), but that gave more errors, so I created the important schemas and tables with the SQL scripts from the original database.
    The original database is working and everything is in there, so I don't really understand the "not found" or "does not exist" errors; I thought Oracle was supposed to do the import in such a way that the connections between objects don't get lost, i.e. the database is built up from the bottom.
    The machine I export the data from:
    CentOS 5.5
    Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
    NLS_CHARACTERSET EE8MSWIN1250
    nls_length_semantics string BYTE
    The machine I try to import data:
    Red Hat Enterprise Linux Server release 5.6 (Tikanga)
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    NLS_CHARACTERSET AL32UTF8
    nls_length_semantics string CHAR
    (Before you ask: we are Platinum Partners, which is why we have Enterprise Edition; we can use it but we don't have support.)
    The export command: expdp system/password full=y directory=dmpdir dumpfile=export.dmp logfile=export.log
    Job "SYSTEM"."SYS_EXPORT_FULL_01" completed successfully at: 14:46:34
    The import command: impdp system/password full=y directory=dmpdir table_exists_action=append dumpfile=export.dmp logfile=import_db.log
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 999 error(s) at 15:55:46
    The errors I don't understand:
    ORA-39083: Object type JOB failed to create with error:
    ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-31693: Table data object "SYSMAN"."MGMT_JOB_PURGE_POLICIES" failed to load/unload and is being skipped due to error:
    ORA-00001: unique constraint (SYSMAN.PK_MGMT_JOB_PURGE_POL) violated
    ORA-31693: Table data object "SYSMAN"."MGMT_JOB_SINGLE_TARGET_TYPES" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    ORA-31693: Table data object "SYSMAN"."MGMT_CREDENTIALS2" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-28239: no key provided
    ORA-31693: Table data object "SYSMAN"."MGMT_HC_VENDOR_SW_COMPONENTS" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-02291: integrity constraint (SYSMAN.VNC_VND_FK) violated - parent key not found
    KUP-11007: conversion error loading table "SCHAME_NAME"."TABLE_NAME"
    ORA-12899: value too large for column COLUMN_NAME (actual: 3767, maximum: 4000)
    KUP-11009: data for row: COLUMN_NAME: 0X'4146414245484E41507C4146414245484E41507CC166616265'
    And many objects are missing, triggers, sequences, indexes. Much more than the number of errors in the import_db.log.
    I hope someone can help me.
    Thank you in advance,
    Adam
    Edited by: 925120 on Apr 3, 2012 1:55 AM
    Edited by: 925120 on Apr 3, 2012 2:07 AM

    Hi, and welcome to OTN!
    Firstly, try to create the new database with the same character set and nls_length_semantics:
    The machine I export the data from:
    CentOS 5.5
    Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
    NLS_CHARACTERSET EE8MSWIN1250
    nls_length_semantics string BYTE
    The machine I try to import data:
    Red Hat Enterprise Linux Server release 5.6 (Tikanga)
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    NLS_CHARACTERSET AL32UTF8
    nls_length_semantics string CHAR
    This could be responsible for the error:
    KUP-11007: conversion error loading table "SCHAME_NAME"."TABLE_NAME"
    ORA-12899: value too large for column COLUMN_NAME (actual: 3767, maximum: 4000)
    Secondly, I often find it better not to do a full export/import when moving between databases, especially when moving between different database versions. The importing database has an (almost) fully populated SYS and SYSMAN schema, and the full import tries to add stuff to those schemas, causing all kinds of violations.
    So, could you try to either export only the schemas that you're interested in (the user schemas), or just import those from the full export, and post the results?
    HtH
    Johan
    BTW, here's more info on expdp/impdp: http://www.orafaq.com/wiki/Data_Pump
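    A sketch of the schema-level approach (schema names are made up):
    expdp system/password schemas=APP_OWNER,APP_DATA directory=dmpdir dumpfile=schemas.dmp logfile=schemas_exp.log
    impdp system/password directory=dmpdir dumpfile=schemas.dmp logfile=schemas_imp.log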

  • NLS_LENGTH_SEMANTICS issue

    I have a schema on database instance with following nls parameter defined
    NLS_CHARACTERSET     AL32UTF8
    NLS_LENGTH_SEMANTICS     BYTE
    I am trying to import this schema into another instance with following nls parameter defined
    NLS_CHARACTERSET     AL32UTF8
    NLS_LENGTH_SEMANTICS     CHAR
    But when I log into the new schema, viewing the structure of any table with a VARCHAR2 column shows (x BYTE) (e.g. VARCHAR2(x BYTE)), where x is the length of the field. This basically prevents storing x multi-byte characters in those VARCHAR2 fields.
    Does anyone know how to solve this problem?

    Did you export the tables from the first ("byte") database and then import them into the second ("char") database? Then this could simply be the explanation.
    If you look at the DDL statements coming from the export dump, for example using the imp indexfile parameter, you will notice that character column lengths are explicitly defined as "x BYTE". So, you would have to pre-create the tables before importing.
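    A sketch of the pre-create approach (file names are made up). The indexfile writes the CREATE TABLE statements as REM comments; uncomment them, change BYTE to CHAR, run the script, then import with ignore=y so imp skips the failing CREATE statements:
    imp user/password file=export.dmp full=y indexfile=precreate.sql
    rem edit precreate.sql: VARCHAR2(x BYTE) -> VARCHAR2(x CHAR), run it, then:
    imp user/password file=export.dmp full=y ignore=y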

  • How substrb works with multibyte characters

    Suppose X is a 3-byte Korean character.
    What is returned by substrb(X,1,1)?
    I know X in hex is EA B8 B0,
    but I am getting substrb(X,1,1) = <blank space>, i.e. ASCII 32.

    I am not sure whether you can see the following character: '기'
    I found the following:
    select dump('기') from dual => Typ=96 Len=3: 234,184,176
    select dump(substrb('기',1,1)) from dual => Typ=1 Len=1: 32
    I am running this from Oracle SQL Developer on Windows.
    DB is our QA Debug Instance: HRQ115XG
    checked v$nls_parameters and found following:
    NLS_CHARACTERSET = UTF8
    NLS_LENGTH_SEMANTICS = BYTE
    NLS_LANGUAGE = AMERICAN
    Let me tell you my original problem. I have a valueset that accepts 30 bytes.
    It is always possible that substrb(...,1,30) splits a Korean character midway,
    leaving an invalid trailing character per the UTF8 encoding. As far as I know,
    the first byte of a UTF8 character indicates how many bytes it has.
    If substrb() returns spaces as I mentioned above, then it is safe to use in the valueset. But I have not been able to find any such behaviour documented.
    SELECT * FROM nls_session_parameters;
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
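    A small verification sketch, assuming a UTF8 database ('기' is the 3-byte character from the dumps above); the blank padding matches the observed behaviour:
    select dump(substrb('기', 1, 1)) b1,  -- Typ=1 Len=1: 32 (blank-padded)
           dump(substrb('기', 1, 2)) b2,  -- Typ=1 Len=2: 32,32
           dump(substrb('기', 1, 3)) b3   -- Typ=1 Len=3: 234,184,176 (the whole character)
    from dual;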
