Change NLS_LENGTH_SEMANTICS to CHAR

I need to change the length semantics of all the tables in an existing application schema from BYTE to CHAR.
I have explored two methods:
1- Datapump export/import
Due to large tables with numerous CLOB columns, the performance of the export/import is hardly acceptable for our production downtime window.
2- ALTER TABLE OWNER.TABLE MODIFY (C160 VARCHAR2(255 CHAR))
This solution alters every table, for all VARCHAR2 or CHAR columns.
Questions :
a) Does the ALTER TABLE solution modify only the data dictionary, or does it also modify/rearrange the blocks for existing rows?
a.1) What happens to a row with a VARCHAR2(3) string previously stored in three bytes when you change the semantics to CHAR and you update that string with two-byte characters? Where does the "enlarged" field go in the row piece/block? Does it go at the end of the row piece?
b) Are there performance or management benefits to using one method over the other?
Thanks for any information you can provide.
Serge Vedrine
Edited by: [email protected] on 13-Mar-2009 11:12 AM
Edited by: [email protected] on 13-Mar-2009 11:24 AM

## I have explored two methods
## 1- Datapump export/import
This will not work. Export/import preserves the length semantics of the source columns.
## 2- ALTER TABLE OWNER.TABLE modify (C160 VARCHAR2(255 CHAR))
This is the simplest approach. You can write a simple select on the view ALL_TAB_COLUMNS to generate the necessary ALTER TABLE statements.
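A sketch of such a generator query (the schema name APP_OWNER is a placeholder; review the generated DDL before running it):

```sql
-- Generate ALTER TABLE ... MODIFY statements for every byte-sized
-- VARCHAR2/CHAR column in a schema (APP_OWNER is a placeholder).
SELECT 'ALTER TABLE ' || owner || '.' || table_name
       || ' MODIFY (' || column_name || ' ' || data_type
       || '(' || char_length || ' CHAR));' AS ddl
FROM   all_tab_columns
WHERE  owner = 'APP_OWNER'
AND    data_type IN ('VARCHAR2', 'CHAR')
AND    char_used = 'B'   -- 'B' = byte semantics, 'C' = character semantics
ORDER  BY table_name, column_id;
```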
## Questions :
## a) Does the alter table solution modify only the data dictionary or does it also modify/rearrange the blocks for existing rows.
Only the data dictionary; existing rows are not rewritten.
## a.1) What happens to a row with a varchar(3) string previously stored in three bytes when you change the semantics to CHAR
## and you update that string with two-bytes charactyers ? Where does the "enlarged" field in the row piece/block ? Does it go at the end of the row piece ?
This is the standard behavior of UPDATE when the new value is longer than the old one. If there is room in the block, the new, longer value is written in place and the rest of the row data in the block is shifted to higher offsets. If there is no room, an extra block is grabbed and inserted into the chain of blocks holding the row. This is called "row chaining".
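If you later want to check how much chaining the updates caused, one conventional approach is a sketch like the following (owner and table names are placeholders, and the CHAINED_ROWS table must first be created with $ORACLE_HOME/rdbms/admin/utlchain.sql):

```sql
-- Record rows of a table that are chained or migrated across blocks.
ANALYZE TABLE app_owner.big_table LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) AS chained_or_migrated
FROM   chained_rows
WHERE  table_name = 'BIG_TABLE';
```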
## b) Are there performance or management benefits to using one method over the other?
As I said, export/import will not work anyway.
-- Sergiusz

Similar Messages

  • Oracle Best practices for changing Byte to Char on Varchar2 columns

    Dear Team,
    Application Team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    Wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats & rebuild indexes on the table after these column changes?
    Thanks in Advance !!!
    SK

    Application Team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multibyte characters on a couple of production tables.
    Wanted to know: is it safe to have a mixture of BYTE and CHAR semantics in the same table? I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if that NAME column was defined using BYTE, how would you know what length to use for the column? Fifty BYTES will seldom be long enough, and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
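    A minimal illustration of the point above (table and column names are made up):

```sql
-- NAME is limited to 50 characters regardless of how many bytes each
-- character needs; Oracle reserves up to 200 bytes under AL32UTF8,
-- subject to the overall 4000-byte VARCHAR2 limit.
CREATE TABLE customer_demo (
  id   NUMBER(10),
  name VARCHAR2(50 CHAR)
);
```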
    What happens if we have mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats & rebuild indexes on the table after these column changes .
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that raises the question: if the table already exists, has data, and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683

  • How to change int to char?

    Hi, was wondering if anyone could help me.
    I want to take an integer (let's say int y = 9)
    and change it to char x = y. Is it possible?

    Doesn't work.
    public class intToChar {
            public static void main( String[] args ) {
                    int i = 10;
                    char c = (char) i;  // casts the numeric value: 10 is the code point of '\n'
                    System.out.println("Int is " + i);
                    System.out.println("Char is " + c);
            }
    }
    Above code prints:
    Int is 10
    Char is
    (the cast yields the character with code point 10, a line feed, so nothing visible follows "Char is")
    I've looked in the online API documentation for Character and Integer and found nothing that appears to do what you want to do. I actually wonder if this is possible in core java or if you have to pull some unicode chicanery? Eagerly awaiting reply from someone who knows...
    Maduin

  • Change NLS_LENGTH_SEMANTICS from BYTE to CHAR on Oracle 9i2 problem!

    Hi,
    I have created a new database on Oracle 9i Release 2, but I did not find the correct pfile parameter for the NLS_LENGTH_SEMANTICS setting.
    So I have created a standard UTF8 database and now I am trying to change the default NLS_LENGTH_SEMANTICS=BYTE to CHAR. When I execute the following command in SQL*Plus: "ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH"
    The system tells me that the command was successfully executed.
    But when I look at the NLS_DATABASE_PARAMETERS view I do not see any change for the NLS_LENGTH_SEMANTICS parameter.
    I have also restarted the instance but still everything is the same as it was before.
    Do you know what I am doing wrong?
    Regards
    RobH

    Hi,
    Yeah, you are right, the NLS_SESSION_PARAMETERS value of "NLS_LENGTH_SEMANTICS" for the app user is set to CHAR.
    So does that mean NLS_DATABASE_PARAMETERS is the view from the SYS or SYSTEM user?
    Thanks a lot
    Regards
    RobH

  • Unable to change NLS_LENGTH_SEMANTICS

    I am currently trying to run a UTF8-enabled application on an Oracle 11.2.0.1.0 database, and have been having trouble storing some characters into the DB. I have been advised by application support that the product can definitely store them, and that I must set the NLS_LENGTH_SEMANTICS parameter to CHAR using the following statement:
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH;
    However, after logging into the target database as the "system" user and running this statement, which executes successfully, querying "v$nls_parameters" still shows that it is set to "BYTE". I then tried creating a new database and setting this value to CHAR in the advanced settings, but still to no avail.
    However, after googling this and reading several articles I have begun to notice some other strange behaviour. The first thing I noticed was that the "SPFILE{dbname}.ORA" file contains the right value, i.e.:
    *.nls_length_semantics='CHAR'
    The second thing that I noticed was that running the "v$nls_parameters" query in SQL Developer produced a different result to the one produced by SQL*Plus (see below), even though exactly the same user in the same database is being used. In SQL Developer the query "select * from v$nls_parameters where Parameter = 'NLS_LENGTH_SEMANTICS';" produces:
    PARAMETER VALUE
    NLS_LENGTH_SEMANTICS BYTE
    1 rows selected
    Whereas in SQL*Plus this query produces:
    SQL> select * from v$nls_parameters where Parameter = 'NLS_LENGTH_SEMANTICS';
    PARAMETER
    VALUE
    NLS_LENGTH_SEMANTICS
    CHAR
    Can anyone provide any insight as to what could be the problem with my Oracle database setup? The application developers are adamant that UTF8 chars are supported, and the only difference they can see between my setup (which doesn't work) and theirs (which does work) is the value of this parameter. Details of my system are:
    OS: Windows 7 x64
    SQL Developer: 2.1.0.63
    DATABASE: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Forum for NLS / Globalization Support discussions:
    Globalization Support
    user10600690 wrote:
    I am currently trying to run a UTF8 enabled application on a Oracle 11.2.0.1.0 database,
    Is this database created with db charset of AL32UTF8?
    and have been having trouble storing some characters into the DB.
    What kind of trouble?
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR SCOPE=BOTH;
    However after logging into the target database as the "systems" user and running this statement, which runs successfully, querying the "v$nls_parameters" still shows that it is set to "BYTE". I have tried then creating a new database and setting this value to CHAR in the advanced settings, but still to no avail.
    You've changed one parameter and looked at another.
    There are different "levels" in what you wrote in the previous quote. Take a look at the dictionary views (http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch3globenv.htm#i1006415)
    NLS_SESSION_PARAMETERS (compare to v$nls_parameters, on which the view is based)
    NLS_INSTANCE_PARAMETERS (alter system set nls...)
    NLS_DATABASE_PARAMETERS (creating a new database...)
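    The three levels listed above can be compared side by side with a query along these lines:

```sql
-- Compare NLS_LENGTH_SEMANTICS at session, instance and database level.
SELECT 'SESSION' AS scope, value FROM nls_session_parameters
WHERE  parameter = 'NLS_LENGTH_SEMANTICS'
UNION ALL
SELECT 'INSTANCE', value FROM nls_instance_parameters
WHERE  parameter = 'NLS_LENGTH_SEMANTICS'
UNION ALL
SELECT 'DATABASE', value FROM nls_database_parameters
WHERE  parameter = 'NLS_LENGTH_SEMANTICS';
```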
    You may need to bounce the instance for the setting to have any effect (even with scope=both) for new sessions. At least I think that was the case in some previous release.
    Note that I do not think you can (or, at least, should) create a database with semantics=char. The "advanced settings" probably referred to instance parameters.
    The first thing i noticed was the "SPFILE{dbname}.ORA" file contains the right value i.e;
    *.nls_length_semantics='CHAR'
    Yes, the instance level view should agree.
    The second thing that I noticed was that running the "v$nls_parameters" query in SQL Developer produced a different result to the one produced by SQL Plus
    Could be different client config/environment settings affecting session parameters. Compare with session level view.

  • How can we change in 1st char mapping level in sap pi?

    Hi All,
    There is an inbound scenario (third-party system to SAP); at the mapping level they are asking us to change the 1st char to "9".
    Can anyone please tell me how we can change this at the mapping level?
    Thanks
    Narendra

    You can use the predefined text function substring to remove the first character, and if you want to replace the first character, first remove it with the substring function and then use the concat function to prepend the desired character.
    To remove the first character, use the substring function with start position 1 and character count 0.

  • Dynamically changing font - Ugly chars on Win 2000

    Hi,
    I wrote a simple class called FontChooser (extends JDialog). It can be "plugged" into any JFrame to change the font at runtime. Looks like an ordinary font dialog, nothing special.
    The FontChooser itself works (IMHO) just fine, but I'm facing a problem: almost all PLAIN fonts on Windows 2000 (SP4) have several ugly characters. For example, Tahoma PLAIN 11 has an ugly char '8'.
    Strange is that BOLD fonts are always displayed OK - the ugly things are "covered".
    The only font that's OK is Arial (any size, any style). Probably because it's the Swing default font.
    Changing the font is done by this method. It simply iterates through the table of GUI keys and updates every key whose name ends with "font"; thus, FontChooser is not able to deal with different fonts at the same time.
    public void updateFontKeys(FontUIResource font) {
      UIDefaults def = UIManager.getDefaults();
      Enumeration en = def.keys();
      while (en.hasMoreElements()) {
        String key = en.nextElement().toString();
        if (key.toLowerCase().endsWith("font"))
          UIManager.put(key, font);
      }
    }
    // Note: FontUIResource is just a Swing "version" of java.awt.Font.
    I also tried to change the Swing component's font using HTML - instead of changing GUI keys - but the result was the same :o(
    Any hint will be very appreciated :o) Thanks.

    You can download FontChooser (with a simple demo app) from http://www.volny.cz/dojcland/gui/FontChoosing.zip
    The archive includes binaries, source and javadoc.

  • Change colors CNE5 char bar

    Hi PS experts, I have a question. In transaction CNE5, when I visualize the bar chart, it shows me certain colors for actual dates. Where can I configure the color of these bars?
    please help me
    Greats
    Augusto, Lima Peru

    You can use initWithImage or initWithCustomView to get the look you want.

  • Impact of changing of existing Char. InfoObj. to KF InfoObj.

    Hi,
    I have a characteristic ZEXCRATE (Exchange Rate) which has been used in existing ODSs and InfoCubes in the system. Now my problem is that the client has identified that this InfoObject ZEXCRATE (Exchange Rate) should be represented as a key figure in the system, to display decimal values.
    Please advise: if I delete the existing characteristic ZEXCRATE and re-create ZEXCRATE (Exchange Rate) as a Key Figure InfoObject with the same name, will it impact the ZEXCRATE field in the existing ODSs and InfoCubes during regular data loads?
    Thanks & Best Regards,
    Venkata.

    Hi,
    Thank you for your response.
    If I create a new Key Figure InfoObject and assign it to the same R/3 field in the transfer rules, will that work? Because this issue is specific to one particular ODS only, I don't want to disturb the rest of the InfoCubes and other ODSs because of this.
    Best regards,
    Venkat

  • Changing itab to char type

    hi mates,
    I have an itab which has int, char and decimal type fields in it. Now I need to translate the itab to uppercase. When I use the statement "TRANSLATE itab TO UPPERCASE." I get an error stating that itab should be of type char.
    How can I make the itab of type char? Any help will be appreciated...
    regards
    mano

    Hi,
    Consider that you have the work area of your output table structure; after assigning all the field values you append it to the internal table.
    Declare a string-type variable:
    DATA l_string TYPE string.
    CLEAR l_string.
    l_string = 'abcd'.
    TRANSLATE l_string TO UPPERCASE.
    wa_output-field1 = l_string.
    APPEND wa_output TO t_output.
    CLEAR wa_output.

  • Changing the default char encoding of the current JVM ?!

    Is there any way that could be used to alter the default character encoding of the Java Virtual Machine at the start of a Java application?

    This seems a little dangerous...considering that file i/o etc depend on correct charset encodings for filenames, etc.
    However, perhaps you could try setting the file.encoding property on the command line:
    java -Dfile.encoding=Big5 YourApplication
    Regards,
    John O'Conner

  • Change column from CHAR(1) to NUMBER(6)

    Hi,
    I have two tables referenced by

    see my reply in this thread: "Please help me" (thread.jspa?threadID=432846&tstart=0)
    hope that helps

  • Forms 6i Against 9i UTF8 database with NLS_LENGTH_SEMANTICS = CHAR

    Hi all,
    I have a 9.2.0.4 database which was created with a character set of UTF8.
    I want to start storing multibyte characters in this database so I changed the init.ora parameter NLS_LENGTH_SEMANTICS to CHAR and restarted.
    I then installed my application into the database and all the definitions have correctly been created using CHAR length semantics.
    The client to this application is Forms 6i but I am having serious problems compiling it against the application schema.
    When I open a pll library and try to compile I get Error 201 for every %type variable declaration and also for server based packages.
    I am definitely connected and to the correct schema where the objects all exist.
    I have another application schema - identical except that it was created before I made the change to NLS_LENGTH_SEMANTICS so it uses BYTE. When I compile the same library connected to this schema there are no problems.
    Also, if I try to connect to the CHAR schema using Forms (i.e. 8.0.6) SQL*Plus it crashes with a Windows Application Error. If I connect with a 9.2.0.1 SQL*Plus its fine.
    My question is should it be possible to compile a 6i Form against a UTF8 database with CHAR NLS length semantics at all? Have I done something wrong or am I barking up the wrong tree? Has anyone else tried this?
    Many thanks in advance,
    Kevin.

    Hi,
    Thanks for the responses but I don't think you're quite right. Forms 6i should indeed work with Unicode as a database character set as long as it is UTF8 rather than AL32UTF8. This can be seen in various Metalink documents and also in the Release 6i Guidelines for Building Applications here (section 4.1.5):
    http://download-west.oracle.com/otn_hosted_doc/forms/forms/A73073_01.pdf
    The problems started when I changed NLS_LENGTH_SEMANTICS to CHAR. It's understandable if Forms 6i does not recognise this setting, as it pre-dates it, but I wanted to confirm this.
    We want to migrate our existing application to work with multibyte characters. Unfortunately there are table columns, PL/SQL variables etc. that are too small when defined as a length in bytes.
    I am looking for the least impact method to rectify this. On the server I can use NLS_LENGTH_SEMANTICS but if I can't get Forms to work with it then I may need to rethink.
    I realise that using Forms 9i or 10g would support the parameter but it is not possible to upgrade at the moment.
    Has anyone any other suggestions or methods for migrating an application to using multibyte characters?
    Thanks again,
    Kevin.

  • Need help on changing parameter NLS_LENGTH_SEMANTICS

    hi All,
    Oracle is 11.2.0.1.0
    Initially our db parameter was NLS_LENGTH_SEMANTICS='CHAR'; for one table I am getting an error
    like
    ORA-01450: maximum key length (6398) exceeded Missing Resource String.
    I changed it to NLS_LENGTH_SEMANTICS='BYTE'
    and now I am able to create that table.
    Is there any side effect if I change NLS_LENGTH_SEMANTICS from CHAR to BYTE,
    like
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS='BYTE' scope=both;
    Please help me .
    Thanks In advance.

    Hi;
    Please check the notes below, which could be helpful to get your answer:
    SCRIPT: Changing columns to CHAR length semantics ( NLS_LENGTH_SEMANTICS ) [ID 313175.1]
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    Changing NLS_Length_semantics from BYTE to CHAR [ID 974744.1]
    Can The Parameter NLS_LENGTH_SEMANTICS Be Changed From BYTE To CHAR? [ID 906801.1]
    Regards
    Helios
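    Note that changing the parameter affects only columns created or modified afterwards; existing columns keep the semantics they were defined with. A quick way to check (YOUR_TABLE is a placeholder):

```sql
-- CHAR_USED shows the semantics each column was actually created with:
-- 'B' = byte semantics, 'C' = character semantics.
SELECT column_name, data_type, char_length, char_used
FROM   user_tab_columns
WHERE  table_name = 'YOUR_TABLE'
AND    data_type IN ('VARCHAR2', 'CHAR');
```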

  • String with length 1 became 4 when NLS_LENGTH_SEMANTICS=CHAR

    Hi,
    We are using an Oracle 9.2.0.1 database with charset = UTF-8, and we have set NLS_LENGTH_SEMANTICS=CHAR.
    The problem in our Java program is that for a column of char(1), after I insert the Java constant 'Y', the result I get back from a SELECT is the string 'Y   ', i.e. the string returned is not a string with length=1, but with length=4!! How can I correct the problem to get a string with length=1?
    Regards,
    Roy

