AL16UTF16 as nchar set

I'm trying to wrap my head around multi-byte character set support, and I'm having quite a time. Our production database was created before my time, so I can't say why it was created with the WE8MSWIN1252 character set, but here's our setup.
10.2.0.2 EE (upgraded from 8.1.7 -> 9.2.0) on AIX 5.3
Character set: WE8MSWIN1252
National character set: AL16UTF16
I created a test table
create table balvey.tsting (COL1 VARCHAR2(50), COL2 NVARCHAR2(50), id number, descrip varchar2(30));
and inserted some records via different connections from a Windows client. None of them really gave me what I was expecting. I used SQL Developer connected to a 10.2 database with the AL32UTF8 character set to select the Greek phi symbol 'Φ', then copied and pasted it from there into each insert statement.
From a PuTTY connection on the Windows client to the server:
insert into balvey.tsting values
('.', '.', 1, 'phi aix/slqplus 1252');
insert into balvey.tsting values
(chr(52902), chr(52902), 2, '52902 aix/sqlplus 1252');
From SQL*Plus on the Windows client:
insert into balvey.tsting values
('F', 'F', 3, 'phi win/sqlplus 1252');
insert into balvey.tsting values
(chr(52902), chr(52902), 4, '52902 win/sqlplus 1252');
From SQL Developer on the Windows client:
insert into balvey.tsting values
('Φ', 'Φ', 5, 'Φ win/sqldev 1252');
insert into balvey.tsting values
(chr(52902), chr(52902), 6, '52902 win/sqldev 1252');
Then, selecting back out of the database from each client application, I didn't get the Φ from any of the records. It didn't surprise me that I didn't get it from the VARCHAR2 column, but I thought I would have gotten it from the NVARCHAR2 column, at least from the SQL Developer client.
Here are the results from selecting back out via each client app.
From the AIX client:
system@JDEDEV> select * from balvey.tsting order by id;
COL1                           COL2                                   ID DESCRIP
.                              .                                       1 phi aix/slqplus
¦                              ¦                                       2 52902 aix/sqlplus
F                              F                                       3 phi win/sqlplus
¦                              ¦                                       4 52902 win/sqlplus
¦                              ¦                                       5 ¦ win/sqldev
¦                              ¦                                       6 52902 win/sqldev
6 rows selected.
From SQL*Plus on the Windows client:
SQL> select * from balvey.tsting order by id;
COL1                           COL2                                   ID DESCRIP
.                              .                                       1 phi aix/slqplus
¦                              ¦                                       2 52902 aix/sqlplus
F                              F                                       3 phi win/sqlplus
¦                              ¦                                       4 52902 win/sqlplus
¦                              ¦                                       5 ¦ win/sqldev
¦                              ¦                                       6 52902 win/sqldev
From SQL Developer on the Windows client:
select * from balvey.tsting order by id;
.     .     1     phi aix/slqplus
¦     ¦     2     52902 aix/sqlplus
F     F     3     phi win/sqlplus
¦     ¦     4     52902 win/sqlplus
¦     ¦     5     ¦ win/sqldev
¦     ¦     6     52902 win/sqldev
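A useful cross-check at this point is to look at the bytes actually stored rather than at what each client renders. This is only a hedged diagnostic sketch against the test table above: DUMP with format 1016 reports each value's character set and its bytes in hex, which separates storage corruption from display-time conversion.
-- diagnostic sketch: show character set and hex bytes actually stored
select id,
       dump(col1, 1016) as col1_bytes,
       dump(col2, 1016) as col2_bytes
  from balvey.tsting
 order by id;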

Well, I'm not running into ORA- errors or using an 8i client to connect, but I think indirectly you have helped clear up some of the confusion. That ML note points to ML #227330.1, and point #14 in that note is, "14. I'm inserting <special character> in a Nchar or Nvarchar2 col but it comes back as ? or ¿ ...". I wasn't necessarily getting the ? or ¿, but that led me to the suggestion to add the setting in SQL Developer that allows the N literal prefix in the insert statement, like so:
insert into balvey.tsting values
('Φ', N'Φ', 9, 'NΦ win/sqldev');
I had already tried that, but it didn't work until the setting change. Now selecting back out via SQL Developer does return the Φ from the NVARCHAR2 column. The note also pointed to using SQL*Loader to load from a flat file, since SQL*Plus is not a Unicode application.
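An alternative that does not depend on any client setting is to spell out the code point with UNISTR, since the escape sequence itself is plain ASCII. A hedged sketch, assuming U+03A6 is the code point wanted for Φ (the id and description values are made up to follow the pattern above):
insert into balvey.tsting values
('phi', unistr('\03A6'), 10, 'unistr win/sqldev');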
So while I'm still far from understanding all there is to know about character sets, I'm not quite as confused now. Thanks.

Similar Messages

  • Upgrading user tables with NCHAR columns in 10.2.0.1

    Hi,
    we have upgraded our database to 10.2.0.1 from 8.1.7.4 through DBUA.
    Before the upgrade our database had the WE8ISO8859P1 character set and the WE8ISO8859P1 national character set.
    In 10g we have the WE8ISO8859P1 character set and the AL16UTF16 national character set.
    Now, to upgrade the user tables with NCHAR columns, we have to perform the following steps to run the scripts:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP RESTRICT
    SQL> @ utlnchar.sql
    SQL> @ n_switch.sql
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP
    But when I query for the NCHAR, NVARCHAR2 or NCLOB datatypes for verification, the NCHAR columns in the user tables have not been upgraded; they still remain the same.
    Kindly advise.
    Regards
    Milin

    Kindly explain or post the following
    - the 'query' you used after 'upgradation' (a word not occurring in any dictionary and probably specific to the Hindi-English) for 'verification'
    - what result you expected
    - what 'still remains the same' means
    Kindly consider that no one is looking over your shoulder. If you don't plan to post anything other than 'It doesn't work', paid support might be a better option for you.
    Volunteers like us are not being paid to tear the information out of you.
    Sybrand Bakker
    Senior Oracle DBA
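    For reference, a hedged sketch of the sort of verification query being asked for: it confirms the national character set and lists the non-dictionary columns that use it.
    select value from nls_database_parameters
     where parameter = 'NLS_NCHAR_CHARACTERSET';
    -- columns stored in the national character set, excluding dictionary schemas
    select owner, table_name, column_name, data_type
      from dba_tab_columns
     where data_type in ('NCHAR', 'NVARCHAR2', 'NCLOB')
       and owner not in ('SYS', 'SYSTEM');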

  • National Character setting

    DB version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    using SQL Developer version 1.5 to test
    My DB has the national character set set to Unicode (NLS_NCHAR_CHARACTERSET = AL16UTF16) and the character set set to US7ASCII (NLS_CHARACTERSET = US7ASCII).
    When I create a table with an NVARCHAR2 column and insert a word in my language (a month name), then upon fetching the rows it shows different results... why such behavior?
    select sysdate from dual;
    04-जुलै      -12
    create table test_nj (a nvarchar2(50));
    insert into test_nj values('जुलै');
    commit;
    select * from test_nj;
    A2H
    A2H
    A2H
    A2H

    Do you have a data storage problem or a data presentation problem?
    SELECT ASCIISTR(A) FROM TEST_NJ;
    Post the results from the query above.
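    For what it's worth, a hedged sketch of the likely root cause and a workaround: with a US7ASCII database character set, a plain 'जुलै' literal is converted through US7ASCII on the way in, so the Devanagari characters are lost before they reach the NVARCHAR2 column. UNISTR builds the value directly in the national character set; the code points below are assumed to be those of जुलै.
    -- U+091C U+0941 U+0932 U+0948: assumed code points for जुलै
    insert into test_nj values (unistr('\091C\0941\0932\0948'));
    commit;
    -- asciistr shows the stored code points regardless of client display
    select asciistr(a) from test_nj;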

  • Import fails with unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 6999

    Hi,
    I have taken an export backup of a table from 9.2.0.4 on AIX and am trying to import it into 11.1.0.7.0 on AIX.
    While importing I'm getting the following error:
    ORA-01653: unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 699912 in tablespace CUSTOM
    The table size is 37G, total free space in the tablespace is 40G,
    and there is no index on the table.
    Following are some lines from the import log:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V09.02.00 via direct path
    import done in US7ASCII character set and UTF8 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
    . importing DATAMIG's objects into CUSTOM
    . . importing table "CASA_TRAN_HIST_UPLD"
    IMP-00058: ORACLE error 1653 encountered
    ORA-01653: unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 699912 in tablespace CUSTOM
    IMP-00028: partial import of previous table rolled back: 62844421 rows rolled back
    IMP-00017: following statement failed with ORACLE error 1917:
    "GRANT SELECT ON "CASA_TRAN_HIST_UPLD" TO "BSGUSER""
    IMP-00003: ORACLE error 1917 encountered
    ORA-01917: user or role 'BSGUSER' does not exist
    Import terminated successfully with warnings.
    Is there any way to resolve the issue?
    How do I change the NCHAR character set for the import?
    Thanks

    Hello,
    Regarding "which & how I can set the character set for import": the character set is chosen at database creation. You may check it by running the following query on the source and target databases:
    select * from v$nls_parameters;
    The NLS_CHARACTERSET row gives you the character set of the database.
    It cannot be changed easily; it may imply a database re-creation and an export/import of the data (see Note 225912.1).
    Also, when you export (with the original Export/Import utility) it's recommended to set the NLS_LANG parameter.
    The NLS_LANG parameter has 3 components:
    - Language
    - Territory
    - Client Character Set
    A wrong setting of NLS_LANG may lead to conversion. However, starting with 9i most data is exported in the character set of the database regardless of the NLS_LANG setting. The following note may give you some details about it:
    Export/Import and NLS Considerations [ID 15095.1]
    Hope this helps.
    Best regards,
    Jean-Valentin
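    As an aside, the ORA-01653 itself is a space problem rather than a character-set one; a hedged sketch of the usual remedy (the datafile paths below are hypothetical):
    -- either let an existing datafile of the CUSTOM tablespace grow ...
    alter database datafile '/u01/oradata/custom01.dbf'
      autoextend on next 1g maxsize 32767m;
    -- ... or add another datafile to the tablespace
    alter tablespace custom add datafile '/u01/oradata/custom02.dbf'
      size 10g autoextend on;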

  • How to insert multi-lingual information into an NVARCHAR column

    Hi,
    I've searched through the forum but couldn't find an answer, and have spent days on this already.
    I have a Java Web Application, and the user may enter information in any language into one of the forms. I need to collect the user-input, and insert it into an Oracle 10g database.
    My database settings:
    WE8ISO8859P1 for CHAR/VARCHAR2/CLOB columns and
    AL16UTF16 for NCHAR/NVARCHAR2/NCLOB columns.
    I have defined the columns that require multi-lingual support as NVARCHAR2.
    My question is, how do I insert into this column? From what I read, it seems that no conversion needs to be done, but doing this:
    update <table> set <column> = '什么'
    does not work. The '什么' is some Chinese characters I copied and pasted from a Chinese site.
    Apparently, if I convert the Chinese characters to a hex string and do this:
    update <table> set <column> = unistr('\xxxx')
    then it will work.
    However, is there a way to avoid doing this conversion manually and let Oracle handle it?
    Thanks in advance,
    Elaine
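    A hedged sketch of the usual approaches (the table and column names are hypothetical, and U+4EC0 U+4E48 are the code points of 什么): from Java the cleanest route is a bind variable bound in NCHAR form; in plain SQL the same effect comes from an N'...' literal sent by a client that does not pre-convert it, or from UNISTR.
    create table i18n_demo (descr nvarchar2(100));
    -- depends on the client passing the literal through unconverted
    insert into i18n_demo values (n'什么');
    -- independent of the client character set
    insert into i18n_demo values (unistr('\4EC0\4E48'));
    select descr, asciistr(descr) from i18n_demo;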

    Hi Denes,
    I saw the example in your workspace and it is exactly what I want, except that instead of storing in one column I want to store the two selected values in two different columns. Also, I need to restrict the selection to 2 checkboxes only, so if the user tries to select a third checkbox it should not be accepted.
    I am even ready to change my table according to your example, i.e. creating only one column and storing the selected values in that column.
    I was unable to see how you wrote the logic (code) for your example in your workspace. It would help a lot if you provided me the code for that example (Multi Checkbox One Column).
    I was fascinated after seeing your examples in your workspace and am very much interested in learning more about APEX.
    Please help me in solving this, as it is a long-pending issue for my requirement.
    Thanks a lot again,
    Sekhar.

  • Changing Character Set from AL16UTF16 to AL32UTF8

    Hi,
    I am stuck with a requirement to change the Character Set from AL16UTF16 to AL32UTF8.
    I am trying to install the Oracle Content Database, and the installer expects the target database to have a character set of AL32UTF8. The current character set of the database is AL16UTF16. I am unable to change the character set, as it simply complains that AL32UTF8 is not a superset of AL16UTF16. The documents that I have consulted mention that such a transition is not recommended.
    How do I convince the installer to continue the installation with AL16UTF16? Or is it possible at all to change the character set from AL16UTF16 to AL32UTF8?
    Please do let me know your thoughts on this.
    Regards,
    Sandeep

    Is your data size in the GB range? If not, this might help before installation, I guess:
    Changing the Database Character Set of an Existing Database
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1476
    Kind regards,
    Tonguç
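    One hedged observation: AL16UTF16 can only be used as the national character set, not as the database character set, so it is worth confirming what the database and national character sets actually are before planning any migration.
    select parameter, value
      from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');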

  • NCHAR Character set problem

    Hi all,
    I am having Oracle 9.2.0.1.
    I have a situatuion which looks peculiar to me. I have one SID name TEST. In the sys.props$ table I can see the NLS_NCHAR_CHARCATERSET as AL16UTF16.
    However, in another schema in the same SID, the v$nls_parameters show NLS_NCHAR_CHARACTERSET as UTF8.
    When I do a select on the NVARCHAR columns, the data is returned as inverted ?. However, the data is stored properly [I have checked ASCII values using dump].
    My understanding is that the values in PROPS$ and V$NLS_PARAMETERS should be same. I don't know how they are being displayed differently.
    Please help.......

    Yes, they should be the same. But v$nls_parameters is based on values
    stored in SGA, while sys.props$ (NLS_DATABASE_PARAMETERS)
    queries the Data Dictionary table.
    Restart your database (even twice). If it does not help, make sure
    that V$NLS_PARAMETERS has not been redefined in the schema
    from which you are testing. If it still does not help, and you are
    Oracle Support's customer, try the newest patchset. If it does not help,
    log a Service Request.
    -- Sergiusz
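    A hedged sketch of the redefinition check suggested above: look for any local object shadowing the public V$NLS_PARAMETERS synonym, and compare against the dictionary value.
    select owner, object_name, object_type
      from all_objects
     where object_name = 'V$NLS_PARAMETERS';
    select value
      from nls_database_parameters
     where parameter = 'NLS_NCHAR_CHARACTERSET';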

  • Problem with Character Set in Oracle database 10g

    Hi,
    I tried to import one tablespace into a test server. The source server runs Oracle 8i and the target server runs Oracle Database 10g. The error I get is:
    Import: Release 10.2.0.1.0 - Production on Thu Aug 3 00:20:49 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: sys as sysdba
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V08.01.07 via conventional path
    About to import transportable tablespace(s) metadata...
    import done in WE8DEC character set and AL16UTF16 NCHAR character set
    export server uses WE8DEC NCHAR character set (possible ncharset conversion)
    . importing SYS's objects into SYS
    . importing SYS's objects into SYS
    IMP-00017: following statement failed with ORACLE error 19736:
    "BEGIN sys.dbms_plugts.beginImport ('8.1.7.4.0',2,'2',NULL,'NULL',67051,25"
    "51,2); END;"
    IMP-00003: ORACLE error 19736 encountered
    ORA-19736: can not plug a tablespace into a database using a different national character set
    ORA-06512: at "SYS.DBMS_PLUGTS", line 2386
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1946
    ORA-06512: at line 1
    IMP-00000: Import terminated unsuccessfully
    Please, can somebody help in getting this resolved? Has anybody seen this error before?

    The solution to this problem is described in MetaLink note #211920.1. But this note is published with LIMITED access as it involves using a hidden parameter.
    You can get access to the note through Oracle Support only.
    The problem itself is solved generically if the source database is at least 10.1.0.3 and the target database is 10.2.
    -- Sergiusz
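    Before chasing the restricted note, it may be worth confirming the national character sets that the plug-in check compares, on both the 8.1.7 source and the 10.2 target (a hedged sketch):
    select parameter, value
      from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');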

  • Precautions i need to take when changing the Character set

    Hi,
    ORACLE VERSION: 10G Release 1 (10.1.0.3.0)
    I am going to change my database's character set from AL32UTF8 to the WE8MSWIN1252 character set with the AL16UTF16 NCHAR character set. So I have a few questions for you.
    1. What is the difference between the character set and the national character set? Do I have to set both?
    2. What precautions do I need to take while changing the character set?
    3. What are the JOB_QUEUE_PROCESSES and AQ_TM_PROCESSES parameters, in plain English? Why do I have to set these parameters to 0, as mentioned in the post below?
    Storing Chinese in Oracle Database

    1) The database character set controls (and specifies) the character set of CHAR & VARCHAR2 columns. The national character set controls the character set of NCHAR & NVARCHAR2 columns.
    2) Please make sure that you read the section of the Globalization manual that discusses character set migration. In particular, going from UTF-8 to Windows-1252 is going to require a bit more work since the latter is a subset (and not a strict binary subset) of the former.
    Justin
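    To make point 2 concrete, a hedged sketch (table and column names are hypothetical) of how to spot data that will not survive an AL32UTF8 to WE8MSWIN1252 migration: characters with no WIN-1252 mapping do not round-trip through CONVERT. The supported tool for a full scan is the Database Character Set Scanner (csscan); the query is only a quick spot check.
    select id, txt
      from some_table
     where txt <> convert(convert(txt, 'WE8MSWIN1252', 'AL32UTF8'),
                          'AL32UTF8', 'WE8MSWIN1252');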

  • Character set mismatch in copying from oracle to oracle

    I have a set of ODI scripts that are copying from a source JD Edwards ERP database (Oracle 10g) to a BI datamart (Oracle 10g) and all the original scripts work OK.
    However, I have mapped on some additional tables in the ERP source database and some new BI tables in the target datamart database (Oracle to Oracle), but get an error when I try to execute these.
    The operator log shows that the error is in the 'INSERT FLOW INTO I$ TABLE' step and the error is ORA-12704: character set mismatch.
    The character sets for both Oracle databases are the same (and have not changed): the main NLS_CHARACTERSET is AL32UTF8 and the national NLS_NCHAR_CHARACTERSET is AL16UTF16.
    This works for tables containing NCHAR and NUMBER columns in the previous scripts, but not for anything I write now.
    The only other difference is that there was a recent upgrade of ODI to 10.1.3.5 - the repositories are also upgraded.
    Any ideas ?
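    A hedged note on the error itself: ORA-12704 is typically raised when a single statement mixes CHAR/VARCHAR2 and NCHAR/NVARCHAR2 operands without an explicit conversion (for example a plain '' literal mapped onto an NVARCHAR2 target). Wrapping the CHAR side in TO_NCHAR, or using an N'' literal, in the offending mapping expression removes the mismatch; the names below are hypothetical.
    insert into i$_target (n_col)
    select to_nchar(src_col) from src_table;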

    Hi Ravi,
    yes, a gateway would help. In 11.2 Oracle offers 2 kinds of gateways to a SQL Server: a free gateway based on 3rd party ODBC drivers (you need to get them from a 3rd party vendor, they are not included in the package) called Database Gateway for ODBC (=DG4ODBC), and a very powerful Database Gateway for MS SQL Server (=DG4MSQL) which also allows you to execute distributed transactions and call remote SQL Server stored procedures. Please keep in mind that DG4MSQL requires a separate license.
    As you didn't post which platform you're going to use, please check "My Oracle Support" (=MOS), where you'll find notes on how to configure each gateway for all supported platforms; just look for DG4MSQL or DG4ODBC.
    On OTN you'll also find the manuals.
    DG4ODBC: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12070.pdf
    DG4MSQL: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12069.pdf
    The generic gateway installation for Unix: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12013.pdf
    and for Windows: http://download.oracle.com/docs/cd/E11882_01/gateways.112/e12061.pdf

  • Character Set issues.  Please advise

    I have a client who uses a 10gR2 DB that stores both English and French data. Several times they will send us a .dmp file, which we load into our database.
    2 questions.
    What would be the best character sets to use in this setup?
    I am assuming we would use
    NLS_CHARACTERSET = WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET = AL16UTF16
    Also if someone can confirm for me.
    NLS_CHARACTERSET = database character set ???
    NLS_NCHAR_CHARACTERSET = national character set???

    So is it better to say that I should use AL32UTF8 instead of AL16UTF16?
    It's not an instead-of situation. AL32UTF8 is a valid setting for the database character set, which controls CHAR and VARCHAR2 columns. AL16UTF16 is a valid setting for the national character set, which controls NCHAR and NVARCHAR2 columns.
    Could you tell me the difference?
    The difference between the two encodings comes down to how many bytes are required to store a particular code point (character). AL32UTF8 is a variable-length character set, so 1 character will require between 1 and 3 bytes of storage (4 for supplemental characters, but those are rather rare). AL16UTF16 is a fixed-width character set, so 1 character will require 2 bytes of storage (4 for the rare supplemental characters again).
    Also, could you tell me the difference between WE8ISO8859P15 and WE8ISO8859P1?
    There's a Wikipedia article that discusses the differences and has links to the two code tables.
    Werner's point is an excellent one as well. I was assuming that we were talking about how to set up both sides of this proposed system. If the source system already exists, there are additional considerations like ensuring that your target system supports a superset of the characters supported by the source system. Regardless, when doing imports & exports, as Werner points out, you need to ensure that NLS_LANG is set appropriately.
    Justin
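    To make the storage claim above concrete, a hedged sketch (the results assume an AL32UTF8 database character set and the AL16UTF16 national character set): LENGTHB returns the byte length, so plain ASCII costs 1 byte in VARCHAR2 versus 2 in NVARCHAR2, and an accented Western European character costs 2 versus 2.
    select lengthb('e')  as e_utf8_bytes,
           lengthb(n'e') as e_utf16_bytes,
           lengthb('é')  as acc_utf8_bytes,
           lengthb(n'é') as acc_utf16_bytes
      from dual;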

  • Database character set = UTF-8, but mismatch error on XML file upload

    Dear experts,
    I am having problems trying to upload an XML file into an XMLType table. The Database is 9.2.0.5.0, with the character set details:
    SELECT *
    FROM SYS.PROPS$
    WHERE name like '%CHA%';
    Query results:
    NLS_NCHAR_CHARACTERSET          UTF8     NCHAR Character set
    NLS_SAVED_NCHAR_CS          UTF8
    NLS_NUMERIC_CHARACTERS          .,     Numeric characters
    NLS_CHARACTERSET          UTF8     Character set
    NLS_NCHAR_CONV_EXCP          FALSE     NLS conversion exception
    To upload the XML file into the XMLType table, I am using the command:
    insert into XMLTABLE
    values(xmltype(getClobDocument('ServiceRequest.xml','UTF8')));
    However, I get the error:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00200: could not convert from encoding UTF-8 to UCS2
    Error at line 1
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    Why does it mention UCS2, as I can't see that in the database character set?
    Many thanks for your help,
    Mark

    UCS2 is known as AL16UTF16 (LE/BE) by Oracle...
    Try using AL32UTF8 as the character set name.
    AFAIK the main difference between Oracle's UTF8 and AL32UTF8 character sets is that UTF8 does not support the UTF-8 characters that require 4 bytes.
    -Mark
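    Putting the suggestion together, a hedged sketch, assuming getClobDocument is the locally defined helper from the original insert that takes a file name and a character set name; NLS_CHARSET_ID is just a sanity check that the name is recognized.
    -- confirm the character set name resolves to a valid id
    select nls_charset_id('AL32UTF8') from dual;
    insert into XMLTABLE
    values (xmltype(getClobDocument('ServiceRequest.xml', 'AL32UTF8')));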

  • Oracle9i Export Problem with NCHAR

    Hi,
    I have a schema in database (Oracle 9.2.0.7 running on SuSeLinux 9) having following nls parameters
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET AL16UTF16
    Now I want to export the schema with NLS_CHARACTERSET set to AL32UTF8 and NLS_NCHAR_CHARACTERSET set to UTF8.
    I used the following commands in database server. BTW, the shell is bash shell
    $ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    $ export NLS_NCHAR=UTF8
    $ exp username owner=test file=exp_test.dmp log=exp_test.log
    But it always exports with AL32UTF8 and NCHAR AL16UTF16, whereas I want to export the NCHAR data as UTF8.
    Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P1 character set (possible charset conversion)
    Can some one help me do this?
    Thanks in advance for all your help.
    Regards,
    Murali

    Thanks for all the replies and helpful links.
    From note 15095.1 I see: "NCHAR/NVARCHAR2s are always exported in the database's national character set. This is something you can't influence by setting any parameters." So that means we can only export with what we have :-( and cannot change the NCHAR character set while exporting.

  • Conversions between character sets when using exp and imp utilities

    I use the EE8ISO8859P2 character set on my server. When exporting the database with NLS_LANG not set, the conversion should be done between the EE8ISO8859P2 and US7ASCII charsets, so some characters not present in US7ASCII should not be successfully converted.
    But when I import such a dump, all characters not present in the US7ASCII charset are imported into the database.
    I thought that some characters would be lost when doing such conversions; can someone tell me why this is not so?

    Not exactly. If the import is done into the same DB character set, then it does not matter how it was exported. Conversion (corruption) may happen if the destination DB has a different character set. See this example:
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:01 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> create table test(col1 varchar2(1));
    Table created.
    TEST@db102 SQL> insert into test values(chr(166));
    1 row created.
    TEST@db102 SQL> select * from test;
    C
    ¦
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:47:55 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ©
    Typ=1 Len=1: 166
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ echo $NLS_LANG
    AMERICAN_AMERICA.EE8ISO8859P2
    [ora102 work db102]$ exp test/test file=test.dmp tables=test
    Export: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:47 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    . . exporting table                           TEST          1 rows exported
    Export terminated successfully without warnings.
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:48:56 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> drop table test purge;
    Table dropped.
    TEST@db102 SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    [ora102 work db102]$ imp test/test file=test.dmp
    Import: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:15 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EE8ISO8859P2 character set and AL16UTF16 NCHAR character set
    import server uses WE8ISO8859P15 character set (possible charset conversion)
    . importing TEST's objects into TEST
    . importing TEST's objects into TEST
    . . importing table                         "TEST"          1 rows imported
    Import terminated successfully without warnings.
    [ora102 work db102]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jul 25 14:49:34 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select col1, dump(col1) from test;
    C
    DUMP(COL1)
    ¦
    Typ=1 Len=1: 166
    TEST@db102 SQL>

  • Character set problem during import; stopped by ORA-01435 and IMP-00008 errors. Please advise.

    Hello,
    I was trying to run the import below, but it stopped with errors.
    C:\temp>imp userid=system/1234 file=c:\temp\usr_lms2-TS_LMS_DEV_D1.dmp FULL=y
    The error messages are shown below.
    For reference, the database that was exported had a usr_lms2 account, and this database I'm importing into did not have that account, so I created the usr_lms2 account and then ran the import.
    Question 1: Are the character-set messages below normal?
    Question 2: Why does the "ORA-01435: user does not exist" error occur?
    Question 3: Does the "IMP-00008: unrecognized statement in the export file" error mean there is a problem with the dump file itself?
    I would appreciate any advice on how to resolve this.
    Import: Release 11.2.0.2.0 - Production on Thu Mar 28 10:00:38 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in KO16MSWIN949 character set and AL16UTF16 NCHAR character set <-------- ?
    import server uses AL32UTF8 character set (possible charset conversion) <-------- ?
    export server uses UTF8 NCHAR character set (possible ncharset conversion) <-------- ?
    . importing SYSTEM's objects into SYSTEM
    . importing USR_LMS2's objects into USR_LMS2
    IMP-00003: ORACLE error 1435 encountered <-------------------- error
    ORA-01435: user does not exist <--------------------- error
    IMP-00008: unrecognized statement in the export file: <-------------------- error
    Import terminated successfully with warnings.
    Thank you.
