Conversion of UTF8 to WE8DEC

First of all, I'm new to database-related topics.
I have two Oracle databases:
Sys1 : Oracle 8.X UTF-8 for Web Application
Sys2 : Oracle 8.X WE8DEC for SAP R/3
For certain reasons, I have to view the table contents of Sys1 from Sys2.
The DBA set up a DB link between the two systems.
Over the DB link, alphabetic characters display fine,
but some characters (2-byte Korean and Japanese characters) do not display properly.
I found some PL/SQL conversion functions that convert from one character set to another,
but not for my case: from UTF8 to WE8DEC.
Can anybody give me some help?
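For reference, Oracle's CONVERT function takes the destination character set first and the source character set second, so the call itself is straightforward; a minimal sketch against a hypothetical remote table t1 reached over a DB link named sys1_link (both names are assumptions):
SELECT CONVERT(some_column, 'WE8DEC', 'UTF8') FROM t1@sys1_link;
The real limitation is not the function but the target character set: WE8DEC is a single-byte Western European character set with no code points for Korean or Japanese characters, so any conversion of such data to WE8DEC is necessarily lossy and those characters will come out as replacement characters rather than readable text.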

Hi,
Use the function module 'idoc_xml_transform'.
You can use transaction 'WE60' to convert it;
it converts IDoc data to XML format.
best Regards,
Brijesh

Similar Messages

  • R12 Characterset conversion to UTF8 on 11 G R2 Database

We are doing a character set conversion to UTF8. I ran csscan and it is complaining about a lot of lossy and convertible characters. Do I need to take an export of all the tables below?
Data Dictionary Tables:
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 117,133,414 0 0 0
    CHAR 17,402 0 0 0
    LONG 8,074,421 0 0 0
    CLOB 4,348,695 22,903 0 0
    VARRAY 42,001 0 0 0
    Total 129,615,933 22,903 0 0
    Total in percentage 99.982% 0.018% 0.000% 0.000%
    The data dictionary can be safely migrated using the CSALTER script
    XML CSX Dictionary Tables:
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 501 0 0 0
    CHAR 0 0 0 0
    LONG 0 0 0 0
    CLOB 0 0 0 0
    VARRAY 0 0 0 0
    Total 501 0 0 0
    Total in percentage 100.000% 0.000% 0.000% 0.000%
    [Application Data Conversion Summary]
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 118,182,641 0 103 1,260,883
    CHAR 265 0 0 0
    LONG 61,948 0 0 0
    CLOB 129,455 827 0 0
    VARRAY 32,797 0 0 0
    Total 118,407,106 827 103 1,260,883
    Total in percentage 98.946% 0.001% 0.000% 1.054%
[Distribution of Convertible, Truncated and Lossy Data by Table]
Data Dictionary Tables:
    USER.TABLE Convertible Truncation Lossy
    MDSYS.OPENLS_NODES 17 0 0
    MDSYS.SDO_COORD_OP_PARAM_VALS 200 0 0
    MDSYS.SDO_GEOR_XMLSCHEMA_TABLE 1 0 0
    MDSYS.SDO_STYLES_TABLE 78 0 0
    MDSYS.SDO_XML_SCHEMAS 3 0 0
    ORDDATA.ORDDCM_CT_PRED_OPRD 51 0 0
    ORDDATA.ORDDCM_DOCS 9 0 0
    ORDDATA.ORDDCM_MAPPING_DOCS 1 0 0
    SYS.METASTYLESHEET 178 0 0
    SYS.REGISTRY$ERROR 2 0 0
    SYS.RULE$ 21 0 0
    SYS.SCHEDULER$_EVENT_LOG 182 0 0
    SYS.WRH$_SQLTEXT 2,099 0 0
    SYS.WRH$_SQL_PLAN 1,736 0 0
    SYS.WRI$_ADV_ACTIONS 5,452 0 0
    SYS.WRI$_ADV_DIRECTIVE_META 5 0 0
    SYS.WRI$_ADV_OBJECTS 2,278 0 0
    SYS.WRI$_ADV_RATIONALE 9,594 0 0
    SYS.WRI$_ADV_SQLT_PLANS 455 0 0
    SYS.WRI$_ADV_SQLT_PLAN_STATS 288 0 0
    SYS.WRI$_DBU_FEATURE_METADATA 188 0 0
    SYS.WRI$_DBU_FEATURE_USAGE 16 0 0
    SYS.WRI$_DBU_HWM_METADATA 20 0 0
    SYS.WRI$_REPT_FILES 27 0 0
    XDB.XDB$DXPTAB 2 0 0
    XML CSX Dictionary Tables:
    USER.TABLE Convertible Truncation Lossy
    Application Data:
    USER.TABLE Convertible Truncation Lossy
    APPLSYS.BISM_OBJECTS 4 0 0
    APPLSYS.DR$FND_LOBS_CTX$I 0 103 1,260,883
    APPLSYS.FND_CONC_PROG_ANNOTATIONS 272 0 0
    APPLSYS.FND_OAM_CONTEXT_FILES 15 0 0
    APPLSYS.FND_OAM_DOC_LINK 1 0 0
    APPS.FND_OAM_CONTEXT_FILES_1 6 0 0
    AZ.AZ_APIS 11 0 0
    AZ.AZ_SELECTION_SET_ENTITIES_B 48 0 0
    ECX.ECX_DTDS 205 0 0
    ECX.ECX_FILES 91 0 0
    IBC.IBC_ATTRIBUTE_BUNDLES 41 0 0
    JTF.JTF_HEADER_DTD 1 0 0
    JTF.JTF_MESSAGE_OBJECTS 82 0 0
    JTF.JTY_TRANS_USG_PGM_SQL 29 0 0
    ODM.ODM_PMML_DTD 1 0 0
    OKC.OKC_REPORT_SQL_B 3 0 0
    OKC.OKC_REPORT_SQL_TL 2 0 0
    OKC.OKC_REPORT_XSL_TL 5 0 0
    XDP.XDP_PROC_BODY 10 0 0
    [Distribution of Convertible, Truncated and Lossy Data by Column]
    Data Dictionary Tables:
    USER.TABLE|COLUMN Convertible Truncation Lossy
    MDSYS.OPENLS_NODES|SYS_NC00004$ 17 0 0
    MDSYS.SDO_COORD_OP_PARAM_VALS|PARAM_VALUE_FILE 200 0 0
    MDSYS.SDO_GEOR_XMLSCHEMA_TABLE|XMLSCHEMA 1 0 0
    MDSYS.SDO_STYLES_TABLE|DEFINITION 78 0 0
    MDSYS.SDO_XML_SCHEMAS|XMLSCHEMA 3 0 0
    ORDDATA.ORDDCM_CT_PRED_OPRD|SYS_NC00004$ 51 0 0
    ORDDATA.ORDDCM_DOCS|SYS_NC00005$ 9 0 0
    ORDDATA.ORDDCM_MAPPING_DOCS|SYS_NC00007$ 1 0 0
    SYS.METASTYLESHEET|STYLESHEET 178 0 0
    SYS.REGISTRY$ERROR|MESSAGE 1 0 0
    SYS.REGISTRY$ERROR|STATEMENT 1 0 0
    SYS.RULE$|CONDITION 21 0 0
    SYS.SCHEDULER$_EVENT_LOG|ADDITIONAL_INFO 182 0 0
    SYS.WRH$_SQLTEXT|SQL_TEXT 2,099 0 0
    SYS.WRH$_SQL_PLAN|OTHER_XML 1,736 0 0
    SYS.WRI$_ADV_ACTIONS|ATTR5 2,726 0 0
    SYS.WRI$_ADV_ACTIONS|ATTR6 2,726 0 0
    SYS.WRI$_ADV_DIRECTIVE_META|DATA 5 0 0
    SYS.WRI$_ADV_OBJECTS|ATTR4 2,278 0 0
    SYS.WRI$_ADV_RATIONALE|ATTR5 9,594 0 0
    SYS.WRI$_ADV_SQLT_PLANS|OTHER_XML 455 0 0
    SYS.WRI$_ADV_SQLT_PLAN_STATS|OTHER 288 0 0
    SYS.WRI$_DBU_FEATURE_METADATA|INST_CHK_LOGIC 21 0 0
    SYS.WRI$_DBU_FEATURE_METADATA|USG_DET_LOGIC 167 0 0
    SYS.WRI$_DBU_FEATURE_USAGE|FEATURE_INFO 16 0 0
    SYS.WRI$_DBU_HWM_METADATA|LOGIC 20 0 0
    SYS.WRI$_REPT_FILES|SYS_NC00005$ 27 0 0
    XDB.XDB$DXPTAB|SYS_NC00006$ 2 0 0
    XML CSX Dictionary Tables:
    USER.TABLE|COLUMN Convertible Truncation Lossy
    Application Data:
    USER.TABLE|COLUMN Convertible Truncation Lossy
    APPLSYS.BISM_OBJECTS|SYS_NC00023$ 4 0 0
    APPLSYS.DR$FND_LOBS_CTX$I|TOKEN_TEXT 0 103 1,260,883
    APPLSYS.FND_CONC_PROG_ANNOTATIONS|PROGRAM_ANNOTAT 272 0 0
    APPLSYS.FND_OAM_CONTEXT_FILES|TEXT 15 0 0
    APPLSYS.FND_OAM_DOC_LINK|DOC_LINK_INFO 1 0 0
    APPS.FND_OAM_CONTEXT_FILES_1|TEXT 6 0 0
    AZ.AZ_APIS|FILTERING_PARAMETERS 11 0 0
    AZ.AZ_SELECTION_SET_ENTITIES_B|FILTERING_PARAMETE 48 0 0
    ECX.ECX_DTDS|PAYLOAD 205 0 0
    ECX.ECX_FILES|PAYLOAD 91 0 0
    IBC.IBC_ATTRIBUTE_BUNDLES|ATTRIBUTE_BUNDLE_DATA 41 0 0
    JTF.JTF_HEADER_DTD|HEADER_DTD 1 0 0
    JTF.JTF_MESSAGE_OBJECTS|BUS_OBJ_DTD 41 0 0
    JTF.JTF_MESSAGE_OBJECTS|BUS_OBJ_SQL 41 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|BATCH_DEA_SQL 1 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|BATCH_INCR_SQL 5 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|BATCH_TOTAL_SQL 6 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|INCR_REASSIGN_SQL 5 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|REAL_TIME_INSERT 6 0 0
    JTF.JTY_TRANS_USG_PGM_SQL|REAL_TIME_SQL 6 0 0
    ODM.ODM_PMML_DTD|DTD 1 0 0
    OKC.OKC_REPORT_SQL_B|SQL_TEXT 3 0 0
    OKC.OKC_REPORT_SQL_TL|HELP_TEXT 2 0 0
    OKC.OKC_REPORT_XSL_TL|HELP_TEXT 2 0 0
    OKC.OKC_REPORT_XSL_TL|XSL_TEXT 3 0 0
    XDP.XDP_PROC_BODY|PROC_BODY 10 0 0
    [Indexes to be Rebuilt]
    USER.INDEX on USER.TABLE(COLUMN)
    APPLSYS.DR$FND_LOBS_CTX$X on APPLSYS.DR$FND_LOBS_CTX$I(TOKEN_TEXT)
    APPLSYS.DR$FND_LOBS_CTX$X on APPLSYS.DR$FND_LOBS_CTX$I(TOKEN_TYPE)
    APPLSYS.DR$FND_LOBS_CTX$X on APPLSYS.DR$FND_LOBS_CTX$I(TOKEN_FIRST)
    APPLSYS.DR$FND_LOBS_CTX$X on APPLSYS.DR$FND_LOBS_CTX$I(TOKEN_LAST)
    APPLSYS.DR$FND_LOBS_CTX$X on APPLSYS.DR$FND_LOBS_CTX$I(TOKEN_COUNT)
    IBC.IBC_ATTRIBUTE_BUNDLES_CTX on IBC.IBC_ATTRIBUTE_BUNDLES(ATTRIBUTE_BUNDLE_DATA)
    -----------------------------------------------------------------------------------------

Sawwan,
I am getting the error below:
    [oracle@uat nls]$ exp file=exp_utf8.dmp parfile=exp.par
    Export: Release 11.2.0.1.0 - Production on Wed Apr 20 18:43:11 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Username: system
    Password:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in UTF8 character set and AL16UTF16 NCHAR character set
    server uses US7ASCII character set (possible charset conversion)
    About to export specified tables via Conventional Path ...
    Current user changed to APPLSYS
    . . exporting table BISM_OBJECTS 4 rows exported
EXP-00011: APPLSYS.DR$FND_LOBS_CTX$I does not exist (do I need to proceed with truncation of the convertible objects? Please confirm, I am confused here)
    http://www.oracle-latest-technology.com/2011/02/how-to-convert-character-set-of-oracle.html
    . . exporting table FND_CONC_PROG_ANNOTATIONS 290 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table FND_OAM_CONTEXT_FILES 15 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table FND_OAM_DOC_LINK 1 rows exported
    EXP-00091: Exporting questionable statistics.
    Current user changed to APPS
    . . exporting table FND_OAM_CONTEXT_FILES_1 6 rows exported
    EXP-00091: Exporting questionable statistics.
    Current user changed to AZ
    . . exporting table AZ_APIS 356 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table AZ_SELECTION_SET_ENTITIES_B 490 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    Current user changed to ECX
    . . exporting table ECX_DTDS 205 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table ECX_FILES 91 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    Current user changed to IBC
    EXP-00011: IBC.IBC_ATTRIBUTE_BUNDLES_DATA does not exist
    . . exporting table IBC_ATTRIBUTE_BUNDLES 41 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    Current user changed to JTF
    . . exporting table JTF_HEADER_DTD 1 rows exported
    EXP-00091: Exporting questionable statistics.
    . . exporting table JTF_MESSAGE_OBJECTS 41 rows exported
    EXP-00091: Exporting questionable statistics.
    . . exporting table JTY_TRANS_USG_PGM_SQL 7 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00010: MDSYS is not a valid username
    EXP-00010: MDSYS is not a valid username
    EXP-00010: MDSYS is not a valid username
    EXP-00010: MDSYS is not a valid username
    EXP-00010: MDSYS is not a valid username
    EXP-00010: MDSYS is not a valid username
    Current user changed to ODM
    . . exporting table ODM_PMML_DTD 1 rows exported
    EXP-00091: Exporting questionable statistics.
    Current user changed to OKC
    . . exporting table OKC_REPORT_SQL_B 3 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table OKC_REPORT_SQL_TL 3 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table OKC_REPORT_XSL_TL 3 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00010: XDB is not a valid username
    Current user changed to XDP
    . . exporting table XDP_PROC_BODY 10 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    Current user changed to ORDDATA
    . . exporting table ORDDCM_CT_PRED_OPRD 51 rows exported
    . . exporting table ORDDCM_DOCS 9 rows exported
    . . exporting table ORDDCM_MAPPING_DOCS 1 rows exported
    EXP-00091: Exporting questionable statistics.
    Export terminated successfully with warnings.
    [oracle@uat nls]$ exp file=exp_fnd_lobs.dmp table=APPLSYS.DR$FND_LOBS_CTX$I
    LRM-00101: unknown parameter name 'table'
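The LRM-00101 above is raised because exp has no parameter named table; the table list goes in the TABLES parameter. A corrected invocation would look roughly like this (same dump file name reused from the attempt above):
exp file=exp_fnd_lobs.dmp tables=APPLSYS.DR\$FND_LOBS_CTX\$I
On a Unix shell the $ characters in the table name need to be escaped (or the argument quoted), as shown. That said, DR$ tables are internal Oracle Text index tables, so they are typically handled by dropping and rebuilding the Text index after the character set change rather than by export/import.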

  • Character Set Conversion AL24UTFFSS - UTF8

    Gents,
    we are currently migrating our customers from 8.0.5 to 8.1.6 including charset conversion from AL24UTFFSS to UTF8 using export/import.
There's been one case where imp.exe just hung (no messages) at a certain table. It was found that it held a string value that ended with the character 'i'. The whole entry was deleted from the export file and imp.exe went through.
Tests showed that 'i' in the middle of a string is not a problem, only if it's at the end.
    Can someone explain why that happens or, even better, has an idea how to prevent this from the beginning?
    Thanks
    Stefan

    Sorry, this is probably the wrong Discussion Group. I moved this to 'Oracle 8i Globalization and NLS'.

  • Language Text Conversion to UTF8 -

Hi,
I am trying to convert loaded French text into UTF8.
My database NLS_CHARACTERSET is UTF8.
The desired output of the text is "Réforme institutionnelle".
However, after loading the data into my table, I find this is the value before applying CONVERT to it: "Riforme institutionnelle".
I tried converting it using the following character sets:
    update tan_test set value = convert(value,'UTF8','WE8ISO8859P1');
    update tan_test set value = convert(value,'UTF8','WE8ISO8859P9');
    update tan_test set value = convert(value,'UTF8','WE8ISO8859P15');
None of the above converts it to the desired output
"Réforme institutionnelle".
Can anyone help me find the right character set to convert it to French properly?
    Thanks,
    Narayanan.
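Before choosing a source character set for CONVERT, it helps to look at what bytes are actually stored; a small diagnostic sketch against the tan_test table from the post (the WHERE clause is an assumption):
SELECT value, DUMP(value, 1016) AS stored_bytes FROM tan_test WHERE value LIKE 'R%forme%';
DUMP with format 1016 shows the column's character set name and the hex value of each byte. If the accented character was already lost when the data was loaded (which the stored value "Riforme" suggests), no CONVERT call afterwards can bring it back; the row has to be reloaded with the correct NLS_LANG set on the loading client.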

As per note ID 124721.1, we need to take the necessary steps until the scanner reports no exceptions, i.e. truncate tables. Even the SYS and APPLSYS schema tables need to be truncated.
Now the question is: there are a few tables which are not exported, as follows:
    APPLSYS.DR$FND_LOBS_CTX$I
    CTXSYS.DR$SQE
    MDSYS.OPENLS_NODES
    MDSYS.SDO_COORD_OP_PARAM_VALS
    MDSYS.SDO_GEOR_XMLSCHEMA_TABLE
    MDSYS.SDO_STYLES_TABLE
    MDSYS.SDO_XML_SCHEMAS
    ORDSYS.ORDDCM_CT_PRED_OPRD
    ORDSYS.ORDDCM_DOCS
    ORDSYS.ORDDCM_MAPPING_DOCS
The errors are: EXP-00010: ORDSYS/MDSYS/ORDSYS is not a valid username
EXP-00011: APPLSYS.DR$FND_LOBS_CTX$I does not exist
Do we still go ahead with truncation of these tables? But then how do we get the data back into the target?
Do we need to set NLS_LANG=American_America.US7ASCII during the export and also during the import?
Please advise.

  • OSB - Code Page Conversion - From UTF8 to iso-8859-1; cp1252 etc...

    Hi,
I have a requirement to convert UTF-8 data to other encoding formats like cp1252, iso-8859-1, etc. Please let me know how this can be done in OSB. I appreciate your response.
    Regards...

    Hi,
Yes, you can change it on the transport configuration tab. Please follow the link below for more details.
    http://docs.oracle.com/cd/E17904_01/doc.1111/e15866/transports.htm#i1268967
    Thanks,
    Durga
    - It is considered good etiquette to reward answerers with points (as "helpful"  or "correct").

  • CSSCAN for database character set conversion failing with ORA-01578

    Hi ,
CSSCAN for database character set conversion is failing with ORA-01578: ORACLE data block corrupted (file # 84, block # 23930). Please help me out in this regard.
    Thanks,
    Sravan.

    Hi Anand,
Thanks for your update. The segment is a table, not an index, in my case. I got this error while running CSSCAN on the Apps database for character set conversion from WE8ISO8859P1 to UTF8. Please find the snapshot below for your reference.
    SQL> select segment_name, segment_type, owner from dba_extents where file_id = 84 and 23930 between block_id and block_id + blocks - 1;
SEGMENT_NAME   SEGMENT_TYPE   OWNER
EDW_LOOKUP_M   TABLE          POA
    SQL> ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE;
    ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 84, block # 23930)
    ORA-01110: data file 84: '/d911/oracle/dbcondata/poad01.dbf'
    Thanks,
    Sravan.
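The ANALYZE output above already ties the corrupt block to POA.EDW_LOOKUP_M. One hedged way to get the full extent of the corruption before deciding between block recovery and rebuilding the table is to let RMAN validate the datafile and then query the corruption view (assuming a 10g or later database and an RMAN connection; the file number is the one already reported):
RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 84;
SQL> SELECT file#, block#, blocks, corruption_type FROM v$database_block_corruption;
From there the usual options are RMAN block media recovery from a good backup, or recreating the affected table from another source, after which CSSCAN can be rerun.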

  • Help! Internationalization utf8 FAILS on hotmail

Hi! How do I send UTF-8 emails to Hotmail?
Here's my code:
    // Define message
    MimeMessage message = new MimeMessage(session);
    message.setFrom(new InternetAddress(from));
    message.setReplyTo(addressReplyTo);
    message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
    message.setSubject(subject, "UTF-8");
    message.setText(text, "UTF-8");
My JSP app is internationalized with UTF-8, with resource bundles in English, Estonian and Russian. Each JSP page has a <%@ page ... %> directive and request.setCharacterEncoding set to UTF-8.
    PROBLEM:
When my code sends a mail to an Outlook client, all the foreign characters come up great.
But send it to Hotmail (and certain other webmails) and it comes out as gibberish.
Is this a "Microsoft" thing... is there a trick to it? Maybe I need to spell it "utf8" or "utf-8"... or maybe I need to do a manual byte conversion to UTF-8, something like:
message.setText(new String(text.getBytes(), "UTF-8"), "UTF-8") ???
The cynical side of me would say Microsoft is intentionally trying to screw up open standards like UTF-8 by not processing them correctly.
Can anyone please post some sample code.
    extra info:
    tomcat 4
    Help!!!!
    Cheers
    vikingSteve

OK, well, my solution is definitely on the "hack" scale, but it works.
Put a key/value pair in your resource bundle for each language that specifies the charset.
    # ENGLISH
    mailCharSet = ISO-8859-1
    # RUSSIAN
    mailCharSet = UTF-8
So the resource bundle in use by the user when they send the email determines the charset.
It would be plausible to expect that this just isn't a "mistake" in Hotmail... it sounds to me like deliberate sabotage of open standards. UTF-8 and Sun seem synonymous to me, and if Hotmail can clobber UTF-8 then it is to their advantage.
As good as Windows character sets may or may not be, the basic premise on which they were built is flawed and they are poor for international users. A person using a Russian computer (like me) can't type an email mixed in Russian and Japanese at the same time (like I often do).
----> (when will American companies learn that not EVERYONE speaks English??)

  • How to convert CLOB to UTF8

    Hi all,
We have a batch job which runs on a daily basis and produces XML with Java code, and the generated XML is used for various purposes.
Recently the job failed because of some special characters in the XML which fall outside the ANSI encoding standards.
So we are in a situation where we need to convert the CLOB datatype (the input to the Java code) to UTF-8 encoded XML.
We have not been able to achieve this.
Right now the CLOB data is converted to an ASCII stream, which doesn't produce well-formed XML under UTF-8 encoding standards. See the code below:
Clob xmlCLOB = (Clob) clobInfo.get("clobfield");
    InputStream is = xmlCLOB.getAsciiStream();
    Any thoughts on how to convert this CLOB to UTF8?
    Regards,
    NaG
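If the conversion can be pushed into the database instead of the Java layer, one option is DBMS_LOB.CONVERTTOBLOB, which writes the CLOB out as a BLOB in an explicitly chosen character set. A minimal PL/SQL sketch (10g or later; the table and column names xml_docs / xml_clob are assumptions):
DECLARE
  l_clob  CLOB;
  l_blob  BLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_ctx   INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn  INTEGER;
BEGIN
  SELECT xml_clob INTO l_clob FROM xml_docs WHERE ROWNUM = 1;
  DBMS_LOB.CREATETEMPORARY(l_blob, TRUE);
  -- convert the CLOB character data into UTF-8 encoded bytes
  DBMS_LOB.CONVERTTOBLOB(l_blob, l_clob, DBMS_LOB.LOBMAXSIZE,
                         l_dest, l_src,
                         NLS_CHARSET_ID('AL32UTF8'), l_ctx, l_warn);
  -- l_blob now holds the XML as UTF-8 bytes and can be streamed to the consumer
END;
/
On the Java side, the equivalent fix is to stop using getAsciiStream() and read the CLOB with getCharacterStream(), encoding the characters as UTF-8 only when the bytes are written out.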


  • Convert characterset WE8MSWIN1252 to UTF8

    Hi all
I am using an Oracle 10g database. The character set is currently WE8MSWIN1252. I want to change my character set to UTF8. Is it possible?
Can anyone please post the steps involved?
Very urgent!
    Regds
    Nirmal

    Subject: Changing WE8ISO8859P1/ WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8
    Doc ID: Note:260192.1 Type: BULLETIN
    Last Revision Date: 24-JUL-2007 Status: PUBLISHED
    Changing the database character set to (AL32)UTF8
    =================================================
When changing an Oracle Applications database:
Please see the following note for Oracle Applications databases:
Note 124721.1 Migrating an Applications Installation to a New Character Set
If you have any doubt, log an Oracle Applications TAR for assistance.
It might be useful to read this note even when using Oracle Applications,
since it explains what to do with "lossy" and "truncation" in the csscan output.
    Scope:
    You can't simply use "ALTER DATABASE CHARACTER SET" to go from WE8ISO8859P1 or
    WE8ISO8859P15 or WE8MSWIN1252 to (AL32)UTF8 because (AL32)UTF8 is not a
    binary superset of any of these character sets.
    You will run into ORA-12712 or ORA-12710 because the code points for the
    "extended ASCII" characters are different between these 3 character sets
    and (AL32)UTF8.
    This note will describe a method of still using a
    "ALTER DATABASE CHARACTER SET" in a limited way.
    Note that we strongly recommend to use the SAME flow when doing a full
    export / import.
The choice between using a FULL exp/imp and a PARTIAL exp/imp is made in point
7)
    DO NOT USE THIS NOTE WITH ANY OTHER CHARACTERSETS
    WITHOUT CHECKING THIS WITH ORACLE SUPPORT
    THIS NOTE IS SPECIFIC TO CHANGING:
    FROM: WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252
    TO: AL32UTF8 or UTF8
    AL32UTF8 and UTF8 are both Unicode character sets in the oracle database.
    UTF8 encodes Unicode version 3.0 and will remain like that.
    AL32UTF8 is kept up to date with the Unicode standard and encodes the Unicode
    standards 3.0 (in database 9.0), 3.1 (database 9.2) or 3.2 (database 10g).
    For the purposes of this note we shall only use AL32UTF8 from here on forward,
    you can substitute that for UTF8 without any modifications.
    If you use 8i or lower clients please have a look at
    Note 237593.1 Problems connecting to AL32UTF8 databases from older versions (8i and lower)
    WE8ISO8859P1, WE8ISO8859P15 or WE8MSWIN1252 are the 3 main character sets that
    are used to store Western European or English/American data in.
    All standard ASCII characters that are used for English/American do not have to
    be converted into AL32UTF8 - they are the same in AL32UTF8. However, all other
    characters, like accented characters, the Euro sign, MS "smart quotes", etc.
    etc., have a different code point in AL32UTF8.
    That means that if you make extensive use of these types of characters the
    preferred way of changing to AL32UTF8 would be to export the entire database and
    import the data into a new AL32UTF8 database.
    However, if you mainly use standard ASCII characters and not a lot else (for
    example if you only store English text, maybe with some Euro signs or smart
    quotes here and there), then it could be a lot quicker to proceed with this
    method.
    Please DO read in any case before going to UTF8 this note:
    Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
    and consider to use CHAR semantics if on 9i or higher:
    Note 144808.1 Examples and limits of BYTE and CHAR semantics usage
    It's best to change the tables and so to CHAR semantics before the change
    to UTF8.
    This procedure is valid for Oracle 8i, 9i and 10g.
    Note:
    * If you are on 9i please make sure you are at least on Patch 9204, see
    Note 250802.1 Changing character set takes a very long time and uses lots of rollback space
    * if you have any function-based indexes on columns using CHAR length semantics
    then these have to be removed and re-created after the character set has
    been changed. Failure to do so will result in ORA-604 / ORA-2262 /ORA-904
    when the "alter database character set" statement is used in step 4.
    Actions to take:
    1) install the csscan tool.
    1A)For 10g use the csscan 2.x found in /bin, no need to install a newer version
    Goto 1C)
    1B)For 9.2 and lower:
Please DO install the version 1.2 or higher from TechNet for your version.
    http://technet.oracle.com/software/tech/globalization/content.html
    and install this.
    copy all scripts and executables found in the zip file you downloaded
    to your oracle_home overwriting the old versions.
    goto 1C).
    Note: do NOT use the CSSCAN of a 10g installation for 9i/8i!
    1C)Run csminst.sql using these commands and SQL statements:
    cd $ORACLE_HOME/rdbms/admin
    set oracle_sid=<your SID>
    sqlplus "sys as sysdba"
    SQL>set TERMOUT ON
    SQL>set ECHO ON
    SQL>spool csminst.log
    SQL> START csminst.sql
    Check the csminst.log for errors.
    If you get when running CSSCAN the error
    "Character set migrate utility schema not compatible."
    then
1ca) either you are starting the old executable; please overwrite all old files with the files
from the newer version from TechNet (1.2 has more files than some older versions, that's normal),
1cb) or check your PATH, you are not starting csscan from this ORACLE_HOME,
1cc) or you have not run the csminst.sql from the newer version from TechNet.
    More info is in Note 123670.1 Use Scanner Utility before Altering the Database Character Set
    Please, make sure you use/install csscan version 1.2 .
    2) Check if you have no invalid code points in the current character set:
    Run csscan with the following syntax:
    csscan FULL=Y FROMCHAR=<existing database character set> TOCHAR=<existing database character set> LOG=WE8check CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    This will create 3 files :
    WE8check.out a log of the output of csscan
    WE8check.txt a Database Scan Summary Report
    WE8check.err contains the rowid's of the rows reported in WE8check.txt
    At this moment we are just checking that all data is stored correctly in the
    current character set. Because you've entered the TO and FROM character sets as
    the same you will not have any "Convertible" or "Truncation" data.
    If all the data in the database is stored correctly at the moment then there
    should only be "Changeless" data.
    If there is any "Lossy" data then those rows contain code points that are not
    currently stored correctly and they should be cleared up before you can continue
    with the steps in this note. Please see the following note for clearing up any
    "Lossy" data:
    Note 225938.1 Database Character Set Healthcheck
    Only if ALL data in WE8check.txt is reported as "Changeless" it is safe to
    proceed to point 3)
    NOTE:
if you have a WE8ISO8859P1 database and lossy data, then changing your WE8ISO8859P1 to
WE8MSWIN1252 will most likely resolve the lossy data.
Why? This is explained in
    Note 252352.1 Euro Symbol Turns up as Upside-Down Questionmark
    Do first a
    csscan FULL=Y FROMCHAR=WE8MSWIN1252 TOCHAR=WE8MSWIN1252 LOG=1252check CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    For 9i, 8i:
    Only if ALL data in 1252check.txt is reported as "Changeless" it is safe to
    proceed to the next point. If not, log a tar and provide the 3 generated files.
    Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
    2.1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
    If you are using RAC see
    Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
    2.2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
    SPOOL Nswitch.log
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    ALTER DATABASE CHARACTER SET WE8MSWIN1252;
    SHUTDOWN IMMEDIATE;
    STARTUP RESTRICT;
    SHUTDOWN;
    The extra restart/shutdown is necessary in Oracle8(i) because of a SGA
    initialization bug which is fixed in Oracle9i.
-- an alter database typically takes only a few minutes or less,
    -- it depends on the number of columns in the database, not the amount of data
    2.3. Restore the parallel_server parameter in INIT.ORA, if necessary.
    2.4. STARTUP;
    now go to point 3) of this note of course your database is then WE8MSWIN1252, so
    you need to replace <existing database character set> with WE8MSWIN1252 from now on.
    For 10g and up:
    When using CSSCAN 2.x (10g database) you should see in 1252check.txt this:
    All character type data in the data dictionary remain the same in the new character set
    All character type application data remain the same in the new character set
    and
    The data dictionary can be safely migrated using the CSALTER script
    IF you see this then you need first to go to WE8MSWIN1252
    If not, log a tar and provide all 3 generated files.
    Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
    Then you do in sqlplus connected as "/ AS SYSDBA":
    -- check if you are using spfile
    sho parameter pfile
    -- if this "spfile" then you are using spfile
    -- in that case note the
    sho parameter job_queue_processes
    sho parameter aq_tm_processes
    -- (this is Bug 6005344 fixed in 11g )
    -- then do
    shutdown immediate
    startup restrict
    SPOOL Nswitch.log
    @@?\rdbms\admin\csalter.plb
-- Csalter will ask for confirmation - do not copy/paste all the actions in one go
    -- sample Csalter output:
    -- 3 rows created.
    -- This script will update the content of the Oracle Data Dictionary.
    -- Please ensure you have a full backup before initiating this procedure.
    -- Would you like to proceed (Y/N)?y
    -- old 6: if (UPPER('&conf') <> 'Y') then
    -- New 6: if (UPPER('y') <> 'Y') then
    -- Checking data validility...
    -- begin converting system objects
    -- PL/SQL procedure successfully completed.
    -- Alter the database character set...
    -- CSALTER operation completed, please restart database
    -- PL/SQL procedure successfully completed.
    -- Procedure dropped.
    -- if you are using spfile then you need to also
    -- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
    -- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
    shutdown
    startup
    and the 10g database will be WE8MSWIN1252
    now go to point 3) of this note of course your database is then WE8MSWIN1252, so
    you need to replace <existing database character set> with WE8MSWIN1252 from now on.
    3) Check which rows contain data for which the code point will change
    Run csscan with the following syntax:
    csscan FULL=Y FROMCHAR=<your database character set> TOCHAR=AL32UTF8 LOG=WE8TOUTF8 CAPTURE=Y ARRAY=1000000 PROCESS=2
    Always run CSSCAN with 'sys as sysdba'
    This will create 3 files :
    WE8TOUTF8.out a log of the output of csscan
    WE8TOUTF8.txt a Database Scan Summary Report
WE8TOUTF8.err contains the rowids of the rows reported in WE8TOUTF8.txt
    + You should have NO entries under Lossy, because they should have been filtered
    out in step 2), if you have data under Lossy then please redo step 2).
    + If you have any entries under Truncation then go to step 4)
    + If you only have entries for Convertible (and Changeless) then solve those in
    step 5).
+ If you have NO entries under Convertible, Truncation or Lossy,
    and all data is reported as "Changeless" then proceed to step 6).
    4) If you have Truncation entries.
    Whichever way you migrate from WE8(...) to AL32UTF8, you will always have to
    solve the entries under Truncation.
Standard ASCII characters require 1 byte of storage space in WE8(...) and
    in AL32UTF8, however, other characters (like accented characters and the Euro
    sign) require only 1 byte of storage space in WE8(...), but they require 2 or
    more bytes of space in AL32UTF8.
    That means that the total amount of space needed to store a string can exceed
    the defined column size.
    For more information about this see:
    Note 119119.1 AL32UTF8 / UTF8 (unicode) Database Character Set Implications
    "Truncation" data is always also "Convertible" data, which means that whatever
    else you do, these rows have to be exported before the character set is changed
    and re-imported after the character set has changed. If you proceed with that
    without dealing with the truncation issue then the import will fail on these
    columns because the size of the data exceeds the maximum size of the column.
    So these truncation issues will always require some work, there are a number of
    ways to deal with them:
    A) Update these rows in the source database so that they contain less data
    B) Update the table definition in the source database so that it can contain
    longer data. You can do this by either making the column larger, or by using
    CHAR length semantics instead of BYTE length semantics (only possible in
    Oracle9i).
    C) Pre-create the table before the import so that it can contain 'longer' data.
    Again you have a choice between simply making it larger, or switching from BYTE
    to CHAR length semantics.
    If you've chosen option A or B then please rerun csscan to make sure there is no
    Truncation data left. If that also means there is no Convertible data left then
    proceed to step 6), otherwise proceed to step 5).
    To know how much the data expands simply check the csscan output.
    you can find that in the .err file as "Max Post Conversion Data Size"
For example, check in the .txt file which table has "Truncation";
let's assume you have a row there that says
    -- snip from WE8TOUTF8.txt
    [Distribution of Convertible, Truncated and Lossy Data by Table]
    USER.TABLE Convertible Truncation Lossy
    SCOTT.TESTUTF8 69 6 0
    -- snip from WE8TOUTF8.txt
    then look in the .err file for "TESTUTF8" until the
    "Max Post Conversion Data Size" is bigger then the column size for that table.
    User : SCOTT
    Table : TESTUTF8
    Column: ITEM_NAME
    Type : VARCHAR2(80)
    Number of Exceptions : 6
    Max Post Conversion Data Size: 81
-> the max size after going to UTF8 will be 81 bytes for this column.
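Option B above (CHAR length semantics) is often the least intrusive fix for this; a minimal sketch, reusing the SCOTT.TESTUTF8 example column from the note:
ALTER TABLE scott.testutf8 MODIFY (item_name VARCHAR2(80 CHAR));
With CHAR semantics the column accepts 80 characters regardless of how many bytes each one needs in AL32UTF8 (subject to the datatype's overall byte limit), so the 81-byte post-conversion value no longer violates the column definition.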
    5) If you have Convertible entries.
    This is where you have to make a choice whether or not you want to continue
    on this path or if it's simpler to do a complete export/import in the
    traditional way of changing character sets.
    All the data that is marked as Convertible needs to be exported and then
    re-imported after the character set has changed.
    6) check if you have functional indexes on CHAR based columns and purge the RECYCLEBIN.
    select OWNER, INDEX_NAME , INDEX_TYPE, TABLE_OWNER, TABLE_NAME, STATUS,
    FUNCIDX_STATUS from ALL_INDEXES where INDEX_TYPE not in
    ('NORMAL', 'BITMAP','IOT - TOP') and TABLE_NAME in (select unique
    (table_name) from dba_tab_columns where char_used ='C');
    if this gives rows back then the change will fail with
    ORA-30556: functional index is defined on the column to be modified
if you have functional indexes on CHAR-based columns you need to drop the
index and recreate it after the change; note that a disable will not be enough.
On 10g check, while connected as sysdba, if there are objects in the recycle bin:
SQL> show recyclebin
If so, also do a PURGE DBA_RECYCLEBIN; otherwise you will receive an ORA-38301 during CSALTER.
    7) Choose on how to do the actual change
    you have 2 choices now:
    Option 1 - exp/imp the entire database and stop using the rest of this note.
    a. Export the current entire database (with NLS_LANG set to <your old
    database character set>)
    b. Create a new database in the AL32UTF8 character set
    c. Import all data into the new database (with NLS_LANG set to <your old database character set>)
    d. The conversion is complete, do not continue with this note.
    note that you do need to deal with truncation issues described in step 4), even
    if you use the export/import method.
    Option 2 - export only the convertible data and continue using this note.
    For 9i and lower:
    a. If you have "convertible" data for the sys objects SYS.METASTYLESHEET,
    SYS.RULE$ or SYS.JOB$ then follow the following note for those objects:
    Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
    make sure to combine the next steps in the example script given in that note.
    b. Export all the tables that csscan shows have convertible data
    (make sure that the character set part of the NLS_LANG is set to the current
    database character set during the export session)
    c. Truncate those tables
    d. Run csscan again to verify you only have "changeless" application data left
    e. If this now reports only Changeless data then proceed to step 8), otherwise
    do the same again for the rows you've missed out.
    For 10g and up:
    a. Export all the USER tables that csscan shows have convertible data
    (make sure that the character set part of the NLS_LANG is set to the current
    database character set during the export session)
    b. Fix any "convertible" in the SYS schema, note that the 10g way to change
    the characterset (= the CSALTER script) will deal with any CLOB data in the
    sys schema. All "no 9i only" fixes in
    Note 258904.1 Convertible data in data dictionary: Workarounds when changing character set
    should NOT be done in 10g
    c. Truncate the exported user tables.
    d. Run csscan again to verify you only have "changeless" application data left
    e. If this now reports only Changeless data then proceed to step 8), otherwise
    do the same again for the rows you've missed out.
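As a rough illustration of steps a and c above for a single convertible table, reusing the SCOTT.TESTUTF8 example from earlier in the note (the dump file name is an assumption, the character set part of NLS_LANG is the current database character set, and the database is assumed to already be WE8MSWIN1252 at this point in the flow):
export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
exp system/<password> tables=SCOTT.TESTUTF8 file=conv_testutf8.dmp
SQL> TRUNCATE TABLE scott.testutf8;
The same table is then reimported in step 10), after the character set has been changed.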
    When using CSSCAN 2.x (10g database) you should see in WE8TOUTF8.txt this:
    The data dictionary can be safely migrated using the CSALTER script
    If you do NOT have this when working on a 10g system CSALTER will NOT work and this
    means you have missed something or not followed all steps in this note.
    8) Perform the character set change:
    Perform a backup of the database.
    Check the backup.
    Double-check the backup.
    For 9i and below:
    Then use the "alter database" command, this changes the current database
    character set definition WITHOUT changing the actual stored data.
    Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
    1. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all.
    If you are using RAC see
    Note 221646.1 Changing the Character Set for a RAC Database Fails with an ORA-12720 Error
    2. Execute the following commands in sqlplus connected as "/ AS SYSDBA":
    SPOOL Nswitch.log
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8;
    SHUTDOWN IMMEDIATE;
-- an alter database typically takes only a few minutes or less,
    -- it depends on the number of columns in the database, not the amount of data
    3. Restore the parallel_server parameter in INIT.ORA, if necessary.
    4. STARTUP;
Without the INTERNAL_USE clause you get an ORA-12712: new character set must be a superset of old character set
    WARNING WARNING WARNING
NEVER use "INTERNAL_USE" unless you have followed the guidelines STEP BY STEP
here in this note and you have a good idea of what you are doing.
NEVER use "INTERNAL_USE" to "fix" display problems; instead follow Note 225938.1
    If you use the INTERNAL_USE clause on a database where there is data listed
    as convertible without exporting that data then the data will be corrupted by
    changing the database character set !
    For 10g and up:
    Shutdown the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that's
the sqlplus session where you do the change.
    Then you do in sqlplus connected as "/ AS SYSDBA":
    -- check if you are using spfile
    sho parameter pfile
    -- if this "spfile" then you are using spfile
    -- in that case note the
    sho parameter job_queue_processes
    sho parameter aq_tm_processes
    -- (this is Bug 6005344 fixed in 11g )
    -- then do
    shutdown
    startup restrict
    SPOOL Nswitch.log
    @@?\rdbms\admin\csalter.plb
-- Csalter will ask for confirmation - do not copy/paste all the actions in one go
    -- sample Csalter output:
    -- 3 rows created.
    -- This script will update the content of the Oracle Data Dictionary.
    -- Please ensure you have a full backup before initiating this procedure.
    -- Would you like to proceed (Y/N)?y
    -- old 6: if (UPPER('&conf') <> 'Y') then
    -- New 6: if (UPPER('y') <> 'Y') then
    -- Checking data validility...
    -- begin converting system objects
    -- PL/SQL procedure successfully completed.
    -- Alter the database character set...
    -- CSALTER operation completed, please restart database
    -- PL/SQL procedure successfully completed.
    -- Procedure dropped.
    -- if you are using spfile then you need to also
    -- ALTER SYSTEM SET job_queue_processes=<original value> SCOPE=BOTH;
    -- ALTER SYSTEM SET aq_tm_processes=<original value> SCOPE=BOTH;
    shutdown
    startup
    and the 10g database will be AL32UTF8
    9) Reload the data pump packages after a change to AL32UTF8 / UTF8 in Oracle10
    If you use Oracle10 then the datapump packages need to be reloaded after
    a conversion to UTF8/AL32UTF8. In order to do this run the following 3
    scripts from $ORACLE_HOME/rdbms/admin in sqlplus connected as "/ AS SYSDBA":
    For 10.2.X:
    catnodp.sql
    catdph.sql
    catdpb.sql
    For 10.1.X:
    catnodp.sql
    catdp.sql
    10) Reimporting the exported data:
    If you exported any data in step 5) then you now need to reimport that data.
    Make sure that the character set part of the NLS_LANG is still set to the
    original database character set during the import session (just as it was during
    the export session).
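Continuing the single-table sketch from step 7), the reimport would look roughly like this (dump file name assumed; NLS_LANG still set to the old database character set):
export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
imp system/<password> file=conv_testutf8.dmp fromuser=scott touser=scott ignore=y
The ignore=y flag lets imp load the rows into the already existing (now empty) table instead of failing on the CREATE TABLE statement.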
    11) Verify the clients NLS_LANG:
    Make sure your clients are using the correct NLS_LANG setting:
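A quick way to confirm the result on the database side is to query the database character set after the change:
SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
On the client side NLS_LANG is an environment variable (or a registry entry on Windows); for a client that should work in the new character set it would typically end in .AL32UTF8, for example AMERICAN_AMERICA.AL32UTF8.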
    Regards,
    Chotu,
    Bangalore

  • ENTERPRISE BACKUP UTILITY

Product: ORACLE SERVER
Date written: 1996-11-29
1. The Problem
Customers who used to work in mainframe environments mostly adopt open systems and the Oracle RDBMS when they migrate to low-cost client/server environments, and as a result this product market is enjoying high growth. What makes this growth possible is, of course, innovative progress in hardware and software, but the system management tools that are essential for managing and backing up mainframe-class, very large databases have changed little from the past.
UNIX backup tools such as tar, cpio and dd contribute little when it comes to actually managing the backup media. A backup script for a very large database can run to hundreds of lines of source alone and brings many problems in testing and troubleshooting as well. Moreover, restoring a very large database (hereafter VLDB) is an even more complicated problem than backing it up: a whole series of tasks is needed, such as sorting through backup tapes to find out where the most recent backup file was stored and restoring damaged files.
2. The Solution
Recently several media management vendors have started to supply powerful backup products with media management features, scheduling, strong security, and more reliability than the backup tools that come with UNIX systems. These products have the advantage of making administration at the system level easier, but they do not completely solve the problems, pointed out above, that arise when backing up a VLDB.
The Oracle7 Enterprise Backup Utility (hereafter EBU) is the solution to these problems. EBU provides a strong interface to media management products for backup/restore and has strengths that nothing else can match.
3. Introducing Oracle7 Enterprise Backup Utility
EBU delivers mainframe-class performance on open systems and so takes on a strategically important role across an enterprise. For fast database backup and restore, the utility exploits hardware parallelism across the disks and the media devices being backed up to; to use the full capacity of an open system, performance increases as devices are added. Features such as parallel hardware, backup configuration checking, error detection and cleaning, and block verification of the database during restore are what give EBU its high reliability.
An Oracle7 database consists of data files, control files and redo log files; EBU backs up the data files and control files, and also the archived redo log files (as of release 2.0.8). In case of media failure, a full or partial restore of the database is possible, and data files can be selectively restored either to the most recent backup or only up to a chosen point in time. Recovery, likewise, can be complete recovery up to a point in time.
4. Architecture
EBU consists of two main parts: the catalog, which keeps all the current information and history about the databases to be backed up, and the executables. The executables and the catalog do not have to reside on the same host. The executables are:
*obackup : monitors all the other processes and handles communication with the database being backed up and with the catalog. It also starts the Instance Manager.
*brio : coordinates between obackup, brdk and brtp. One brio process is started per parallel I/O stream.
*brdk : controls reading and writing of disk files. One brdk is started per file, so if several files are combined into a single data stream, several brdk processes are created within one I/O stream.
*brtp : controls reading and writing of tape files. One brtp is created per I/O stream.
*brd : the Instance Manager, a daemon process that monitors the backup catalog and obackup. It performs cleanup for backups and recovers jobs that terminated abnormally.
During a backup, database files are read by the disk process brdk, handed to the backup buffer area, and then written to the backup device by the brtp process by way of the third-party vendor's media management software. On restore the process runs in reverse. The backup buffer mentioned above smooths out the I/O speed mismatch between the hard disks and the tape devices.
5. EBU Benefits
Used together with a third-party vendor's media management product, EBU has the following strong advantages over the backup tools provided with UNIX.
(1) Strong reliability
*. Consistent backup procedures
With a traditional hot backup script, every newly added datafile has to be added to the script, and for a VLDB the script alone can run to hundreds of lines, which is genuinely cumbersome to maintain. With EBU the operator does not need to know which datafile lives in which directory, and the scripts are short and simple enough for anyone to read. The database configuration is verified automatically, and the backup media are recognized and managed automatically.
Example:
.full online backup script           .partial online backup script
backup online database               backup online
db_name = "PRODB"                    db_name = "PRODB"
oracle_sid = "PROD"                  oracle_sid = "PROD"
                                     tablespace = "SALES"
.full restore script
restore database
db_name = "PRODB"
oracle_sid = "PROD"
With scripts as simple as these a DBA can easily back up and restore a VLDB, and the detailed information needed for the job lives in the backup catalog.
*. End-to-End checksum
During backup EBU computes a checksum over the bytes of each piece of backup data and computes it again on restore; the two checksum values are compared against each other to decide whether the operation completed correctly.
*. Instance Manager
The Instance Manager monitors progress during backup/restore and, if an error occurs during a job, takes care of recovery and of releasing the various resources.
(2) High Performance
With the following EBU features, and with the support of high-speed media devices, a VLDB backup can now be completed correctly in little time.
*. Parallel Hardware Capability
During backup/restore, multiple devices can be used to work on several tablespaces at the same time. When, as in a VLDB, many tablespaces and datafiles are spread over several disks, using many physical devices in parallel like this maximizes performance. The number of devices used concurrently during a backup and the performance achieved are directly proportional.
*. Multiplexing
EBU can be instructed in the script to keep the backup device running at its top speed. This is done, when disk access is much slower than backup media access, by multiplexing datafiles spread over several disks onto one device.
*. Null Block Compression
EBU skips null data blocks during backup and reconstructs them on restore, which saves space and increases performance.
*. Buffered I/O
To reduce the access-speed mismatch between disk I/O and device I/O, EBU buffers every I/O stream. By passing the I/O on both sides through a shared memory buffer, a constant rate is maintained throughout the backup/restore. The shared memory can be configured by the user through parameters and basically depends on the number of parallel data streams and the buffer size.
(3) Availability
The strategically important databases of an enterprise mostly run 24 hours a day, 7 days a week, so backups have to be taken while they are in use. Offline full backups are of course supported, and EBU can also take a full backup with the database online, as well as partial backups. Traditional online backups had the drawback of degrading performance; EBU has only a minimal impact on database operation.
For recovery, a full restore is possible, or only the needed parts can be restored partially while the rest stays in use. Once EBU has restored the data, the standard Oracle7 recovery process takes over.
(4) Usability
*. Auto-configuration
When an online backup runs, EBU compares the current database configuration with the most recent configuration information in the backup catalog; if the catalog information is older, EBU updates it to the state just before the backup. Auto-configuration is the default and can be explicitly disabled. For offline backups, the configuration information is only updated if a register command is written into the backup script before the backup command.
*. Lights-Out backups
Using third-party media management software, EBU can run automatically without operator intervention. It can also be scheduled automatically from UNIX cron.
*. Raw device support
EBU backs up databases placed on raw devices exactly like ordinary file system backups, with no special action needed from the operator.
*. Aggregated restore
Backup strategies for an enterprise's mission-critical systems often rely on partial backups. In such environments it is important to locate the current version of every file within the backup data, and EBU does this automatically. On restore, the partial backup data is aggregated with the most recent full backup taken before the media failure and used together with it.
*. Point-in-Time restore
EBU keeps the entire configuration history of the databases it backs up in the backup catalog, so if a tablespace is dropped and later needed again, just that part can be restored to the state immediately before the drop and made usable, even if time has passed and many changes have happened since. Such a tablespace can also be restored onto another machine, so this can be done without affecting the currently running database at all.
*. Backup catalog
The backup catalog provided by EBU serves as the repository of configuration and history for every database being backed up. The information stored in the backup catalog includes the backed-up files, tablespaces and multiplexing information, the timestamps of the last backup or restore, the backup type (full or subset), and the file sets in which the backups are stored. A single backup catalog can manage all the distributed databases of an enterprise.
*. Dry Runs
One of the problems of backing up a VLDB is verifying whether the script will actually run to completion. Without a special mechanism the only option is to run it for a long time and check afterwards; EBU can simply run it as a test, without any I/O. This lets the DBA put together a reliable backup strategy.
*. Enterprise-wide backup
This means that backup is not limited to a single database but can be carried out globally across an entire enterprise. For example, when databases are deployed at many sites across the country and each database updates the others synchronously or asynchronously through distributed processing such as two-phase commit or snapshots, the whole enterprise can be viewed as one logical database and that logical database can be backed up centrally through EBU. EBU keeps the backup information for all of the databases in a central catalog.
The ability to back up every database in the enterprise from this single central point is one of EBU's biggest advantages; in this case large volumes of data travel over the network, so network bandwidth becomes an important factor in the backup.
6. Flexibility
*. EBU API
As introduced above, EBU is not used on its own but together with a media management product supplied by a third party. The interface between the two products is made through the backup/restore Application Programming Interface (API glue). This is supplied by the third party and is accessed automatically by the installer when EBU is installed.
*. Media management products
Epoch's EpochBackup, IBM ADSTAR's Distributed Storage Manager, Legato Systems' NetWorker, HP's OmniBack, SpectraLogic's Alexandria, StorageTek's REEL, etc.
*. Media devices
4mm and 8mm DATs, 3480s, 3490s, WORM, writable optical devices, and automated systems such as stackers and silos.
7. Pricing
EBU is supplied free of charge; to use it, a media management product from a third-party vendor must be purchased and installed.

    Sure, would you clarify which DB, which OS and which product?
    Concept in general is quite simple:
- Stop BOE (the SIA on XI3; all services, finishing with the CMS, on XIR2; ideally put a sleep between shutting down all servers and shutting down the CMS). Use NET STOP on Windows.
    - Backup the CMS DB (imp/exp on Oracle, mysqldump for MySQL)
    - Backup the FileStore (use robocopy in a batch for delta backup)
    - Restart BOE
The "catches" are around Oracle doing the export in US7ASCII by default under Unix, which is a destructive conversion from UTF8 (your CMS DB should be in this format as per the supported platforms), so you need to make sure to set NLS_LANG to UTF8 for the Unix user doing the backup.
Other than that, a batch or shell script can do all of that.
    Regarding the "general" best practices, I recommend the following doc:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0020482-ca8d-2c10-9bad-d1bd332bbb28
Note on integrated BO tools: the Import Wizard, as well as biarengine, are good tools for promoting a set of data between two BOE environments or if you need to back up a specific set, but they are definitely not recommended as a backup solution. I would refrain from this approach IF the point of the backup is to be able to recover the system when it crashes.
    Does that answer the question?

  • Imp-00008: unrecognized statement in the export file

    Hi All,
I am trying to import an export from an Oracle 8.1.7 source system into Oracle 11.2 using imp. I am getting the following error many times during the import process:
    imp-00008: unrecognized statement in the export file
The character set of the source database is WE8DEC and the character set of the target database is WE8MSWIN1252, and I get the statements below at the start of the import:
    import done in US7ASCII character set and AL16UTF16 NCHAR character set
    import server uses WE8MSWIN1252 character set (possible charset conversion)
    export client uses WE8DEC character set (possible charset conversion)
    export server uses WE8DEC NCHAR character set (possible ncharset conversion)
Is the error imp-00008: unrecognized statement in the export file due to Oracle version compatibility or to a character set compatibility issue?
I tried to create a new database on the same Oracle server and I can't find WE8DEC in the list of character sets to choose from. Please help me on how to proceed.
    Regards,
    alen.
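One thing worth checking, hinted at by the "import done in US7ASCII character set" line above, is the client-side NLS_LANG: when it is not set, imp defaults to US7ASCII and adds an extra character set conversion on the client. A hedged sketch for a Unix client before rerunning the import (the dump file name is an assumption):
export NLS_LANG=AMERICAN_AMERICA.WE8DEC
imp system/<password> file=exp_from_817.dmp full=y log=imp_817.log
Setting the character set part of NLS_LANG to the export file's character set (WE8DEC here) removes the client-side conversion; the server still converts the data to WE8MSWIN1252 as it loads. Whether this also clears the IMP-00008 messages depends on what the unrecognized statements actually are, so posting the full import log, as suggested below, is still the right next step.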

    934571 wrote:
    Hi Srini,
Data is getting loaded correctly, but I get several of these imp-00008: unrecognized statement in the export file messages during the import process, so I am not sure what is missing.
    Pl post the complete error message from the import log file.
    1) Is it possible to import the dump exported form oracle 8 into oracle 11?? Do we need to take any special care during the import ??
    Yes - no special requirements are needed.
2) The Oracle database character set is WE8DEC, but when I try to create a new database in 11g I don't find that character set. Is the character set obsolete now? If so, what character set is a superset of WE8DEC?
    Pl post exact OS and database versions. Ideally you should be using AL32UTF8 for all new databases. WE8DEC is a deprecated characterset.
    See section 4.2.1 here - http://docs.oracle.com/cd/E11882_01/install.112/e24186/install.htm#BABFDDEA
    Thanks,
Alen.
HTH,
    Srini

  • Dashboard Design – Request processing failed (XLS 000009)

    Hello,
When I try to add a new query on a universe in Dashboard, I get the error message at the Preview Query Result step.
Message: Request processing failed (XLS 000009)... Conversion from UTF8 to charset failed...
    Thank you in advance
    EMI

Okay, here is the thing.
I was getting the same error message a while ago. In my case it was the custom SQL I used in a list of values in the universe. If you are using custom SQL code anywhere in your universe, check the FROM statement and you'll see <ConnectionName>.<TableName>.
    If you've changed the connection name at some point for some reason, you need to put the new connection name there.

  • How to convert character sets???

I need to load a CLOB from a BFILE; the BFILE is in an (HP-UX) US7ASCII character set and the database is UTF8. I need to load the CLOB using dbms_lob.loadfromfile but can't find any info on how to convert the US7ASCII file into a UTF8 CLOB.
HELP!!!
--Joan Armstrong

    Joan,
    I don't know if this will help with the conversion of your BFILE, but at
    http://www.xml.com/lpt/a/2000/04/26/encodings/xmlparser.html
    and at
    http://xmlsoft.org/encoding.html
    there is some information on conversion to UTF8.
    Hope it helps. Let us know.
    Dave
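    A possible alternative worth mentioning: from Oracle 9i onwards, DBMS_LOB.LOADCLOBFROMFILE (rather than loadfromfile) can convert the data while loading, because it takes the character set ID of the source BFILE. A minimal sketch, where the directory object, file name, and target table/column are purely hypothetical placeholders:
    DECLARE
      l_clob         CLOB;
      l_bfile        BFILE := BFILENAME('MY_DIR', 'ascii_file.txt'); -- hypothetical directory object and file
      l_dest_offset  INTEGER := 1;
      l_src_offset   INTEGER := 1;
      l_lang_context INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warning      INTEGER;
    BEGIN
      -- my_table/clob_col are placeholders for the target CLOB column
      INSERT INTO my_table (id, clob_col) VALUES (1, EMPTY_CLOB())
      RETURNING clob_col INTO l_clob;
      DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
      -- bfile_csid declares the source encoding; the bytes are converted to the
      -- database character set (UTF8 here) as they are written into the CLOB
      DBMS_LOB.LOADCLOBFROMFILE(
        dest_lob     => l_clob,
        src_bfile    => l_bfile,
        amount       => DBMS_LOB.LOBMAXSIZE,
        dest_offset  => l_dest_offset,
        src_offset   => l_src_offset,
        bfile_csid   => NLS_CHARSET_ID('US7ASCII'),
        lang_context => l_lang_context,
        warning      => l_warning);
      DBMS_LOB.FILECLOSE(l_bfile);
      COMMIT;
    END;
    /
    Since US7ASCII is a strict subset of UTF8, the conversion itself should be lossless; the bfile_csid parameter mainly matters when the source encoding differs from the database character set.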

  • {SOL}Problem in Export/Import a simple table between two diff. characterset

    Hi ,
    I have created a simple table on SCOTT schema....
    SQL> CREATE TABLE TEST(A NUMBER(1) , B VARCHAR2(10));
    Table created
    SQL> INSERT INTO TEST VALUES(1 , 'TEST_TEST');
    1 row inserted
    SQL> COMMIT;
    Commit complete
    SQL> INSERT INTO TEST VALUES(2 , 'ΤΕΣΤ_ΤΕΣΤ');     <------------greek chars
    1 row inserted
    SQL> COMMIT;
    Commit complete
    The nls_parameters:
    SQL> SELECT * FROM NLS_INSTANCE_PARAMETERS;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   GREEK
    NLS_TERRITORY                  GREECE
    NLS_SORT                      
    NLS_DATE_LANGUAGE             
    NLS_DATE_FORMAT               
    NLS_CURRENCY                  
    NLS_NUMERIC_CHARACTERS        
    NLS_ISO_CURRENCY              
    NLS_CALENDAR                  
    NLS_TIME_FORMAT               
    NLS_TIMESTAMP_FORMAT          
    NLS_TIME_TZ_FORMAT            
    NLS_TIMESTAMP_TZ_FORMAT       
    NLS_DUAL_CURRENCY             
    NLS_COMP                      
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    17 rows selected
    SQL> SELECT * FROM NLS_SESSION_PARAMETERS;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    17 rows selected
    and db characterset is EL8MSWIN1253
    I export as follows (generally following the instructions found in Metalink Note 227332.1):
    C:\Documents and Settings\s_k>SET ORACLE_SID=EPESY
    C:\Documents and Settings\s_k>SET NLS_LANG=GREEK_GREECE.EL8MSWIN1253
    C:\Documents and Settings\s_k>C:\oracle\product\10.2.0\database10g\BIN\exp SYSTE
    M/passwd@EPESY FILE=C:\TEST.DMP TABLES=(SCOTT.TEST) ROWS=Y LOG=C:\TEST2.TXT
    Export: Release 10.2.0.1.0 - Production on Sun Jun 22 12:28:58 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export done in EL8MSWIN1253 character set and AL16UTF16 NCHAR character set
    About to export specified tables via Conventional Path ...
    Current user changed to SCOTT
    . . exporting table                           TEST          2 rows exported
    Export terminated successfully without warnings.
    Then, I shut down this database and start the other.....
    with this nls_parameters
    SQL> select * from nls_session_parameters;
    PARAMETER                                                                        VALUE
    NLS_LANGUAGE                                                                     AMERICAN
    NLS_TERRITORY                                                                    AMERICA
    NLS_CURRENCY                                                                     $
    NLS_ISO_CURRENCY                                                                 AMERICA
    NLS_NUMERIC_CHARACTERS                                                           .,
    NLS_CALENDAR                                                                     GREGORIAN
    NLS_DATE_FORMAT                                                                  DD-MON-RR
    NLS_DATE_LANGUAGE                                                                AMERICAN
    NLS_SORT                                                                         BINARY
    NLS_TIME_FORMAT                                                                  HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT                                                             DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT                                                               HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT                                                          DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY                                                                $
    NLS_COMP                                                                         BINARY
    NLS_LENGTH_SEMANTICS                                                             CHAR
    NLS_NCHAR_CONV_EXCP                                                              FALSE
    17 rows selected
    SQL>
    SQL> select * from nls_instance_parameters;
    PARAMETER                                                                        VALUE
    NLS_LANGUAGE                                                                     GREEK
    NLS_TERRITORY                                                                    GREECE
    NLS_SORT                                                                        
    NLS_DATE_LANGUAGE                                                               
    NLS_DATE_FORMAT                                                                 
    NLS_CURRENCY                                                                    
    NLS_NUMERIC_CHARACTERS                                                          
    NLS_ISO_CURRENCY                                                                
    NLS_CALENDAR                                                                    
    NLS_TIME_FORMAT                                                                 
    NLS_TIMESTAMP_FORMAT                                                            
    NLS_TIME_TZ_FORMAT                                                              
    NLS_TIMESTAMP_TZ_FORMAT                                                         
    NLS_DUAL_CURRENCY                                                               
    NLS_COMP                                                                        
    NLS_LENGTH_SEMANTICS                                                             CHAR
    NLS_NCHAR_CONV_EXCP                                                              FALSE
    17 rows selected
    with this db characterset: UTF8
    C:\Documents and Settings\s_k>SET NLS_LANG=GREEK_GREECE.EL8MSWIN1253
    C:\Documents and Settings\s_k>C:\oracle\product\10.2.0\database10g\BIN\imp syste
    m/passwd@info FROMUSER=SCOTT TOUSER=SCOTT FILE=C:\TEST.DMP LOG=C:\TEST0_IMP.TXT
    Import: Release 10.2.0.1.0 - Production on Sun Jun 22 12:40:16 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in EL8MSWIN1253 character set and UTF8 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
    . importing SCOTT's objects into SCOTT
    . . importing table                         "TEST"          2 rows imported
    Import terminated successfully without warnings.
    C:\Documents and Settings\s_k>SQLPLUS SCOTT/TIGER
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jun 22 12:41:20 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> SELECT * FROM TEST;
             A B
             1 TEST_TEST
             2 ????_????
    What may be the cause?
    Note: I am using DB 10g Release 2 on the Windows XP platform, and the two DB instances reside on the same machine.
    Thanks...
    Sim

    "Generally speaking the value of the NLS_LANG registry key or environment variable needs to be equal to the characterset of the database."
    Yes... that's why I have set the NLS_LANG environment variable to GREEK_GREECE.EL8MSWIN1253, equal to:
    SQL> select * from nls_database_parameters;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET EL8MSWIN1253
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              10.2.0.1.0
    "nls_language doesn't come into play, nor nls_instance_parameters."
    Yes...it's true.
    "So, in the dump you posted, no one can tell whether those characters were INSERTed correctly at all. Your NLS_LANG *registry key* may have been set to an incorrect value (it defaults to American_America.MSWIN1252)."
    Actually, I have used a third-party tool, PL/SQL Developer (which has the OracleDB10g home as its default).
    Looking at the Windows registry for OracleDB10g, NLS_LANG is equal to GREEK_GREECE.EL8MSWIN1253.
    "Thirdly, as I implied above the NLS_LANG on import should have been American_America.UTF8."
    According to Note 227332.1, if the database charactersets of the two databases are not the same, it is preferable for the conversion to be done during the import process rather than the export.
    So, in an example described there - export from an AMERICAN_AMERICA.WE8MSWIN1252 db and import into a UTF8 db (which seems exactly the same as my case) - the import is done like this:
    c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
           c:\>imp ....
    The conversion to UTF8 is done while inserting the data
           in the UTF8 database.
    Additional notes....
    I have tried several different NLS_LANG settings for the import:
    1) Use of AMERICAN_AMERICA.UTF8
    2) Use of GREEK_GREECE.EL8ISO8859P7
    3) Use of the NLS_LANG that corresponds to the code page shown by the chcp command
    All attempts display some '?' characters.
    Anyway... I'll continue reading and testing.
    Thanks... a lot for your points
    Sim
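    One way to narrow this down is to check whether the '?' characters are only a display problem in the Windows console or whether the stored data itself was corrupted: the DUMP function shows the actual bytes. A minimal sketch against the TEST table from this thread:
    -- format 1016 prints the character set name plus the stored bytes in hex;
    -- valid UTF8 byte sequences for the Greek letters mean the data is intact
    -- and only the client/console display is at fault
    SELECT a, b, DUMP(b, 1016) AS stored_bytes
      FROM test;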

  • How to update\insert data into a NVARCHAR column using ODBC API

    I am trying to update a Sybase table via Microsoft's ODBC API. The following is the basic C++ I am trying to execute. In the table TableNameXXX, ColumnNameXXX has a type of NVARCHAR( 200 ).
    SQLWCHAR updateStatement[ 1024 ] = L"UPDATE TableNameXXX SET ColumnNameXXX = N 'Executive Chair эюя' WHERE PKEYXXX = 'VALUE'";
    if( ( ret = SQLExecDirect( hstmt, ( SQLWCHAR* ) updateStatement, SQL_NTS ) ) != SQL_SUCCESS )  // parentheses so ret holds the return code, not the comparison result
    // Handle Error
    The Sybase database has a CatalogCollation of 1252LATIN1, CharSet of windows-1252,  Collation of 1252LATIN1, NcharCharSet of UTF-8 and an NcharCollation of UCA.
    Once this works for the Sybase ODBC connection I need to get it to work in various other ODBC drivers for other databases.
    The error i get is "[Sybase][ODBC Driver][SQL Anywhere]Syntax error near 'Executive Chair ' on line 1"
    If I take out the Unicode characters and remove the N, it will update.
    Does anyone know how to get this to work? What am I missing?
    I wrote a C# .NET project using an OdbcConnection to a SQL Server database and am getting "sort of" the same error. I say "sort of" because this error contains the Unicode text in the message, whereas the Sybase ODBC error has "lost" the Unicode.
    static void Main(string[] args)
    {
        using (OdbcConnection odbc = new OdbcConnection("Dsn=UnicodeTest;UID=sa;PWD=password")) // ;stmt=SET NAMES 'utf8';CharSet=utf16
        //using (OdbcConnection odbc = new OdbcConnection("Dsn=Conversion;CharSet=utf8")) // ;stmt=SET NAMES 'utf8';CharSet=utf8
        {
            try
            {
                odbc.Open();
                string queryString = "UPDATE TableNameXXX SET ColumnNameXXX = N 'Executive Chair эюя' WHERE PKEYXXX = 'AS000008'";
                System.Console.Out.WriteLine(queryString);
                OdbcCommand command = new OdbcCommand(queryString);
                command.Connection = odbc;
                int result = command.ExecuteNonQuery();
                if (result == 1)
                    System.Diagnostics.Debug.WriteLine("Success");
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.StackTrace);
                System.Diagnostics.Debug.WriteLine(ex.Message);
            }
        }
    }
    "ERROR [42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Incorrect syntax near 'Executive Chair эюя'."

    Your error comes from Sybase, so I suggest you post your question to a Sybase forum. And be aware that Sybase does not use the same T-SQL dialect as SQL Server, so you must use their dialect (if, indeed, there is any difference in this particular situation).
    One note - there should be no space between "N" and the Unicode string literal to which it applies in tsql.  E.g.,
    = N'Executive Chair эюя'
    not
    = N 'Executive Chair эюя'
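    Putting that together, the statement from the original post would read (TableNameXXX, ColumnNameXXX, and PKEYXXX are the poster's own placeholders):
    UPDATE TableNameXXX SET ColumnNameXXX = N'Executive Chair эюя' WHERE PKEYXXX = 'AS000008'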
