How to Recover When the DB CHARACTER SET Has Been Changed Incorrectly

Product: ORACLE SERVER
Date written: 1997-05-12
How to recover when the database character set has been changed incorrectly
In RDBMS releases prior to 7.3, changing the database character set to a wrong value causes the instance to shut down with an ORA-12708 error, and the database cannot be opened. From RDBMS 7.3 onward, however, the database still starts up even if the character set was changed incorrectly, and the wrong value can be confirmed in the dictionary objects that expose the current character set (props$, nls_database_parameters).
(Note: in 7.3, if the character set is updated incorrectly, v$nls_parameters is set to US7ASCII so that startup can proceed.)
The recovery procedure when the database will not start up because of an incorrectly updated character set is as follows. First, take one backup copy of system01.dbf on the server side.
Prepare an environment in which system01.dbf can be edited. For example, obtain an editor that can edit binary files and download system01.dbf to a PC. One such editor is diskedit, part of Norton Utilities; since it is a DOS 6.x program, boot the PC from a DOS diskette.
Load system01.dbf into diskedit. Using the Find menu, search for the ASCII string CHARACTER; the editor jumps to where that value is stored. Because the editor displays the binary and ASCII values side by side, you can see exactly how the value was updated incorrectly. If, for example, KO16KSC5601 was mistakenly entered as KO16KSCo601, the two values have the same length, so you simply change the o back to 5 in KO16KSCo601.
Save the edited file, copy it back to the server, and restart the database.
If the length of the incorrect value differs from the length of the correct character set, the lengths stored for the VALUE$ and COMMENT$ columns (props$ has the three columns NAME, VALUE$ and COMMENT$) must also be corrected. Consider the following example:
4E 4C 53 5F 43 48 41 52 41 43 54 45 52 53 45 54
0B 4B 4F 31 36 4B 53 43 35 36 30 31 0D 43 68 61
72 61 63 74 65 72 20 73 65 74 2C 00 03
In ASCII, this reads as follows:
N L S _ C H A R A C T E R S E T
* K O 1 6 K S C 5 6 0 1 * C h a
r a c t e r s e t ................
Here the character set is KO16KSC5601. In the dump above, the first * (0B) is the length of KO16KSC5601 (stored in the VALUE$ column), and the second * (0D) is the length of "Character set" (stored in the COMMENT$ column).
1) When the incorrect character set value is longer
If the value was wrongly changed to KO16KSC56011, one character has to be removed, so change the length byte at the first * from 0C back to 0B. Because the character set value becomes shorter, the COMMENT$ column must be edited to grow by the same amount: write a larger length for COMMENT$ immediately after KO16KSC5601 and append as many arbitrary characters as the length was increased. (In this case the difference is 1 byte, so set the COMMENT$ length to 0E and append one arbitrary character.)
2) When the incorrect character set value is shorter
If KSC5601 was entered where KO16KSC5601 was intended, increase the length byte from 07 to 0B and correct KSC5601 to KO16KSC5601. The value now overruns the COMMENT$ column by 4 bytes, so the "Cha" of COMMENT$ is overwritten; put 09 (the new COMMENT$ length) in the position where the "r" was.
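When the database can be opened (the 7.3-and-later case described above), the stored value can at least be confirmed from SQL before resorting to the hex editor. A minimal check, run as SYS; props$ and nls_database_parameters are the dictionary objects named above:

-- props$ is the base table the article edits inside system01.dbf;
-- nls_database_parameters is the view built on top of it.
SELECT name, value$, comment$
  FROM sys.props$
 WHERE name = 'NLS_CHARACTERSET';

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';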

Similar Messages

  • ORA-12712 error while changing nls character set to AL32UTF8

    Hi,
    It is strongly recommended to use the database character set AL32UTF8 whenever a database is going to be used with our XML capabilities. The database character set in the installed DB is WE8MSWIN1252. For making use of XML DB features, I need to change it to AL32UTF8. But when I try doing this, I'm getting ORA-12712: new character set must be a superset of old character set. Is there a way to solve this issue?
    Thanks in advance,
    Divya.

    Hi,
    a change from WE8MSWIN1252 to AL32UTF8 is not directly possible, because AL32UTF8 is not a binary superset of WE8MSWIN1252.
    There are 2 options:
    - use full export and import
    - use the ALTER in a restricted way
    Which method you can choose depends on the characters in the database: if it is only ASCII, the second one can work; in other cases the first one is needed.
    It is all described in Support Note 260192.1, "Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)". Get it from the support/metalink site.
    You can also read the chapter about this in the Globalization Guide: http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430
    Herald ten Dam
    http://htendam.wordpress.com
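    The "restricted" route the reply mentions is, roughly, the CSSCAN/CSALTER flow from Note 260192.1. A sketch of the final step only, assuming CSSCAN FULL=Y has already been run from the OS shell and reported all data as changeless (pure ASCII); details vary by release, so follow the note itself:

    -- Run in SQL*Plus as SYSDBA, only after a clean CSSCAN report.
    SHUTDOWN IMMEDIATE;
    STARTUP RESTRICT;
    @?/rdbms/admin/csalter.plb
    -- csalter.plb checks the CSSCAN results and, if they allow it,
    -- changes NLS_CHARACTERSET to the scan target (AL32UTF8 here).
    SHUTDOWN IMMEDIATE;
    STARTUP;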

  • Oracle 8.1.5 install on Linux Redhat 6.0: character set (and other) problem(s)

    I am trying to install Oracle 8i on Linux and it does not work: once the install is finished, I have a message saying "Character Set not found".
    I am running a French version of Linux (fr-latin1) and I am trying to install Oracle with French and English as languages.
    Another problem with this install: Oracle does not seem to recognize that I have 6.9 GB for it to install on, and says that I do not have enough space for the install.
    And at the end of the install, it takes ages (about 15 minutes) during which nothing seems to happen. On one machine I got out of this phase, but on the other I never saw it finish; it looks as if the computer crashed. Is that normal?
    I went through all the initialization phases and set the correct environment variables.
    thanks
    Solange

    I've been dealing with the same problems in the English version but could bypass this by doing the following:
    - Just ignore the disk space stuff
    - Ignore the charset message, also
    - When creating a database, choose custom and then select the WE8ISO8859P1 character set. It worked for Portuguese, so it must work for French also.
    - Everyone here recommended, and I do the same: leave the database creation for later, not during installation.
    Good Luck!
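    If the database is created later, as suggested above, the character set is fixed in the CREATE DATABASE statement itself. A minimal sketch only; the database name, file names and sizes are placeholders, and the full storage setup is site-specific:

    -- Instance must already be started NOMOUNT with a matching init.ora.
    CREATE DATABASE orcl
       CHARACTER SET WE8ISO8859P1
       NATIONAL CHARACTER SET WE8ISO8859P1
       DATAFILE '/u01/oradata/orcl/system01.dbf' SIZE 200M
       LOGFILE GROUP 1 ('/u01/oradata/orcl/redo01.log') SIZE 20M,
               GROUP 2 ('/u01/oradata/orcl/redo02.log') SIZE 20M;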

  • The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.

    We developed an SSIS package to pull data from an Oracle source into SQL Server 2012. We used an ADO.NET source to pull the records, but we get the error below after pulling some 40K records.
    [ADO NET Source [2]] Error: The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.
    [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ADO NET Source returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Anything that we can do to fix this?

    Hi,
      Tried both:
      * Having the schema type as Nvarchar(max) - getting the same error.
      * Instead of the ADO.NET source, used an OLE DB source with the "Oracle Provider for OLE DB" driver - getting the error below.
           [OLE DB Source [478]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
           [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on OLE DB Source returned error code 0xC0202009. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
      Additional info:
         * Here the source task is failing, not the conversion or destination task.
    Thanks,
    Loganathan A.
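    If the CLOB values are known to fit within a bounded length, one possible workaround is to bound them in the source query so the provider reads plain VARCHAR2 instead of a CLOB. A sketch only; MY_TABLE and MY_CLOB_COL are placeholder names, and values longer than the limit would still need different handling:

    -- DBMS_LOB.SUBSTR in SQL returns at most 4000 bytes as VARCHAR2; leave
    -- headroom if the character set conversion can expand the data.
    SELECT id,
           DBMS_LOB.SUBSTR(my_clob_col, 4000, 1) AS my_clob_text
      FROM my_table;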

  • Message uses a character set that is not supported by the internet service

    Does any one have any advice on how to fix this problem?
    E-mails sent from my iphone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...." The problem e-mails also include an attachment that contains an unformatted text file containing the original message surrounded by what appears to be lots of formatting data that is displayed as gibberish.
    This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account that is configured on the iPhone using IMAP. I have tried setting the Gmail account to use each of the two available mail formatting options, but neither fixes the problem.
    I have also upgraded to 2.01 and restored a few times without impact.

    Hi,
    I got a somewhat similar problem with special characters (German umlauts ä, ö, ü...).
    I create a file with Java that has special characters in it. If I open this file I am able to view the special characters in it. But if I attach this file and send it using the following code, the receiver cannot see the umlaut characters in it; they get replaced by _ or ?
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(fileName);
    mbp2.setDataHandler(new DataHandler(fds));
    mbp2.setFileName(output.getName());
    Multipart mp = new MimeMultipart();
    mp.addBodyPart(mbp2);
    msg.setContent(mp);
    Transport.send(msg);
    From your message it looks like you are able to send the mail attachment correctly (preserving the special characters).
    Can you tell me what might be wrong in my code?
    I appreciate your efforts in advance.
    Prasad

  • Oracle XE and character set

    Hello all,
    I installed Oracle XE on RHEL 4 Linux and found out that the database character set is AL32UTF8. Does anyone know why Oracle chose this character set? Maybe because of the NLS_LANG env variable? Is it possible to change it to EE8ISO8859P2? Since the database is still empty, I can drop it and create a new database.
    Do you think it is possible to set some env variables and do a new Oracle XE installation, including a database with the ISO charset?
    I want to have the EE8ISO8859P2 charset because I am doing exp/imp from another Oracle ISO database into Oracle XE, and it is much easier to do this without charset conversion.
    Any help will be appreciated.
    regards,
    Miha

    When you download XE, you have a choice - take the 'western european' character set download, or the 'unicode' download.
    No other choices.
    Join us over in the XE forum where people have discussed this and found workarounds. Info about finding that forum at Re: Oracle XE Installation failed

  • Character sets and ado

    I have a table with a CLOB field in an Oracle 8.1.7.4 database. When querying the CLOB field via ODBC and ADO, the value is truncated. The Oracle server and client are both using the WE8ISO8859P1 character set. Has anyone come across this before?
    Thanks.

    I believe the data should be able to be represented by ISO 8859. The data is a long random string of characters that represents a fingerprint image.
    We seem to only get 996 characters back from the database. If I do a GetChunk on the data, I get 996 characters of data, then 996 NULLs, then 996 characters of data, and so on. The 996 NULLs should be data.
    The data is in the database, because I can do a dbms_lob.substr and get the correct info back.
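    Since dbms_lob.substr already returns the right data, a quick server-side check can confirm the stored length and that every chunk is readable, which points the truncation at the ODBC/ADO layer. A sketch only; FINGERPRINTS and FP_CLOB are placeholder names:

    SET SERVEROUTPUT ON
    DECLARE
       v_len INTEGER;
       v_pos INTEGER := 1;
    BEGIN
       FOR r IN (SELECT fp_clob FROM fingerprints WHERE ROWNUM = 1) LOOP
          v_len := DBMS_LOB.GETLENGTH(r.fp_clob);
          DBMS_OUTPUT.PUT_LINE('Stored length: ' || v_len);
          WHILE v_pos <= v_len LOOP
             -- read 996-character chunks, mirroring what the client sees
             DBMS_OUTPUT.PUT_LINE('Chunk at ' || v_pos || ': ' ||
                LENGTH(DBMS_LOB.SUBSTR(r.fp_clob, 996, v_pos)) || ' chars');
             v_pos := v_pos + 996;
          END LOOP;
       END LOOP;
    END;
    /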

  • HOW can I enter text using Japanese character sets?

    The "Text, Plates, Insets" section of the LOOKOUT(6.01) Help files states:
    "Click the » button to the right of the Text field to expand the field for multiple line entries. You can enter text using international character sets such as Chinese, Korean, and Japanese."
    Can someone please explain HOW to do this? Note, I have NO problem inputting Hiragana, Katakana, and Kanji into MS Word; the keyboard emulates the Japanese layout and characters (Romaji is the default), the IME works fine converting Romaji, and I can also select characters directly from the IME Pad. I have tried several different fonts with success and am currently using MS UI Gothic.ttf as the default. Again, everything is normal and working in a predictable manner within Word.
    I cannot get these texts into Lookout. I can't cut/paste from HTML pages or from text editors, even though both display properly. Within Lookout, with JP selected as the language/keyboard, when trying to type directly into the text field, the IME CORRECTLY displays Hiragana until <enter> is pressed, at which point all text reverts to question marks (?? ???? ? ?????). If I use the IME Pad, it does pretty much the same. I managed to get the "Yen" symbol to display, though, if that's relevant. As I said, the font selected (in the text/plate font options) is MS UI Gothic with Japanese as the selected script. Oddly enough, at this point the "sample" window is showing me the exact Hiragana character I want displayed in Lookout, but it won't display. I've also tried staying in English and copying Unicode characters from the Windows Character Map. Same results (the Yen sign works, Hiragana WON'T).
    Help me!
    JW_Tech

    JW_Tech,
    Have you changed the regional setting to Japanese?
    Doug M
    Applications Engineer
    National Instruments
    For those unfamiliar with NBC's The Office, my icon is NOT a picture of me
    Attachments:
    language.JPG (50 KB)

  • Arabic Character set conversion-help needed

    We have our main database running on 10g (Solaris OS) and are planning to move it to RAC 11g.
    One of our old Oracle DBs (8.0.5, Solaris), which was not used until recently, needs to be upgraded to 10g Release 2.
    I know the supported direct upgrades are 8.0.6/8.1.7/9i -> 10g.
    Current DB: 8.0.5 (Character Set: AR8ISO8859P6)
    Target DB: 10g Rel 2 (Character Set: AR8MSWIN1256)
    I am thinking of going one of the following ways using exp/imp:
    8.0.5 (AR8ISO8859P6) -> 8.1.7 (AR8ISO8859P6) -> 10g (AR8MSWIN1256)
    OR
    8.0.5 (AR8ISO8859P6) -> 8.1.7 (AR8MSWIN1256) -> 10g (AR8MSWIN1256)
    please advise
    thanks

    (1) At source db 8.0.5 (solaris)
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-YY
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AR8ISO8859P6
    NLS_SORT BINARY
    NLS_NCHAR_CHARACTERSET AR8ISO8859P6
    $set NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
    $exp sys/dba file=full251109.dmp full=y
    (2):>> At target db 10g R2 (solaris)
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RRRR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AR8ISO8859P6
    NLS_SORT BINARY
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    $export NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
    $imp testdba/testdba file=full251105.dmp fromuser=PROFINAL touser=PROFINAL
    $csscan testdba/testdba FULL=Y FROMCHAR=AR8ISO8859P6 TOCHAR=AR8ISO8859P6 LOG=P6check CAPTURE=Y ARRAY=100000 PROCESS=2
    There is EXCEPTIONAL DATA in the .err file.
    Clients accessing the 8.0.5 database use the character set AR8ISO8859P6, which is the SAME as the 8.0.5 database.
    CSSCAN result: [Database Scan Parameters]
    Parameter Value
    CSSCAN Version v2.1
    Instance Name MIG1
    Database Version 10.2.0.1.0
    Scan type Full database
    Scan CHAR data? YES
    Database character set AR8ISO8859P6
    FROMCHAR AR8ISO8859P6
    TOCHAR AR8ISO8859P6
    Scan NCHAR data? NO
    Array fetch buffer size 100000
    Number of processes 2
    Capture convertible data? YES
    [Scan Summary]
    Some character type data in the data dictionary are not convertible to the new character set. Some character type application data are not convertible to the new character set.
    [Data Dictionary Conversion Summary]
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 2,235,403 0 0 1,492
    CHAR 1,097 0 0 0
    LONG 155,188 0 0 6
    CLOB 24,643 0 0 0
    VARRAY 21,352 0 0 0
    Total 2,437,683 0 0 1,498
    Total in percentage 99.939% 0.000% 0.000% 0.061%
    The data dictionary can not be safely migrated using the CSALTER script
    [Application Data Conversion Summary]
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 16,986,594 0 0 1,240,383
    CHAR 164,114 0 0 0
    LONG 7 0 0 0
    CLOB 1 0 0 0
    VARRAY 1,436 0 0 0
    Total in percentage 93.256% 0.000% 0.000% 6.744%
    [Distribution of Convertible, Truncated and Lossy Data by Table]
    USER.TABLE Convertible Truncation Lossy
    PROFINAL.BASE_MASTER_DATAS 0 0 362,003
    PROFINAL.CODE_ALLOW 0 0 53
    PROFINAL.CODE_ALLOWANCE_TYPES 0 0 1
    PROFINAL.CODE_BONUS_TYPES 0 0 2
    PROFINAL.CODE_BRANCHES 0 0 2
    PROFINAL.CODE_CERTIFICATES 0 0 94
    Kindly help.
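    CSSCAN already names the tables with lossy data. As a rough cross-check on individual rows, a round trip through the target character set can flag values that will not convert cleanly from AR8ISO8859P6 to AR8MSWIN1256. A sketch only; COL_NAME is a placeholder, and the check has to be repeated per character column of interest:

    -- Rows whose data does not survive conversion to AR8MSWIN1256 and back
    -- are candidates for the "Lossy" counts reported by csscan.
    SELECT ROWID, col_name
      FROM profinal.base_master_datas
     WHERE col_name <>
           CONVERT(CONVERT(col_name, 'AR8MSWIN1256', 'AR8ISO8859P6'),
                   'AR8ISO8859P6', 'AR8MSWIN1256');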

  • Problem with Character Set in Oracle database 10g

    Hi,
    I tried to import one tablespace into a test server. The source server runs Oracle 8i and the target server runs Oracle Database 10g. The error I get is:
    Import: Release 10.2.0.1.0 - Production on Thu Aug 3 00:20:49 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: sys as sysdba
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V08.01.07 via conventional path
    About to import transportable tablespace(s) metadata...
    import done in WE8DEC character set and AL16UTF16 NCHAR character set
    export server uses WE8DEC NCHAR character set (possible ncharset conversion)
    . importing SYS's objects into SYS
    . importing SYS's objects into SYS
    IMP-00017: following statement failed with ORACLE error 19736:
    "BEGIN sys.dbms_plugts.beginImport ('8.1.7.4.0',2,'2',NULL,'NULL',67051,25"
    "51,2); END;"
    IMP-00003: ORACLE error 19736 encountered
    ORA-19736: can not plug a tablespace into a database using a different national character set
    ORA-06512: at "SYS.DBMS_PLUGTS", line 2386
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1946
    ORA-06512: at line 1
    IMP-00000: Import terminated unsuccessfully
    Please, can somebody help in getting this resolved? Has anybody seen this error before?

    The solution to this problem is described in MetaLink note #211920.1. But this note is published with LIMITED access as it involves using a hidden parameter.
    You can get access to the note through Oracle Support only.
    The problem itself is solved generically if the source database is at least 10.1.0.3 and the target database is 10.2.
    -- Sergiusz
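    The import log above shows the mismatch: the target uses AL16UTF16 as its national character set, while the 8.1.7 export was taken from a database using WE8DEC. A quick way to see what each side actually records, run on both databases:

    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');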

  • How to add a new character set encoding?

    Hello,
    can anybody please explain to me, how to add a new character set encoding to Mac OS Tiger?
    I have two Mac laptops, a new one with Snow Leopard and an older one with Tiger, and on the old one I cannot use or enable anywhere the "Russian (DOS)" character set encoding, which I need to be able to use some old text files.
    On Snow Leopard, this encoding is present in the list of available encodings in TextWrangler, but not in Tiger.
    If I have understood correctly, this is not a problem with TextWrangler, and the same encodings are available systemwide.
    So, the question is: how do I add new encodings to Tiger (or to Mac OS in general)?
    Thanks.

    I think possibly that's in the Get Info window of Finder?
    I don't think either that or the input menu have any effect on available encoding choices. Adding languages to system prefs/international/languages can do that, but once you have added Russian there, I don't know of any way to add an additional Russian encoding (there are quite a number of them).

  • How do you define which character set gets embedded with a font embedded in the library (i.e. Korean)?

    I have project that uses a shared fonts. The fonts are all
    contained in a single swf ("fonts.swf"), are embedded in that swf's
    library and are set to export for actionscript and runtime sharing.
    The text in the project is dynamic and is loaded in from
    external XML files. The text is formatted via styles contained in a
    CSS object.
    This project needs to be localized into 20 or so different
    languages.
    Everything works great with one exception: I can’t
    figure out how to set which character set gets exported for runtime
    sharing. i.e. I want to create a fonts.swf that contains Korean
    characters, change the XML based text to Korean and have the text
    display correctly.
    I’ve tried changing the language of my OS (WinXP) and
    re-exporting but that doesn’t work correctly. I’ve also
    tried adding substitute font keys to the registry (at:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
    NT\CurrentVersion\FontSubstitutes) as outlined here:
    http://www.quasimondo.com/archives/000211.php
    but the fonts I added did not show up in Flash's font menu.
    I’ve also tried the method outlined here:
    http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=tn_16275
    to no avail.
    I know there must be a simple solution that will allow me to
    embed language specific character sets for the fonts embedded in
    the library but I have yet to discover what it is.
    Any insight would be greatly appreciated.

    Thanks Jim,
    I know that it is easy to specify the language you want to
    use when setting the embed font properties for a specific text
    field but my project has hundreds of text fields and I'm setting
    the font globally by referencing the font symbols in a single swf.
    I have looked at the info you've pointed out but wasn't helped by it. What I'd like to be able to do is to tell Flash to embed a language-specific character set for the font symbols in the library. It is currently only embedding Latin characters, even though I know the fonts specified contain characters for other languages.
    For example, I have a font symbol in the library named "Font1". When I look at its properties I can see it is specified as Tahoma. I know the Tahoma font on my system contains the characters for Korean, but when I compile the swf it only contains Latin characters (glyphs) - this corresponds to the language of my OS (US English). I want to know how to tell Flash to embed the Korean language characters rather than, or as well as, the Latin characters for any given FONT SYMBOL. If I could do that, then when I enter Korean text into my XML files the correct characters will be available to Flash. As it is now, the characters are not available and thus the text doesn't display.
    Make sense?
    Many thanks,
    Mike

  • Error while running package ORA-06553: PLS-553 character set name is not re

    Hi all.
    I have a problem with a package, when I run it returns me code error:
    ORA-06552: PL/SQL: Compilation unit analysis terminated
    ORA-06553: PLS-553: character set name is not recognized
    The full context of the problem is this:
    Previously I had a development database, which was then migrated to a new server. After that I started to receive the error, so I began to look for the solution.
    My first move was to compare the "old database" with the "new database", so I checked the NLS parameters, and this was the result:
    select * from nls_database_parameters;
    Result from the old
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    Result from the new
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    As the results were identical, I decided to look for more info in the log file of the new database. What I found was this:
    Database Characterset is US7ASCII
    Threshold validation cannot be done before catproc is loaded.
    Threshold validation cannot be done before catproc is loaded.
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Synchronizing connection with database character set information
    Refreshing type attributes with new character set information
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    Errors in file e:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_3132.trc:
    Regards

    Ohselotl wrote:
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Synchronizing connection with database character set information
    Refreshing type attributes with new character set information
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    This is an unsupported method of changing the character set of a database - it has caused the corruption of your database beyond repair. Hopefully you have a backup you can recover from. Whoever did this did not know what they were doing.
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    Errors in file e:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_3132.trc:
    The correct way to change the character set of a database is documented - http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1476
    HTH
    Srini
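    The alert log above shows the character set being switched to WE8MSWIN1252 with INTERNAL_CONVERT and a later attempt to set it back to US7ASCII failing with ORA-12712, which is the kind of inconsistency that produces PLS-553. A quick check (not a fix) is to compare what the data dictionary reports with what the instance is actually running with:

    -- Dictionary (props$-based) value versus the instance's current value.
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter = 'NLS_CHARACTERSET';

    SELECT parameter, value
      FROM v$nls_parameters
     WHERE parameter = 'NLS_CHARACTERSET';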

  • Character set marker unknown error while importing data in 10g from 8i

    Hi All,
    I am trying to import the whole database schema by schema (one at a time) through the Oracle 10g Database Control page (which is browser based).
    But when I try to import the objects of a schema, I get the error:
    "IMP-00037: Character set marker unknown"
    Import terminated unsuccessfully
    Would anybody please tell me about this error?
    Kindly provide the solution.
    Mentioned below are the character sets available in both the version.
    Character sets in oracle 8.1.7 :-
    Database Character Set :: WE8ISO8859P1
    National Character Set :: WE8ISO8859P1
    Character sets in oracle 10.2.0.1.0 :-
    Database Character Set :: WE8ISO8859P1
    National Character Set :: AL16UTF16
    Regards
    Milin...
    Message was edited by:
    user640001

    Hi,
    As you asked, I have listed below the export command used to get the full database export file.
    I have copied this export file (dump) to another machine (server), where I have to import it to upgrade the database to 10g Release 2.
    SET CC=%DATE:~4,2%-%DATE:~7,2%-%DATE:~10,4%
    exp username/password@db_name file=D:\PAY_BKP\EXP_FULL_PAY_%CC%.DMP log=D:\PAY_BKP\EXP_FULL_PAY.log indexes=yes full=yes
    Kindly provide guidance.
    Regards
    Milin

  • Java Character set error while loding data using iSetup

    Hi,
    I am getting the following error while migrating setup data from one R12 (12.1.2) instance to another R12 (12.1.2) instance. Both databases have the same DB character set (AL32UTF8).
    We are getting this error while migrating any setup data.
    Actual error is
    Downloading the extract from central instance
    Successfully copied the Extract
    Time taken to download Extract and write as zip file = 0 seconds
    Validating Primary Extract...
    Source Java Charset: AL32UTF8
    Target Java Charset: UTF-8
    Target Java Charset does not match with Source Java Charset
    java.lang.Exception: Target Java Charset does not match with Source Java Charset
         at oracle.apps.az.r12.common.cpserver.PreValidator.validate(PreValidator.java:191)
         at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:119)
         at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
         at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
    Error while loading apis
    java.lang.NullPointerException
         at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:158)
         at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
         at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
    Please help in identifying and resolving the issue
    Sachin

    The source and target DB character sets are the same.
    Output from the query
    ------------- Source --------------
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    And target Instance
    -------------- Target----------------------
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    The error is about the source and target JAVA character sets.
    I will check the PreValidator xml from "How to use iSetup" and update the note.
    Thanks
    Sachin

  • Changing Character set in SAP BODS Data Transport

    Hi Experts,
    I am facing an issue extracting data from SAP.
    Job details: I am using an ABAP data flow which fetches the data from SAP and loads it into an Oracle table using a Data Transport.
    It gives me the below error while executing my job:
    (12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
                                                         End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
                                                         columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
                                                         schema specification for the file format. The number of rows processed was <8870>.
    Reason: when I analyzed it, I found the cause is the presence of special characters in the data. While generating the data file in the SAP working directory (available on the SAP application server), the SAP code page is 1100, due to which the file delimiter and the special characters are both represented with #. So once the ABAP is executed and the data is read from the file, the # is treated as a delimiter, which throws the above error.
    I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the Code Page value to UTF-8 in the SAP datastore properties, but this did not work either.
    Please let me know what needs to be done to resolve this issue. Is there any way to change the character set while reading from the generated data file in BODS, to convert code page 1100 to UTF-8?
    Thanks in advance.
    Regards,
    Sudheer.

    Unfortunately, I am no longer working on this particular project/problem. What I did discover, though, is that /127 actually refers to the character <control>+<backspace> (http://en.wikipedia.org/wiki/Delete_character).
    In SAP, this and any other unknown characters get converted to #.
    The conclusion I came to at the time was that these characters made their way into the actual data and were causing the issue. In fact I think they are still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
    I think I did try to make the changes on the above-mentioned file, but without success.
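    If the extracted rows end up in an Oracle staging or target table, the offending records can at least be located there and reported back to the data owners. A sketch only; QMMA_STAGE and REMARK_TXT are placeholder names, and CHR(127) is the DEL control character mentioned above:

    -- Find rows still carrying the control character after the load.
    SELECT ROWID, remark_txt
      FROM qmma_stage
     WHERE INSTR(remark_txt, CHR(127)) > 0;

    -- Optional cleanup, only if the data owners agree the character is junk.
    UPDATE qmma_stage
       SET remark_txt = REPLACE(remark_txt, CHR(127))
     WHERE INSTR(remark_txt, CHR(127)) > 0;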
