Korean Data Displayed as ??? (CHARACTER SET)
============================================

Product: SQL*Plus
Date written: 1996-07-02

How to resolve Korean data being displayed as ??? when queried with Oracle
Tools (SQL*Plus, Forms 3.0, Forms 4.0, Reports 2.0, etc.).

A database is created by executing a statement containing the SQL command
CREATE DATABASE, and one of the things to consider before running that
statement is the database character set.

The database character set must be specified when the database is created,
and once it has been chosen it is not easy to change. All data, including
the data in the data dictionary, is written and read according to the
selected character set, so if a user accesses the database with a different
character set, Korean data is displayed as ???.

The database character sets must also match in a distributed database
environment and when upgrading, so users should know their database's
character set.
< Checking and changing the current database CHARACTERSET >

1. Check the database character set

$ sqldba lmode=y
SQLDBA> connect internal
SQLDBA> select * from v$nls_parameters;

PARAMETER            VALUE
-------------------  -------------------------
NLS_CHARACTERSET     KO16KSC5601 (or US7ASCII)
                     (A)
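
On more recent releases the database character set can also be read straight
from the data dictionary (a sketch; the NLS_DATABASE_PARAMETERS view reports
the values fixed at CREATE DATABASE time):

SQL> select value from nls_database_parameters
     where parameter = 'NLS_CHARACTERSET';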

2. Check the NLS_LANG environment variable

$ env
NLS_LANG=American_America.US7ASCII
                         (B)

Korean data is handled correctly only when (A) and (B) above are identical;
if they differ, Korean data queried from the database is displayed as ???.
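
To tell whether the stored bytes themselves are damaged or only the display
conversion is wrong, DUMP() helps, because it shows raw byte values
independently of NLS_LANG (a sketch, assuming a hypothetical table han_test
with a VARCHAR2 column txt):

SQL> select txt, dump(txt) from han_test;

If the dump shows byte pairs above 127 (KSC5601 codes), the data is stored
correctly and only the client's NLS_LANG needs fixing; if it already shows
63 (the ASCII code of '?'), the data was corrupted on insert and changing
NLS_LANG alone will not recover it.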

3. Making the character sets match

* Change the NLS_LANG environment variable to match

<UNIX>
If you use the Bourne shell or Korn shell, edit .profile:
NLS_LANG=American_America.KO16KSC5601; export NLS_LANG
If you use the C shell, edit .cshrc or .login:
setenv NLS_LANG American_America.KO16KSC5601
After editing, run $ env again to confirm the change.

<WINDOWS 3.1>
Edit C:\WINDOWS\ORACLE.INI:
NLS_LANG=American_America.KO16KSC5601
Restart Windows.
<WINDOWS 95>
On Windows 95, NLS_LANG is not stored in ORACLE.INI but in the registry, so
it must be changed with the Registry Editor:
Open an MS-DOS window and run REGEDIT.EXE, or use
Start -> Run -> regedit -> HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE
and modify NLS_LANG with the right mouse button.
There is no need to reboot the PC after changing the registry.

<WINDOWS NT>
On Windows NT, as on Windows 95, change the value recorded in the registry:
Run REGEDT32.EXE from a DOS window.
Select HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE.
Modify NLS_LANG via the menu.
* On a client that accesses databases with different character sets, the
following setting is convenient.
For example, if the server's character set is US7ASCII and the PC's NLS_LANG
is set differently, such as American_America.KO16KSC5601, adding the
following to each client's environment resolves the Korean display problem:
ORA_NLS_CHARACTERSET_CONVERSION=NO_CHARACTER_SET_CONVERSION
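
On UNIX this can go into each user's .profile in the same way as NLS_LANG
above (a sketch for the Bourne/Korn shell; C shell users would use setenv):

ORA_NLS_CHARACTERSET_CONVERSION=NO_CHARACTER_SET_CONVERSION; export ORA_NLS_CHARACTERSET_CONVERSION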

Even if the two databases use different character sets, the databases
themselves should be fine. Compare the two platforms first, and if the
problem still occurs, then try changing the setting as you suggested. The
material above is just for reference; the important point is that some
adjustment will probably be needed when running imp/exp.
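
The usual imp/exp adjustment is to pin NLS_LANG before exporting, so that
exp writes the dump file in a known character set instead of silently
converting (a sketch for the Bourne/Korn shell, with a hypothetical
scott/tiger account):

NLS_LANG=American_America.KO16KSC5601; export NLS_LANG
exp scott/tiger file=han_data.dmp tables=emp rows=y

imp then converts from the character set recorded in the dump file to the
target database's character set, so the Korean data survives as long as the
target character set can represent it.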

Similar Messages

  • Lync in combination with Plantronics P540-M: Display changes to logographic character set

    Hello,
    I've recently started to use Lync on my 21.5" iMac from 2011. The program itself works fine until I connect a Plantronics P540-M USB phone. Lync keeps operating as it should, apart from the fact that
    the display on the Plantronics phone changes to some sort of Asian character set. I'm using Lync version 14.0.10.
    Does anyone recognise this problem and have a solution for it?
    With kind regards,
    Patrick van Kleeff
    Windesheim University of Applied Sciences

    The language is passed to the phone from Lync.
    Try this:
    Go to System Preferences – Available from the Apple menu.
    Click on Language & Region – It is the flag icon in the top row of icons.
    Make sure English is the primary language. If another language is listed, remove it.
    Reboot
    Launch Lync
    If that does not work try this:
    Change the language from "English-Primary" to "English-US"
    Reboot
    Go back into Settings, Language & Region
    Change the language back to "English-Primary"
    Reboot
    Launch Lync

  • Website not displaying correctly. Firefox is changing the character set to Western (ISO-8859-1) automatically.

    Normally I have set Firefox (or it's set by default) to Character Set Unicode (UTF-8) and everything displays perfectly. I've never had a problem before.
    Now however, whenever I upload my own website, for some bizarre reason on that particular tab (and only that tab) the character set is changed over to Western (ISO-8859-1), and then a few characters within my site do not display correctly, namely apostrophes and hyphens.
    It definitely isn't my software (Serif WebPlus X4) because the page displays correctly in every other browser. Plus it displays correctly in Firefox if I change the Character set back to Unicode.
    PS The site is a work in progress

    That happens because the server sends a content-type (text/html; charset=ISO-8859-1) via the HTTP response headers, and in that case that content type prevails. The page code is saved with a UTF-8 byte order mark (BOM), which is what you see in this case.
    *http://web-sniffer.net/?url=http%3A%2F%2Fwww.valuevisionglasses.co.uk&http=1.1&gzip=yes&type=HEAD&uak=0
    *http://httpd.apache.org/docs/current/mod/mod_mime.html#AddType
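    If you control the server, a common fix is to have Apache declare UTF-8 itself rather than ISO-8859-1 (a sketch for httpd.conf or .htaccess; AddDefaultCharset is a core directive, and the mod_mime link above covers per-type alternatives):
    AddDefaultCharset UTF-8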

  • Problem displaying japanese character set in shopping cart smartform

    Hi All,
    Whenever users enter some text in the Japanese character set while creating a shopping cart in SRM, the smartform print output displays junk characters, even though the system is Unicode compatible. Has anyone had this problem?
    Thanks.

    Hi,
    There may be some problem with the Unicode conversion.
    See the following links:
    Note 548016 - Conversion to Unicode
    http://help.sap.com/saphelp_srm50/helpdata/en/9f/fdd13fa69a4921e10000000a1550b0/frameset.htm
    Europe Languages work in Non-Unicode System
    Re: Multiple Backends
    Re: Language issue
    Standard Code Pages in Non-Unicode System
    Re: Upgrade from EBP 4.0 to SRM 5.0
    http://help.sap.com/saphelp_srm50/helpdata/en/e9/c4cc9b03a422428603643ad3e8a5aa/content.htm
    http://help.sap.com/saphelp_srm50/helpdata/en/11/395542785de64885c4e84023d93d93/content.htm
    BR,
    Disha.

  • Glyph panel not displaying font character sets in cs5.5

    Hi
    I'm having issues with accessing font character sets and glyphs in InDesign CS5.5.
    I have all my fonts in Font Book and can see the full font character sets displayed, but
    when I go to my Glyphs panel to access any additional characters or glyph sets, there
    is nothing displayed. Any idea where I'm going wrong?
    Thank you

    You potentially have a "bad" font. Check your fonts and remove the offender.
    Mylenium

  • Displaying multiple asian character sets?

    Hi all,
    Just wondering, is there any way to display multiple Asian character sets in a Java application?
    Right now I can set the locale to say Chinese with the command line property:
    -Duser.language=zh
    I could do the same with Japanese or Korean. But is there any way to display all these character sets in the same program? Would I have to set the JTextFields individually to the relevant fonts so I can see the characters? E.g. if the user wants to input Chinese, all JTextFields are set to SimSun. Same for other character sets?
    thanks,
    J

    Hi Justin,
    As for myself, I used the Robot class on Windows XP to switch between input languages in a form (French/Japanese). That works well but requires that you manually map each language to a key combination a Robot can emulate.
    I hope that could be useful,
    Best regards,
    Lionel Badiou
    CodeFutures -
    Java Code Generation
    http://www.codefutures.com

  • Message uses a character set that is not supported by the internet service

    Does any one have any advice on how to fix this problem?
    E-mails sent from my iPhone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...." The problem e-mails also include an attachment containing an unformatted text file that holds the original message, surrounded by what appears to be lots of formatting data displayed as gibberish.
    This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account that is configured on the iPhone using IMAP. I have tried both of the available formatting options for mail in the Gmail account, but neither fixes the problem.
    I have also upgraded to 2.01 and restored a few times without impact.

    Hi,
    I have a somewhat similar problem with special characters (German umlauts ä, ö, ü, ...).
    I create a file with Java that has special characters in it. If I open this file, I am able to view the special characters in it. But if I attach this file and send it using the following code, the receiver cannot see the umlaut characters; they get replaced by _ or ?
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(fileName);
    mbp2.setDataHandler(new DataHandler(fds));
    mbp2.setFileName(output.getName());
    Multipart mp = new MimeMultipart();
    mp.addBodyPart(mbp2);
    msg.setContent(mp);
    Transport.send(msg);
    From you message it looks like you are able to send the mail attachment correctly(by preserving special charecters).
    Can you tell me what might be wrong in my code.
    I appreciate your efforts in advance.
    Prasad

  • HOW can I enter text using Japanese character sets?

    The "Text, Plates, Insets" section of the LOOKOUT(6.01) Help files states:
    "Click the » button to the right of the Text field to expand the field for multiple line entries. You can enter text using international character sets such as Chinese, Korean, and Japanese."
    Can someone please explain HOW to do this? Note, I have NO problem inputting Hiragana, Katakana, and Kanji into MS Word; the keyboard emulates the Japanese layout and characters (Romaji is the default), the IME works fine converting Romaji, and I can also select characters directly from the IME Pad. I have tried several different fonts with success and am currently using MS UI Gothic.ttf as the default. Again, everything is normal and working in a predictable manner within Word.
    I cannot get these texts into Lookout. I can't cut/paste from HTML pages or from text editors, even though both display properly. Within Lookout with JP selected as the language/keyboard, when I try to type directly into the text field, the IME CORRECTLY displays Hiragana until <enter> is pressed, at which point all text reverts to question marks (?? ???? ? ?????). If I use the IME Pad, it does pretty much the same. I did manage to get the yen symbol to display, though, if that's relevant. As I said, the font selected (in the text/plate font options) is MS UI Gothic with Japanese as the selected script. Oddly enough, at this point the "sample" window is showing me the exact Hiragana character I want displayed in Lookout, but it won't display. I've also tried staying in English and copying Unicode characters from the Windows Character Map. Same results (yen sign works, Hiragana WON'T).
    Help me!
    JW_Tech

    JW_Tech,
    Have you changed the regional setting to Japanese?
    Doug M
    Applications Engineer
    National Instruments
    For those unfamiliar with NBC's The Office, my icon is NOT a picture of me
    Attachments:
    language.JPG (50 KB)

  • How do you define which character set gets embedded with a font embedded in the library (i.e. Korean)?

    I have a project that uses shared fonts. The fonts are all
    contained in a single swf ("fonts.swf"), are embedded in that swf's
    library and are set to export for actionscript and runtime sharing.
    The text in the project is dynamic and is loaded in from
    external XML files. The text is formatted via styles contained in a
    CSS object.
    This project needs to be localized into 20 or so different
    languages.
    Everything works great with one exception: I can’t
    figure out how to set which character set gets exported for runtime
    sharing. i.e. I want to create a fonts.swf that contains Korean
    characters, change the XML based text to Korean and have the text
    display correctly.
    I’ve tried changing the language of my OS (WinXP) and
    re-exporting but that doesn’t work correctly. I’ve also
    tried adding substitute font keys to the registry (at:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
    NT\CurrentVersion\FontSubstitutes) as outlined here:
    http://www.quasimondo.com/archives/000211.php
    but the fonts I added did not show up in Flash's font menu.
    I’ve also tried the method outlined here:
    http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=tn_16275
    to no avail.
    I know there must be a simple solution that will allow me to
    embed language specific character sets for the fonts embedded in
    the library but I have yet to discover what it is.
    Any insight would be greatly appreciated.

    Thanks Jim,
    I know that it is easy to specify the language you want to
    use when setting the embed font properties for a specific text
    field but my project has hundreds of text fields and I'm setting
    the font globally by referencing the font symbols in a single swf.
    I have looked at the info you've pointed out but wasn't
    helped by it. What I'd like to be able to do is to tell Flash to
    embed a language specific character-set for the font symbols in the
    library. It currently is only embedding Latin characters even
    though I know the fonts specified contains characters for other
    languages.
    For example: I have a font symbol in the library named
    "Font1". When I look at its properties I can see it is specified as
    Tahoma. I know the Tahoma font on my system contains the characters
    for Korean but when I compile the swf it only contains Latin
    characters (glyphs) - this corresponds to the language of my OS (US
    English). I want to know how to tell Flash to embed the Korean
    language characters rather than, or as well as, the Latin characters
    for any given FONT SYMBOL. If I could do that, then, when I enter
    Korean text into my XML files the correct characters will be
    available to Flash. As it is now, the characters are not available
    and thus the text doesn't display.
    Make sense?
    Many thanks,
    Mike

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
    US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values > 128. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
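    You can reproduce the substitution directly with CONVERT() (a sketch; CHR(174) is the registered-trademark sign in WE8ISO8859P1, and 191 is the upside-down question mark, matching the dumps shown further below):
    SQL> select dump(convert(chr(174), 'WE8ISO8859P1', 'US7ASCII')) from dual;
    This should return something like "Typ=1 Len=1: 191".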
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. It has been replaced in newer versions by the Database Migration Assistant for Unicode (DMU), which is the recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
    We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because oracle on PRODCLONE never knew that our TITLE string on RESTORE was originally in  US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below:
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB ]
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
      COUNT(1) MYCOLUMNNAME                                       MYTABLENAME
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on RESTOREDB:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there  because if the users have changed TITLE  -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, which has less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
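    As an aside, the dynamic SELECT ... FOR UPDATE commented out inside the procedure can probably be done with a second REF CURSOR variable of the same cv_type (say lob_cur) opened on the generated string -- an untested sketch; the FOR UPDATE lock is transaction-scoped, so fetching and closing the cursor does not release it:
    new_sql := 'SELECT '||fix_col||' FROM '||table_name||'_lob WHERE pk1 = :b1 FOR UPDATE';
    OPEN lob_cur FOR new_sql USING my_indx_var;
    FETCH lob_cur INTO my_blob;
    CLOSE lob_cur;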
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on prodclone
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as  WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

    We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
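    For step 3, the per-column change is a one-line ALTER (a sketch with a hypothetical table and column; the declared length stays the same but is re-interpreted as characters instead of bytes):
    ALTER TABLE my_table MODIFY (my_col VARCHAR2(100 CHAR));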
    I cannot easily answer any specific steps; I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan and performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�'
    Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Crystal XI R2 exporting issues with double-byte character sets

    NOTE: I have also posted this in the Business Objects General section with no resolution, so I figured I would try this forum as well.
    We are using Crystal Reports XI Release 2 (version 11.5.0.313).
    We have an application that can be run using multiple cultures/languages, chosen at login time. We have discovered an issue when exporting a Crystal report from our application while using a double-byte character set (Korean, Japanese).
    The original text when viewed through our application in the Crystal preview window looks correct:
    性能 著概要
    When exported to Microsoft Word, it also looks correct. However, when we export to PDF or even RPT, the characters are not being converted. The double-byte characters are rendered as boxes instead. It seems that the PDF and RPT exports are somehow not making use of the linked fonts Windows provides for double-byte character sets. This same behavior is exhibited when exporting a PDF from the Crystal report designer environment. We are using Tahoma, a TrueType font, in our report.
    I did discover some new behavior that may or may not have any bearing on this issue. When a text field containing double-byte characters is just sitting on the report in the report designer, the box characters are displayed where the Korean characters should be. However, when I double click on the text field to edit the text, the Korean characters suddenly appear, replacing the boxes. And when I exit edit mode of the text field, the boxes are back. And they remain this way when exported, whether from inside the design environment or outside it.
    Has anyone seen this behavior? Is SAP/Business Objects/Crystal aware of this? Is there a fix available? Any insights would be welcomed.
    Thanks,
    Jeff

    Hi Jeff,
    I searched on the forums and got the following information:
    1) If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys. Once you have located the registry key just mentioned, highlight the face name of the font you want to link to, and then from the Edit menu click Modify. On a new line in the "Value data" field of the Edit Multi-String dialog box, enter "path and file to link to," "face name of the font to link".
    2) Fonts in general, especially TrueType and OpenType, are "Unicode".
    Since you are using a TrueType font, it may be a Unicode type already. However, if Bud's suggestion works then nothing better than that.
    Also, could you please check the output from the Crystal designer with a different version of PDF than the current one?
    Meanwhile, I will look out for any additional/suitable information on this issue.

  • Chinese Character Sets

    I'm setting up a web site, as a favor for a friend, that will
    be in Chinese and French. I'm not sure what the best page encoding
    would be. My friend sent me some sample Chinese, and she said, "The
    font I use is 'Simsun'." Otherwise, she is not very technical.
    There are apparently 4 character sets available in
    Dreamweaver that will display Chinese (after a fashion):
    1. charset=big5: displays in both code and design view, but
    Dreamweaver complains that not all characters can be displayed when
    I save the file, and some characters appear as "?" in both IE7 and
    Firefox. My friend says the characters don't look good.
    2. charset=gb2312: displays correctly in Dreamweaver and both
    of my browsers, and my friend says it looks OK.
    3. charset=hz-gb-2312: displays correctly in Dreamweaver code
    view, but not design view. Displays in both browsers I use, and my
    friend says it looks OK. This is apparently a 7 bit code that will
    encode both Chinese (as two byte characters) and ASCII. There is an
    escape character, "~" for toggling between the two sets. The HTML
    Dreamweaver creates seems to deal with this transparently.
    4. charset=utf-8: doesn't display in Dreamweaver (code or
    design view). Does seem to display correctly in my browsers, but my
    friend doesn't like it.
    I also noticed that although the characters displayed on my
    desktop system, I got all "?" on my laptop. Then it occurred to me
    that my friend had used my desktop to check her e-mail, so she had
    apparently installed the correct font(s). I displayed the pages in
    IE on my laptop, was prompted to install the fonts (Firefox didn't
    mention a problem), and then the text also displayed on my laptop.
    I intend to deal with this problem by creating any home page text
    (there won't be much) as Fireworks images. Anyone looking at the
    other Chinese text will undoubtedly have the correct fonts already.
    If any of you have done anything like this before, I would be
    grateful for any suggestions.

    Clean & Sober wrote:
    > 2. charset=gb2312: displays correctly in Dreamweaver and
    both of my browsers,
    > and my friend says it looks OK.
    GB2312 is the correct character set for Chinese as used in the People's
    Republic. It's sometimes known as "simplified Chinese". That doesn't mean
    it's simple, but that the characters have been modernized by removing
    some of the more complex strokes.
    To view pages written in Chinese or any other non-alphabetic script, you
    need to have the correct fonts installed. That shouldn't be a problem
    for the target audience.
    David Powers
    Adobe Community Expert
    Author, "Foundation PHP for Dreamweaver 8" (friends of ED)
    http://foundationphp.com/

  • Logon with non-English language: LSMW does not display characters

    Hi, when I log on with a non-English language and use LSMW, the screen
    does not display the characters. In SMLT I have set the supplementation
    language to English. Can anyone help? Thanks.

    Hi Benson,
    Can you please elaborate on the issue. What characters are missing?
    Regards.
    Ruchit.

  • Multiple character sets on a single page

    JDev 11.1.1.5 - WLS 10.0.3.5
    I have an application that needs to have some fields in a different character set (like Amharic) and some in English.
    These are fixed - so when the user enters the field - it should already be in the different language.
    I use UTF8 for all my jspx. The fonts are unicode. The database is setup for NVARCHAR.
    I am using ADF.
    What do I need to do to create this kind of page? Where do I install the fonts? And how do I make the Input Text default to the appropriate character set for display/input?

    No, you can only have one <f:view> per JSP page (including any pages that page includes), and the locale must be the same for the complete response (because it's also used when the post-back request is parsed).
    It's hard to say from your description if this makes sense or not, but why don't you use static text for the part that is always in German, and only localize the parts of the page that needs it?
    Hans Bergsten (EG member)

  • Character sets - UTF8 or Chinese

    Hi,
    I am looking into enhancing the application I have built in Oracle to save/display data in Chinese & English. I have been looking into how to change the character set of a database to accept different languages, i.e. different characters.
    From what I understand I can create a database to use a Chinese character set (apparently English ascii characters are also a part of any Chinese character set) or I can set the database to use a unicode multi-byte character set (UTF8) - which seems to be okay for all languages.
    Has anyone had any experience of a) changing an existing standard 7-bit ASCII database into a database which can handle Chinese, and/or b) the differences/implications between using a Chinese and a Unicode character set?
    I am using Oracle RDBMS 8.1.7 on SuSE Linux 7.2
    Thanks in advance.
    Dan

    If the data is segmented so that character set 1 data is in a table and character set 2 data is in another table then you may have a chance to salvage the data with help from support. The idea would be to first export and import only your CL8MSWIN1251 data to UTF8. Be careful that your NLS_LANG is set to CL8MSWIN1251 for export so that no conversion takes place. Confirm the import is successful and remove CL8MSWIN1251 data from database. Oracle support can now help you override the character set via ALTER database to say MSWIN1252. Now selectively export/import this data, again make sure NLS_LANG is set to MSWIN1252 for export so that no conversion takes place. Confirm the import is successful and remove MSWIN1252 data from database. And then do the same steps for 1250 data.
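    A sketch of the first round of that procedure (hypothetical names; the key point is pinning NLS_LANG so that exp writes the data unconverted):
    NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251; export NLS_LANG
    exp system/manager file=cl8_data.dmp tables=app.tab_1251 rows=y
    After the import into the UTF8 database is verified and the old rows are removed, the same pattern repeats with NLS_LANG pointing at the character set of each remaining batch (MSWIN1252, then the 1250 data).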
