Database Character Set Conversion from WE8ISO8859P1 to UTF8

Hi All
I want to migrate data from one database to another. My original database's character set is WE8ISO8859P1, but the target database's character set is UTF8.
Because of the character set difference, the Marathi data in the original database is not displayed;
it shows some symbols in place of the Marathi words.
Please help me out.
Thanking You
Gaurav Sontakke

Dear GauravSontakke,
Since your database version is unknown, I will show you the online documentation on character set migration for 10gR2:
http://www.oracle.com/pls/db102/search?remark=quick_search&word=character+set+migration&tab_id=&format=ranked
http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1442
http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#NLSPG011
Please read those carefully.
Hope it Helps,
Ogan

Similar Messages

  • CSSCAN for database character set conversion failing with ORA-01578

    Hi ,
    CSSCAN for database character set conversion is failing with ORA-01578: ORACLE data block corrupted (file # 84, block # 23930). Please help me out in this regard.
    Thanks,
    Sravan.

    Hi Anand,
    Thanks for your update. The segment is a table, not an index, in my case. And I got this error while running CSSCAN on the Apps database for character set conversion from WE8ISO8859P1 to UTF8. Please find the snapshot below for your reference.
    SQL> select segment_name, segment_type, owner from dba_extents where file_id = 84 and 23930 between block_id and block_id + blocks - 1;
    SEGMENT_NAME    SEGMENT_TYPE    OWNER
    EDW_LOOKUP_M    TABLE           POA
    SQL> ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE;
    ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 84, block # 23930)
    ORA-01110: data file 84: '/d911/oracle/dbcondata/poad01.dbf'
    Thanks,
    Sravan.

  • Character set Problem (From WE8ISO8859P1 to EL8MSWIN1253)

    Hi there,
    I would like to describe a problem that I face with import, export and Greek characters.
    I have two databases with the following characteristics
    Source database:
    O/S version: Windows
    Database version: 10gR2
    NLS_CHARACTERSET: WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET: AL16UTF16
    NLS_LANGUAGE: GREEK
    NLS_TERRITORY: GREECE
    Target database:
    O/S version: Windows
    Database version: 10gR2
    NLS_CHARACTERSET: EL8MSWIN1253
    NLS_NCHAR_CHARACTERSET: AL16UTF16
    NLS_LANGUAGE: GREEK
    NLS_TERRITORY: GREECE
    From the source database, using the export tool, I am exporting a table (TABLE_A) which contains Greek records. At this point I want to mention that from the source database I am able to read the Greek characters in TABLE_A using SQL*Plus, i.e. (select * from table_a;).
    On the target database by using import I load the table TABLE_A into the database.
    When I select the newly imported TABLE_A on the target database I am not able to read the Greek characters.
    I have tested various scenarios by using different values for the NLS_LANG variable regarding export and import clients but I did not manage to read the Greek characters on the target database.
    If anyone has faced the same or a similar problem, I am asking for assistance.
    Thank you in advance

    Please, review this thread:
    Re: Arabic Character set conversion-help needed
    The thread describes a similar issue for Arabic data. Therefore, when reading the thread, substitute 'WE8ISO8859P1' for 'AR8ISO8859P6', 'EL8MSWIN1253' for 'AR8MSWIN1256', and 'Greek' for 'Arabic'.
    -- Sergiusz
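    Purely for illustration (not part of the reply above), here is the baseline exp/imp approach worth verifying first, assuming classic exp/imp on Windows and placeholder connect strings; the idea is to keep NLS_LANG at the source database character set on both sides so the data is converted exactly once, inside the target database:
    REM on the source (WE8ISO8859P1) side
    C:\>set NLS_LANG=GREEK_GREECE.WE8ISO8859P1
    C:\>exp user/pass@source_db tables=TABLE_A file=table_a.dmp log=exp_table_a.log
    REM on the target (EL8MSWIN1253) side, still using the source character set
    C:\>set NLS_LANG=GREEK_GREECE.WE8ISO8859P1
    C:\>imp user/pass@target_db file=table_a.dmp log=imp_table_a.log full=y
    Note that if the Greek text was originally stored in the WE8ISO8859P1 database as pass-through bytes (which Greek characters effectively must be, since ISO 8859-1 cannot represent them), no NLS_LANG setting will repair it on its own; that pass-through situation is exactly what the linked Arabic thread deals with.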

  • Clarification on Character set migration from US7ASCII to UTF8

    Hi,
    I need clarification on the below.
    I need to migrate the database from US7ASCII to UTF8.
    For this I ran csscan for user "TEST" as well as against full database.
    Below is the csscan output against the full database, but my application depends on the TEST schema only. Do I need to migrate the SYS objects' data as shown below, or is it not required? If required, how do I migrate these objects' data?
    Looking forward to your help.
    USER.TABLE              Convertible  Exceptional
    SYS.METASTYLESHEET      58           0
    TEST.Table_1            9            0
    TEST.Table_2            11           0
    TEST.Table_3            17           0
    TEST.Table_4            11           0
    [Distribution of Convertible Data per Column]
    USER.TABLE|COLUMN Convertible Exceptional
    SYS.METASTYLESHEET|STYLESHEET 58 0
    Thanks,
    Sankar

    I think you need to migrate all schemas' data, not only the one application schema, because
    the database character set is common to all CHAR, VARCHAR2, LONG and CLOB columns
    in any table in any schema.
    In your case (US7ASCII to UTF8), you need to use export/import because:
    Another restriction of the ALTER DATABASE CHARACTER SET statement is that it can be used only when the character set migration is between two single-byte character sets or between two multibyte character sets. If the planned character set migration is from a single-byte character set to a multibyte character set, then use the Export and Import utilities.
    (see http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96529/ch10.htm#1009904)
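    For orientation only (not from the original reply), the export/import route the quoted documentation refers to breaks down roughly as follows, with placeholder passwords and file names:
    1. Run csscan (e.g. csscan system/password full=y tochar=UTF8 log=utf8check capture=y) and resolve any Lossy or Truncation data it reports.
    2. Take a full export: exp system/password full=y file=full_us7ascii.dmp log=exp_full.log
    3. Create a new database (or re-create the existing one) with NLS_CHARACTERSET=UTF8.
    4. Import with NLS_LANG still set to the source character set (e.g. AMERICAN_AMERICA.US7ASCII), so the data is converted to UTF8 as it is inserted: imp system/password full=y file=full_us7ascii.dmp log=imp_full.log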

  • Characterset conversion from WE8ISO8859P1 to UTF8

    Hi All,
    While running the CSSCAN utility for a WE8ISO8859P1 to UTF8 conversion, the process goes into a sleep state after running for 8-9 hours.
    We need to complete this task within the next week. Can anyone share why this is happening?
    Thanks,
    Sekhar

    There is a Metalink note on [Bug 6460895 - CSSCAN may hang / run slowly|https://metalink.oracle.com/CSP/ui/flash.html#tab=Home(page=Home&id=fm2z4pjj()),(page=KBNavigator&id=fm2z4vd6(&viewingMode=1141&userQuery=csscan%20sleep&searchControl=1146))] that looks to be vaguely related. This indicates that the bug was fixed in 10.2.0.4, but unfortunately the bug itself isn't published, so it is rather hard to know if this is your problem or not. It would probably be worth logging a support request to see if Oracle support can determine whether or not you're being affected by this bug.
    Meanwhile, can you apply the 10.2.0.4 patchset and see if that solves the problem?
    Justin
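    As a quick, generic check of the current version and patch level before deciding whether the 10.2.0.4 patchset is already in place (illustrative only):
    SQL> select * from v$version;
    $ $ORACLE_HOME/OPatch/opatch lsinventory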

  • Character set Conversion (US7ASCII to AL32UTF8) -- ORA-31011 problem

    Hello,
    We've run into some problems as part of our character set conversion from US7ASCII to AL32UTF8. The latest problem is that we have a query that works in US7ASCII, but after converting to AL32UTF8 it no longer works and generates an ORA-31011 error. This is very concerning to us as this error indicates an XML parsing problem and we are doing no XML whatsoever in our DB. We do not have XML columns (nor even CLOBs or BLOBs) nor XML tables and it's not XMLDB.
    For reference, we're running 11.2.0.2.0 over Solaris.
    Has anyone seen this kind of problem before?
    If need be, I'll find a way to post table definitions. However, it's safe to assume that we are only using DATE, VARCHAR2 and NUMBER column types in these tables. All of the tables are local to the DB.
    Thanks

    We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
    Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Character set conversion in advanced replication environment 10g

    We have a replication environment with 5 databases using Oracle 10g: one master definition site and 4 master sites. We want to change the database character set in this replication environment.
    Does anybody have experience with database character set conversion in a replication environment?
    Thanks

    842106 wrote:
    Hi All,
    I am currently facing a strange issue while loading a csv into oracle BE table.
    The data is coming in as a distorted version of Técnicas, but I need to store it as Técnicas. I think there is some character set issue, which is why the data is arriving in distorted form.
    I am using Oracle 10g Enterprise Edition.
    Any help is greatly appreciated.
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    What do you observe when you inspect the actual content of the CSV file?

  • XML data from BLOB to CLOB - character set conversion

    Hi All,
    I'm trying to solve a problem with a character set conversion in PL/SQL in the following scenario:
    1. source is an XML as a BLOB variable.
    2. target is an XML as a CLOB variable.
    3. the problem I have is the following:
    - database character set is set to UTF-8
    - XML character set could be anything (UTF-8, ISO 8859-1, ISO 8859-2, ASCII, ...)
    - I need to write a procedure which converts the source BLOB content into the target CLOB taking into account the XML encoding and converts it into the DB default character set (UTF8).
    I've been able to implement a simple conversion function. However, this function expects static XML encoding ISO-8859-1. The main part of the function looks as follows:
    buffer := UTL_RAW.cast_to_varchar2(
                UTL_RAW.convert(
                  DBMS_LOB.SUBSTR(source_blob_variable, 16000, pos),
                  'American_America.UTF8',
                  'American_America.WE8ISO8859P1'));
    Does anyone have an idea how to rewrite the code to handle "any" XML encoding in the source BLOB file? In other words, is there a function in Oracle which converts XML character set names into Oracle character set values (ISO-8859-1 to we8iso8859p1, UTF-8 to UTF8, ...)?
    Thanks a lot for any help.
    Julius

    I want to pass a BLOB to some "createXML" procedure and get a proper XMLType in UTF8 character set, properly converted from whatever character set the input is in.
    As per the documentation, the generated XML always has the encoding set on the client side depending on NLS_LANG (default UTF-8), regardless of the input encoding, so I don't see a need to parse the PI of the XML:
    C:\>echo %NLS_LANG%
    %NLS_LANG%
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 30 08:54:12 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="UTF-8"?><a>myxml</a>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\>set NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Mi Apr 30 08:55:02 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL-Prozedur erfolgreich abgeschlossen.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <a>myxml</a>
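    For completeness, here is a rough sketch of the kind of routine the original question asks for, assuming the XML encoding name has already been read from the prolog. It leans on UTL_I18N.MAP_CHARSET (IANA name to Oracle name) and DBMS_LOB.CONVERTTOCLOB; verify both signatures in the PL/SQL Packages reference for your release before relying on this:
    create or replace function xml_blob_to_clob(
      p_blob     in blob,
      p_iana_enc in varchar2)   -- e.g. 'ISO-8859-1', 'UTF-8' (taken from the XML prolog)
      return clob
    is
      l_clob         clob;
      l_dest_offset  integer := 1;
      l_src_offset   integer := 1;
      l_lang_context integer := dbms_lob.default_lang_ctx;
      l_warning      integer;
      l_csid         number;
    begin
      -- map the IANA/XML encoding name to an Oracle character set id
      -- (assumption: MAP_CHARSET's default direction is IANA-to-Oracle)
      select nls_charset_id(utl_i18n.map_charset(p_iana_enc)) into l_csid from dual;
      dbms_lob.createtemporary(l_clob, true);
      dbms_lob.converttoclob(l_clob, p_blob,
                             dbms_lob.getlength(p_blob),
                             l_dest_offset, l_src_offset,
                             l_csid, l_lang_context, l_warning);
      return l_clob;   -- the CLOB is now in the database character set (UTF-8 here)
    end;
    /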

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix  a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
    US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values greater than 127. If you have a web application, users can type (or paste) U's with umlauts, A's with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
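    A quick way to see this replacement behaviour for yourself (my own one-liner, run on a scratch US7ASCII database):
    SQL> select dump(convert(chr(174), 'WE8ISO8859P1', 'US7ASCII')) from dual;
    -- byte 174 is not valid US7ASCII, so CONVERT emits the default replacement
    -- character, byte 191 (the upside-down question mark) in WE8ISO8859P1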
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. In newer versions it has been replaced by the Database Migration Assistant for Unicode (DMU), which is the recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
    We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns /tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed  into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
      COUNT(1) MYCOLUMNNAME                                       MYTABLENAME
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on  RESTOREDB:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there  because if the users have changed TITLE  -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working in 9i, which has less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
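    As an aside (my own untested suggestion, not part of the original write-up), native dynamic SQL should be able to handle the SELECT ... FOR UPDATE with an INTO clause and a bind variable, which would remove the hard-coded table reference inside the loop:
    -- hypothetical replacement for the hard-coded SELECT above
    new_sql := 'SELECT '||fix_col||' FROM '||table_name||'_lob WHERE pk1 = :1 FOR UPDATE';
    execute immediate new_sql into my_blob using my_indx_var;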
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on prodclone
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.


  • Changing database character set from US7ASCII to AL32UTF8

    Our database is running on Oracle Database 10.1.0.4.0 (AIX). The following are its parameters:
    SQL> select value from NLS_DATABASE_PARAMETERS where parameter='NLS_CHARACTERSET';
    VALUE
    US7ASCII
    We would like to change the database character set to AL32UTF8. After following Metalink note 260192.1 (which helped us resolve "Lossy" and "Truncated" data), the final output of the CSSCAN utility is:
    [Scan Summary]
    All character type data in the data dictionary are convertible to the new character set
    All character type application data are convertible to the new character set
    [Data Dictionary Conversion Summary]
    The data dictionary can be safely migrated using the CSALTER script
    We have no (0) Truncation or Lossy entries in the .txt file; we only have Changeless and Convertible. Now, according to the documentation, we can do a FULL EXP and FULL IMP. But it does not detail how to do the conversion on the same database. The document explains how to do it from one database to another database, but how about on the same database?
    We cannot use CSALTER as stated on the document.
    (Step 6
    Step 12
    12.c) When using Csalter/Alter database to go to AL32UTF8 and there was NO "Truncation" data, only "Convertible" and "Changeless" in the csscan done in point 4:)
    After performing a FULL export of the database, how can we change its character set? What do we need to do to the existing database to change its character set to AL32UTF8 before we import our dump file back into the same database?
    Please help.

    There you are! Thanks! Seems like I am right in my understanding about the Oracle Official Documentation. Thanks!
    Hmmmmm...when you say:
    *"you can do selective export of only convertible tables, truncate the tables, use CSALTER, and re-import."*
    This means that:
    1. After running csscan on database PROD, I will take note of the convertible tables in the .txt output file.
    2. Perform selective EXPORT on PROD (EXP the convertible tables)
    3. Truncate the convertible tables on PROD database
    4. Use CSALTER on PROD database
    5. Re-import the tables into PROD database
    6. Housekeeping.
    Will you tell me if these steps are the correct ones? Based on our scenario, this is what I have understood from referring to the official documentation.
    Am i correct?
    I really appreciate your help Sergiusz.
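    Purely as an illustration (not from this thread), the numbered steps above map to commands roughly like the following, with placeholder owners, table names and file names; the CSALTER prerequisites and restricted-mode requirements in the migration guide still apply:
    1. csscan system/password full=y tochar=AL32UTF8 log=prodcheck capture=y array=1024000 process=2
    2. exp system/password tables=APP.TAB1,APP.TAB2 file=conv_tabs.dmp log=exp_conv.log
    3. SQL> truncate table app.tab1;   (repeat for each convertible table)
    4. SQL> @?/rdbms/admin/csalter.plb
    5. imp system/password file=conv_tabs.dmp log=imp_conv.log ignore=y
    6. Re-run csscan to confirm the result, then finish the housekeeping (statistics, index rebuilds, backups).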

  • UTF8 character set conversion for chinese Language

    Hi friends,
    I would like some basic explanation of the UTF8 feature: how does it help when converting data in the Chinese language?
    I would also like to know which characters UTF8 will not support when converting from the Chinese language.
    Thanks & Regards
    Ramya Nomula

    Not exactly sure what you are looking for, but on MetaLink, there are numerous detailed papers on NLS character sets, conversions, etc.
    Bottom line is that traditional Chinese characters (since they are more complicated) require 4 bytes to store in character sets such as UTF-8 and AL32UTF8. Some Middle Eastern character sets also fall into this category.
    Do a google search on "utf8 al32utf8 difference", and you will get some good explanations.
    e.g., http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/
    Recently, one of our clients had a question on the differences between these two character sets since they were in the process of making their application global. In an upcoming whitepaper, we will discuss in detail what it takes (from a RDBMS perspective) to address localization and globalization issues. As far as these two character sets go in Oracle, the only difference between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in AL32UTF8 character set.
    You may also consider posting your question on the Globalization Suport forum which pertains more to these types of questions.
    Globalization Support
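    To make the storage difference concrete, a small illustration of my own (assuming an AL32UTF8 database; UNISTR takes the UTF-16 surrogate pair of a supplementary character, and TO_CHAR forces the value into the database character set):
    SQL> select lengthb(to_char(unistr('\D841\DF79'))) as bytes_used from dual;
    -- expected: 4 bytes in an AL32UTF8 database; the same character stored in a
    -- database whose character set is the older UTF8 occupies 6 bytes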

  • Use of UTF8 and AL32UTF8 for database character set

    I will be implementing Unicode on a 10g database, and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize storage requirements for primarily English-based string data.
    Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
    Thanks in advance for any counsel.

    I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/ NVARCHAR columns do not handle Oracle NCHAR/ NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
    I've not run into any barriers, no. The two most common speedbumps I've seen are
    - I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy.
    - Making sure that the client NLS_LANG properly identifies the character set of the data going in to the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (i.e. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/ VARCHAR2 or NCHAR/ NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
    Justin
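    To illustrate the length-semantics point above, a quick sketch (session-level here; it can also be set at the instance level):
    SQL> alter session set nls_length_semantics = char;
    SQL> create table demo_char (name varchar2(100));         -- 100 characters under CHAR semantics
    SQL> create table demo_byte (name varchar2(100 byte));    -- explicitly 100 bytes
    SQL> create table demo_expl (name varchar2(100 char));    -- explicit, independent of the session setting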

  • Approach to converting database character set from Western European to Unicode

    Hi All,
    EBS:12.2.4 upgraded
    O/S: Red Hat Linux
    I am looking for the information below. If anyone could help provide it, that would be great!
    INFORMATION NEEDED: Approach to converting database character set from Western European to Unicode for source systems with large data exceptions
    DETAIL: We are looking to convert the Oracle EBS database character set from Western European to Unicode to support Kanji characters. Our scan results show
    both “lossy (110K approx.)” and “truncation (26K approx.)” exceptions in the database, which need to be fixed before the database is converted to Unicode.
    Oracle Support has suggested to fix all open and closed transactions in the source Production instance using forms and scripts.
    We’re looking for information/creative approaches from anyone who has performed similar exercises without having to manipulate data in the source instance.
    Any help in this regard would be greatly appreciated!
    Thanks for your time!
    Regards,

    There are two aspects here:
    1. Why do you have such large number of lossy characters? Is this data coming from some very old eBS release, i.e. from before the times of the Java applet interface to Oracle Forms?  Have you analyzed the nature of this lossy data?
    2. There is no easy way around truncation issues as you cannot modify eBS metadata (make columns wider). You must shorten or remove the data manually through the documented eBS interfaces. eBS does not support direct manipulation of data in the database due to complex consistency rules enforced by the application itself (e.g. forms).
    Thanks,
    Sergiusz

  • How to alter database character set from AL32UTF8 to EE8MSWIN1250

    Hi folks,
    I'm using an Oracle 10g XE database, which has its database character set set to AL32UTF8, which causes some characters like "č ť ř ..." not to be displayed.
    To fix this issue, I would like to change it to the EE8MSWIN1250 character set, as that is what is set on the server.
    Unfortunately, the steps below don't work for me:
    connect sys as sysdba;
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER SYSTEM ENABLE RESTRICTED SESSION;
    ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
    ALTER SYSTEM SET AQ_TM_PROCESSES=0;
    ALTER DATABASE OPEN;
    ALTER DATABASE NATIONAL CHARACTER SET EE8MSWIN1250;
    ALTER DATABASE CHARACTER SET EE8MSWIN1250;
    SHUTDOWN IMMEDIATE; -- or SHUTDOWN NORMAL;
    STARTUP;
    Value in regedit: HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\KEY_XE is set to CZECH_CZECH REPUBLIC.EE8MSWIN1250
    I've been struggling with this issue for hours and unfortunately can't make it work. Any advice is more than welcome.
    Thanks,
    Tomas
    I tried to use the hints listed here, but I always get some ORA messages.

    Hi Sergiusz,
    Thank you for your reply. You're right, I probably didn't provide full details about my issue, but at least now I know that the database encoding (character set) is correct.
    Here is my issue:
    Within my APEX application I would like to use JasperReportsIntegration, so as to be able to create and run iReports straight from the APEX application. Installation and implementation of JasperReports work fine; I had no issue with it.
    As a second step I created a simple report using the iReport tool; when the preview function is used, all static characters (from report labels) are displayed correctly. Database items are displayed incorrectly - some Czech characters are not displayed. The language within the report is set to cs_CS, but I've also tried other options. No success.
    When I run that report from the APEX application (from the server), the same issue occurs - data from the database is returned without some Czech characters.
    Kind regards,
    Tomas

  • Changing the Database Character Set  From WE8MSWIN1252 to AR8MSWIN1256

    good morning everybody,
    I need your help to know all the steps that are necessary to change the database character set from WE8MSWIN1252 to AR8MSWIN1256.
    thank you

    If you have not already done so, read up on Character Set Migration and using the CSSCAN tool to verify the database before making any changes.
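    As a starting point (not from the original reply), a typical CSSCAN invocation for this kind of change might look like the following; run csminst.sql once beforehand to create the scanner's repository, and treat every value as a placeholder:
    csscan system/password full=y fromchar=WE8MSWIN1252 tochar=AR8MSWIN1256 log=ar8check capture=y array=1024000 process=2
    The summary and exception reports it writes (ar8check.txt, ar8check.out, ar8check.err) are what the Character Set Migration chapter then walks you through.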
