Character set conversion problem when importing application with script.

Our database has character set: WE8MSWIN1252
We have a region with the following title: Kopiëren. The ë is stored as character 235 in the database.
When I make an application export, the file is in UTF-8. The ë is now stored as hex C3 AB in this file.
When importing the application using the Apex tool, the ë is again stored as character 235 in the database.
In our installation script we use some code to load the application. It looks something like this:
declare
   -- the code that populates these values was elided in the original post
   l_workspace_id number;
   l_app_id       number;
   l_schema       varchar2(128);
begin
   -- Determine workspace ID
   apex_application_install.set_workspace_id(l_workspace_id);
   -- Determine app ID
   apex_application_install.set_application_id(l_app_id);
   apex_application_install.generate_offset;
   apex_application_install.set_schema(l_schema);
   apex_application_install.set_application_alias(l_schema);
   l_app_id := apex_application_install.get_application_id;
end;
/
@..\apex\f200.sql
This works fine except that no character set conversion takes place. The UTF-8 bytes are placed in the database unchanged, so our region title becomes: KopiÃ«ren
Is there a way to fix this?

Hi Rene,
The character set portion of your local NLS_LANG environment variable should be AL32UTF8, and it should be set before you import your application via SQL*Plus.
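For example, before running the installation script (a sketch; "install.sql" stands in for your actual install script, and any language/territory prefix may precede the dot):
C:\> set NLS_LANG=.AL32UTF8
C:\> sqlplus user/pass @install.sql
(On Unix: NLS_LANG=.AL32UTF8; export NLS_LANG)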
Joel

Similar Messages

  • Oracle to Mysql character set conversion problem!!! PLZ IGNORE

    Hi Experts,
    I have created a database link from Oracle 10g to Mysql 5.
    I have installed Oracle Gateway 11g for this purpose.
When I retrieve the data from SQL*Plus the text is displayed as question marks.
    Oracle 10g Database character set is WE8MSWIN1252
    Mysql character set --->latin1
    Character set of ODBC connector for mysql is  latin7
Character set in the parameter file of the HS folder is WE8MSWIN1252
When I retrieve data from SQL Developer the text is fine (I think it directly takes the character set of the target), but
when I log in from SQL*Plus I get question marks!
    I have another post in Heterogeneous Connectivity forum
    Re: Oracle to Mysql character set conversion problem!!! PLZ HELP
    Kindly update your comments there,
    Appreciate your help,
    regards
    Edited by: user10243788 on Apr 21, 2010 3:25 AM

    It is OK to post a globalization-related question in this forum in addition to the forum pertaining to the main technology. Not all experts follow all possible forums on OTN. Of course, you should cross-link the posts to let people merge the answers.
    Regarding the problem itself, make sure that SQL*Plus has the right NLS_LANG setting in the environment. On Windows, in the Command Prompt:
    C:\> set NLS_LANG=.WE8PC850
C:\> sqlplus ...
On Unix:
$ setenv NLS_LANG .WE8ISO8859P1   (or NLS_LANG=.WE8ISO8859P1; export NLS_LANG)
$ sqlplus ...
-- Sergiusz

  • CHARACTER SET CONVERSION PROBLEM BETWEEN WIN XP (SOURCE EXPORT) AND WIN 7

    Hi colleagues, please assist:
I have a laptop running Win 7 Professional. It is also running Oracle Database 10g Release 10.2.0.3.0. I need to import a dump into this database. The dump originates from a client PC running Win XP and Oracle 10g Release 10.2.0.1.0. When I use the import utility in my database (on the laptop), the following happens:
    Import: Release 10.2.0.3.0 - Production on Tue Nov 9 17:03:16 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: system/password@orcl
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Import file: EXPDAT.DMP > F:\uyscl.dmp
    Enter insert buffer size (minimum is 8192) 30720>
    Export file created by EXPORT:V08.01.07 via conventional path
    Warning: the objects were exported by UYSCL, not by you
    import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    export client uses WE8ISO8859P1 character set (possible charset conversion)
    export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
    List contents of import file only (yes/no): no >
When I press Enter, the import window terminates prematurely without completing the process. What should I do to fix this problem?

    Import: Release 10.2.0.3.0 - Production on Fri Nov 12 14:57:27 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: system/password@orcl
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Import file: EXPDAT.DMP > F:\Personal\DPISIMBA.dmp
    Enter insert buffer size (minimum is 8192) 30720>
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    List contents of import file only (yes/no): no >
    Ignore create error due to object existence (yes/no): no >
    Import grants (yes/no): yes >
    Import table data (yes/no): yes >
    Import entire export file (yes/no): no >
    Username:

  • Color space conversion problem when importing JPEG's

    Hi,
I'm currently playing with the trial version of LR. While importing JPEG's with different color spaces (sRGB and Adobe RGB) to LR, I've noticed a strange effect: there is a small but noticeable difference in color, depending on whether the JPEG was previously saved in AdobeRGB or sRGB. All the images I've tested so far should not contain critical colors that exceed normal sRGB. When opened in CS2, both versions of a JPEG, AdobeRGB and sRGB, typically look perceptually identical, no matter whether I leave the sRGB image in sRGB or convert it to the working space (AdobeRGB). Also my color-managed image viewer behaves as it should. So I don't think it's a matter of the different color spaces.
Looking at the imported images in LR, I would say that the AdobeRGB image is correctly converted while the sRGB image suffers from a slight reddish cast, most noticeable in skin tones. The effect is not as strong as if I were to load the sRGB image into CS2 and skip color conversion to my working space (AdobeRGB).
    The sRGB versions of the JPEG's were obtained from the AdobeRGB JPEG's using CS2 for conversion.
    Anyone else here experienced a similar problem? Is this a bug in the xRGB-to-ProPhotoRGB conversion of LR, or a feature?
    /Steffen

    Hi Uli,
thanks for pointing me to your thread. I followed the discussion with great interest. Actually, I think the effect I am describing here is of a different nature and a LOT stronger, at least for the type of images I've tested.
    I did some more experiments yesterday with interesting results:
    1) When I export a processed RAW from LR to JPEG or PSD, no matter what Color Space (I tested AdobeRGB, sRGB and ProphotoRGB), and re-import those JPEG/PSD's to LR, they look absolutely identical to the RAW I started with. Also, at first glance, they look similar when opened in CS2, but only because I tested with color images. I can indeed see small differences when testing with B/W, as you described in your "Color management bug" thread.
2) When I change the color space of a PSD or JPEG inside CS2 (I used the default setting 'relative colorimetric'), save it to JPEG, and then import this JPEG to LR, colors are far off. The strength of this mostly reddish color cast depends on the color space of the imported JPEG: strongest for ProPhoto, less strong for Adobe and sRGB. Interestingly, when I convert the color space inside CS2 and save the result to PSD, it will display correctly when imported in LR. Another interesting side effect: the thumbnails of LR-exported JPEG's in the "Open" dialogs of CS2 and LR (I guess those are not color-managed) show the typical color flatness for the Adobe and even more so the ProPhoto version. For the CS2-converted JPEG's, all thumbnails look just as colorful as the thumbnail of the sRGB version.
    3) Such an image which doesn't display correctly in LR will keep its color cast when exported again to a JPEG (not sure about PSD). So something goes wrong with the color conversion during the import of such CS2-converted images.
My explanation so far is that CS2 uses a slightly different way of encoding the colorspace information in the metadata of JPEG's, which somehow prevents LR from recognising the color space correctly.
    Can you confirm this behaviour?
    Steffen

  • Oracle to Mysql character set conversion problem!!! PLZ HELP

    Hi Experts,
    I have created a database link from Oracle 10g to Mysql 5.
    I have installed Oracle Gateway 11g for this purpose.
When I retrieve the data from SQL*Plus the text is displayed as question marks.
    Oracle 10g Database character set is WE8MSWIN1252
    Mysql character set --->latin1
    Character set of ODBC connector for mysql is  latin7
Character set in the parameter file of the HS folder is WE8MSWIN1252
When I retrieve data from SQL Developer the text is fine (I think it directly takes the character set of the target), but
when I log in from SQL*Plus I get question marks!
    Appreciate your help,
    regards

    thank you for replying damorgan,
my previous two threads in the "Heterogeneous Connectivity" forum were for different issues: one was to enquire how I could connect from Oracle to MySQL (which I have marked as answered), the other is for an error I get when I try accessing data (which I am still facing on my office machine).
I followed the steps from these two threads and was able to successfully connect to MySQL on my personal PC at home, but faced a problem with text not being displayed, so I created this thread.
I had created another thread similar to this in Globalisation Support, as I was facing an issue with character sets in a heterogeneous setup, so I wasn't clear as to which forum would be suitable for this issue.
    My apologies to everyone if this has offended you.

Character set conversion problem during upgrade.

    Dear Friends,
I am trying to upgrade one of my Windows databases from version 9.2.0.5 to 10.2.0.4 on Unix. I am following the exp/imp route. During import I am seeing the following errors for a couple of tables,
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column
    IMP-00058: ORACLE error 1461 encountered
    ORA-01461: can bind a LONG value only for insert into a LONG column
    This may be due to character set issue, since database on windows has WE8MSWIN1252 and on unix it has UTF8.
    Please let me know how I can resolve this issue.
    Regards.
    Mahdu

    Hello,
It's better that your target database is created with the same character set as the source one.
    This is an option you can choose at the database creation.
If you have to stay in UTF8 on your target database then you'll have to extend the column size or use the
option CHAR (as Unicode may use up to *4 bytes* for one character instead of *1 byte* for WE8MSWIN1252).
To use the option CHAR you may specify it on the column datatype, for instance:
col1 VARCHAR2(100 CHAR)
Else, without this option, VARCHAR2(100) means 100 bytes (which may store as few as 25 characters in Unicode).
    You also have the parameter NLS_LENGTH_SEMANTICS that you can set to CHAR, but the export/import
    utility doesn't manage it well.
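For instance, a minimal sketch (hypothetical table and column names) of widening an existing column to character-length semantics on the target:
ALTER TABLE my_table MODIFY (col1 VARCHAR2(100 CHAR));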
So, the safest way is to create your target database with the same character set as the source one,
except if you want to migrate to Unicode.
Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Mar 3, 2010 10:11 PM

  • Character set Conversion problem

    Hello
I have a problem converting the AL32UTF8 character set to AR8MSWIN1256. We have two databases on Linux, say
db1 and db2. I have a procedure in db2 that connects to db1 via a database link, extracts records from a table
in db1, and saves them in a file. In this process I use the CONVERT function to convert AL32UTF8 characters to AR8MSWIN1256, but when I open that file with any editor in Microsoft Windows it shows me unreadable characters. I must mention that the table has Farsi (Persian) characters, and it is the Persian characters that are unreadable.
in db1 database NLS_CHARACTERSET=AL32UTF8 and NLS_NCHAR_CHARACTERSET=AL16UTF16
in db2 database NLS_CHARACTERSET=AL32UTF8 and NLS_NCHAR_CHARACTERSET=UTF8
db1 is Oracle 10g and db2 is Oracle 9i
The table structure is as follows:
CREATE TABLE "ALI"."STMP" (
    "P_EXCHANGECODETRADER" VARCHAR2(8),
    "P_NEGOUSLOGIN" VARCHAR2(40),
    "I_BIC_CODE" VARCHAR2(11),
    "P_NEGOLIBELLE" VARCHAR2(60),
    "DAT" DATE DEFAULT trunc(sysdate),
    "SEQ" NUMBER(15,0),
    "OPERATION" CHAR(1))
    and the procedure is :
    create or replace procedure khosravi.traders_changes_file2(code_page varchar2:='AR8MSWIN1256')
    as
    ft UTL_FILE.FILE_TYPE;
    head varchar2(3000 );
    line varchar2(3000 );
    tail varchar2(3000 );
    temp char(3000 char);
    c_date date;
    counter pls_integer:=0;
    function repeat_char(inp char,repeat pls_integer) return varchar2
    as
    res varchar2(2000 char):='';
    begin
    if (LENGTH(inp)*repeat) > 2000 then
    raise_application_error(-20101, 'the length is bigger than 2000');
    end if;
    for i in 1..repeat loop
    res:=res || inp;
    end loop;
    return(res);
    end repeat_char;
    begin
    c_date:=sysdate;
    head:='01' || 'DEALER' || repeat_char(' ',18) ||to_char(c_date,'YYYYMMDD')||
    to_char(c_date,'hh24miss') ;
    temp:=head;
    ft:=UTL_FILE.fopen(location => 'TEST_DIR',filename => 'ali_'||code_page,open_mode => 'w',max_linesize => 3000);
    UTL_FILE.PUT_LINE(file => ft,buffer =>convert(trim(temp),code_page) );
    for rec in (select city,p_exchangecodetrader,p_negouslogin,i_bic_code,p_negolibelle,dat,operation from traders_changes@db1) loop
    line:='12' || 'DEALER' || repeat_char(' ',18) ||to_char(c_date,'YYYYMMDD')|| to_char(c_date,'hh24miss')
    || rec.p_exchangecodetrader || repeat_char(' ',8-length(rec.p_exchangecodetrader))
    || rec.p_negolibelle || repeat_char(' ',50-length(rec.p_negolibelle))
    || substr(rec.p_exchangecodetrader,1,3) || ' '
    || substr(rec.p_exchangecodetrader,1,3) || ' '
    || substr(rec.p_exchangecodetrader,6,3)
    || rec.i_bic_code || repeat_char(' ',11-length(rec.i_bic_code))
    || substr(rec.p_negouslogin,1,8)
    || repeat_char(' ',14)
    || 'I'
    || repeat_char(' ',4)
    || repeat_char(' ',10)
    || ' '
    ||' '
    ||' '
    || repeat_char(' ',40)
    || repeat_char(' ',40)
    || repeat_char(' ',40)
    || repeat_char(' ',5)
    || rec.city || repeat_char(' ',40-length(rec.city))
    || repeat_char(' ',28)
    || repeat_char(' ',15)
    || repeat_char(' ',15)
    || repeat_char(' ',10)
|| rec.operation;
temp:=line;
    UTL_FILE.PUT_LINE(file => ft,buffer =>convert(trim(temp),code_page) );
    counter:=counter+1;
    end loop;
    for rec in (select city,p_exchangecodetrader,p_negouslogin,i_bic_code,p_negolibelle,dat,operation from traders_changes@db1) loop
    line:='13' || 'DEALER' || repeat_char(' ',18) ||to_char(c_date,'YYYYMMDD')|| to_char(c_date,'hh24miss')
    || rec.p_exchangecodetrader || repeat_char(' ',8-length(rec.p_exchangecodetrader))
    || substr(rec.p_exchangecodetrader,1,3) || ' '
    || substr(rec.p_exchangecodetrader,1,3) || ' '
    || substr(rec.p_exchangecodetrader,6,3)
    || repeat_char(' ',151)
    temp:=line;
    UTL_FILE.PUT_LINE(file => ft,buffer =>convert(trim(temp),code_page) );
    counter:=counter+1;
    end loop;
    counter:=counter+2;
    tail:='99'||'DEALER' || repeat_char(' ',18)
    || to_char(c_date,'YYYYMMDD')
    || to_char(c_date,'hh24miss')
    || repeat_char('0',9-length(to_char(counter)))
    || to_char(counter);
    temp:=tail;
    UTL_FILE.PUT_LINE(file => ft,buffer =>convert(trim(temp),code_page) );
    UTL_FILE.fclose(ft);
    end;
Do you know what the problem is, or do you know a better solution?
    Thanks

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#SQLRF00620 might be of some help
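For reference, CONVERT takes the destination character set first and the source second (the source defaults to the database character set), so a minimal sanity check looks like:
SELECT CONVERT('some text', 'AR8MSWIN1256', 'AL32UTF8') FROM dual;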
Take a look at the Oracle clients you are connecting from, too (I'm not experienced here, as I don't have privileges to do anything regarding the operating system)
    Regards
    Etbin

  • Interconnect(db adapter) character-set conversion problem#########urgency

The scenario is that two databases with different character sets need to be integrated through the DB adapter.

The publishing one is based on US7ASCII, the subscribing one is ZHS16GBK, and the Interconnect is set up on a Red Hat Linux server (NLS_LANG=AMERICAN_AMERICA.ZHS16GBK), but I cannot get the right Chinese content.
So I have changed the two adapter.ini files (adding the parameters encoding=US7ASCII and encoding=ZHS16GBK) and restarted the adapters, but it seems to have no effect. Can anybody help me?
    much appreciate!

  • Character set Conversion (US7ASCII to AL32UTF8) -- ORA-31011 problem

    Hello,
    We've run into some problems as part of our character set conversion from US7ASCII to AL32UTF8. The latest problem is that we have a query that works in US7ASCII, but after converting to AL32UTF8 it no longer works and generates an ORA-31011 error. This is very concerning to us as this error indicates an XML parsing problem and we are doing no XML whatsoever in our DB. We do not have XML columns (nor even CLOBs or BLOBs) nor XML tables and it's not XMLDB.
    For reference, we're running 11.2.0.2.0 over Solaris.
    Has anyone seen this kind of problem before?
    If need be, I'll find a way to post table definitions. However, it's safe to assume that we are only using DATE, VARCHAR2 and NUMBER column types in these tables. All of the tables are local to the DB.
    Thanks

We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changing the character set, switched the semantics to char, etc).
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
    Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • CSSCAN for database character set conversion failing with ORA-01578

    Hi ,
    CSSCAN for database character set conversion failing with ORA-01578: ORACLE data block corrupted (file # 84, block # 23930). please help me out in this regard.
    Thanks,
    Sravan.

    Hi Anand,
Thanks for your update. The segment is a table, not an index, in my case. And I got this error while running CSSCAN on the Apps database for character set conversion to UTF8 from WE8ISO8859P1. Please find the snapshot below for your reference.
    SQL> select segment_name, segment_type, owner from dba_extents where file_id = 84 and 23930 between block_id and block_id + blocks - 1;
    SEGMENT_NAME
    SEGMENT_TYPE OWNER
    EDW_LOOKUP_M
    TABLE POA
    SQL> ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE;
    ANALYZE TABLE POA.EDW_LOOKUP_M VALIDATE STRUCTURE CASCADE
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 84, block # 23930)
    ORA-01110: data file 84: '/d911/oracle/dbcondata/poad01.dbf'
    Thanks,
    Sravan.

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
US7ASCII, of course, is a cheerful 7 bit character set, holding the basic ASCII characters sufficient for the English language.
However, it also has a handy feature: character fields under US7ASCII will accept characters with values above 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
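A minimal sketch of that replacement behavior (on the databases described here, the invalid byte comes back as 191, the upside-down question mark):
SELECT DUMP(CONVERT(CHR(174), 'WE8ISO8859P1', 'US7ASCII')) FROM dual;
-- expected: Typ=1 Len=1: 191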
Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions with the Database Migration Assistant for Unicode (DMU), which is the new recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
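For example (hypothetical key value):
INSERT INTO my_contents (pk1, title) VALUES (999999, 'Test' || CHR(174) || ' title');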
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
So 10568 rows have characters which were transformed into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got CLOB ]
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
We create our views on RESTOREDB:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
This checks to verify that the TITLE field on the PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that as part of the process, they may have changed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, where the built-in functions have less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
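-- A possible fix (an untested sketch, not from the original post): bind the
-- key and fetch via EXECUTE IMMEDIATE ... INTO, e.g.:
-- execute immediate 'select '||fix_col||' from '||table_name||
--     '_lob where pk1 = :1 for update' into my_blob using my_indx_var;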
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
This should let you see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on prodclone
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
L_BLOB_CSID NUMBER := 31;      -- treat blob as WE8ISO8859P1
V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;  -- initial language context for CONVERTTOCLOB
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

  • Character set in MDL export/import

    Hi,
    we are running OWB 10.1.0.2. In order to get version control, we perform MDL exports of collections from our development environment and then import them into our test and production environments. Each environment uses its own design repository, but all repositories are in the same database.
    When doing an export of a collection, we always specify the character set AL32UTF8 because that is what we are running in the repository database. When later doing an import using the graphical user interface, it is not possible to specify the character set (but this can be done when using the import utility). According to the documentation, the GUI import will then assume that the character set in the collection is the character set of the client, which usually is WE8MSWIN1252. (The documentation also says that it IS possible to specify the character set during GUI import, this is obviously a documentation error).
My questions are: What is the point of specifying the character set when doing exports and imports? Could an AL32UTF8 export followed by a WE8MSWIN1252 import cause problems? I assume that the character set used by the export is specified in the collection file, so does the import then convert it to WE8MSWIN1252 (or the character set specified in the import utility)?
    Or, to be more general: What is actually happening with the character sets during MDL export/import?
    /Kjell Gullberg

    Dear ski123,
I think you are not going to lose any data of yours when you migrate the database. You may proceed with the import.
    Please find below documentations;
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14196/install003.htm#sthref81
    For Database Character Set, select from one of the following options:
        *Use the Default—Select this option if you need to support only the language currently used by the operating system for all your database users and your database applications.
        *Use Unicode (AL32UTF8)—Select this option if you need to support multiple languages for your database users and your database applications.
    *Choose from the list of character sets—Select this option if you want the Oracle Database to use a character set other than the default character set used by the operating system.
Choosing a Character Set;
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch2charset.htm#NLSPG002
    AL32UTF8;
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/glossary.htm#sthref2039
    Hope That Helps.
    Ogan

  • Character set conversion on heterogeneous queries?

    Hi,
We have an Oracle 11g (11.1.0.7.0) database running on a Windows 2003 SP1 server. Heterogeneous connectivity has been implemented to an MS SQL Server (2002) database. Whilst the link itself is working correctly, with queries returning expected results, we are however experiencing a problem with character set conversion. For example, the French character é (e-acute) is not being translated correctly and results in an "ORA-29275: partial multibyte character" when queries are made against the table row containing this character.
    The SQL Server database uses the ISO8859-1 character set and our database is using AL32UTF8.
    In the initSID.ora configuration file for the heterogeneous link, we have defined the character set (used by SQL Server) as follows:
    HS_LANGUAGE=american_america.we8iso8859p1
    However this has not resulted in character set translation taking place.
    We have also tried adjusting the "Perform translation for character data" option in the MS ODBC driver. This has not yielded the desired result either.
    What I would like to know is:
    (1) How to correctly configure a heterogeneous link to perform character set translation?
    (2) If (1) above is not possible, then what alternative options are there, to achieve this?
    Thanks,
    William Walker

In general the conversion should work. You need to make sure no character conversion is enabled in the ODBC driver, and for WE8 characters HS_LANGUAGE does not even need to be set in the gateway config file.
Most commonly the corrupted output happens because the application used to view the characters is not Unicode compliant. So to dig into your issue I would suggest:
1. Use SQL Developer as client - SQL Developer is Unicode compliant and available for free from: http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    2. When you now select from your table, are the characters displayed now correctly?
    If not, please continue:
3. Now create in your SQL Server a demo table (if possible):
    create table french_char (col1 integer, col2 varchar(20));
    and insert a demo record:
    insert into french_char values (1,'éèç')
    4. Use SQL Developer and the gateway to select from this demo table:
    select "col1", "col2", dump("col2",16) from "french_char"@<your database link>;
    and provide the output.
Also check out the Oracle registry key belonging to the ORACLE_HOME containing the DG4ODBC installation and provide me the NLS_LANG setting.

  • XML data from BLOB to CLOB - character set conversion

    Hi All,
    I'm trying to solve a problem with a character set conversion in PL/SQL in the following scenario:
    1. source is an XML as a BLOB variable.
    2. target is an XML as a CLOB variable.
    3. the problem I have is the following:
    - database character set is set to UTF-8
    - XML character set could be anything (UTF-8, ISO 8859-1, ISO 8859-2, ASCII, ...)
    - I need to write a procedure which converts the source BLOB content into the target CLOB taking into account the XML encoding and converts it into the DB default character set (UTF8).
I've been able to implement a simple conversion function. However, this function expects a static XML encoding of ISO-8859-1. The main part of the function looks as follows:
buffer := UTL_RAW.cast_to_varchar2(
            UTL_RAW.convert(
              DBMS_LOB.SUBSTR(source_blob_variable, 16000, pos)
              , 'American_America.UTF8'
              , 'American_America.we8iso8859p1'));
Does anyone have an idea how to rewrite the code to handle "any" XML encoding in the source BLOB file? In other words, is there a function in Oracle which converts XML character set names into Oracle character set names (ISO-8859-1 to we8iso8859p1, UTF-8 to UTF8, ...)?
    Thanks a lot for any help.
    Julius

I want to pass a BLOB to some "createXML" procedure and get a proper XMLType in UTF8 character set, properly converted from whatever character set the input is in.
As per the documentation, the generated XML always has the encoding set at the client side depending on NLS_LANG (default UTF-8), regardless of the input encoding, so I don't see a need to parse the PI of the XML:
    C:\>echo %NLS_LANG%
    %NLS_LANG%
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 30 08:54:12 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="UTF-8"?><a>myxml</a>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\>set NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Mi Apr 30 08:55:02 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL-Prozedur erfolgreich abgeschlossen.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <a>myxml</a>
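As for the original question of mapping XML/IANA encoding names to Oracle character set names, UTL_I18N.MAP_CHARSET is designed for that mapping; a minimal sketch (check the exact constants in your release):
SELECT utl_i18n.map_charset('ISO-8859-1', utl_i18n.GENERIC_CONTEXT, utl_i18n.IANA_TO_ORACLE) FROM dual;
-- expected: WE8ISO8859P1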

  • Character set Conversion Buffer Overflow Error

    Hi,
I have got an issue while loading data from a flat file to a staging table, i.e., Character set Conversion Buffer Overflow. Suppose there are 10,000 records in a flat file; after running the control file, only 100+ records are loaded into the staging table. The remaining records are errored out. I think there is no issue with the control file, because when I load data from a different flat file containing the same number of records as the previous one, it loads all the records. What could be the reason, and what is the solution for this issue?
    Can anyone please suggest me how to resolve this issue.

DBMS_OUTPUT is a poor choice for debugging. It has very limited use. And as you've discovered, merely debugging code can now result in new exceptions in the code.
    The proper approach would be to create your own debug procedure (or package). Have your code call this instead of DBMS_OUTPUT.
    In your debug procedure, you can decide what you want to do with that debug data for that specific program in the current environment and circumstances.
    The program that runs could be a DBMS_JOB in which case DBMS_OUTPUT is useless. The program can be called several layers deep from other PL/SQL code.. and you want to know just who is calling your code. Etc.
    Having your own debug procedure allows you to:
    - create an autonomous transaction and log the debug data to a log table
    - write it to a DBMS_PIPE for interactive debugging
    - write it to DBMS_OUTPUT
    - record the PL/SQL call stack to determine who is calling who
    - record the current session's environment (e.g. session_context)
    - record the current session's statistics, opens cursors, current SQL, etc. (courtesy of the V$ views)
    etc. etc.
    In other words, your debug procedure gives you the flexibility to decide on HOW to handle the debugging.
    And when you code goes into production, your debug procedure ships with, containing a simple NULL command.. Which means that at any time the DBA can (when the need arise), add his/her debug methods into it in order to trace a production problem.
    Using DBMS_OUTPUT is a very poor, and often just wrong, choice.
    It is fine for writing a quick test. But when you are developing production code and using DBMS_OUTPUT, you must ask yourself whether you have made the right choice.
    And this is not just about wrapping DBMS_OUTPUT. But also wrapping other system calls like RAISE_APPLICATION_ERROR and so on.
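A minimal sketch of such a wrapper, assuming a hypothetical DEBUG_LOG table and using an autonomous transaction so the log entries survive the caller's rollback:
create table debug_log (ts timestamp, msg varchar2(4000));
create or replace procedure debug_msg (p_msg in varchar2) as
  pragma autonomous_transaction;
begin
  -- logged independently of the caller's transaction
  insert into debug_log (ts, msg) values (systimestamp, substr(p_msg, 1, 4000));
  commit;
end;
/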
