Switching Character Sets in JDeveloper

Hi, we have developed a number of portlets on JDev 10.1.2 but used the default encoding of cp1252. Looking back at my course notes, I saw that the course instructor had recommended setting the default project encoding to UTF-8. What are the implications of using the default encoding? We have coded our portlets to use resource bundles so that we can make them multi-lingual at some point in the future. Is UTF-8 essential for this, and if so, should we take the hit now and re-encode (on trying it, I got errors about malformed input values, mostly hyphens and apostrophes), or will it be a painless process if we do it later on?
Thanks for any feedback.
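For what it's worth, those malformed-input errors usually mean the source files still contain cp1252 bytes (curly quotes, en dashes and the like) that are invalid when read as UTF-8, so re-encoding means rewriting the file bytes, not just flipping the project setting. A minimal sketch of a one-off transcode, assuming Java 7+ NIO and a placeholder file name:
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReencodeToUtf8 {
    public static void main(String[] args) throws IOException {
        // Read the bytes as windows-1252 (what the files really are), then
        // write them back out as UTF-8 before switching the project encoding.
        Path src = Paths.get("MyPortlet.java"); // placeholder file name
        String text = new String(Files.readAllBytes(src), Charset.forName("windows-1252"));
        Files.write(src, text.getBytes(Charset.forName("UTF-8")));
    }
}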

Hello,
Specifying the character set for a WebLogic webservice (see:
http://edocs.bea.com/wls/docs81/webserv/i18n.html#1069629) is one of the
many enhancements made since the 6.1 release. If possible, the best
solution for your webservice development would be to upgrade.
Bruce
"özkan Demir" wrote:
>
Hi all;
I am developing RPC style webservices on weblogic server 6.1 with service pack
2.
I have a problem about character sets. I have to turkish language so that the
character sets must be ISO-8859-9. But the soap messages are default UTF-8 so
that turkish characters becomes undetermined.
Is there a way that I can send soap messages in ISO-8859-9 character set or what
do I have to solve the problem of turkish characters between the client and the
server applications with webservices...Please Help!
Thanks
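If upgrading is not immediately possible, one generic workaround is to build and post the SOAP request yourself so that the Content-Type header and the XML declaration both say ISO-8859-9. This is a hedged sketch only -- the endpoint URL, SOAPAction and envelope are placeholders, and it bypasses the WebLogic 6.1 client stack rather than configuring it:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TurkishSoapCall {
    public static void main(String[] args) throws Exception {
        String envelope =
            "<?xml version=\"1.0\" encoding=\"ISO-8859-9\"?>" +
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body><echo>\u011f\u00fc\u015f\u0131\u00f6\u00e7</echo></soap:Body>" +
            "</soap:Envelope>";
        byte[] body = envelope.getBytes("ISO-8859-9"); // explicit bytes, not the default UTF-8

        HttpURLConnection con = (HttpURLConnection)
            new URL("http://localhost:7001/myws/MyService").openConnection(); // placeholder URL
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=ISO-8859-9");
        con.setRequestProperty("SOAPAction", "\"\""); // placeholder action
        OutputStream out = con.getOutputStream();
        out.write(body);
        out.close();
        System.out.println("HTTP " + con.getResponseCode());
    }
}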

Similar Messages

  • SQLException Character Set Not Supported

    I'm at wit's end.
    I have the following versions:
    Win2k
    JDeveloper 9i R2
    Oracle9iAS 9.0.3.0.0
    Oracle 9i client 9.0.1.1
    Sun J2SDK 1.3.1
    Oracle 9i Enterprise Edition 9.0.1.3.0
    I'm trying to get a JDBC connection from a DataSource in a servlet. The result is the following error: java.sql.SQLException: Character Set Not Supported !!: DBConversion.
    I get the same problem just trying a simple JDBC connection in JDeveloper. I can fix this problem by forcing JDeveloper to use classes12.jar from the Oracle client directory BEFORE oc4j.jar in the JDeveloper libraries.
    Unfortunately, I can't get the external 9iAS server to make this switch. It's (naturally) using its copy of oc4j.jar.
    Any ideas on:
    1) How to properly avoid the Character Set error.
    2) How to force 9i AS to use different libraries before it looks in oc4j.
    Any help is greatly appreciated

    I have tried the embedded server in JDeveloper with the same results. Currently, I am using the external one.
    I believe the data source is valid. I can duplicate this problem by just using a plain JDBC connection from DriverManager. Also, I switched to the 1.0.2 version of oc4j, replaced the Oracle stuff with 8i versions and was able to connect using the same data-sources.xml.
    This problem also goes away when using the thin driver but we're reluctant to use the thin driver for performance reasons.
    Thanks for the response,
    TG

  • Character set Conversion (US7ASCII to AL32UTF8) -- ORA-31011 problem

    Hello,
    We've run into some problems as part of our character set conversion from US7ASCII to AL32UTF8. The latest problem is that we have a query that works in US7ASCII, but after converting to AL32UTF8 it no longer works and generates an ORA-31011 error. This is very concerning to us as this error indicates an XML parsing problem and we are doing no XML whatsoever in our DB. We do not have XML columns (nor even CLOBs or BLOBs) nor XML tables and it's not XMLDB.
    For reference, we're running 11.2.0.2.0 over Solaris.
    Has anyone seen this kind of problem before?
    If need be, I'll find a way to post table definitions. However, it's safe to assume that we are only using DATE, VARCHAR2 and NUMBER column types in these tables. All of the tables are local to the DB.
    Thanks

    We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than to say that we did not use the Oracle conversion utility (meaning the GUI Java tool, not csscan).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
    3) Final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns
    Any specific steps I cannot easily answer, I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix  a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
    US7ASCII, of course, is a cheerful 7-bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values > 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside-down question mark.
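    You can reproduce this replacement behavior outside the database with a short Java analogy (this is not Oracle's conversion code, purely an illustration of lossy re-encoding):
    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;
    import java.nio.charset.CodingErrorAction;

    public class LossyConversionDemo {
        public static void main(String[] args) throws Exception {
            // Encode a string containing CHR(174) (the (R) sign) into 7-bit ASCII.
            CharsetEncoder enc = Charset.forName("US-ASCII").newEncoder()
                    .onUnmappableCharacter(CodingErrorAction.REPLACE)
                    .replaceWith(new byte[] { (byte) '?' });
            ByteBuffer out = enc.encode(CharBuffer.wrap("NCLEX-PN\u00AE Exam"));
            System.out.println(new String(out.array(), 0, out.limit(), "US-ASCII"));
            // Prints "NCLEX-PN? Exam" -- the 174 is gone for good, much like
            // Oracle swapping it for 191 during US7ASCII -> WE8ISO8859P1.
        }
    }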
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions with the Database Migration Assistant for Unicode (DMU), which is the new recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
    We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to hold data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed into 191s as part of the original conversion.
    [As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below:
    RESTORECLONE> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB ]
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
      (table_name varchar2,
       fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
    begin
      orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' != CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
      -- Uncomment if debugging:
      -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number,      mycolumnname varchar2(50) ) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
      COUNT(1) MYCOLUMNNAME                                       MYTABLENAME
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on RESTORECLONE:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from my_forums;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on PRODCLONE and RESTORECLONE are the same (barring character set issues). It is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes; we assume that in making such a change they may have fixed the bad character themselves.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are doing it in 9i, which has less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTORECLONE> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTORECLONE> set linesize 120 pagesize 100;
    RESTORECLONE>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTORECLONE, our 9i instance, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
      loop
        dbms_lob.read ( p_clob, l_amt, l_off, l_str );
        l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str ) );
        dbms_lob.write( p_blob, l_amtWrite, l_offWrite, utl_raw.cast_to_raw( l_str ) );
        l_offWrite := l_offWrite + l_amtWrite;
        l_off := l_off + l_amt;
        l_amt := 4096;
      end loop;
    exception
      when no_data_found then
        -- dbms_lob.read raises NO_DATA_FOUND once the offset passes the end
        -- of the CLOB; that is this loop's normal exit.
        NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic. Thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
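    -- (Untested suggestion: native dynamic SQL keeps INTO outside the statement
    --  text, e.g. execute immediate 'select '||fix_col||' from '||table_name||
    --  '_lob where pk1='||my_indx_var||' for update' INTO my_blob;
    --  embedding "INTO my_blob" inside the quoted string is what breaks it.)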
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see characters > 150. Thus, the method works.
    We can now take this data, export it from RESTORECLONE
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on PRODCLONE
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.


  • DIR7 Character Set Problem / Foreign Language

    Hi there,
    I am working on an app built using Director 7 that until now has used the standard English (Latin-1) character set. However, I am required to deliver a new version including some elements displayed in a second language, in this case Welsh, which uses characters outside of the normal set. I believe those required are included in Latin Extended-A, otherwise in Unicode as a whole, obviously.
    I am having specific problems with two characters that appear to be missing from Latin-1, which are: ŵ and ŷ (w-circumflex, and y-circumflex [I think!]).
    In a standard text box I create using Director, I am unable either to paste either character in, or enter it using its ALT+combination, let alone save it to the associated database.
    I have read that Dir 11 is the first version with full Unicode support - which surprises me - however I would assume that someone would likely have hit this, or a similar issue, before the release of this version, and was wondering if there is a possible solution without upgrading.
    My thinking is either a declaration that allows a change of charset, as I might do in XHTML for example, or deployment of an Xtra that allows me to use a different character set.
    If anyone could shed some light on the matter, it would be very helpful! Thanks in advance!
    Rich.

    Yes, this was always a problem for years. Back when I was **** this, we had some projects that needed text displayed in various languages. Each language presented its own challenges. Things like Greek weren't too bad, because the Symbol font works for most Greek text. (Only problem was the 's' version of Sigma, which had to switch back to Times New Roman.) Various eastern European languages (Polish, Czech, Hungarian, etc.) posed a problem with some of the accents that were not available in standard font sets. We were forced to live without some of the more exotic accents, but were told that it would still be readable without them, if not exactly correct. This would probably be the closest to your situation, from what little I know about Welsh.
    It could be worse, though. Hebrew and Arabic were challenging as they are written right-to-left, and thus had to have code written to input them backwards. Russian was also tough, as the Cyrillic alphabet has more characters than the others, but I was able to find a font to fake it. (It replaced some of the lesser-used standard characters in order to fill in all the letters, which unfortunately meant that in the rare cases where those characters *were* needed, we had to improvise.) The hardest by far were any east Asian languages. In that case, I gave up on trying to display any of the text in text form, and just converted it all to bitmaps. Without Unicode, trying to display Mandarin or Japanese or Korean correctly as text is pretty much impossible.

  • Problems with LPX-00245 (character set problem?)

    Hi all,
    I've got a problem with ORA-19202 and LPX-00245 (extra data after end of document) when querying my XMLType table. The table contains one large XML document. This XML document is valid; I've checked it against the corresponding XSD (using JDeveloper and also Notepad++, no validation errors).
    I guess it has something to do with the encoding of the document. The original encoding is ISO-8859-1 (<?xml version="1.0" encoding="ISO-8859-1"?>). When I load the document into the database it is automatically changed to UTF-8 (<?xml version="1.0" encoding="UTF-8"?>), maybe because the character set of my database is AL32UTF8.
    I use the following statement to store my XML:
          insert into my_table
          values( my_seq_spp.nextval,
                  r_get_files.file_name,
                  xmltype(
                    bfilename(p_directory, r_get_files.file_name) -- p_directory is the name of an Oracle directory
                  , nls_charset_id('WE8ISO8859P1')
                  ));
    Nevertheless the retrieved charset id 31 is ignored. Also if I use csid = 0, it doesn't work...
    Any idea how to enforce ISO-8859-1 instead of UTF-8 as the character set?
    Best regards
    Matthias

    Hi Marco,
    I don't think it has anything to do with encoding (client-side or not).
    I'd be more inclined to say it's related to XML fragment manipulation.
    @Matthias :
    Does this work better :
    select m.version
         , sp.Betriebsstelle
         , spa.Betriebsstellenfahrwege
    from imp_spurplan t
       , xmltable('/XmlIssDaten'
           passing t.xml_document
           COLUMNS
             Version                  varchar2(6) path 'Version/Name'
           , Spurplanbetriebsstellen  xmltype     path 'Spurplanbetriebsstellen'
         ) m
       , xmltable('/Spurplanbetriebsstellen/Spurplanbetriebsstelle'
           passing m.Spurplanbetriebsstellen
           COLUMNS
             Betriebsstellenfahrwege_xml xmltype      path 'Betriebsstellenfahrwege'
           , Betriebsstelle              varchar2(6)  path 'Betriebsstelle'
         ) sp
       , xmltable('/Betriebsstellenfahrwege'
           passing sp.Betriebsstellenfahrwege_xml
           COLUMNS
             Betriebsstellenfahrwege  xmltype path '.'
         ) spa
    where sp.Betriebsstelle = 'NWH'

  • Reading in Latin Extended-A character set from a text file

    Hello all,
    I am writing a small program that reads in a text file containing special characters (beyond the ASCII char set) and converts it into "regular" characters. For example, I would read in a u-accent and replace it with a u.
    Now I realize that Unicode support is built into Java from ground up but it goes only so far, you actually have to have the relevant character set to read it. My code is as follows:
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;

    InputStreamReader inStreamReader = new InputStreamReader(new FileInputStream("input.txt"), "ISO-8859-1");
    BufferedReader bufferedReader = new BufferedReader(inStreamReader);
    String line = null;
    StringBuffer buff = new StringBuffer();
    while ((line = bufferedReader.readLine()) != null) {
        char[] charArray = line.toCharArray();
        for (int i = 0; i < charArray.length; i++) {
            int x = (int) charArray[i]; // was (int)charArray -- the array must be indexed
            switch (x) {
                case 224: // this is agrave .. we need to replace it with a
                    buff.append('a');
                    break;
                case 230: // this is aelig .. we need to replace it with ae
                    buff.append("ae");
                    break;
                ///////// and so on
            }
        }
    }
    Since I am reading in as ISO-8859-1, this works up to unicode 255. For the rest of the characters, apparently I need a Latin Extended-A and Latin Extended-B character set. How can I get that installed on my Windows OS machine? I am using jdk 1.4.1 on Windows XP. Any help is appreciated.
    Thanks,
    -vk4t

    vkat wrote:
    > Since I am reading in as ISO-8859-1, this works up to Unicode 255. For the rest of the characters, apparently I need a Latin Extended-A and Latin Extended-B character set. How can I get that installed on my Windows OS machine? I am using jdk 1.4.1 on Windows XP. Any help is appreciated.
    If your file has characters outside of 8859-1's range (0 - 255), then it isn't ISO-8859-1 encoded. You need to know what encoding was used to store the file. It sounds like it actually may be Unicode text, in which case you need to know which encoding (UTF-8, UTF-16, etc.) was used.
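    A hedged aside for readers on newer JDKs: java.text.Normalizer (added in Java 6, so not an option on the poster's JDK 1.4.1) can strip most accents in one pass instead of a hand-written switch. Note that ligatures such as æ still need special-casing:
    import java.text.Normalizer;

    public class Deaccent {
        public static String deaccent(String s) {
            // Decompose "à" into "a" + combining grave accent, then drop the marks.
            String d = Normalizer.normalize(s, Normalizer.Form.NFD);
            return d.replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
        }
        public static void main(String[] args) {
            System.out.println(deaccent("\u00e0\u00e9\u00ee\u00f5\u00fc")); // prints aeiou
        }
    }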

  • Polish character sets

    Hi reader,
    do you have a clue how to set the character set, or how to switch between character sets? e.g. from cp1252 to cp1250 and vice versa.
    I'd like to display Polish characters within a Swing GUI and do not know how to switch...
    internationalization works fine (based upon Messages_iso.properties) but the character set does not switch to something fitting.
    thx
    Message was edited by:
    digit

    Java's internal character set is UTF-16; there should be no need for "switching". Property files, however, are restricted to ISO-8859-1 and may use Unicode escape sequences (\uxxxx). Maybe you should check the contents of your property file.
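    For example (a minimal sketch; the file name and key are assumptions, not from the post), a Polish label can live in the properties file as escape sequences and come back as proper UTF-16 characters at runtime:
    import java.io.FileInputStream;
    import java.util.Properties;

    public class PropsDemo {
        public static void main(String[] args) throws Exception {
            // Messages_pl.properties is ISO-8859-1 with Unicode escapes, e.g.:
            //   greeting=Cze\u015b\u0107
            Properties p = new Properties();
            FileInputStream in = new FileInputStream("Messages_pl.properties");
            p.load(in);
            in.close();
            System.out.println(p.getProperty("greeting")); // "Cześć", given a capable font
        }
    }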

  • MySQL5 and Character Sets

    Hi Everyone.
    We are evaluating switching from MySQL 4.0.x (native support via CF) to MySQL 5.0.x (support via JDBC Connector/J) and we are having some character set issues on our evaluation server.
    When we had it configured with MySQL 4.0.x using the built-in MySQL driver, we always used the connection string to use the UTF-8 character set:
    useUnicode=true&characterEncoding=utf-8
    We have tried using this with the JDBC driver but it doesn't appear to have any effect; all the special characters are coming out as mangled multi-character strings, which is the same as we see if we connect to the server from the command prompt using the default "Latin1" character set. If we connect from the command prompt using UTF-8 everything looks OK, so I'm guessing the connection string has changed syntax. I've checked the Connector/J documentation and it appears the connection string should now be:
    characterEncoding=UTF-8
    However, this didn't seem to make any difference.
    Any ideas?

    try:
    1) add the following to the end of the JDBC URL in the CF Admin DSN config screen for your db:
    ?useUnicode=true&characterEncoding=utf8&characterSetResults=UTF-8
    (note: NOT in the "connection string" box, but at the end of the JDBC URL!)
    2) in your Application.cfm file add the following lines right after the <cfapplication> tag:
    <cfscript>
    SetEncoding("form","utf-8");
    SetEncoding("url","utf-8");
    </cfscript>
    <cfcontent type="text/html; charset=utf-8">
    3) on every .cfm page in your application add the line:
    <cfprocessingdirective pageencoding="utf-8">
    as the first line of code
    All three, or a combination of two of the above, usually solve the problems with displaying UTF-8/Unicode-encoded text from the db. Which combination works depends on your db setup...
    Azadi Saryev
    Sabai-dee.com
    Vientiane, Laos
    http://www.sabai-dee.com
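    For comparison, the same connection properties applied from plain JDBC outside ColdFusion -- a hedged sketch with placeholder host, database and credentials; useUnicode and characterEncoding are documented Connector/J URL properties:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Utf8MySqlConnect {
        public static void main(String[] args) throws Exception {
            // Everything in the URL besides the two character-set properties
            // is a placeholder.
            String url = "jdbc:mysql://localhost:3306/mydb"
                       + "?useUnicode=true&characterEncoding=UTF-8";
            Connection con = DriverManager.getConnection(url, "user", "pass");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select 'utf-8 test' as t");
            while (rs.next()) {
                System.out.println(rs.getString("t"));
            }
            con.close();
        }
    }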

  • Non supported character set: ...

    Hi,
    I'm sure I have a well known problem. When trying to post a new row to my DB (Oracle 9i Rel. 2) a java.sql.SQLException is thrown with the message Non supported character set: oracle-character-set-178
    What I did to fix this error is:
    - checked if nls_charset12.jar is in the classpath (it is)
    - tried thin instead of oci (the error is the same)
    - changed my NLS_LANG to ....UTF8 or ....WE8ISO8859P1 (didn't help either)
    Does anybody know something else what I can do?
    Thank you.
    Axel

    I found the solution myself.
    I had to use the classes12.jar (etc.) shipped with JDeveloper. I had replaced these at some point because OCI access didn't work with those classes. But now it seems to work.

  • Displaying multiple asian character sets?

    Hi all,
    Just wondering, is there any way to display multiple Asian character sets in a Java application?
    Right now I can set the locale to say Chinese with the command line property:
    -Duser.language=zh
    I could do the same with Japanese or Korean. But I was wondering, is there any way to display all these character sets in the same program? Would I have to set the JTextFields individually to the relevant fonts so I can see the characters? E.g. if the user wants to input Chinese, all JTextFields are set to SimSun. Same for other character sets?
    thanks,
    J
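    One way to sketch the per-field font idea from the question -- hedged: the physical font names below are Windows-specific assumptions, and whether a glyph renders depends on the fonts actually installed. Java strings are UTF-16 throughout, so Chinese, Japanese and Korean text can coexist in one program; only glyph coverage varies, and a logical font such as Font.DIALOG may cover all three by mapping to platform fonts:
    import java.awt.Font;
    import java.awt.GridLayout;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class MixedScriptFields {
        public static void main(String[] args) {
            JFrame f = new JFrame("CJK demo");
            f.setLayout(new GridLayout(3, 1));
            JTextField zh = new JTextField("\u4f60\u597d");                   // Chinese
            zh.setFont(new Font("SimSun", Font.PLAIN, 16));                   // assumed installed
            JTextField ja = new JTextField("\u3053\u3093\u306b\u3061\u306f"); // Japanese
            ja.setFont(new Font("MS Mincho", Font.PLAIN, 16));                // assumed installed
            JTextField ko = new JTextField("\uc548\ub155");                   // Korean
            ko.setFont(new Font(Font.DIALOG, Font.PLAIN, 16));                // logical-font fallback
            f.add(zh); f.add(ja); f.add(ko);
            f.pack();
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        }
    }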

    Hi Justin,
    As for myself, I used the Robot class on Win XP to switch between input languages in a form (French/Japanese). That works well but requires that you manually map each language to a key combination a Robot can emulate.
    I hope that could be useful,
    Best regards,
    Lionel Badiou
    CodeFutures -
    Java Code Generation
    http://www.codefutures.com

  • Foreign character set problem

    Hi Sergiusz, it looks like you are a character set guru. Maybe you would know how to solve my problem? I've got a working application under Oracle XE, Apache and the embedded listener. I would like to switch to the new APEX listener with Tomcat, but there is a problem with a foreign character set. Existing pages with such characters are displayed correctly, but if I type them into an input field they are not showing up correctly on the next page. This happens without even saving information in the database. I type text into one field which is the source of an item on another page. These are the character settings in my database.
    SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    SQL> select value from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';
    VALUE
    AL16UTF16
    Thanks,
    Art

    Hi Sergiusz, thank you for your help. After setting a URIEncoding to UTF-8 and some further research I was able to fix my problem. Here is the entire solution in case someone else needs it.
    1.) Change $CATALINA_HOME/conf/server.xml and add
    URIEncoding=UTF-8
    <Connector port="8090" protocol="HTTP/1.1"
    connectionTimeout="20000"
    redirectPort="8443" URIEncoding="UTF-8"/>
    2.) Copy $CATALINA_HOME/webapps/examples/WEB-INF/classes/filters/SetCharacterEncodingFilter.class => $CATALINA_HOME/webapps/apex/WEB-INF/classes/filters
    3.) Add the following into $CATALINA_HOME/webapps/apex/WEB-INF/web.xml file after the last </servlet-mapping> tag.
    <filter>
    <filter-name>Set Character Encoding</filter-name>
    <filter-class>filters.SetCharacterEncodingFilter</filter-class>
    <init-param>
    <param-name>encoding</param-name>
    <param-value>UTF8</param-value>
    </init-param>
    </filter>
    <filter-mapping>
    <filter-name>Set Character Encoding</filter-name>
    <url-pattern>/*</url-pattern>
    </filter-mapping>
    Thanks,
    Art

  • Importing from a different character set

    Oracle 8.1.7 / Windows NT
    I'm trying to import a dump file which was created with character set WE8ISO8859P9. My database uses character set UTF8. Some of the records can't be inserted because of error "ORA-1401: Value too large for column". Is this because of the different character sets? If I switch my session to WE8ISO8859P9, imp says "character set conversion from x to y not supported."
    How can I get these last records inserted? Here's an excerpt from the log:
    Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.0.0 - Production
    Export file was created by EXPORT:V08.00.05 via conventional path
    Warning: the objects were exported by NOC_ADMIN, not by you.
    Import performed with character set WE8ISO8859P9 and NCHAR character set UTF8
    Import server uses character set UTF8 (possible character set conversion)
    Export server uses NCHAR character set WE8ISO8859P9 (possible character set conversion)
    . Importing NOC_ADMIN's objects into NOC_ADMIN
    . . Importing table "ACCESSROUTERIFS_" 782 rows imported
    . . Importing table "ITEM_"
    IMP-00019: row rejected due to ORACLE error 1401
    IMP-00003: ORACLE error 1401 encountered
    ORA-01401: inserted value too large for column
    Column 1 33886
    Column 2
    Column 3
    Column 4 1323
    Column 5
    Column 6 11
    Column 7 18600
    Column 8 18600
    Column 9 20-NOV-2000:00:00:00
    Column 10 processing
    Column 11 inactive
    Column 12
    Column 13
    Column 14 35682.0
    Column 15
    Column 16
    Column 17
    Column 18 05.12.00: KD weiss noch nix neues, er wird uns inf...
    Column 19
    Column 20 kschmid
    Column 21 09-FEB-2001:15:50:21
    Column 22
    Column 23 12
    Column 24
    Column 25 06-NOV-2000:00:00:00
    null

    Please try Oracle RDBMS support. This issue has to do with Oracle Import.
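    For what it's worth, ORA-1401 on import into a UTF8 database is typically byte-length expansion rather than a conversion failure: characters that occupy one byte under WE8ISO8859P9 take two or three bytes in UTF8, so values near a column's byte limit no longer fit. A quick Java illustration (a sketch; assumes both charsets are available in the JVM):
    public class ByteExpansion {
        public static void main(String[] args) throws Exception {
            String s = "\u011f\u00fc\u015f\u0131\u00f6\u00e7"; // ğüşıöç
            System.out.println(s.getBytes("ISO-8859-9").length); // 6 bytes
            System.out.println(s.getBytes("UTF-8").length);      // 12 bytes
        }
    }
    Widening the affected VARCHAR2 columns before the import is the usual way out.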

  • Find/Replace Extended Character Set characters in filenames in one pipeline

    Hello all,
    I have to work with some very bored people. Instead of putting a dash (hex 2d) into a filename, they opt for something from this set of extended characters, which makes my regular expressions fail. Is there a way I can efficiently find & replace anything outside the standard character set in one pipeline, without finding and replacing a character at a time?
    So, I'd like something like:
    get-childitem * | where-object $_.name -match '\x99' | rename-item -newname { $_.name -replace '\x99','='}
    but covering hex 80 to hex FF, rather than a for-each.
    Thanks.

    Answer would depend on the way you want to replace... Easier if you want to replace any char in the set with a selected char:
    $Name = -join (180..190 | % { [char]$_ })
    New-Item -ItemType File -Name $Name
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            '-'
        )
    } -WhatIf
    But if you want it more complicated, you may do that too. E.g. defining a hashtable that can be used to replace individual elements:
    $Replacer = @{}
    foreach ($Char in (180..190 | % { [char]$_ })) {
        $Replacer.Add(
            [string]$Char,
            (echo _, -, =, . | Get-Random)
        )
    }
    $Replacer
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            { $Replacer[$args[0].Value] }
        )
    } -WhatIf
    Using this syntax makes it possible to include some logic in the replace. E.g. you could easily use switch to decide what to do with a given string:
    Get-ChildItem * | Rename-Item -NewName {
        [regex]::Replace(
            $_.Name,
            '[\xB4-\xBE]',
            {
                switch ($args[0].Value) {
                    'º' { "0" }
                    'µ' { "u" }
                    '¹' { "1" }
                    '¸' { "," }
                    Default { "_" }
                }
            }
        )
    } -WhatIf

  • Non supported character set: oracle-character-set-46??? - pls help!

    Hello,
    I'm running the Hello World example. Could you please help me resolve the "Non supported character set: oracle-character-set-46" error I got?!
    This seems to be an NLS character set problem, but somehow I can't figure out what and where should be changed to solve the issue :( .
    thanks,

    Hello all,
    thanks for nothing!
    Actually I resolved the issue. But I still don't understand the problem, or why the solution worked.
    Here is the solution; I hope it will be useful to somebody:
    1) open ... jdevbin\jdev\lib\ext\jrad\config\jrad.properties
    2) and comment out both rows:
    #JRAD.APPS_JDBC_LIB14={JRAD.APPS_LIBRT_DIR}/ojdbc14.jar;{JRAD.APPS_LIBRT_DIR}/nls_charset12.zip
    #JRAD.APPS_JDBC_LIB13={JRAD.APPS_LIBRT_DIR}/classes12.jar;{JRAD.APPS_LIBRT_DIR}/nls_charset12.zip
    3) closed JDeveloper and opened it again
    brgds.
