Convert number to character - Unicode conversion

I have a variable defined as
v(4) type n
I need to put this into a char field of length 4 as part of our Unicode conversion.
Is there a function module (FM) that can help?

Hi,
Check this
report ztest.

data: v_char(4) type c,
      v_numc(4) type n value '1234'.

* A plain assignment converts NUMC to CHAR - no function module is needed.
v_char = v_numc.

break-point.

Similar Messages

  • Problem in converting number to character

    Hi All,
In my report there is a number field called <?LINE_TOT_AMOUNT?>. In the next line I want to display that amount in words. I tried using
<?xdofx:to_char(LINE_TOT_AMOUNT)?>
but the output is the same as <?LINE_TOT_AMOUNT?>. Can anyone help in solving this?

Look at TO_CHAR (number).
If LINE_TOT_AMOUNT is a number, then to_char(LINE_TOT_AMOUNT) gives it back as a string;
it does not display that amount in words,
it only formats the value.
If you want to convert a number to words,
please see Re: Conversion of number to word
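    For reference, if the amount can be computed on the database side first, the classic Oracle trick for spelling out a whole number uses the Julian-date format model (a sketch, not BI Publisher template syntax; it works for positive integers up to the Julian date limit):
    -- 'J' interprets the integer as a Julian day; 'JSP' spells it out.
    SELECT TO_CHAR(TO_DATE(1234, 'J'), 'JSP') AS in_words FROM dual;
    -- IN_WORDS: ONE THOUSAND TWO HUNDRED THIRTY-FOUR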

  • Corrupted Number Ranges After Unicode Conversion

    Hello Gurus,
Has anyone had corrupted number ranges after a Unicode conversion?
After the Unicode conversion, are the number ranges no longer valid?
    We are doing a Unicode conversion from Informix to DB6 unicode on AIX.
    Kind Regards
    Lawrence Brown

    Hi Lawrence,
I have never heard from any Unicode conversion customer that number ranges no longer worked after the Unicode conversion. Maybe the DB migration plays a role here?
    Best regards,
    Nils Buerckel
    SAP AG

  • ASCII characters to Character (Unicode Conversion)

    In a non-Unicode system, I see logic where any value from '00' to '1F' is replaced with space in a variable of type X.
    In a Unicode system, how can this be done, since type X is no longer valid?
    I am planning to convert the variable to type C, but if I convert to type C, how do I replace those characters from 00 to 1F in the Unicode system?

    data cr type c length 1.

    CALL METHOD cl_abap_conv_in_ce=>uccp
      EXPORTING
        uccp = '000D'
      RECEIVING
        char = cr.

    See the wiki page "UCCP: converts a unicode code point (hexa representation) into a character": http://wiki.sdn.sap.com/wiki/display/ABAP/CL_ABAP_CONV_IN_CE

  • BI Unicode Conversion impact on Microsoft Office tools

    Hello Friends,
    The BI version is BI 7.0, but we are using both BEx 3.x and BEx 7.0. BI 7.0 is being Unicode converted.
    Questions:
    Does Unicode conversion have any impact on running the new/existing reports in 3.x and 7.x?
    Does Unicode conversion have any impact on running the new/existing WAD reports?
    Does Unicode conversion have any impact on any of the Microsoft Office products used by BI?
    Thanks!

    Since Excel is the starting point for BEx Analyzer, Excel 2007 would need to be tested. In BEx Web, if you have Excel as a save as option for queries, you will need to test that too. It also requires that the .NET Framework 3.0 be on the end user machine.
    See OSS Note 1013201: https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1013201

  • Some character display incorrect after unicode conversion

    Dear Experts,
    Currently we are upgrading our SAP system from 4.6B non-Unicode to ECC 6.0 Unicode.
    In the 4.6B system we had implemented the Simplified Chinese and French language packages. After upgrading to the ECC 6.0 non-Unicode system, we performed an MDMP Unicode conversion. Currently the system is running with the ECC 6.0 Unicode interface with language 1EFD. The SUMG-related tasks have been finished, and the Chinese and French languages check out OK in the ECC 6.0 Unicode system.
    Our issue is: in the past, some Spanish end users typed Spanish characters into some master data fields while logged on to the 4.6B GUI with English as the logon language. These Spanish characters do not display correctly in the current Unicode environment.
    Could you please give us some suggestions on how to fix this issue?
    I also posted this message in the NetWeaver administrator forums, via the link below:
    Spanish display incorrect after unicode conversion

    Hello,
    I doubt that the data shown by you in the link was caused by Spanish users logged on in EN.
    I think this was rather caused by UTF-8 data uploaded into the non-Unicode system.
    In this case, you need to repair the affected texts manually; I do not know of any automatic repair.
    Best regards,
    Nils Buerckel
    SAP AG

  • How do I convert the ASCII character % which is 25h to a hex number. I've tried using the scan value VI but get a zero in the value field.

    How do I convert the ASCII character %, which is 25h, to the hex number 25h? I've tried using the Scan Value VI but I get a zero in the value field.

    You can use String to Byte Array for this.

  • How to convert Raw data to string in Unicode conversion

    Hi All,
    I want to make my report Unicode-compliant. During the conversion we are getting the error: "The key of internal table "IRESBD" contains components of type "X" or "XSTRING". The "READ TABLE IRESBD" statement is not permitted for such tables".
    Regards,

    Thanks for the reply.
    I tried to use the FM but was not able to get any value. Does this FM convert a whole internal table, or is it used to convert one field value?
    My report is Unicode-enabled, and reading the internal table, which contains a field of type RAW (GUID_16), is showing the error.
    Regards,

  • How can I convert an ASCII character to a number?

    where can I find the function?

    A string will be converted to an array of U8 values,
    unless you convert a single character.
    RayR
    Attachments:
    Str2Num.vi (7 KB)
    String2Number.PNG (7 KB)

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix  a disastrous unintentional US7ASCII on 9i to WE8ISO8859P1 on 10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII to AL32UTF8.
    US7ASCII, of course, is a cheerful 7-bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values greater than 127. If you have a web application, users can type (or paste) U's with umlauts, A's with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside down question mark.
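    A minimal illustration of that substitution (a sketch, assuming a WE8ISO8859P1 database, where CHR(174) is the registered-trademark sign but not a valid US7ASCII code point):
    -- CONVERT() treats its third argument as the source character set.
    -- Byte value 174 is illegal in US7ASCII, so Oracle substitutes the
    -- replacement character, 191 -- the upside-down question mark.
    SELECT DUMP(CONVERT(CHR(174), 'WE8ISO8859P1', 'US7ASCII')) FROM dual;
    -- Expected: Typ=1 Len=1: 191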
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. This has been replaced in newer versions by the Database Migration Assistant for Unicode (DMU), which is the recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside note: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures and not bring a new utility into the mix.)
    We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed into 191s as part of the original conversion.
    [ As an aside, we can't use CONVERT() on LOBs -- for them we will need another approach, outlined further below.
    RESTOREDB> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB
    ]
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number, mycolumnname varchar2(50)) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
      COUNT(1) MYCOLUMNNAME                                       MYTABLENAME
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on  RESTOREDB:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE fields on PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes. We make the assumption that, as part of those changes, they may have fixed the bad character on their own.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working with CLOBs in 9i, which has less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTOREDB> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTOREDB> set linesize 120 pagesize 100;
    RESTOREDB>  select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTOREDB, our 9i version, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic; thus I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked. (A sketch of one way it could be made dynamic follows the procedure call below.)
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
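    For what it's worth, the commented-out SELECT inside the procedure could be made dynamic with native dynamic SQL; a sketch using the same variables as the procedure above, untested here:
    -- EXECUTE IMMEDIATE can fetch the single row directly INTO the BLOB
    -- variable; the row lock taken by FOR UPDATE is transaction-scoped,
    -- so it survives the implicit cursor being closed.
    new_sql := 'SELECT '||fix_col||' FROM '||table_name||'_lob'
            || ' WHERE pk1 = :indx FOR UPDATE';
    EXECUTE IMMEDIATE new_sql INTO my_blob USING my_indx_var;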
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see characters > 150. Thus, the method works.
    We can now take this data and export it from RESTORECLONE:
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents rows=y
    and import the data on PRODCLONE:
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000;
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes that the BLOB data is stored initially in WE8ISO8859P1.
    To find correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    Gives "31"
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    Vs
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

  • Character set Conversion (US7ASCII to AL32UTF8) -- ORA-31011 problem

    Hello,
    We've run into some problems as part of our character set conversion from US7ASCII to AL32UTF8. The latest problem is that we have a query that works in US7ASCII, but after converting to AL32UTF8 it no longer works and generates an ORA-31011 error. This is very concerning to us as this error indicates an XML parsing problem and we are doing no XML whatsoever in our DB. We do not have XML columns (nor even CLOBs or BLOBs) nor XML tables and it's not XMLDB.
    For reference, we're running 11.2.0.2.0 over Solaris.
    Has anyone seen this kind of problem before?
    If need be, I'll find a way to post table definitions. However, it's safe to assume that we are only using DATE, VARCHAR2 and NUMBER column types in these tables. All of the tables are local to the DB.
    Thanks

    We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than saying that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
    3) The final step was changing existing tables to use char semantics by changing the table schema for VARCHAR2 columns (see the sketch below)
    Any specific steps I cannot easily answer; I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan & performed the database config changes.
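    As a hypothetical illustration of step 3 (table and column names invented), switching a VARCHAR2 column from byte to character length semantics looks like this:
    -- Redefine the column length in characters instead of bytes, so that
    -- multi-byte AL32UTF8 data still fits the declared 50 characters.
    ALTER TABLE my_table MODIFY (my_col VARCHAR2(50 CHAR));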
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Unicode Conversion

    Hi,
    We are in the initial phase of an upgrade from 4.6 to ECC 6.0.
    I need help with some of the questions below:
    1. What is meant by Unicode conversion?
    2. For an object - for example a report, script, BDC, or module pool object - what needs to be done to convert it to a Unicode object?
    3. After working on the Unicode objects, we need to work on the "spoue" objects. What is meant by that?
    Could you clear up my doubts on the above questions?
    Thanks,
    vinay

    Please go through the following links:
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c5546db3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c55470b3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c55473b3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c55476b3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c5547cb3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c5547fb3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c55482b3dc11d5993800508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/79/c55485b3dc11d5993800508b6b8b11/content.htm
    This link will also be helpful to you:
    Re: Upgrade 4.6 to ECC - What are the responsibilites regarding Unicode influence in Standard programs
    Very good document:
    http://www.doag.org/pub/docs/sig/sap/2004-03/Buhlinger_Maxi_Version.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d37d1ad9-0b01-0010-ed9f-bc3222312dd8
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/589d18d9-0b01-0010-ac8a-8a22852061a2
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f8e316d9-0b01-0010-8e95-829a58c1511a
    You need to use the transaction UCCHECK.
    The report documentation is here
    ABAP Unicode Scan Tool UCCHECK
    You can use transaction UCCHECK to examine a Unicode program set for syntax errors without having to set the program attribute "Unicode checks active" for every individual program. From the list of Unicode syntax errors, you can go directly to the affected programs and remove the errors. It is also possible to automatically create transport requests and set the Unicode program attribute for a program set.
    Some application-specific checks, which draw your attention to program points that are not Unicode-compatible, are also integrated.
    Selection of Objects:
    The program objects can be selected according to object name, object type, author (TADIR), package, and original system. For the Unicode syntax check, only object types for which an independent syntax check can be carried out are appropriate. The following object types are possibilities:
    PROG Report
    CLAS Class
    FUGR Function groups
    FUGX Function group (with customer include, customer area)
    FUGS Function group (with customer include, SAP area)
    LDBA Logical Database
    CNTX Context
    TYPE Type pool
    INTF Interface
    Only Examine Programs with Non-Activated Unicode Flag
    By default, the system only displays program objects that have not yet set the Unicode attribute. If you want to use UCCHECK to process program objects that have already set the attribute, you can deactivate this option.
    Only Objects with TADIR Entry
    By default, the system only displays program objects with a TADIR entry. If you want to examine programs that don't have a TADIR entry, for example locally generated programs without a package, you can deactivate this option.
    Exclude Packages $*
    By default, the system does not display program objects that are in a local, non-transportable package. If you want to examine programs that are in such a package, you can deactivate this option.
    Display Modified SAP Programs Also
    By default, SAP programs are not checked in customer systems. If you also want to check SAP programs that were modified in a customer system (see transaction SE95), you can activate this option.
    Maximum Number of Programs:
    To avoid timeouts or unexpectedly long waiting times, the maximum number of program objects is preset to 50. If you want to examine more objects, you must increase the maximum number or run a SAMT scan (general program set processing). The latter also has the advantage that the data is stored persistently. Proceed as follows:
    Call transaction SAMT
    Create task with program RSUNISCAN_FINAL, subroutine SAMT_SEARCH
    For further information refer to documentation for transaction SAMT.
    Displaying Points that Cannot Be Analyzed Statically
    If you choose this option, you get an overview of the program points, where a static check for Unicode syntax errors is not possible. This can be the case if, for example, parameters or field symbols are not typed or you are accessing a field or structure with variable length/offset. At these points the system only tests at runtime whether the code is sufficient for the stricter Unicode tests. If possible, you should assign types to the variables used, otherwise you must check runtime behavior after the Unicode attribute has been set.
    To be able to differentiate between your own and foreign code (for example when using standard includes or generated includes), there is a selection option for the includes to be displayed. By default, the system excludes the standard includes of the view maintenance LSVIM* from the display, because they cause a large number of messages that are not relevant for the Unicode conversion. It is recommended that you also exclude the generated function group-specific includes of the view maintenance (usually L<function group name>F00 and L<function group name>I00) from the display.
    Similarly to the process in the extended syntax check, you can hide the warning using the pseudo comment ("#EC *).
    Application-Specific Checks
    These checks indicate program points that represent a public interface but are not Unicode-compatible. Under Unicode, the corresponding interfaces change according to the referenced documentation and must be adapted appropriately.
    View Maintenance
    Parts of the view maintenance generated in older releases are not Unicode-compatible. The relevant parts can be regenerated with a service report.
    UPLOAD/DOWNLOAD
    The function modules UPLOAD, DOWNLOAD or WS_UPLOAD and WS_DOWNLOAD are obsolete and cannot run under Unicode. Refer to the documentation for these modules to find out which routines serve as replacements.
    Please reward points if it is useful.

  • Wrong text in ADRC and LFA1 table after Unicode conversion

    Hello,
    We are doing an MDMP to Unicode conversion on a SAP 4.7 system.
    In the post-Unicode activities, we corrected the garbage characters in table ADRC by converting them to the target language ZH using the SUMG transaction.
    The condition used for adding table ADRC in SUMG is LANGU = '1' and LANGU_CREA = 'EN'.
    But after converting and saving the data, most of the records have incorrect characters in words, and the text fields for search term/company name/city in table LFA1 are also wrong in the corresponding fields in table ADRC.
    Thanks
    Harshavardhan

    Hi Harshavardhan,
    Assumption: You have Chinese entries in table ADRC with ADRC-LANGU_CREA = EN
    Now you have to distinguish two problems:
    1) Chinese texts are stored in ADRC fields, which are NOT used for search help texts (most of the fields - e.g. ADRC-NAME1):
    These texts can be handled in two ways:
    a) Change LANGU_CREA = EN to LANGU_CREA = ZH for these entries (e.g. where ADRC-COUNTRY = CN) before the Unicode conversion (by the way, I would not use ADRC-LANGU, since this is the communication language, which does not necessarily comply with the code page of the entry).
    This needs to be done in a similar way as shown in the z-reports described in SAP Note 634839 (although this specific fool-the-system case is not covered there - you have to adapt those examples).
    b) Use SUMG to repair the entries using language ZH (as you already did).
    My favourite would be a), as this can be done before the conversion (note that SUMG needs to be executed during downtime!).
    2) Chinese texts are stored in ADRC fields, which are used for search help texts (e.g. ADRC-MC_NAME1):
    a) If these entries were inserted / changed with logon language ZH, then point 1) applies.
    b) If users logged on with logon language EN and created the entries, then most of these texts were already destroyed in the non-Unicode system. In that case no automatic repair is possible (at least point 1 does not work). Here my statement from before is valid:
    Those entries need to be generated again - e.g. if you do a dummy change on the name.
    To my knowledge, this is a manual effort (either before or after the conversion) - but maybe it can be automated ...
    Best regards,
    Nils Buerckel
    SAP AG

  • SAP Data File size considerably reduced after Unicode Conversion

    Hello Experts
    I have just performed a CUUC (upgrade along with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The data size that I had earlier was close to 463 GB (MSSQL MDF, LDF files). After the data export to be converted to Unicode, the size was 45 GB (10% of the actual data size; I feel this is normal after a heterogeneous system copy), but after the import the DATA file size is only 247 GB. Is this normal, or have I lost some data? For example, I checked tables like MSEG and the number of entries has dropped from 15,678,790 to 15,290,545.
    Could you kindly let me know if there is a way to check, from a Basis perspective, that I have not lost any data? I have followed all the procedures per SAP standards.
    Waiting for your quick reply.
    Best Regards
    Pritish

    Hi Nicholas,
    That data is compressed during the new R3load procedure is understood, but why the number of table entries has changed (reduced in some cases, increased in others) is still a question to me. For example:
    Table     Source       Target
    STPO     415,725      412,150
    STKO     126,710      126,141
    PLAF      74,671       78,336
    MDKP     193,487      192,747
    MDPB      55,329       63,557
    Any suggestions or ideas?
    Best Regards
    Pritish

  • UTF8 character set conversion for chinese Language

    Hi friends,
    I would like some basic explanation of the UTF8 feature: how does it help when converting data in the Chinese language?
    I would also like to know which characters UTF8 will not support when converting from the Chinese language.
    Thanks & Regards
    Ramya Nomula

    Not exactly sure what you are looking for, but on MetaLink, there are numerous detailed papers on NLS character sets, conversions, etc.
    Bottom line is that common Chinese characters (including traditional ones) take 3 bytes per character in UTF-8 encodings such as UTF8 and AL32UTF8; the rarer ideographs beyond U+FFFF (supplementary characters) take 4 bytes, and those are exactly where the two Oracle character sets differ, as explained below.
    Do a google search on "utf8 al32utf8 difference", and you will get some good explanations.
    e.g., http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/
    Recently, one of our clients had a question on the differences between these two character sets since they were in the process of making their application global. In an upcoming whitepaper, we will discuss in detail what it takes (from a RDBMS perspective) to address localization and globalization issues. As far as these two character sets go in Oracle, the only difference between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in AL32UTF8 character set.
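    A quick way to see this difference on your own system (a sketch: UNISTR builds the supplementary character U+1D11E, the musical G clef, from its UTF-16 surrogate pair, and TO_CHAR converts it into the database character set, so the byte count in the DUMP output depends on which character set your database uses):
    -- In an AL32UTF8 database this should dump as 4 bytes (f0 9d 84 9e);
    -- in a legacy UTF8 database the same character occupies 6 bytes.
    SELECT DUMP(TO_CHAR(UNISTR('\D834\DD1E')), 16) AS encoded FROM dual;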
    You may also consider posting your question on the Globalization Suport forum which pertains more to these types of questions.
    Globalization Support
