Character set trouble

I defined the parameter NLS_LANG=american.america.zhs16cgb231280 in the .profile on the server, and I defined NLS_LANGUAGE="SIMPLIFIED CHINESE" and NLS_TERRITORY=CHINA in the init.ora file. Then I inserted a Chinese string into a field of a table. But when I select this field and try to display it in Chinese, it comes out wrong, not in Chinese.
Maybe some settings must be corrected. Does anyone know?
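(For reference, the character set the database itself was created with can be checked with a query like this -- a sketch using the standard NLS_DATABASE_PARAMETERS view; the client-side NLS_LANG must also match the encoding the terminal actually displays:)
select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');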


Similar Messages

  • Trouble Using Character Sets - Chinese GB2312

    Hi,
    I am trying to display my site in Simplified Chinese (GB2312). I have verified that all of the files are encoded with the GB2312 character set; when I open them in a language-capable editor I can see Chinese characters.
    I have also used <cfheader name="Content-Type" value="text/html; charset=gb2312" /> at the top of my Application.cfm file.
    Yet, when I view the file in any browser, it shows only question marks or odd characters in place of the Chinese characters.
    When I look at the browser character encoding settings, they are correct (a check mark next to Simplified Chinese).
    Any thoughts? Am I missing a step?
    Thanks in advance for any help given.

    lan99 wrote:
    > I have also used <cfheader name="Content-Type" value="text/html; charset=gb2312" /> at the top of my Application.cfm file.
    What version of CF?
    If CF6 or better, have you used cfprocessingdirective at the top of each file?
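    (For anyone following along, the suggested directive looks like this at the top of a .cfm page -- a minimal sketch; pageEncoding is the relevant attribute, and it tells ColdFusion how the source file itself is encoded, which the Content-Type header alone does not do:)
    <cfprocessingdirective pageEncoding="gb2312">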

  • Trouble with MSSQL Reporting Services and Oracle: PLS-00553: character set

    First off, the Oracle version is 8.1.7.4.0. SQL Server is 64-bit 2005 on Windows 2003 with all the latest patches. We're using the Oracle 10.2 client on the SQL Server box.
    A few months ago, queries against Oracle stopped working on our MSSQL Reporting Services server - after a service pack on this same system.
    I ended up having to grant permissions on Oracle directories to eliminate the first error, and then renamed the NLS_LANG registry entry (effectively deleting it) to get rid of the second error.
    The second error was: ORA-12705: Cannot access NLS data files or invalid environment specified. I don't have docs on the first error, but it was a reference to a file access problem. Looking at this error again makes me think that I still have some file access problems.
    The old value for the NLS_LANG registry key was: AMERICAN_AMERICA.WE8MSWIN1252
    Everything has been working fine since then.
    Now we are moving some new reports into production which use Oracle stored procedures, and they are getting this error:
    --- End of inner exception stack trace ---
    w3wp!processing!8!7/27/2009-09:26:09:: e ERROR: An exception has occurred in data source 'CSUD3_RPTAPL'. Details: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for data set 'MYDATASET'. ---> System.Data.OracleClient.OracleException: ORA-06550: line 1, column 57:
    PLS-00553: character set name is not recognized
    ORA-06550: line 0, column 0:
    PL/SQL: Compilation unit analysis terminated
    I've created a test package as below, which has also gotten the same error.
    create or replace PACKAGE SQLTEST AS
    -- spec declares the ref cursor type referenced by the body
    TYPE T_CURSOR IS REF CURSOR;
    PROCEDURE OPEN_CURSOR (ResultCursor OUT T_CURSOR,
    SQLTESTIN IN VARCHAR);
    END SQLTEST;
    /
    create or replace PACKAGE BODY SQLTEST AS
    PROCEDURE OPEN_CURSOR (ResultCursor OUT T_CURSOR,
    SQLTESTIN IN VARCHAR)
    AS
    l_query varchar2(3000);
    BEGIN
    l_query := 'Hi there';
    open ResultCursor for select l_query "field1" from dual;
    END OPEN_CURSOR;
    END SQLTEST;
    /
    I tried setting the NLS_LANG value back to its old value, but it did not fix this problem – and made all the regular Oracle queries break again.
    From what I've read, the client should be using the database default if the character set is not specified on the client.
    Can anyone advise on what the problem might be?
    Thanks!
    Sam Greene
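    (For reference, one way to see which character set a given session has actually negotiated is the query below; USERENV('LANGUAGE') returns the full language_territory.charset string:)
    select userenv('LANGUAGE') from dual;
    -- e.g. AMERICAN_AMERICA.WE8MSWIN1252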

    With all due respect, I cannot think of a worse reporting tool than MS Reporting Services.
    Give serious consideration to getting a real report-writing tool from Oracle or SAP (Business Objects / Crystal).
    Reporting Services is one nightmare after another.

  • Fixing a US7ASCII - WE8ISO8859P1 Character Set Conversion Disaster

    In hopes that it might be helpful in the future, here's the procedure I followed to fix a disastrous unintentional US7ASCII-on-9i to WE8ISO8859P1-on-10g migration.
    BACKGROUND
    Oracle has multiple character sets, ranging from US7ASCII up to AL32UTF8.
    US7ASCII, of course, is a cheerful 7-bit character set, holding the basic ASCII characters sufficient for the English language.
    However, it also has a handy feature: character fields under US7ASCII will accept characters with values greater than 127. If you have a web application, users can type (or paste) Us with umlauts, As with macrons, and quite a few other funny-looking characters.
    These will be inserted into the database, and then -- if appropriately supported -- can be selected and displayed by your app.
    The problem is that while these characters can be present in a VARCHAR2 or CLOB column, they are not actually legal. If you try within Oracle to convert from US7ASCII to WE8ISO8859P1 or any other character set, Oracle recognizes that these characters with values greater than 127 are not valid, and will replace them with a default "unknown" character. In the case of a change from US7ASCII to WE8ISO8859P1, it will change them to 191, the upside-down question mark.
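    (A quick way to see this replacement in action, assuming a US7ASCII database like the one described here, where CHR(174) produces the raw byte 174:)
    select dump(convert(chr(174), 'WE8ISO8859P1', 'US7ASCII')) from dual;
    -- Typ=1 Len=1: 191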
    Oracle has a native utility, introduced in 8i, called csscan, which assists in migrating to different character sets. It has been superseded in newer versions by the Database Migration Assistant for Unicode (DMU), the recommended tool for 11.2.0.3+.
    These tools, however, do no good unless they are run. For my particular client, the operations team took a database running 9i and upgraded it to 10g, and as part of that process the character set was changed from US7ASCII to WE8ISO8859P1. The database had a large number of special characters inserted into it, and all of these abruptly turned into upside-down question marks. The users of the application didn't realize there was a problem until several weeks later, by which time they had put a lot of new data into the system. Rollback was not possible.
    FIXING THE PROBLEM
    How fixable this problem is and the acceptable methods which can be used depend on the application running on top of the database. Fortunately, the client app was amenable.
    (As an aside: this approach does not use csscan -- I had done something similar previously on a very old system and decided it would take less time in this situation to revamp my old procedures than to bring a new utility into the mix.)
    We will need two separate approaches -- one to fix the VARCHAR2 & CHAR fields, and a second for CLOBs.
    In order to set things up, we created two environments. The first was a clone of production as it is now, and the second a clone from before the upgrade & character set change. We will call these environments PRODCLONE and RESTORECLONE.
    Next, we created a database link, OLD6. This allows PRODCLONE to directly access RESTORECLONE. Since they were cloned with the same SID, establishing the link needed the global_names parameter set to false.
    alter system set global_names=false scope=memory;
    CREATE PUBLIC DATABASE LINK OLD6
    CONNECT TO DBUSERNAME
    IDENTIFIED BY dbuserpass
    USING 'restoreclone:1521/MYSID';
    Testing the link...
    SQL> select count(1) from users@old6;
      COUNT(1)
           454
    Here is a row in a table which contains illegal characters. We are accessing RESTORECLONE from PRODCLONE via our link.
    PRODCLONE> select dump(title) from my_contents@old6 where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    By comparison, a dump of that row on PRODCLONE's my_contents gives:
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,191,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Note that the "174" on RESTORECLONE was changed to "191" on PRODCLONE.
    We can manually insert CHR(174) into our PRODCLONE and have it display successfully in the application.
    However, I tried a number of methods to copy the data from RESTORECLONE to PRODCLONE through the link, but entirely without success. Oracle would recognize the character as invalid and silently transform it.
    Eventually, I located a clever workaround at this link:
    https://kr.forums.oracle.com/forums/thread.jspa?threadID=231927
    It works like this:
    On RESTORECLONE you create a view, vv, with UTL_RAW:
    RESTORECLONE> create or replace view vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    View created.
    This turns the title to raw on the RESTORECLONE.
    You can now convert from RAW to VARCHAR2 on the PRODCLONE database:
    PRODCLONE> select dump(utl_raw.cast_to_varchar2 (title)) from vv@old6 where pk1=117286;
    DUMP(UTL_RAW.CAST_TO_VARCHAR2(TITLE))
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    The above works because Oracle on PRODCLONE never knew that our TITLE string on RESTORECLONE was originally in US7ASCII, so it was unable to do its transparent character set conversion.
    PRODCLONE> update my_contents set title=( select utl_raw.cast_to_varchar2 (title) from vv@old6 where pk1=117286) where pk1=117286;
    PRODCLONE> select dump(title) from my_contents where pk1=117286;
    DUMP(TITLE)
    Typ=1 Len=49: 78,67,76,69,88,45,80,78,174,32,69,120,97,109,32,83,116,121,108,101
    ,32,73,110,116,101,114,97,99,116,105,118,101,32,82,101,118,105,101,119,32,81,117
    ,101,115,116,105,111,110,115
    Excellent! The "174" character has survived the transfer and is now in place on PRODCLONE.
    Now that we have a method to move the data over, we have to identify which columns/tables have character data that was damaged by the conversion. We decided we could ignore anything with a length smaller than 10 -- such fields in our application would be unlikely to have data with invalid characters.
    RESTORECLONE> select count(1) from user_tab_columns where data_type in ('CHAR','VARCHAR2') and data_length > 10;
       COUNT(1)
        533
    By converting a field to WE8ISO8859P1, and then comparing it with the original, we can see if the characters change:
    RESTORECLONE> select count(1) from my_contents where title != convert (title,'WE8ISO8859P1','US7ASCII') ;
      COUNT(1)
         10568
    So 10568 rows have characters which were transformed  into 191s as part of the original conversion.
    [As an aside, we can't use CONVERT() directly on LOBs -- for them we will need another approach, outlined further below:
    RESTORECLONE> select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1') ;
    select count(1) from my_contents where main_data != convert (convert(main_DATA,'WE8ISO8859P1','US7ASCII'),'US7ASCII','WE8ISO8859P1')
    ERROR at line 1:
    ORA-00932: inconsistent datatypes: expected - got CLOB ]
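    (If you just want a rough count before the full procedure further below, one workaround is to inspect only the first 4000 characters of each CLOB via DBMS_LOB.SUBSTR, which returns a VARCHAR2 that CONVERT will accept -- a sketch only, since it misses damage past the first chunk:)
    select count(1) from my_contents
     where dbms_lob.substr(main_data, 4000, 1) !=
           convert(convert(dbms_lob.substr(main_data, 4000, 1), 'WE8ISO8859P1', 'US7ASCII'), 'US7ASCII', 'WE8ISO8859P1');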
    Anyway, now that we can identify VARCHAR2 fields which need to be checked, we can put together a PL/SQL stored procedure to do it for us:
    create or replace procedure find_us7_strings
    (table_name varchar2,
    fix_col varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    begin
    orig_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname)  select '''||table_name||''',pk1,'''||fix_col||''' from '||table_name||' where '||fix_col||' !=  CONVERT(CONVERT('||fix_col||',''WE8ISO8859P1''),''US7ASCII'') and '||fix_col||' is not null';
    -- Uncomment if debugging:
    -- dbms_output.put_line(orig_sql);
      execute immediate orig_sql;
    end;
    And create a table to store the information as to which tables, columns, and rows have the bad characters:
    drop table cnv_us7;
    create table cnv_us7 (mytablename varchar2(50), myindx number, mycolumnname varchar2(50)) tablespace myuser_data;
    create index list_tablename_idx on cnv_us7(mytablename) tablespace myuser_indx;
    With a SQL-generating SQL script, we can iterate through all the tables/columns we want to check:
    --example of using the data: select title from my_contents where pk1 in (select myindx from cnv_us7)
    set head off pagesize 1000 linesize 120
    spool runme.sql
    select 'exec find_us7_strings ('''||table_name||''','''||column_name||'''); ' from user_tab_columns
          where
              data_type in ('CHAR','VARCHAR2')
              and table_name in (select table_name from user_tab_columns where column_name='PK1' and  table_name not  in ('HUGETABLEIWANTTOEXCLUDE','ANOTHERTABLE'))
              and char_length > 10
              order by table_name,column_name;
    spool off;
    set echo on time on timing on feedb on serveroutput on;
    spool output_of_runme
    @./runme.sql
    spool off;
    Which eventually gives us the following inserted into CNV_US7:
    20:48:21 SQL> select count(1),mycolumnname,mytablename from cnv_us7 group by mytablename,mycolumnname;
      COUNT(1) MYCOLUMNNAME                                       MYTABLENAME
             4 DESCRIPTION                                        MY_FORUMS
         21136 TITLE                                              MY_CONTENTS
    Out of 533 VARCHAR2s and CHARs, we only had five or six columns that needed fixing.
    We create our views on RESTORECLONE:
    create or replace view my_forums_vv as select pk1,utl_raw.cast_to_raw(description) as description from forum_main;
    create or replace view my_contents_vv as select pk1,utl_raw.cast_to_raw(title) as title from my_contents;
    And then we can fix it directly via sql:
    update my_contents taborig1 set TITLE= (select utl_raw.cast_to_varchar2 (TITLE) from my_contents_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from my_contents@old6 taborig,my_contents tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='TITLE'
              and mytablename='MY_CONTENTS'
              and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE );
    Note this part:
          "and convert(taborig.TITLE,'US7ASCII','WE8ISO8859P1') = tabnew.TITLE "
    This checks to verify that the TITLE field on PRODCLONE and RESTORECLONE are the same (barring character set issues). This is there because if the users have changed TITLE -- or any other field -- on their own between the time of the upgrade and now, we do not want to overwrite their changes; we assume that if they edited a damaged row themselves, they fixed the bad character as part of that edit.
    We can also create a stored procedure which will execute the SQL for us:
    create or replace procedure fix_us7_strings
    (TABLE_NAME varchar2,
    FIX_COL varchar2 )
    authid current_user
    as
    orig_sql varchar2(1000);
    TYPE cv_type IS REF CURSOR;
    orig_cur cv_type;
    begin
    orig_sql:='update '||TABLE_NAME||' taborig1 set '||FIX_COL||'= (select utl_raw.cast_to_varchar2 ('||FIX_COL||') from '||TABLE_NAME||'_vv@old6 where pk1=taborig1.pk1)
    where pk1 in (
    select tabnew.pk1 from '||TABLE_NAME||'@old6 taborig,'||TABLE_NAME||' tabnew,cnv_us7@old6
          where taborig.pk1=tabnew.pk1
              and myindx=tabnew.pk1
              and mycolumnname='''||FIX_COL||'''
              and mytablename='''||TABLE_NAME||'''
              and convert(taborig.'||FIX_COL||',''US7ASCII'',''WE8ISO8859P1'') = tabnew.'||FIX_COL||')';
    dbms_output.put_line(orig_sql);
    execute immediate orig_sql;
    end;
    exec fix_us7_strings('MY_FORUMS','DESCRIPTION');
    exec fix_us7_strings('MY_CONTENTS','TITLE');
    commit;
    To validate this before and after, we can run something like:
    select dump(description) from my_forums where pk1 in (select myindx from cnv_us7@old6 where mytablename='MY_FORUMS');
    The above process fixes all the VARCHAR2s and CHARs. Now what about the CLOB columns?
    Note that we're going to have some extra difficulty here, not just because we are dealing with CLOBs, but because we are working in 9i, which has less CLOB-related functionality.
    This procedure finds invalid US7ASCII strings inside a CLOB in 9i:
    create or replace procedure find_us7_clob
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_total_problems NUMBER;
      ins_sql VARCHAR2(4000);
    BEGIN
       DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where dbms_lob.getlength('||fix_col||') >0 and '||fix_col||' is not null order by pk1';
       open orig_table_cur for orig_sql;
       my_total_problems := 0;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            my_offset :=1;
            my_chars_read := 512;
            my_problem_flag :=0;
            WHILE my_offset < my_lob_size and my_problem_flag =0
                    LOOP
                    DBMS_LOB.READ(my_clob,my_chars_read,my_offset,my_output_chunk);
                    my_offset := my_offset + my_chars_read;
                    IF my_output_chunk != CONVERT(CONVERT(my_output_chunk,'WE8ISO8859P1'),'US7ASCII')
                            THEN
                            -- DBMS_OUTPUT.PUT_LINE('Problem with '||my_indx_var);
                            -- DBMS_OUTPUT.PUT_LINE(my_output_chunk);
                            my_problem_flag:=1;
                    END IF;
            END LOOP;
            IF my_problem_flag=1
                    THEN my_total_problems := my_total_problems +1;
                    ins_sql:='insert into cnv_us7(mytablename,myindx,mycolumnname) values ('''||table_name||''','||my_indx_var||','''||fix_col||''')';
                    execute immediate ins_sql;
                    END IF;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE('We found '||my_total_problems||' problem rows in table '||table_name||', column '||fix_col||'.');
    END;
    And we can use SQL-generating SQL to find out which CLOBs have issues, out of all the ones in the database:
    RESTORECLONE> select 'exec find_us7_clob('''||table_name||''','''||column_name||''');' from user_tab_columns where data_type='CLOB';
    exec find_us7_clob('MY_CONTENTS','DATA');
    After completion, the CNV_US7 table looked like this:
    RESTORECLONE> set linesize 120 pagesize 100;
    RESTORECLONE> select count(1),mytablename,mycolumnname from cnv_us7
       where mytablename||' '||mycolumnname in (select table_name||' '||column_name from user_tab_columns
             where data_type='CLOB' )
          group by mytablename,mycolumnname;
      COUNT(1) MYTABLENAME                                        MYCOLUMNNAME
         69703 MY_CONTENTS                                  DATA
    On RESTORECLONE, our 9i environment, we will use this procedure (found many years ago on the internet):
    create or replace procedure CLOB2BLOB (p_clob in out nocopy clob, p_blob in out nocopy blob) is
    -- transforming CLOB to BLOB
    l_off number default 1;
    l_amt number default 4096;
    l_offWrite number default 1;
    l_amtWrite number;
    l_str varchar2(4096 char);
    begin
    loop
    dbms_lob.read ( p_clob, l_amt, l_off, l_str );
    l_amtWrite := utl_raw.length ( utl_raw.cast_to_raw( l_str) );
    dbms_lob.write( p_blob, l_amtWrite, l_offWrite,
    utl_raw.cast_to_raw( l_str ) );
    l_offWrite := l_offWrite + l_amtWrite;
    l_off := l_off + l_amt;
    l_amt := 4096;
    end loop;
    exception
    when no_data_found then
    NULL;
    end;
    We can test out the transformation of CLOBs to BLOBs with a single row like this:
    drop table my_contents_lob;
    Create table my_contents_lob (pk1 number,data blob);
    DECLARE
          v_clob CLOB;
          v_blob BLOB;
        BEGIN
          SELECT data INTO v_clob FROM my_contents WHERE pk1 = 16 ;
          INSERT INTO my_contents_lob (pk1,data) VALUES (16,empty_blob() );
          SELECT data INTO v_blob FROM my_contents_lob WHERE pk1=16 FOR UPDATE;
          clob2blob (v_clob, v_blob);
        END;
    select dbms_lob.getlength(data) from my_contents_lob;
    DBMS_LOB.GETLENGTH(DATA)
                                 329
    SQL> select utl_raw.cast_to_varchar2(data) from my_contents_lob;
    UTL_RAW.CAST_TO_VARCHAR2(DATA)
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam...
    Now we need to push it through a loop. Unfortunately, I had trouble making the "SELECT INTO" dynamic, so I used a version of the procedure for each table. It's aesthetically displeasing, but at least it worked.
    create table my_contents_lob(pk1 number,data blob);
    create index my_contents_lob_pk1 on my_contents_lob(pk1) tablespace my_user_indx;
    create or replace procedure blob_conversion_my_contents
    (table_name varchar2,
    fix_col varchar2)
    authid current_user
    as
      orig_sql varchar2(1000);
      type cv_type is REF CURSOR;
      orig_table_cur cv_type;
      my_chars_read NUMBER;
      my_offset NUMBER;
      my_problem NUMBER;
      my_lob_size NUMBER;
      my_indx_var NUMBER;
      my_total_chars_read NUMBER;
      my_output_chunk VARCHAR2(4000);
      my_problem_flag NUMBER;
      my_clob CLOB;
      my_blob BLOB;
      my_total_problems NUMBER;
      new_sql VARCHAR2(4000);
    BEGIN
      DBMS_OUTPUT.ENABLE(1000000);
       orig_sql:='select pk1,dbms_lob.getlength('||FIX_COL||') as cloblength,'||fix_col||' from '||table_name||' where pk1 in (select myindx from cnv_us7 where mytablename='''||TABLE_NAME||''' and mycolumnname='''||FIX_COL||''') order by pk1';
       open orig_table_cur for orig_sql;
       LOOP
            FETCH orig_table_cur INTO my_indx_var,my_lob_size,my_clob;
                    EXIT WHEN orig_table_cur%NOTFOUND;
            new_sql:='INSERT INTO '||table_name||'_lob(pk1,'||fix_col||') values ('||my_indx_var||',empty_blob() )';
            dbms_output.put_line(new_sql);
          execute immediate new_sql;
    -- Here's the bit that I had trouble making dynamic. Feel free to let me know what I am doing wrong.
    -- new_sql:='SELECT '||fix_col||' INTO my_blob from '||table_name||'_lob where pk1='||my_indx_var||' FOR UPDATE';
    --        dbms_output.put_line(new_sql);
            select data into my_blob from my_contents_lob where pk1=my_indx_var FOR UPDATE;
          clob2blob(my_clob,my_blob);
       END LOOP;
       CLOSE orig_table_cur;
      DBMS_OUTPUT.PUT_LINE('Completed program');
    END;
    exec blob_conversion_my_contents('MY_CONTENTS','DATA');
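    (An aside on the commented-out dynamic version above: INTO is a PL/SQL clause rather than part of the SQL statement, so it cannot live inside the dynamic string. A sketch of a variant that should work, using EXECUTE IMMEDIATE ... INTO with a bind variable -- offered as a suggestion, not something tested here:)
    new_sql := 'SELECT '||fix_col||' FROM '||table_name||'_lob WHERE pk1 = :1 FOR UPDATE';
    execute immediate new_sql into my_blob using my_indx_var;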
    Verify that things work properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob where pk1=xxxx;
    This should let you see character values greater than 127 (such as our 174). Thus, the method works.
    We can now take this data and export it from RESTORECLONE:
    exp file=a.dmp buffer=4000000 userid=system/XXXXXX tables=my_user.my_contents_lob rows=y
    and import the data on PRODCLONE:
    imp file=a.dmp fromuser=my_user touser=my_user userid=system/XXXXXX buffer=4000000
    For paranoia's sake, double check that it worked properly:
    select dump( utl_raw.cast_to_varchar2(data))  from my_contents_lob;
    On our 10g PRODCLONE, we'll use these stored procedures:
    CREATE OR REPLACE FUNCTION CLOB2BLOB(L_CLOB CLOB) RETURN BLOB IS
    L_BLOB BLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_BLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_CLOB);
    DBMS_LOB.CONVERTTOBLOB(L_BLOB,
    L_CLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_BLOB;
    END;
    CREATE OR REPLACE FUNCTION BLOB2CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    And now, for the pièce de résistance, we need a BLOB to CLOB conversion that assumes the BLOB data is stored initially in WE8ISO8859P1.
    To find the correct CSID for WE8ISO8859P1, we can use this query:
    select nls_charset_id('WE8ISO8859P1') from dual;
    which gives 31.
    create or replace FUNCTION BLOB2CLOBASC(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB CLOB;
    L_SRC_OFFSET NUMBER;
    L_DEST_OFFSET NUMBER;
    L_BLOB_CSID NUMBER := 31;      -- treat blob as  WE8ISO8859P1
    V_LANG_CONTEXT NUMBER := 31;   -- treat resulting clob as WE8ISO8859P1
    L_WARNING NUMBER;
    L_AMOUNT NUMBER;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    L_BLOB_CSID,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    END;
    select dump(dbms_lob.substr(blob2clobasc(data),4000,1)) from my_contents_lob;
    Now, we can compare these:
    select dbms_lob.compare(blob2clob(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOB(OLD.DATA),NEW.DATA)
                                                                 0
                                                                 0
                                                                 0
    vs.
    select dbms_lob.compare(blob2clobasc(old.data),new.data) from  my_contents new,my_contents_lob old where new.pk1=old.pk1;
    DBMS_LOB.COMPARE(BLOB2CLOBASC(OLD.DATA),NEW.DATA)
                                                                   -1
                                                                   -1
                                                                   -1
    update my_contents a set data=(select blob2clobasc(data) from my_contents_lob b where a.pk1= b.pk1)
        where pk1 in (select al.pk1 from my_contents_lob al where dbms_lob.compare(blob2clob(al.data),a.data) =0 );
    SQL> select dump(dbms_lob.substr(data,4000,1)) from my_contents where pk1 in (select pk1 from my_contents_lob);
    Confirms that we're now working properly.
    To run across all the _LOB tables we've created:
    [oracle@RESTORECLONE ~]$ exp file=all_fixed_lobs.dmp buffer=4000000 userid=my_user/mypass tables=MY_CONTENTS_LOB,MY_FORUM_LOB...
    [oracle@RESTORECLONE ~]$ scp all_fixed_lobs.dmp jboulier@PRODCLONE:/tmp
    And then on PRODCLONE we can import:
    imp file=all_fixed_lobs.dmp buffer=4000000 userid=system/XXXXXXX fromuser=my_user touser=my_user
    Instead of running the above update statement for all the affected tables, we can use a simple stored procedure:
    create or replace procedure fix_us7_CLOBS
      (TABLE_NAME varchar2,
         FIX_COL varchar2 )
        authid current_user
        as
         orig_sql varchar2(1000);
         bak_sql  varchar2(1000);
        begin
        dbms_output.put_line('Creating '||TABLE_NAME||'_PRECONV to preserve the original data in the table');
        bak_sql:='create table '||TABLE_NAME||'_preconv as select pk1,'||FIX_COL||' from '||TABLE_NAME||' where pk1 in (select pk1 from '||TABLE_NAME||'_LOB) ';
        execute immediate bak_sql;
        orig_sql:='update '||TABLE_NAME||' tabnew set '||FIX_COL||'= (select blob2clobasc ('||FIX_COL||') from '||TABLE_NAME||'_LOB taborig where tabnew.pk1=taborig.pk1)
       where pk1 in (
       select a.pk1 from '||TABLE_NAME||'_LOB a,'||TABLE_NAME||' b
          where a.pk1=b.pk1
                 and dbms_lob.compare(blob2clob(a.'||FIX_COL||'),b.'||FIX_COL||') = 0 )';
        -- dbms_output.put_line(orig_sql);
        execute immediate orig_sql;
       end;
    Now we can run the procedure and it fixes everything for our previously-broken tables, keeping the changed rows -- just in case -- in a table called table_name_PRECONV.
    set serveroutput on time on timing on;
    exec fix_us7_clobs('MY_CONTENTS','DATA');
    commit;
    After confirming with the client that the changes work -- and haven't noticeably broken anything else -- the same routines can be carefully run against the actual production database.

    We converted the database using scripts I developed. I'm not quite sure how we converted is relevant, other than to say that we did not use the Oracle conversion utility (not csscan, but the GUI Java tool).
    A summary:
    1) We replaced the lossy characters by parsing a csscan output file
    2) After re-scanning with csscan and coming up clean, our DBA converted the database to AL32UTF8 (changed the parameter file, changed the character set, switched the semantics to char, etc.)
    3) The final step was changing existing tables to use char semantics by altering the table schema for VARCHAR2 columns
    I cannot easily answer any specific steps; I worked with a DBA at our company to do this work. I handled the character replacement / DDL changes and the DBA ran csscan and performed the database config changes.
    Our actual error message:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '�Error at line 1
    31011. 00000 - "XML parsing failed"
    *Cause:    XML parser returned an error while trying to parse the document.
    *Action:   Check if the document to be parsed is valid.
    Error at Line: 24 Column: 15
    This seems to match the document ID referenced below. I will ask our DBA to pull it up and review it.
    Please advise if more information is needed from my end.

  • Character Sets - socketchannel read error

    I have made an FTP server which I have just noticed is having trouble when the file name being requested/sent has non-ASCII characters in it, such as '�'.
    Since I am using NIO, I thought it would be a simple task of just changing the character set from "US-ASCII" to "UTF-8" or "UTF-16".
    But when I changed it to UTF-8 there was no change; that is, I still receive an error:
    java.nio.charset.MalformedInputException: Input length = 1
    java.nio.charset.CoderResult.throwException(CoderResult.java:260)
    java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:763)
    Worker.readCommand(Worker.java:282)
    Worker.handleClient(Worker.java:562)
    Worker.run(Worker.java:144)
    java.lang.Thread.run(Thread.java:536)
    And when I tried UTF-16, no input was ever detected by my server...
    This 'Input length = 1' business... does this mean that there must be more than one byte in the received byte array, or am I going off on an incorrect tangent?
    Any suggestions or theories would be listened to!
    Cheers
    (btw I will not be able to reply to this post for another 11 hrs)

    I can, but it will not be of much use, as it is more a logical error or a misunderstanding of how to use the character set.
    Well, here is the relevant code anyway:
    // readCommand: waits until a command is received from the client and then returns the command
    String readCommand(InputStream is, PrintStream ps) throws java.io.IOException {
       int nread = 0, r = 0;
       String str = "";
       Charset charset = Charset.forName("UTF-8");
       CharsetDecoder decoder = charset.newDecoder();
       // Direct byte buffer for reading
       ByteBuffer dbuf = ByteBuffer.allocateDirect(1024);
    // outerloop:
       do {
          dbuf.clear();
          boolean quit = false;
          String message = "";
          int bytesRead = 0;
          // while there is no input in the buffer
          while (dbuf.position() == 0) {
             // read in any bytes that are waiting into the buffer
             sc.configureBlocking(false);
             bytesRead = sc.read(dbuf);
             sc.configureBlocking(true);
             // wait for two milliseconds before trying to read again
             try {
                wait(2);
             } catch (InterruptedException e) {}
             // if the client has not yet finished the connection negotiation, see if the client
             // has been connected for more than 30 seconds; if so disconnect the client
             if (account == -1) {
                if (System.currentTimeMillis() > clientStartTime + 30000)
                   quit = true;
             } else {
                // if the thread has been told to quit the connection, set the quit flag
                if (exitThread) {
                   message = "421 Quitting...";
                   quit = true;
                }
                // If there is a session limit, see if the connection has exceeded it; if so set the quit flag
                if (staticSessionLimit[account] > 0) {
                   if (System.currentTimeMillis() > clientStartTime + staticSessionLimit[account]) {
                      message = "421 Session Limit Exceeded... please re-connect in 30 seconds";
                      quit = true;
                   }
                }
                // if there is an inactivity timeout, see if the client has been idle longer than it; if so set the quit flag
                if (staticInactivityTimeout[account] > 0) {
                   if (System.currentTimeMillis() > lastActiveTime + staticInactivityTimeout[account]) {
                      message = "421 Inactivity Timeout Exceeded... closing connection";
                      quit = true;
                   }
                }
             }
             // if the quit flag is set, return "QUIT" which tells the thread to close the connection and return to the pool
             if (quit) {
                ps.print(message);
                ps.write(EOL);
                ps.flush();
                return "QUIT";
             }
          }
          dbuf.flip();
          CharBuffer cb = null;
          // if there are bytes in the buffer, add the data to the received command string (str)
          if (bytesRead >= 0) {
             cb = decoder.decode(dbuf);
             str = str + cb.toString();
          }
       // repeat while EOL is not detected
       } while (str.length() < 2 || !str.substring(str.length() - 2).equals("\r\n"));
       str = str.substring(0, str.length() - 2);
       FTPServer.log("STRING: " + str);
       lastActiveTime = System.currentTimeMillis();
       // update the last action requested by the client in the governor
       governor.addAction(str, this);
       return str;
    }

  • Problem inserting XML doc (character set)

    Hi all,
    I'm having trouble trying to insert XML, either by "posting" it (xsql) or by "putting" it (OracleXML putXML).
    The error that I get: "not supported oracle-character-set-174".
    The environment is:
    Oracle 8i 8.1.5
    (NLS_CHARACTERSET EL8MSWIN1253 for greek)
    JDK 1.2.2
    Apache Web Server 1.3.11
    Apache JServ 1.1
    XSQL v 0.9.9.1 and
    XMLSQL, XML parser v2 that comes with it.
    I had dropped all Java classes and reloaded them using the oraclexmlsqlload batch file, but am still getting the same error.
    The thing is that I am able to insert an XML doc that was generated with an authoring tool called W4F, which extracts data from HTML pages and maps them to an XML document, even with Greek characters in it. But it fails when the XML is generated using an editor or the servlet, like the following:
    newschedule.xsql looks like
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="latestschedules.xsl"?>
    <page connection="dtv" xmlns:xsql="urn:oracle-xsql">
    <xsql:insert-request date-format="DD'/'MM'/'YYYY" table="schedule_details_view"
    transform="request-to-newschedule.xsl"/>
    <xsql:query table="schedule"
    tag-case="lower" max-rows="5" rowset-element="latestschedules"
    row-element="schedule">
    select *
    from schedules
    order by schedule_id desc
    </xsql:query>
    </page>
    request-to-newschedule.xsl looks like
    <?xml version = '1.0'?>
    <ROWSET xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xsl:version="1.0">
    <xsl:for-each select="request/parameters">
    <ROW>
    <SCHEDULE_ID><xsl:value-of select="Schedule_id_field"/></SCHEDULE_ID>
    <DESCRIPTION><xsl:value-of select="Description_field"/></DESCRIPTION>
    <DETAILS>
    <DETAILS_ITEM>
    <STARTING_TIME><xsl:value-of select="Starting_Time_field_1"/></STARTING_TIME>
    <DURATION><xsl:value-of select="Duration_field_1"/></DURATION>
    </DETAILS_ITEM>
    <DETAILS_ITEM>
    <STARTING_TIME><xsl:value-of select="Starting_Time_field_2"/></STARTING_TIME>
    <DURATION><xsl:value-of select="Duration_field_2"/></DURATION>
    </DETAILS_ITEM>
    <DETAILS_ITEM>
    <STARTING_TIME><xsl:value-of select="Starting_Time_field_3"/></STARTING_TIME>
    <DURATION><xsl:value-of select="Duration_field_3"/></DURATION>
    </DETAILS_ITEM>
    <DETAILS_ITEM>
    <STARTING_TIME><xsl:value-of select="Starting_Time_field_4"/></STARTING_TIME>
    <DURATION><xsl:value-of select="Duration_field_4"/></DURATION>
    </DETAILS_ITEM>
    <DETAILS_ITEM>
    <STARTING_TIME><xsl:value-of select="Starting_Time_field_5"/></STARTING_TIME>
    <DURATION><xsl:value-of select="Duration_field_5"/></DURATION>
    </DETAILS_ITEM>
    </DETAILS>
    </ROW>
    </xsl:for-each>
    </ROWSET>
    Hope that someone can help me with this ...
    Any advice is highly appreciated.
    Thanks in advance
    Nicos Gabrielides
    email: [email protected]

    Hi,
    How about applying an XSL transform to the existing XML doc to create another XML doc that filters out the table columns not found in the target db table, so that all the columns are matched, and then using putXML to load?
    Hope that helps.
    OTN team@IDC

  • ORACLE invoices with a Japanese character set

    We are having trouble printing Oracle invoices with a Japanese character set.
    The printer we are using is a Dell W5300. Do I need to configure the printer, or is it something that needs to be configured in the software? Please help.

    > We are having trouble printing Oracle invoices with a Japanese character set.
    > The printer we are using is a Dell W5300. Do I need to configure the printer, or is it something that needs to be configured in the software? Please help.
    What is the "trouble"? Are you seeing the wrong output? It may not be the printer, but the software that is sending the output to the printer.
    If you are using an Oracle client (SQL*Plus, Forms, Reports etc.), ensure you set NLS_LANG to JAPANESE_JAPAN.WE8MSWIN1252 or JAPANESE_JAPAN.JA16SJIS

  • Japanese Character Set - in Safari

    I am considering purchasing a new Mac mini, and need to access my Hotmail account with a Japanese character set. My Windows XP system requires the Japanese Language Pack. How will the Mac handle this? In other words, will Safari support the viewing and editing of Japanese characters, or is there something I need to download? Thanks!

    > found out that the Japanese Language Pack requires W7 Ultimate or Enterprise.
    The Windows Japanese Language Pack is for turning the entire OS into Japanese. It has nothing to do with your ability to read or write Japanese while running your OS in English. I'm sure W7 comes with that ability installed by default, just like OS X does.
    All browsers, Mac and Windows, automatically adjust to the character set provided in the code of the web page they are viewing, and also provide a way for the user to change it in the View > Text Encoding menu. I don't think you will have any trouble reading Japanese webmail with Safari or Firefox or Opera on a Mac.

  • Client-DB character set conflict

    Hi all,
    I have a 9i db running on Win2k. The NLS_CHARACTERSET settings are:
    DB = 'AL32UTF8'
    client NLS_LANG = "American_America.WE8ISO8859P1"
    I had no trouble reading/loading some (western European) characters, but the data are corrupt and unreadable when it comes to manipulating eastern European characters (Czech, Slovenian, ...).
    I did modify the client character set from "American_America.WE8ISO8859P1" to "American_America.AL32UTF8", but I still cannot properly read eastern European characters.
    Please advise,
    Thks

    Hi there,
    Using SQL*Loader, I get data in properly by forcing the loading process to use a specific character set.
    I've added the "CHARACTERSET UTF8" parameter in the control file and saved the input file as UTF-8. Data gets loaded correctly.
    Question: I'm working on a test db with a Unicode character set (AL32UTF8). No problem with eastern European characters.
    Our production db has an NLS character set of 'WE8ISO8859P1', which supports Latin-based writing scripts but not eastern European characters. That is where I'm running into trouble.
    Shall I migrate the prod db character set from 'WE8ISO8859P1' to 'AL32UTF8', which is a superset in this case?
    Could you please throw some light on how I should be doing this?
    Thkx,
    Lamine

  • ORA-12712 error while changing nls character set to AL32UTF8

    Hi,
    It is strongly recommended to use the database character set AL32UTF8 whenever a database is going to be used with XML capabilities. The database character set in the installed DB is WE8MSWIN1252. To make use of XML DB features, I need to change it to AL32UTF8. But when I try doing this, I get ORA-12712: new character set must be a superset of old character set. Is there a way to solve this issue?
    Thanks in advance,
    Divya.

    Hi,
    a change from WE8MSWIN1252 to AL32UTF8 is not directly possible. This is because AL32UTF8 is not a binary superset of WE8MSWIN1252.
    There are 2 options:
    - use full export and import
    - use ALTER DATABASE CHARACTER SET in a restricted way
    Which method you can choose depends on the characters in the database: if it is only ASCII then the second one can work; in other cases the first one is needed. A quick check is sketched below.
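    (A quick, non-authoritative way to check whether a given column holds anything beyond ASCII -- MY_TABLE and MY_COL are placeholders:)
    select count(*) from my_table
     where my_col != convert(my_col, 'US7ASCII');
    -- a count of 0 suggests the column is pure ASCII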
    It is all described in Support Note 260192.1, "Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)". Get it from the support/metalink site.
    You can also read the chapters about this issue in the Globalization Guide: http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm
    Herald ten Dam
    http://htendam.wordpress.com

  • Oracle 8.1.5 install on Linux Redhat 6.0: character set (and other) problem(s)

    I am trying to install Oracle 8i on Linux and it does not work: once the install is finished, I have a message saying "Character Set not found".
    I am running a French version of Linux (fr-latin1) and I am trying to install Oracle with French and English as languages.
    Another problem with this install: Oracle does not seem to recognize that I have 6.9 GB for it to install into, and says that I do not have enough space for the install...
    And at the end of the install, it takes ages (about 15 minutes) during which nothing seems to happen. On one machine I got out of this phase, but on the other I never saw it finish; it looks as if the computer crashed. Is that normal?
    I went through all the initialization phases and set the correct environment variables...
    thanks
    Solange

    I've been dealing with the same problems in the English version but could bypass this by doing the following:
    - Just ignore the disk space stuff
    - Ignore the charset message, also
    - When creating a database, choose custom and then select the WE8ISO8859P1 charset. It worked for Portuguese, and should work for French also.
    - Everyone here recommended, and I do the same: leave the database creation for later, not during installation.
    Good Luck!

  • The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.

    We developed an SSIS package to pull data from an Oracle source to SQL Server 2012. We used an ADO.NET source to pull the records, but we get the error below after pulling some 40K records:
    [ADO NET Source [2]] Error: The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.
    [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ADO NET Source returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Anything that we can do to fix this?

    Hi,
      Tried both...
      * Having the schema type as NVARCHAR(max) - getting the same error.
      * Instead of the ADO.NET source, used an OLE DB source with the driver "Oracle Provider for OLE DB" - getting the error below:
           [OLE DB Source [478]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
           [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on OLE DB Source returned error code 0xC0202009. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Additional info:
       * Here the source task is failing, not the conversion or destination task.
    Thanks,
    Loganathan A.
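    (One commonly suggested workaround -- a sketch only, not a confirmed fix for ORA-64203 -- is to avoid fetching the raw CLOB by truncating it in the source query itself; MY_TABLE and MY_CLOB are placeholders:)
    SELECT pk1, DBMS_LOB.SUBSTR(my_clob, 2000, 1) AS my_clob_txt
      FROM my_table;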

  • Message uses a character set that is not supported by the internet service

    Does any one have any advice on how to fix this problem?
    E-mails sent from my iPhone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...." The problem e-mails also include an attachment containing an unformatted text file with the original message surrounded by what appears to be lots of formatting data, displayed as gibberish.
    This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account configured on the iPhone using IMAP. I have tried both of the available formatting options for mail on the Gmail account, but neither fixes the problem.
    I have also upgraded to 2.0.1 and restored a few times without impact.

    Hi,
    I've got a somewhat similar problem with special characters (German umlauts ä, ö, ü ...).
    I create a file with Java that has special characters in it. If I open this file I am able to view the special characters in it. But if I attach this file and send it using the following code, then the receiver cannot see the umlaut characters in it; they get replaced by _ or ?
    // Relevant JavaMail imports for the snippet below:
    // import javax.activation.DataHandler;
    // import javax.activation.FileDataSource;
    // import javax.mail.*;
    // import javax.mail.internet.*;
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(fileName);
    mbp2.setDataHandler(new DataHandler(fds));
    mbp2.setFileName(output.getName());
    Multipart mp = new MimeMultipart();
    mp.addBodyPart(mbp2);
    msg.setContent(mp);
    Transport.send(msg);
    From your message it looks like you are able to send the mail attachment correctly (preserving special characters).
    Can you tell me what might be wrong in my code?
    I appreciate your efforts in advance.
    Prasad

  • Oracle XE and character set

    Hello all,
    I installed Oracle XE on RHEL 4 Linux and found out that the database character set is AL32UTF8. Does anyone know why Oracle chose this character set? Maybe because of the NLS_LANG env variable? Is it possible to change it to EE8ISO8859P2? Since the database is still empty, I can drop it and create a new database.
    Do you think it is possible to set some env variables and do a new Oracle XE installation including a database with the ISO charset?
    I want to have the EE8ISO8859P2 charset because I am doing exp/imp from another Oracle ISO db to Oracle XE, and it is much easier to do this without charset conversion.
    Any help will be appreciated.
    regards,
    Miha

    When you download XE, you have a choice: take the 'western european' character set download, or the 'unicode' download.
    No other choices.
    Join us over in the XE forum, where people have discussed this and found workarounds. Info about finding that forum is at Re: Oracle XE Installation failed

  • Character sets and ado

    I have a table with a CLOB field on an Oracle 8.1.7.4 database. When querying the CLOB field via ODBC and ADO, the value is truncated. The Oracle server and client are using the WE8ISO8859P1 character set. Has anyone come across this before?
    Thanks.

    I believe the data should be able to be represented by ISO-8859. The data is a long random string of characters that represents a fingerprint image.
    We seem to only get 996 characters back from the database. If I do a GetChunk on the data then I get 996 characters of data, then 996 NULLs, then 996 characters of data, and so on. The 996 NULLs should be data.
    The data is in the database, because I can do a dbms_lob.substr and get the correct info back.
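    (For example, something along these lines -- a sketch with placeholder table/column names -- reads straight past the point where the driver starts returning NULLs:)
    select dbms_lob.getlength(fingerprint_clob),
           dbms_lob.substr(fingerprint_clob, 996, 997)
      from fingerprints;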
