Identify special characters in oracle 9i

Hi,
I want to identify the special characters in a table. Right now I am using the Oracle 9i version.
Please help us.

You can use the following PL/SQL block for this purpose. It checks whether a field (the item description here) contains any special character, i.e. anything outside the printable ASCII range 32-126, and displays that character along with its position and ASCII value. Later you can write another query (if needed) to remove those special characters; a sketch for that follows the block below.
Modify the query as needed.
declare
  l_desc     VARCHAR2(90);
  l_length   NUMBER;
  l_cnt      NUMBER := 1;
  l_char     VARCHAR2(20);
  l_spc_char NUMBER := 0;
  CURSOR c1 IS
    SELECT segment1, description, LENGTH(description) length1
      FROM mtl_system_items_b
     WHERE 1 = 1
       AND rownum < 10000
       AND segment1 = '00000942304A330'
       AND organization_id = 156;
begin
  FOR c_rec IN c1 LOOP
    l_cnt := 1;
    l_spc_char := 0;
    WHILE l_cnt <= c_rec.length1 LOOP
      l_char := SUBSTR(c_rec.description, l_cnt, 1);
      IF ascii(l_char) < 32 OR ascii(l_char) > 126 THEN
        DBMS_OUTPUT.PUT_LINE('Character: '||l_char||' Position: '||l_cnt||' Ascii Value: '||ascii(l_char));
        l_spc_char := l_spc_char + 1;
      END IF;
      l_cnt := l_cnt + 1;
    END LOOP;
    IF l_spc_char > 0 THEN
      DBMS_OUTPUT.PUT_LINE('Item: '||c_rec.segment1||' Description: '||c_rec.description);
    END IF;
  END LOOP;
end;
/
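If you also need to clean the data afterwards, here is a minimal sketch (not part of the original answer) that wraps the same printable-ASCII-only rule (32-126) in a reusable function. The commented-out UPDATE against mtl_system_items_b is purely illustrative: verify the affected rows first and test on a copy, since EBS item data is normally maintained through the item import/APIs rather than direct updates.

CREATE OR REPLACE FUNCTION strip_special_chars (p_text IN VARCHAR2)
  RETURN VARCHAR2
IS
  l_out  VARCHAR2(4000);
  l_char VARCHAR2(20);
BEGIN
  FOR i IN 1 .. NVL(LENGTH(p_text), 0) LOOP
    l_char := SUBSTR(p_text, i, 1);
    -- keep only printable ASCII (32-126); drop everything else
    IF ASCII(l_char) BETWEEN 32 AND 126 THEN
      l_out := l_out || l_char;
    END IF;
  END LOOP;
  RETURN l_out;
END strip_special_chars;
/

-- Hypothetical cleanup, shown only as an example (test on a copy first):
-- UPDATE mtl_system_items_b
--    SET description = strip_special_chars(description)
--  WHERE organization_id = 156
--    AND segment1 = '00000942304A330';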

Similar Messages

  • How to save Special Characters in oracle?

    Is there any way to enter special characters such as ºC? I am using J2EE and Oracle 9i.
    When I try to enter 2ºC, after updating the database it is displayed as 2ºC in the HTML. All special characters are prefixed with Â. Please suggest a way to use special characters with Oracle.

    This has nothing to do with NLS_LANGUAGE. In general, character set processing depends on the NLS_LANG setting (which is an OS environment setting, not an instance initialization parameter) and on the database character set. To understand NLS_LANG, see the OTN NLS_LANG FAQ: http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
    However, I think that JDBC is an exception and does not use the character set defined by NLS_LANG. See last answer in following discussion:
    Re: When is NLS_LANG used ?
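    As a quick sanity check, the database-side character sets can be read from NLS_DATABASE_PARAMETERS (the same view is queried in other replies below); a minimal query:
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
    Keep in mind this only tells you how the database stores data; what the client sends and expects is still governed by NLS_LANG (or, for thin JDBC, by the driver's own conversion, as noted above).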

  • Sample code to identify special characters in a string

    Hi,
    I need to identify special characters in a string.... could anybody send me some code please.......
    Thanks,
    Best regards,
    Karen

    data: str(100) type c.
    data: str_n type string.
    data: str_c type string.
    data: len type i.
    data: ofst type i.
    str = '#ABCD%'.
    len = strlen( str ).
    do.
      if ofst = len.
        exit.
      endif.
      if str+ofst(1) co sy-abcde.
        concatenate str_c str+ofst(1) into str_c.
      else.
        concatenate str_n str+ofst(1) into str_n.
      endif.
      ofst = ofst + 1.
    enddo.
    write:/ str.
    write:/ str_c.
    write:/ 'special characters',20 str_n.
    Alternatively, the function modules SF_SPECIALCHAR_DELETE and DX_SEARCH_STRING can be used.
    l_address1 = i_adrc-street.
    CHECK NOT l_address1 IS INITIAL.
    len = STRLEN( l_address1 ).
    do len times.
      if not l_address1+l(1) ca
         'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 '.
        if i_adrc-street+l(1) CO sy-abcde.
        elseif i_adrc-street+l(1) CO l_numchar.
          exit.
        endif.
      endif.
      l = l + 1.
    enddo.
    data : spchar(40) type c value '~!@#$$%^&()?...'. "etc.
    data : gv_char.
    data : inp(20) type c.
    " take the string length
    len = strlen( fname ).
    do len times.
      MOVE fname+T(1) TO gv_char.
      IF gv_char CA spchar.
        MOVE fname+T(1) TO inp+T(1).
      ENDIF.
      T = T + 1.
    enddo.
    REPORT ZEX4 .
    PARAMETERS: fname LIKE rlgrap-filename .
    DATA: len TYPE i,
    T TYPE I VALUE 0,
    inp(20) TYPE C,
    inp1(20) type c,
    inp2(20) type c,
    inp3(20) type c.
    DATA :gv_char.
    data : spchar(20) type c value '#$%^&*()_+`~'.
    START-OF-SELECTION.
    CONDENSE fname.
    len = strlen( fname ).
    WRITE:/ len.
    DO len TIMES.
    MOVE FNAME+T(1) TO GV_CHAR.
    IF gv_char ca spchar.
    MOVE fname+T(1) TO inp+T(1).
    ENDIF.
    T = T + 1.
    ENDDO.
    CONDENSE INP.
    write:/ 'Special Characters :', inp.
    Rewards if useful..........
    Minal

  • Reading and writing Special Characters to Oracle DB

    Hi All,
    I need to insert data from a CSV file into an Oracle DB and then use the same data to create an XML file in UTF-8 format.
    I have a few fields in the CSV file which have the special characters � and �. I'm able to read � and write it in UTF-8, but the same procedure results in some other ASCII character for �.
    While reading data from the CSV file:
    Reader l_fileReader = new InputStreamReader(p_in, "ISO-8859-1");
    Can anyone help me?
    Thanks,
    Ramki.

    Does anyone have any pointers or clues?

  • Special Characters in Oracle

    Hi,
    We have a requirement to load data into Oracle. We are creating a CSV file and loading the data into Oracle tables using SQL*Loader scripts. Certain records in the CSV files contain special characters, for example 20°/ 60°/ 85°. The data looks fine in the CSV file, but when it is loaded into the Oracle tables it is displayed as 20¿/ 60¿/ 85¿. Please share if you have any info on this.
    Character set value:
    SELECT value
    FROM nls_database_parameters
    WHERE parameter ='NLS_CHARACTERSET';
    Result:AL32UTF8
    Regards
    AM

    Hi,
    Where do you load those files? On the server or on the client? You have to look at the NLS_LANG setting on that OS. If it is not set properly, then it defaults to US7ASCII (plain ASCII). So set NLS_LANG properly, or use the CHARACTERSET keyword in the SQL*Loader control file (a sketch follows below). More information in the manual: http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/ldr_control_file.htm#i1005287
    Herald ten Dam
    http://htendam.wordpress.com
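    For reference, a minimal SQL*Loader control file sketch using the CHARACTERSET clause mentioned above, assuming the CSV file itself is saved as UTF-8; the file, table and column names are placeholders:
    -- hypothetical control file; adjust names and the character set to match your file
    LOAD DATA
    CHARACTERSET UTF8
    INFILE 'degrees.csv'
    APPEND
    INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    ( item_code
    , temperature_text CHAR(100)
    )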

  • Load special characters in oracle by using informatica

    Hi All,
    I'm trying to load data from a flat file into an Oracle database table using Informatica PowerCenter 9.1.0, and I have some special characters in the source file. The data loads successfully without any errors, but the special characters are loaded differently, e.g. 1) "Planner –" is loaded as "Planner ��������" and 2) "Háiréch" is loaded as "Hair��������ch". When the same flat file is loaded into another flat file, the data (including the special characters) is loaded correctly, so I am unable to tell whether the problem is with the database or with Informatica.
    Source: flat file, comma (,) delimited; the code page is defined as "UTF-8 encoding of Unicode".
    Relational connection: I have tried changing the code page while creating the relational connection in Informatica to UTF-8, ISO 8859-1 Western European, MS Windows Latin 1, etc.; it didn't work.
    Informatica: the Integration Service and repository code page is defined as UTF-8; the data movement code page of the Integration Service was set to UNICODE.
    Target: Oracle database; NLS_NCHAR_CHARACTERSET: AL16UTF16; NLS_CHARACTERSET: AL32UTF8.
    Thanks
    Sai

  • Problem in retrieving special characters with Oracle 9i JDBC drivers

    hi,
    We are having some problems with retrieving special characters like '�' from the database.
    Our application uses JDK 1.3.1 with Oracle 9i at the back end (version 9.0.1.0.0). We are using the Oracle 9i thin drivers (classes12.zip) for database interaction.
    To retrieve the data from the database we use PreparedStatement in two ways:
    1. Creating a PreparedStatement from the Connection object without any extra parameters and then retrieving the
    data using it. This gives the results in the correct format, i.e. special characters like '�' are returned intact.
    2. Creating the PreparedStatement by passing the following parameters:
    i) ResultSet.TYPE_SCROLL_INSENSITIVE
    ii) ResultSet.CONCUR_READ_ONLY
    In this case we are not able to retrieve special characters like '�' correctly. Instead the ResultSet
    returns 'h'.
    I think this is a problem with the Oracle drivers. Does anyone have any information about this problem?
    rgds

    I don't know exactly (because I am using JDK 1.4 with ojdbc14.jar where these problems seem to be rare...) but you may consider this:
    1. Add nls_charset12.zip to your classpath to ensure that the encoders are present (may or may not help)
    2. Switch to JDK 1.4, and do this:
    Instead of String s = getString(column)
    use
    byte[] bytes = getBytes(column);
    ByteBuffer bb = ByteBuffer.wrap(bytes); // in package java.nio
    CharBuffer cb = Charset.forName("ISO-8859-x").decode(bb); // Charset and CharBuffer are in java.nio.charset
    String s = cb.toString();
    The latter method allows you to perform the encoding/decoding manually.
    3. Change the character encoding in the database to unicode upon database setup.
    4. Try playing with NLS parameters (alter session ...)

  • How to insert & # special characters into oracle table?

    I have a text value which contains special characters such as & and #.
    After I pass it from one page to another,
    TEST& becomes "TEST&amp;"
    (I put quotes around the above, otherwise the amp; part would be mangled by OTN.)
    TEST# becomes TEST (# is truncated).
    Actually the value is saved in the table like this: "TEST&amp;"
    How can I solve this problem?
    How can I insert & into the table without it becoming &amp;?
    Thank you.

    Avoid doing that through a link. If this is a page item, then submit the page and redirect to the target page using a branch. On the target page, create a computation to compute the target item using the original page item as the source. If you are talking about a report, pass the ID through the link and fetch the text column in an on-load computation.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    ------------------------------------------------------------------------------

  • Identify special characters that are not supported by an embeded font

    Hi!
    I'm using embedded fonts as CFF in my Flex 4.5.1 application and I have problems with special characters like Ă, Â, Î, Ș, Ț or Arabic text that are not included in my embedded font. These are displayed in my RichEditableText component using the default font (Arial).
    Is there any way to block the user when he tries to add such characters?
    Or can I identify them before saving, in order to format the text like <Text Font="ExoticFont"...>Hello<Text font-family="Arial">ë</Text></Text> ?

    I think you want to use Font.hasGlyphs.  If you are using the @font-face directive it is hard to get to the Font class so you may wish to switch to using the directive.

  • Handling Special Characters in Oracle JDBC et al

    Hi all
    I am writing a program to do the following:
    a) Download XML data from the internet, by means of URL openConnection etc.
    b) Insert parts of the XML data into an Oracle database.
    Quite simple... but ;)
    However, there is an integrity constraint of NOT NULL and UNIQUE on one of the columns of the database table.
    If the word Galen already exists and the code tries to insert G�len (note: � is a character of its own, NOT a with '),
    it gives an integrity violation.
    I tried the following
    String name = "G�len";/// comming from xml after parsing etc ...
    Statement stmt = conn.createStatement();
    stmt.executeUpdate("INSERT INTO ABC VALUES("+name+")" and also tried
    String name = "G�len";/// comming from xml after parsing etc ...
    PreparedStatement ps = conn.prepareStatement();
    ps.setString(1,name);
    ps.executeUpdate("INSERT INTO ABC VALUES( ? )"); and few other variations to the above procedure ....
    Can anyone tell me what could be the reason? One possible cause could be that Oracle does a transparent character set conversion on all data that it is about to update/insert. If so, what is the workaround, because "Galen" and "G�len" are definitely two different names?
    Also, interestingly, I tried to execute the following query
    SELECT name FROM ABC WHERE name = 'G�len' via JDBC using the above methods, and it returns an empty result set.
    So while doing a query Oracle refuses to accept G�len as an existing word, whereas when the time comes to insert G�len it issues an integrity constraint violation :(
    O/S for client code and Oracle server : Compaq Tru64 UNIX
    Characters set for Compaq Tru64 UNIX V5.0A : ISO8859_1
    Character set for Oracle is US7ASCII

    You should use Prepared statements, it's easier and faster.
    In your code you have mixed it up a bit. You wrote:
    String name = "G�len";/// comming from xml after parsing etc ...
    PreparedStatement ps = conn.prepareStatement();
    ps.setString(1,name);
    ps.executeUpdate("INSERT INTO ABC VALUES( ? )");
    It should be
    String name = "G�len";/// comming from xml after parsing etc ...
    PreparedStatement ps = conn.prepareStatement("INSERT INTO ABC VALUES( ?)");
    ps.setString(1,name);
    ps.executeUpdate();
    In other words, you prepare your statement and then you just use the set methods and execute. If you're looping over the XML file you should prepare the statement once and then use it over and over again. This is one of the strengths of PreparedStatements.
    Let's assume you have all your names in an array called names; use the following code instead:
    PreparedStatement ps = conn.prepareStatement("INSERT INTO ABC VALUES( ? )");
    for (int i = 0; i < names.length; i++) {
        ps.setString(1, names[i]);
        ps.executeUpdate();
    }
    Of course, you might use ArrayList or some other Collection instead, this is just to show how to reuse PreparedStatements.
    /Fredrik

  • Insert Unicode Characters Into Oracle 8.1.5

    Hello,
    First off, here are the specs:
    Oracle 8.1.5
    JDK 1.2.1
    Oracle8i 8.1.6.2.0 JDBC Drivers for use with JDK 1.2.x for Solaris
    I'm running into a problem with inserting Unicode characters into Oracle via the JDBC driver. As you can see above, I am using the Oracle 8.1.6.2.0 JDBC driver because it is the first driver that supports JDK 1.2.x, so I think I should be okay.
    I can retrieve data with special characters from Oracle by calling the getBytes() method on the ResultSet, with all special characters intact. I am using getBytes because calling getString() would throw the following exception: "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv". However, the value that I just retrieved, or any other data with special characters (Unicode) which I try to insert into Oracle, does not get converted properly.
    What appears to be happening is that data with special characters (Unicode) is not treated as single double-byte characters, but rather as two single-byte characters. Thus, Rückschlagventil becomes RC<ckschlagventil once it is inserted. (Hopefully, my example will be rendered properly.)
    According to all the documentation that I have found, the JDBC driver should not have any problem converting UCS2 Java strings to Oracle's UTF8 character set.
    I have set Oracle's NLS_NCHAR_CHARACTERSET to UTF8. I am also setting the environment variable NLS_LANG to AMERICAN_AMERICA.UTF8. Perhaps there is some other environment setting that I am missing?
    Any help would be appreciated,
    Christian

    Import has a lot of options, so it depends on what you want to do.
    C:\> imp help=y
    will show you all possible options. An example of full import :
    C:\> imp <username>/<password>@<TNS alias> file=<DMP file> full=y log=<LOG file>
    Message was edited by:
    Paul M.
    ...and there is always the documentation: http://download-uk.oracle.com/docs/cd/F49540_01/DOC/index.htm

  • Special Characters Check in OBPM

    Is this code good to check for special characters in OBPM 10GR3?
    Pattern p = Pattern.compile("/[a-zA-Z0-9]/g");
         Matcher m = p.matcher("This is a string");
         boolean matched = m.matches();
         logMessage("--Identifying Special Characters--"+matched);
         if (matched == true) {
              logMessage("--Contains Special Character--");
         }

    Here's what I use to fix file names to ensure that they do not have special characters or international characters:
    String beforeConversion = "àÀâÂäÄáÁéÉèÈêÊëËìÌîÎïÏòÒôÔöÖùÙûÛüÜçÇ’ñ";
    String afterConversion = "aAaAaAaAeEeEeEeEiIiIiIoOoOoOuUuUuUcC'n";
    // does it contain international characters?
    if (originalFileNameArg.match(regexp : '/[àÀâÂäÄáÁéÉèÈêÊëËìÌîÎïÏòÒôÔöÖùÙûÛüÜçÇ’ñ]+/').length() > 0) {
        int i = 0;
        String @char;
        while (i < beforeConversion.length()) {
            originalFileNameArg = originalFileNameArg.replace(from : beforeConversion.charAt(position : i),
                                                              @to : afterConversion.charAt(position : i));
            i = i + 1;
        }
    }
    String[] m = originalFileNameArg.split(regexp : '/[^a-zA-Z0-9_,']+/');
    int i = 0;
    boolean first = true;
    foreach (item in m) {
        if (! item.empty) {
            if (first) {
                retValue = item;
            } else {
                retValue = retValue + " " + item;
            }
            first = false;
        }
        i = i + 1;
    }
    return retValue.trim();
    Dan

  • 10g won't recognize ellipsis and other special characters.

    I'm trying to insert data into both VARCHAR2 and CLOB columns that contain characters such as the ellipsis and other special characters. Oracle doesn't recognize the characters and just converts them to garbage. Is there a way to make Oracle recognize these characters?
    For example, it won't recognize the ellipsis in the following string:
    "Here we are … right now."

    My bad... I did look at the national character set instead.
    When I performed the query you posted, I got the following:
    NLS_CHARACTERSET
    WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET
    AL16UTF16
    I hope this new information helps. Also, the tool involved is SQL*Plus, but I'm currently concentrating on getting the insert to work via JDBC. Thanks in advance.
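    One way to see what was actually stored (and therefore whether the problem is on the insert side or the display side) is to DUMP the column; in WE8MSWIN1252 the horizontal ellipsis is byte 133 (0x85), so a correctly stored value shows 85 in the hex dump. This is only a hedged sketch; my_table and my_col are placeholder names:
    -- hypothetical names; CHR(133) is the ellipsis in WE8MSWIN1252
    SELECT my_col,
           DUMP(my_col, 1016) AS byte_dump   -- 1016 = hex byte codes plus character set name
      FROM my_table
     WHERE INSTR(my_col, CHR(133)) > 0;
    If no rows come back even though the data "looks" inserted, the ellipsis was probably converted to a replacement character (often an inverted question mark) before it reached the table, which points at the client NLS_LANG / JDBC encoding rather than the database character set.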

  • Find and replacing special characters

    We have just discovered that, after the database Unicode conversion, some special characters are starting to appear in the database after users copy and paste text into it.
    For example, in one table we can identify them with:
    SQL> select count(*) from table where column_name like '%' || chr(25) || '%';
    COUNT(*)
    15
    This is for one table.
    How can this be done for all tables, since we are not sure which tables contain these characters?
    Thanks

    Hi,
    Please try the PL/SQL block below to identify special characters in a table's VARCHAR2 columns:
    set serveroutput on size 1000000
    declare
      procedure gooey(v_table varchar2, v_column varchar2) is
        type t_id   is table of number;
        type t_dump is table of varchar2(20000);
        type t_data is table of varchar2(20000);
        l_id   t_id;
        l_data t_data;
        l_dump t_dump;
        cursor a is
          select distinct column_name
            from dba_tab_columns
           where table_name = v_table
             and data_type = 'VARCHAR2'
             and column_name not in ('CUSTOMER_KEY','ADDRESS_KEY');
      begin
        for x in a loop
          l_id   := null;
          l_data := null;
          l_dump := null;
          execute immediate
               'SELECT ' || v_column || ', ' || x.column_name || ', '
            || 'dump(' || x.column_name || ')'
            || ' FROM ' || v_table
            || ' WHERE RTRIM((LTRIM(REPLACE(TRANSLATE(' || x.column_name
            || ',''ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789@#$%^&*()_+-=,!\`~{}./?:";''''[ ]'',''A''), ''A'', '''')))) IS NOT NULL'
            bulk collect into l_id, l_data, l_dump;
          if l_id is not null then
            for k in 1..l_id.count loop
              dbms_output.put_line(v_table || ' - ' || x.column_name || ' - '
                                   || to_char(l_id(k),'999999999999'));
              dbms_output.put_line(l_data(k));
              dbms_output.put_line(l_dump(k));
              dbms_output.put_line('*********************');
            end loop;
          end if;
        end loop;
      end gooey;
    begin
      gooey('table1','coln1');
      gooey('table1','coln2');
      gooey('table2','coln3');
    end;
    /
    Like this you can get the special characters for all columns, or a particular column, of a table.
    Thanks,
    Nitin
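    If you only need to locate one known bad character (CHR(25) in your example) across every table, a simpler hedged sketch that walks USER_TAB_COLUMNS may be enough; adjust the dictionary view and add an owner filter if you need to cover other schemas:
    set serveroutput on size 1000000
    declare
      l_cnt number;
    begin
      for c in (select table_name, column_name
                  from user_tab_columns
                 where data_type in ('VARCHAR2', 'CHAR')) loop
        -- count rows in this column that contain chr(25)
        execute immediate
          'select count(*) from "' || c.table_name || '"' ||
          ' where "' || c.column_name || '" like ''%'' || chr(25) || ''%'''
          into l_cnt;
        if l_cnt > 0 then
          dbms_output.put_line(c.table_name || '.' || c.column_name || ': ' || l_cnt || ' row(s)');
        end if;
      end loop;
    end;
    /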

  • MySql / Oracle special characters

    Searched all around, my apologies if this is a redundant thread.
    I have an Oracle (10.2.0.3) database set up to connect to MySql (4.1) database using HSODBC. Using the 3.51 MySql driver. Has a couple bugs. Distributed transactions, erroneous record counts, etc -- but I know from reading other threads that the fixes are coming. There is one problem I can't seem to get around...
    In MySQL, a certain field is defined as CHAR(40). I would think any content shorter than 40 characters is padded with spaces. But when I select the value with pipe characters concatenated on both sides (in MySQL Query Browser), there are no extra spaces. It's behaving more like a VARCHAR on the MySQL end:
    [select concat( '|', field, '|') from table]
    returns '|value|'
    NOT '|value |'
    Now, when I pull this data through to Oracle, there are special characters padding the content to the length of the column. When I pull the ASCII code value, they're 0 NUL (null) characters. Every CHAR field in every table that doesn't fill the column is coming across the database link padded with these NUL characters, which look like little hollow squares (in SqlDeveloper and JDeveloper). Since they're not spaces, I can't TRIM them out. I can remove them with TRANSLATE, but I would have to identify every CHAR field in every table and manually code a TRANSLATE into views against the DB_LINK, and then always pull from the views. There must be an easier way to get around this.
    Has anybody had a similar experience? Is this the version of the MySql driver (3.51) that I'm using? I see they have a version 5 out now. Is this addressed by a 10.2.0.4 patchset? Is that available yet? (Redhat) Is this fixed with 11g?
    Oracle - AL32UTF8, UTF8
    MySql - uses both UTF8 and Latin1, but the default on the source table is Latin1 / Latin_swedish_ci

    The problem is a MySQL ODBC driver issue.
    You have to enable ODBC option 512 (pad char to full length):
    Default behaviour of the MySQL ODBC is not to PAD spaces:
    SQL> select dump( "col1") from "counter"@mysql;
    DUMP("COL1")
    Typ=96 Len=20: 72,101,108,108,111,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    As you can see the char column is padded with 0 instead of spaces (32).
    Add OPTION 512 to the ODBC.INI (if you already have a value set for OPTION, add 512 on top of that value to get your combined value). The ODBC.INI might then look like:
    [mysql]
    Description = MySQL database test
    Driver = /usr/lib/libmyodbc3.so
    #Driver = MySQL ODBC 3.51 Driver
    OPTION = 512
    and then MySQL behaves correctly:
    SQL> select dump( "col1") from "counter"@mysql;
    DUMP("COL1")
    Typ=96 Len=20: 72,101,108,108,111,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32
