How to query for extended ASCII characters in a column value

Hi All,
Sorry if this has been answered before. I tried searching, but none of the answers I found seem to work for me.
I am trying to search for the inverted question mark (¿) in my column.
I am using the following query:
select *
from table_name
where regexp_like (description , '¿' )

Did you try:
like '%¿%'
create table table_name (description varchar2(100))
insert into table_name values ('jh¿¿gagd')
insert into table_name values ('jhga345gd')
insert into table_name values ('j1231232hgagd¿')
select *
from table_name
where regexp_like (description , '¿' )
DESCRIPTION
jh¿¿gagd
j1231232hgagd¿
select * from table_name
where description like '%¿%'
DESCRIPTION
jh¿¿gagd
j1231232hgagd¿
Edited by: user130038 on Sep 8, 2011 12:29 PM
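For completeness, a hedged variation that is not from the thread: matching the inverted question mark by its Unicode code point instead of pasting the literal character, which avoids depending on the script's own encoding.
-- unistr('\00BF') builds '¿' from its code point (U+00BF), so the predicate does
-- not rely on the literal character surviving copy/paste or file-encoding changes.
select *
from table_name
where instr(description, unistr('\00BF')) > 0;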

Similar Messages

  • Contains query fails for extended ascii characters

    I have an Oracle 9.2 instance whose characterset is WE8MSWIN1252. I'm using the same characterset on my client. If I have a LONG column that contains extended-ascii characters (the example I'm using has the Euro character '€', but I've seen the same problem with other characters), and I'm using the Intermedia service to index that column, then this select statement returns no records even though it should find several:
    select id from table1 where (contains(long_col,'€',1) > 0);
    However, the same select statement looking for something else, like 'e', works just fine.
    What am I doing wrong? I can do a "like" query against a VARCHAR2 column with a Euro character, and it works correctly. I can do a "dbms_lob.instr" query against a CLOB column with a Euro character, and it also works. It's just the "contains" query against a LONG column that fails.

    There are a number of limitations in using Long datatypes. If you check the SQL Reference you will see: "Oracle Corporation strongly recommends that you convert LONG columns to LOB columns as soon as possible. Creation of new LONG columns is scheduled for desupport.
    LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."
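    As a hedged illustration of that advice (reusing the names from the question above; the exact DDL is an assumption, not from the thread), the LONG column could be converted in place to a CLOB and the Oracle Text index recreated on it, after which the original CONTAINS query applies unchanged:
    -- Assumes the table/column names from the question (table1.long_col); dropping
    -- and recreating the Oracle Text index around the conversion is left out here.
    alter table table1 modify (long_col clob);
    -- Once the column is a CLOB and its index has been rebuilt:
    select id from table1 where (contains(long_col,'€',1) > 0);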

  • Single query for displaying all but 1 column values for all tables

    Hi,
    All the tables have a SYS_CREATION_DATE column.
    But I don't want to display this column's value.
    Can someone suggest a way in which I could achieve this?
    Oracle version: 11gR1
    OS: SunOS
    Cheers,
    Kunwar
    Edited by: user9131570 on Jul 6, 2010 7:57 PM

    user9131570 wrote:
    @Tubby
    I want to display, table-wise, the values of all but one column (SYS_CREATION_DATE) in my database.
    I need this in order to compare it to another database for all these values.
    Let me make a wild guess at what you are getting at.
    Given these two tables
    create table emp
       (empid number,
        empname varchar2(15),
        empaddr   varchar2(15),
        sys_creation_date date);
    create table dept
       (deptid number,
        deptmgr varchar2(10),
        sys_creation_date date);
    you want to somehow combine
    select empid,
             empname,
             empaddr
    from emp;
    with
    select deptid,
             deptmgr
    from dept;
    into a single sql statement?
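    If that is the goal, here is a minimal sketch (an assumption about the intent, not taken from the thread) that generates one such SELECT per table while excluding SYS_CREATION_DATE; it uses plain PL/SQL loops, so it also runs on 11gR1:
    -- Prints, for every table in the current schema that has a SYS_CREATION_DATE
    -- column, a SELECT statement listing all of its other columns.
    set serveroutput on
    declare
      l_cols varchar2(32767);
    begin
      for t in (select distinct table_name
                  from user_tab_columns
                 where column_name = 'SYS_CREATION_DATE')
      loop
        l_cols := null;
        for c in (select column_name
                    from user_tab_columns
                   where table_name = t.table_name
                     and column_name <> 'SYS_CREATION_DATE'
                   order by column_id)
        loop
          l_cols := l_cols || ', ' || c.column_name;
        end loop;
        dbms_output.put_line('select ' || ltrim(l_cols, ', ') || ' from ' || t.table_name || ';');
      end loop;
    end;
    /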

  • Problem converting certain extended ascii characters

    I'm having problems with the extended ascii characters in the range 128-159. I'm working in a SQL Server environment using Java. I originally had problems with characters in the range 128-159: when I did a 'select char_col from my_table', I always got junk when I tried to retrieve it from the ResultSet using the code 'String str = rs.getString(1)'. For example, char_col would have the ascii character (in hex) '0x83', but when I retrieved it from the database, my str equaled '0x192'. I'm aware there is a gap in the range 128-159 in the ISO-8859-1 charset. I've tracked the problem down to a charset issue converting the extended ascii characters in ISO-8859-1 into Java's Unicode charset.
    I looked on the forum and it said to try specifying the charset when retrieving it from the ResultSet, so I did 'String str = new String(rs.getBytes(1), "ISO-8859-1")' and it was able to read the characters 128-159 correctly except for five characters (129, 141, 143, 144, 157). These characters always returned the character 63, or 0x3f. Does anyone know what's happening here? How come these characters didn't work? Is there a workaround for this? I need to use only Java and its default charsets, and I don't want to switch to the Windows Cp1252 charset because I'm using the Java code in a Unix environment as well.
    thanks.
    -B

    Normally your JDBC driver should understand the charset used in the database, and it should use that charset to produce a correct value for the result of getString(). However it does sometimes happen that the database is created by programs in some other language that ignore the database's charset and do their own encoding, bypassing the database's facilities. It is often difficult to deal with that problem, because the custodians of those other programs don't have a problem, everything is consistent for them, and they will not allow you to "repair" the database.
    I don't mean to say that really is your problem, it is a possibility though. You are using an SQL Server JDBC driver, aren't you? Does its connection URL allow you to specify the charset? If so, try specifying that SQL-Latin1 thing and see if it works.

  • Display extended ascii characters as question mark in xml file

    I am creating an XML file with UTF-8 encoding. Some tag values contain extended ascii characters. When I run the Java program to create the file on Windows, the extended ascii characters are displayed correctly. But on Linux they are displayed as ? (question mark).
    I am not able to rectify this. Can anyone help me?
    It's urgent.
    Thanks in advance.
    Message was edited by:
    Rosy_Thomas@Java

    Probably the locale is not set for the shell you are running in. The default 'C' locale uses the ASCII encoding, which defines only 128 characters. See if giving the command export LC_CTYPE=en_US.UTF-8 before starting the program fixes the issue.

  • Would like to use/see extended ascii characters in...

    I've got an E72 and I love it, but I would like to be able to see and use extended ascii characters.  Here is an example: σ__σ .  It looks like an upside-down, mirror-image capital Q on either side of the regular underline character.  To get the character I use ALT-229 on my Windows keyboard.  I can type this into my sms's or emails using ctrl-229, but it looks like a small "a" with a little "u" over it.  It looks the same way when it gets to my Windows email, so the character is not being sent properly from my E72.
    Am I just using the wrong ascii code?  Help!

    I did a little testing.
    SMS:
    when I create a text message to send to myself as a test, I can use ctrl-963 and on my original text message it looks like the lower case sigma, the character I want.
    When I get the text message, the character looks like an upper case sigma (the "M" on its side).
    The character is not supported at all in the regular Nokia Messaging app.
    EMAIL:
    When I attempt to use the character in creating a message, it just gives me a question mark character.
    When I get an email with the character, again it just gives me a question mark character.
    It appears that this is indeed a matter of software.  I-sms will support showing me that character, but not receiving it.  Profimail (the mail app I am using) doesn't support it either way.
    This isn't critical.  I will keep it in the back of my mind when evaluating this kind of app, and try suggesting to the programmers of both apps that supporting the extended character set would be handy, particularly in text messaging.

  • How to write extended ASCII characters to a file

    Here is a distilled fragment from some larger script. Its purpose is to create a text file containing some characters from the extended ASCII character set.
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | out-file d:\test.txt -Encoding ascii
    What I want in the target file is exactly the 7 characters that make up $String. The above method fails to deliver this result. How can I do it?

    Hi,
    Try using Add-Content or Set-Content instead:
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | Set-Content .\test.txt

  • How to query for unacceptable characters

    Hi there. I can query for acceptable characters but would like to query for unacceptable characters. Please see the thread below. Thanks in advance!
    How to find Special Characters in a table ?
    ~Darby

    Sorry, missed it in the other post (already Saturday here). So, without using regular expressions. I'm a little confused now, I must say.
    You want to search for unacceptable characters (I assume those characters are known this time):
    length(the_column) - length(replace(translate(the_column,'UNACEPTBL','*'),'*','')) > 0
    Regards
    Etbin
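    A runnable sketch of that approach, using made-up data and an illustrative "unacceptable" set of ¿, ° and § (none of these names or characters come from the thread itself):
    -- '#' is only a marker: TRANSLATE maps the first unacceptable character to '#'
    -- and deletes the rest, REPLACE then strips the markers, and any resulting
    -- length difference means the row contains an unacceptable character.
    with sample_data as (
      select 'clean text' as description from dual union all
      select 'has a ¿ in it' from dual union all
      select 'degrees: 25°C' from dual
    )
    select description
      from sample_data
     where length(description)
           - length(replace(translate(description, '¿°§', '#'), '#', '')) > 0;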

  • SQL Developer, UTF8 Oracle DB, extended ascii characters appear as blocks

    I have this value stored on the database:
    (Gestion Económica o Facturaci
    Notice the second word has an extended ascii character in it. When I use SQL Developer on my windows machine to view the data, I get a box in place of the o, kinda like this:
    (Gestion Econ�mica o Facturaci
    If I log on to the AIX server where the oracle database in question is and run sqlplus from there, I see things properly. I also managed to regedit oracle home to get sql plus on my windows machine to display this properly. I still cannot get sql developer to work though...
    Details about sql developer:
    font: arial Unicode MS
    environment encoding: UTF-8
    NLS Lang: American
    NLS Territory: America
    windows regional options:
    English (United States)
    Location: United States
    Database NLS settings:
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     mm/dd/yyyy hh24:mi:ss
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_CHARACTERSET     UTF8
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    Any ideas on how I can fix this? I'd rather NOT log onto the server to run queries... Thanks in advance for your thoughts!
    Edited by: user10939448 on Jan 31, 2012 1:51 PM

    user10939448 wrote:
    This problem is quite strange in that when I've been able to manually set American_america.utf8, things work.
    Sorry to say, but it seems you may have an incorrect setup.
    In general, you should set char set part of NLS_LANG to let Oracle know the code page used by the client. With win-1252, NLS_LANG should include .WE8MSWIN1252.
    The display from sqlplus was "lying", due to incorrectly stored data coupled by incorrect nls_lang setting (char set part). The pass-through or gigo scenario can be dangerous this way. Search the Globalization forum for the term 'pass-through' for previous discussions on the theme.
    The setting on AIX servers may be incorrect as well, but it depends how you use it (e.g. for database export or data load with utf-8 encoded files it may be correct).
    > The output of the query you recommended looks odd to me:
    > (Gestion Econ�mica o Facturaci     Typ=1 Len=30 CharacterSet=UTF8:
    > 28,47,65,73,74,69,6f,6e,20,45,63,6f,6e,f3,6d,69,63,61,20,6f,20,46,61,63,74,75,72,61,63,69;
    This is the telling part. The 0xF3 is not legal in UTF8. Actually, the code units for ó, U+00F3 Latin small letter o with acute, are C3 B3. So instead of f3 you should have expected c3,b3 from the dump output.
    > So it looks like what's under the covers is correct, but I'm still not seeing the correct character in sql developer.
    The opposite is true. Data is incorrectly stored and SQL Developer is correctly showing you this. Sqlplus is not the best tool in Unicode environments; SQL Developer is better.
    > ACP according to my windows registry is 1252. OEMCP is 437.
    Also, if you use database clients in console mode (such as sqlplus), NLS_LANG should include .US8PC437 to properly indicate that the code page in use is 437.
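    For reference, a hedged sketch of the byte-level check being discussed (the table and column names are placeholders, not from the thread):
    -- Placeholder names: my_table.description. DUMP with format 1016 shows the
    -- stored bytes in hex along with the column's character set, which is how you
    -- can tell whether 'ó' was stored as UTF8 (c3,b3) or as a stray single-byte
    -- Windows-1252 value (f3) that got in via pass-through.
    select description,
           dump(description, 1016) as stored_bytes
      from my_table
     where description like 'Gestion Econ%';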

  • Need to find out extended ASCII characters in database

    Hi All,
    I am looking for a query that can fetch a list of all tables and columns where there is an extended ASCII character (from 128 to 255). Can anyone help me?
    Regards
    Yadala

    yadala wrote:
    Hi All,
    I am looking for a query that can fetch a list of all tables and columns where there is an extended ASCII character (from 128 to 255). Can anyone help me?
    Regards
    Yadala
    This should match your requirement:
    select t.TABLE_NAME, t.COLUMN_NAME from ALL_TAB_COLUMNS t
    where length(asciistr(t.TABLE_NAME))!=length(t.TABLE_NAME) 
    or length(asciistr(t.COLUMN_NAME))!=length(t.COLUMN_NAME);
    The ASCIISTR function returns an ASCII version of the string in the database character set.
    Non-ASCII characters are converted to the form \xxxx, where xxxx represents a UTF-16 code unit.
    The CHR function is the opposite of the ASCII function. It returns the character based on the NUMBER code.
    ASCII code 174
    SQL> select CHR(174) from dual;
    CHR(174)
    Ž
    SQL> select ASCII(CHR(174)) from dual;
    ASCII(CHR(174))
                174
    SQL> select ASCIISTR(CHR(174)) from dual;
    ASCIISTR(CHR(174))
    \017D
    ASCII code 74
    SQL> select CHR(74) from dual;
    CHR(74)
    J
    SQL> select ASCII(CHR(74)) from dual;
    ASCII(CHR(74))
                74
    SQL> select ASCIISTR(CHR(74)) from dual;
    ASCIISTR(CHR(74))
    J
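    Note that the query above inspects table and column names in the dictionary; if the data stored in a particular column also needs to be scanned, the same ASCIISTR length comparison can be applied to it directly. A minimal sketch, with a hypothetical table and column:
    -- Hypothetical names: customer.cust_name. ASCIISTR expands any non-ASCII
    -- character to a \xxxx escape, so a length difference means the value contains
    -- at least one character outside the 7-bit ASCII range.
    select cust_name
      from customer
     where length(asciistr(cust_name)) != length(cust_name);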

  • Printing extended ascii characters

    How can I print an extended ascii character through a Java program if I know its ascii value? I tried this:
    for (int i=0;i<256;i++)
    System.out.print((char)i);
    and I got the character '?' printed on the console for some of the characters.

    How can I print an extended ascii character through a Java program if I know its ascii value? I tried this:
    for (int i=0;i<256;i++)
    System.out.print((char)i);
    and I got the character '?' printed on the console for some of the characters.
    According to this site: http://www.pantz.org/html/symbols-hex/htmlcodes.shtml
    [[[HTML 4.01, ISO 10646, ISO 8879, Latin extended A and B]]]
    The ASCII value 8240 == �
    According to this site: http://www.idevelopment.info/data/Programming/ascii_table/PROGRAMMING_ascii_table.shtml
    [[[ISO 10646, ISO 8879, ISO 8859-1 Latin alphabet No. 1]]]
    The ASCII value of 137 == �
    It seems like it's a Windows ISO 8859-1 issue.
    From everything I've read it appears as though there are no characters from DEC value of 128-159 inclusive. DEC value 127 is DEL delete and DEC value 160 is � in the Latin-1 encoding or ISO-8859-1.
    I have a program that writes ASCII values to the screen. If I try to force it to print a value in the 150's it fails, returning the symbol --->>> ?
    However it can print every other ASCII value that I've tried (in the output file, the � symbol prints correctly but when it's posted on the forum it doesn't show up...). Here's an input sample:

    Output:
    [�]          [40]          [1]               [402]
    [�]          [41]          [1]               [381]
    [�]          [42]          [1]               [8364]
    [�]          [43]          [1]               [171]
    [�]          [44]          [1]               [182]
    [�]          [45]          [1]               [174]
    [�]          [47]          [1]               [8240]
    [�]          [49]          [1]               [255]
    [�]          [50]          [1]               [214]
    [�]          [51]          [1]               [220]
    [�]          [52]          [1]               [162]
    [�]          [53]          [1]               [163]
    Trying to force (char)155
    out.write( (char)155 + lineSep );
    Prints: ?

  • How to query for foreign encoded text via REST service

    I am using APEX 4.2.1.00.08 to publish RESTful web services that query a table of place names. The place names include foreign encoded alternates. My search field has type of NVARCHAR2 and the db has NLS_CHARACTERSET of AL32UTF8. Querying for foreign encoded names works fine from other applications (e.g. TOAD, ArcGIS).
    In APEX under 'SQL Workshop | RESTful Services' I have created a module, resource template, and resource handler. When I use 'Set Bind Variables' to test the service, it works for 'English' names with no problem (e.g. 'London').
    However, when I query for a foreign equivalent like 'ロンドン', the Japanese version of 'London', I get the following error:
    400 - Bad Request
    The request path contains illegal characters
    How do I get foreign encoded place names to work as the bind variable value of a REST service?

    Does this query help?
    select /*+ parallel(4) */
    pk.table_name parent_table_name, pk.constraint_name pkey_constraint,
    fk.table_name child_table_name, fk.constraint_name fkey_constraint, fk.r_constraint_name
    from
    user_constraints pk,
    user_constraints fk
    where
    pk.constraint_name = fk.r_constraint_name
    and pk.constraint_type='P'
    and fk.constraint_type='R'

  • How to Query for Hidden Chars

    Hello -
    I have a large number of E-mail addresses that were corrupted with hidden characters. I have a bunch of question marks, squares, etc. mixed in with the E-mail addresses. Is there a way to query for these values?
    Thanks in advance for guidance.

    Assuming you can identify the numeric range of the invalid characters (i.e. ASCII 0-31) and the numeric range of the valid characters, you have some options. There is an example of replacing all the invalid characters in a string using the TRANSLATE command here
    http://www.ddbcinc.com/askDDBC/topic.asp?TOPIC_ID=216
    (note that you have to be able to define functions that return the invalid characters and the valid characters). You can find out where there are invalid characters by comparing the length of the raw column to the length of the translated column
    SELECT *
      FROM <<table>>
    WHERE LENGTH(email) != LENGTH( TRANSLATE( email, safeCharacters || nonPrintingCharacters, safeCharacters ) )
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
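    A concrete, hedged version of that comparison, with the placeholder character sets spelled out (the table name, column name and control-character subset are all assumptions):
    -- Assumed table/column: contacts.email. 'x' stands in for the "safe" characters
    -- so TRANSLATE always gets a non-empty replacement string; the listed control
    -- characters (a representative subset of CHR(1)-CHR(31)) are deleted, so any
    -- length difference means a hidden character is present.
    SELECT email
      FROM contacts
     WHERE LENGTH(email) !=
           LENGTH(TRANSLATE(email,
                            'x' || CHR(9) || CHR(10) || CHR(13) || CHR(26) || CHR(31),
                            'x'));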

  • Validation for non-ASCII characters

    Hi all,
    Requirement: I have to apply a validation on fields like Name and Address in applicationdefination.xml. When a user types non-ASCII characters and navigates to the next page, it should display an error message. Thus, I have to restrict my users to ASCII values only.
    Present situation: I'm using a regular expression for this problem. In JHeadstart there is a Regular Expression option under the Validation heading. I have written the following values in the Regular Expression and Regular Expression Error Message options.
    Regular Expression
    ^\s*[\w\.\,\-\_\(\)\#\'\/\\\u0022\u0026\*\;\:\s]+\s*$
    Regular Expression Error Message
    It is important to note that foreign characters are not accepted on our system. Please ensure only standard English letters are entered
    Since I was getting an error in the jspx page due to the double quote (") and ampersand (&), I have replaced the double quote (") and ampersand (&) with their Unicode escapes. Thus, the expression has become ^\s*[\w\.\,\-\_\(\)\#\'\/\\\u0022\u0026\*\;\:\s]+\s*$.
    This expression is validating many characters like Ã, µ, Ç, Ï, Ö, §, ¥, {, } but not all non-ASCII characters like ѓ є ѕ ї Њ Щ Ώ Ω Ϊ Ά Ή Θ Λ Ξ Π τ ẫ ờ Ỡ Ứ Ỷ ự Ẁ ỹ ị Ọ ň ũ ť ţ Έ Ϊ ﻍ. Thus, it's not fulfilling the requirement.
    Please suggest a valid solution to this problem. It’s very urgent.

    Hi,
    The validation seems to be performed in Java or Javascript depending on the layout (I'm sorry I can't remember the exact details). The expression suggested above by theEternalStudent works very well in Java, but not in Javascript.
    We came up with an expression which works in both. It rejects strings which contain &# by doing a lookahead before the main pattern - you might want to expand this to look for &#nnn; but for our purposes &# is enough.
    Here is the "platform neutral" solution:
    (?!.*\u0026#.*)^[\w\.\,\-\_\(\)\#\'\/\\\u0022\u0026\*\;\:\s]+$
    I think in future we will write a javascript function and amend the templates to call it directly.
    thanks,
    Michael

  • How to query for two tables

    Hi,
    I have this problem: how can I query two tables?
    Table A ->  Table B
    id               table-a_id
    name          table_b_name
    The relationship is one-to-many.
    How can I get the result?
    Hope my question makes sense.
    cheers.
    thanks a lot.

    I bet you'd have more luck looking for an answer in a SQL forum.
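    For what it's worth, a minimal join sketch for that one-to-many shape (all names are guesses based on the rough description in the question):
    -- Assumed columns: table_a(id, name) and table_b(table_a_id, table_b_name).
    -- Each table_a row is repeated once per matching table_b row.
    select a.id,
           a.name,
           b.table_b_name
      from table_a a
      join table_b b
        on b.table_a_id = a.id;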
