Problem converting certain extended ASCII characters

I'm having problems with the extended ASCII characters in the range 128-159. I'm working in a SQL Server environment using Java. I originally hit the problem with characters in this range when I did a 'select char_col from my_table': I always got junk when I tried to retrieve the value from the ResultSet using 'String str = rs.getString(1)'. For example, char_col would contain the ASCII character (in hex) 0x83, but when I retrieved it from the database my str equaled 0x192. I'm aware there is a gap in the range 128-159 in the ISO-8859-1 charset. I've tracked the problem down to a charset issue converting the extended ASCII characters in ISO-8859-1 into Java's Unicode charset.
I looked on the forum and it was suggested to specify the charset when reading from the ResultSet, so I tried 'String str = new String(rs.getBytes(1), "ISO-8859-1")', and it read the characters 128-159 correctly except for five characters (129, 141, 143, 144, 157). Those characters always came back as character 63, i.e. 0x3F. Does anyone know what's happening here? Why didn't these characters work? Is there a workaround for this? I need to use only Java and its default charsets, and I don't want to switch to the Windows Cp1252 charset because I'm using the Java code in a Unix environment as well.
thanks.
-B

Normally your JDBC driver should understand the charset used in the database and use it to produce a correct value from getString(). However, it does sometimes happen that the database was populated by programs in some other language that ignore the database's charset and do their own encoding, bypassing the database's facilities. That problem is often difficult to deal with, because the custodians of those other programs don't have a problem: everything is consistent for them, and they will not allow you to "repair" the database.
I don't mean to say that really is your problem, but it is a possibility. You are using a SQL Server JDBC driver, aren't you? Does its connection URL allow you to specify the charset? If so, try specifying that SQL-Latin1 thing and see if it works.
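For what it's worth, the five values that came back as 0x3F (129, 141, 143, 144, 157) are exactly the positions that windows-1252 leaves undefined, which hints that a Cp1252 conversion is happening somewhere between the column and your String. A minimal sketch of the difference, using only standard Java (no driver involved; the byte value is from your example):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetGap {
    public static void main(String[] args) {
        byte[] b = { (byte) 0x83 };

        // ISO-8859-1 maps every byte 0x00-0xFF straight to U+0000-U+00FF,
        // so 0x83 decodes to the (invisible) control character U+0083
        String latin1 = new String(b, StandardCharsets.ISO_8859_1);
        System.out.println(Integer.toHexString(latin1.charAt(0))); // 83

        // windows-1252 reassigns most of 0x80-0x9F to printable characters:
        // 0x83 becomes U+0192, which is why you saw "0x192"
        String cp1252 = new String(b, Charset.forName("windows-1252"));
        System.out.println(Integer.toHexString(cp1252.charAt(0))); // 192

        // Encoding a character the target charset cannot represent via
        // getBytes() substitutes '?' (0x3F) - the 63 you are seeing
        byte[] back = "\u0192".getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(back[0]); // 63
    }
}
```

Decoding with new String(bytes, "ISO-8859-1") never loses data, so if five bytes still degrade to '?', the damage is most likely done before the bytes reach you, i.e. inside the driver's own Cp1252 step.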

Similar Messages

  • Contains query fails for extended ascii characters

    I have an Oracle 9.2 instance whose characterset is WE8MSWIN1252. I'm using the same characterset on my client. If I have a LONG column that contains extended-ascii characters (the example I'm using has the Euro character '€', but I've seen the same problem with other characters), and I'm using the Intermedia service to index that column, then this select statement returns no records even though it should find several:
    select id from table1 where (contains(long_col,'€',1) > 0);
    However, the same select statement looking for something else, like 'e', works just fine.
    What am I doing wrong? I can do a "like" query against a VARCHAR2 column with a Euro character, and it works correctly. I can do a "dbms_lob.instr" query against a CLOB column with a Euro character, and it also works. It's just the "contains" query against a LONG column that fails.

    There are a number of limitations in using Long datatypes. If you check the SQL Reference you will see: "Oracle Corporation strongly recommends that you convert LONG columns to LOB columns as soon as possible. Creation of new LONG columns is scheduled for desupport.
    LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases."

  • Display extended ascii characters as question mark in xml file

    I am creating an XML file with encoding UTF-8. Some tag values contain extended ASCII characters. When I run the Java program to create the file on Windows, the extended ASCII characters display correctly, but on Linux they display as '?' (question mark).
    I am not able to rectify this. Can anyone help me?
    It's urgent.
    Thanks in advance.
    Message was edited by:
    Rosy_Thomas@Java

    Probably the locale is not set for the shell you are running in. The default 'C' locale uses the ASCII encoding, which defines only 128 characters. See if giving the command export LC_CTYPE=en_US.UTF-8 before starting the program fixes the issue.
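If you cannot control the locale on every machine, a more portable fix is to write the file through a Writer with an explicit UTF-8 encoding instead of relying on the platform default. A minimal sketch (the file name and element are made up for illustration):

```java
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class WriteXmlUtf8 {
    public static void main(String[] args) throws Exception {
        // An OutputStreamWriter with an explicit charset ignores the platform
        // default encoding, so the output is identical on Windows and Linux
        try (Writer w = new OutputStreamWriter(
                new FileOutputStream("out.xml"), StandardCharsets.UTF_8)) {
            w.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
            w.write("<name>Se\u00f1or</name>\n"); // the n-tilde is written as C3 B1
        }
    }
}
```

The same idea applies when reading the file back: pair the FileInputStream with an InputStreamReader that names UTF-8 explicitly.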

  • Would like to use/see extended ascii characters in...

    I've got an E72 and I love it, but I would like to be able to see and use extended ascii characters.  Here is an example: σ__σ .  It looks like an upside-down, mirror-image capital Q on either side of the regular underline character.  To get the character I use ALT-229 on my Windows keyboard.  I can type this into my sms's or emails using ctrl-229, but it looks like a small "a" with a little "u" over it.  It looks the same way when it gets to my Windows email, so the character is not being sent properly from my E72.
    Am I just using the wrong ascii code?  Help!

    I did a little testing.
    SMS:
    when I create a text message to send to myself as a test, I can use ctrl-963 and on my original text message it looks like the lower case sigma, the character I want.
    When I get the text message, the character looks like an upper case sigma (the "M" on its side).
    The character is not supported at all in the regular Nokia Messaging app.
    EMAIL:
    When I attempt to use the character in creating a message, it just gives me a question mark character.
    When I get an email with the character, again it just gives me a question mark character.
    It appears that this is indeed a matter of software.  I-sms will support showing me that character, but not receiving it.  Profimail (the mail app I am using) doesn't support it either way.
    This isn't critical.  I will keep it in the back of my mind when evaluating this kind of app, and try suggesting to the programmers of both apps that supporting the extended character set would be handy, particularly in text messaging.

  • Need to find out extended ASCII characters in database

    Hi All,
    I am looking for a query that can fetch a list of all tables and columns where there is an extended ASCII character (codes 128 to 255). Can anyone help me?
    Regards
    Yadala

    yadala wrote:
    Hi All,
    I am looking for a query that can fetch list of all tables and columns where there is a extended ASCII character (from 128 to 256). Can any one help me?
    Regards
    Yadala
    This should match your requirement:
    select t.TABLE_NAME, t.COLUMN_NAME from ALL_TAB_COLUMNS t
    where length(asciistr(t.TABLE_NAME))!=length(t.TABLE_NAME) 
    or length(asciistr(t.COLUMN_NAME))!=length(t.COLUMN_NAME);
    The ASCIISTR function returns an ASCII version of the string in the database character set.
    Non-ASCII characters are converted to the form \xxxx, where xxxx represents a UTF-16 code unit.
    The CHR function is the opposite of the ASCII function. It returns the character based on the NUMBER code.
    ASCII code 174
    SQL> select CHR(174) from dual;
    CHR(174)
    Ž
    SQL> select ASCII(CHR(174)) from dual;
    ASCII(CHR(174))
                174
    SQL> select ASCIISTR(CHR(174)) from dual;
    ASCIISTR(CHR(174))
    \017D
    ASCII code 74
    SQL> select CHR(74) from dual;
    CHR(74)
    J
    SQL> select ASCII(CHR(74)) from dual;
    ASCII(CHR(74))
                74
    SQL> select ASCIISTR(CHR(74)) from dual;
    ASCIISTR(CHR(74))
    J

  • SQL Developer, UTF8 Oracle DB, extended ascii characters appear as blocks

    I have this value stored on the database:
    (Gestion Económica o Facturaci
    Notice the second word has an extended ascii character in it. When I use SQL Developer on my windows machine to view the data, I get a box in place of the o, kinda like this:
    (Gestion Econ�mica o Facturaci
    If I log on to the AIX server where the oracle database in question is and run sqlplus from there, I see things properly. I also managed to regedit oracle home to get sql plus on my windows machine to display this properly. I still cannot get sql developer to work though...
    Details about sql developer:
    font: arial Unicode MS
    environment encoding: UTF-8
    NLS Lang: American
    NLS Territory: America
    windows regional options:
    English (United States)
    Location: United States
    Database NLS settings:
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     mm/dd/yyyy hh24:mi:ss
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_CHARACTERSET     UTF8
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    Any ideas on how I can fix this? I'd rather NOT log onto the server to run queries. Thanks in advance for your thoughts!
    Edited by: user10939448 on Jan 31, 2012 1:51 PM

    user10939448 wrote:
    This problem is quite strange in that when I've been able to manually set American_america.utf8, things work.
    Sorry to say, but it seems you may have an incorrect setup.
    In general, you should set char set part of NLS_LANG to let Oracle know the code page used by the client. With win-1252, NLS_LANG should include .WE8MSWIN1252.
    The display from sqlplus was "lying", due to incorrectly stored data coupled with an incorrect nls_lang setting (char set part). The pass-through or GIGO scenario can be dangerous this way. Search the Globalization forum for the term 'pass-through' for previous discussions on the theme.
    The setting on AIX servers may be incorrect as well, but it depends how you use it (e.g. for database export or data load with utf-8 encoded files it may be correct).
    The output of the query you recommended looks odd to me:
    (Gestion Econ�mica o Facturaci     Typ=1 Len=30 CharacterSet=UTF8:
    28,47,65,73,74,69,6f,6e,20,45,63,6f,6e,f3,6d,69,63,61,20,6f,20,46,61,63,74,75,72,61,63,69;
    This is the telling part. The 0xF3 is not legal in UTF-8. Actually, the code units for ó, U+00F3 Latin small letter o with acute, are C3 B3. So instead of f3 you should have expected c3,b3 in the dump output.
    >
    So it looks like what's under the covers is correct, but I'm still not seeing the correct character in sql developer.
    The opposite is true. Data is incorrectly stored and SQL Developer is correctly showing you this. Sqlplus is not the best tool in Unicode environments; SQL Developer is better.
    >
    ACP according to my windows registry is 1252. OEMCP is 437.
    Also, if you use database clients in console mode (such as sqlplus), NLS_LANG should include .US8PC437 to properly indicate that the code page in use is 437.
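The c3,b3 point is easy to confirm from plain Java, using nothing beyond the standard charsets:

```java
import java.nio.charset.StandardCharsets;

public class DumpOAcute {
    public static void main(String[] args) {
        // In UTF-8, U+00F3 (o with acute) is the two-byte sequence C3 B3 ...
        byte[] utf8 = "\u00f3".getBytes(StandardCharsets.UTF_8);
        System.out.printf("%02x %02x%n", utf8[0] & 0xFF, utf8[1] & 0xFF); // c3 b3

        // ... while the single byte F3 is what a windows-1252 / ISO-8859-1
        // client stores for the same character; a lone F3 inside a UTF8
        // column is the signature of pass-through corruption
        byte[] latin1 = "\u00f3".getBytes(StandardCharsets.ISO_8859_1);
        System.out.printf("%02x%n", latin1[0] & 0xFF); // f3
    }
}
```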

  • Printing extended ascii characters

    How can I print an extended ASCII character through a Java program if I know its ASCII value? I tried this:
    for (int i=0;i<256;i++)
    System.out.print((char)i);
    and I got the character '?' printed on the console for some of the characters.

    How can i print an extended ascii character through a
    java program if i know its ascii value?I tried this
    for (int i=0;i<256;i++)
    System.out.print((char)i);
    and i got the character '?' printed on the console
    for some of the characters.
    According to this site: http://www.pantz.org/html/symbols-hex/htmlcodes.shtml
    [[[HTML 4.01, ISO 10646, ISO 8879, Latin extended A and B]]]
    The ASCII value 8240 == ‰
    According to this site: http://www.idevelopment.info/data/Programming/ascii_table/PROGRAMMING_ascii_table.shtml
    [[[ISO 10646, ISO 8879, ISO 8859-1 Latin alphabet No. 1]]]
    The ASCII value of 137 == ‰
    It seems like it's a Windows ISO 8859-1 issue.
    From everything I've read it appears as though there are no characters from DEC value of 128-159 inclusive. DEC value 127 is DEL (delete) and DEC value 160 is the non-breaking space in the Latin-1 encoding, or ISO-8859-1.
    I have a program that writes ASCII values to the screen. If I try to force it to print a value in the 150's it fails, returning the symbol --->>> ?
    However it can print every other ASCII value that I've tried (in the output file, the � symbol prints correctly but when it's posted on the forum it doesn't show up...). Here's an input sample:

    Output:
    [ƒ]          [40]          [1]               [402]
    [Ž]          [41]          [1]               [381]
    [€]          [42]          [1]               [8364]
    [«]          [43]          [1]               [171]
    [¶]          [44]          [1]               [182]
    [®]          [45]          [1]               [174]
    [‰]          [47]          [1]               [8240]
    [ÿ]          [49]          [1]               [255]
    [Ö]          [50]          [1]               [214]
    [Ü]          [51]          [1]               [220]
    [¢]          [52]          [1]               [162]
    [£]          [53]          [1]               [163]
    Trying to force (char)155
    out.write( (char)155 + lineSep );
    Prints: ?
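For what it's worth, the '?' comes from System.out encoding each char with the platform default charset and substituting '?' for anything it cannot map. A small sketch of a workaround, assuming the console itself can display UTF-8 output:

```java
import java.io.PrintStream;

public class PrintHighChars {
    public static void main(String[] args) throws Exception {
        // A PrintStream with an explicit encoding bypasses the platform
        // default charset, so no '?' substitution happens on the Java side
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        // (char) 155 is U+009B, an invisible control character; the visible
        // glyph listed as "155" in windows-1252 tables is really U+203A
        out.println("\u203a");   // single right-pointing angle quotation mark
        out.println("\u2030");   // 8240: per-mille sign
    }
}
```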

  • Email can not display Extended Ascii Characters

    Hi everyone,
    In my project there is an option to send product information, as well as the customer address, to a specified email ID. The customer address contains some extended ASCII characters. After sending the mail, when I check the email, those characters have been replaced by '?'.
    I am trying to track this down but still don't know where it is going wrong. If anyone knows anything regarding this problem, please let me know.
    With regards,
    Asif

    I sure hope you get that sorted out. Sounds annoying.
    Right, back to the Java...

  • How to write extended ASCII characters to a file

    Here is a distilled fragment from some larger script. Its purpose is to create a text file containing some characters from the extended ASCII character set.
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | out-file d:\test.txt -Encoding ascii
    What I want in the target file is exactly the 7 characters that make up $String. The above method fails to deliver this result. How can I do it?

    Hi,
    Try using Add-Content or Set-Content instead; Out-File -Encoding ascii forces 7-bit ASCII output, which turns everything above 127 into '?', while Set-Content defaults to the system's ANSI code page:
    $String = "Test" + [char]190 + [char]191 + [char]192
    echo $String | Set-Content .\test.txt

  • Reg Extended Ascii Characters...

    Hi,
    I have state name data with extended ASCII characters, examples of which are given below:
    Bouches-du-Rhône
    Corrèze
    Côte-d''Or
    Côtes-d''Armor
    Finistère
    Hérault
    Isère
    Lozère
    Nièvre
    Puy-de-Dôme
    Pyrénées-Atlantiques
    Pyrénées (Hautes)
    Pyrénées-Orientales
    Rhône
    Saône (Haute)
    Saône-et-Loire
    Sèvres (Deux)
    Vendée
    I need to:
    1) Insert this data into a table. I think I have to use ASCII codes for it, or is there another way?
    2) How will a user be able to search the above states? I.e., what will be the behaviour of the above states when searched with the LIKE operator or an = search?
    3) Will indexes on the state column be used in the above cases?
    4) Any other things that I need to keep in mind while working with such data?
    Thx

    What is your database character set?
    What application(s) will you be using to modify the data?
    What are the NLS_LANG settings on the client machine(s)?
    Assuming that the database character set supports the characters in the first place, and that the client supports them, they should behave just like any other character. Searching, indexing, etc. will all continue to work as normal.
    Justin

  • Can't echo extended ASCII characters to xsetroot -name

    Ok, I'll try to explain this without getting confused.
    I'm using a font with custom glyphs to display little "icons" in my dwm status using xsetroot.
    I got it to work, but I'd like to know what's going on.
    The only way that I could get the glyphs into my script was to:
    1. Open urxvt and type
    echo -e '\xEF' #wifi status glyph
    2. Highlight the resulting character
    3. "Middle click it" into my editor for .xinitrc
    For example, some of the .xinitrc code
    wifi(){
    if iwgetid > /dev/null
    then echo -e "\x03ï\x01"
    else echo -e "\x04ï\x01"
    fi
    }
    xsetroot -name "`wifi`"
    That 'ï' thing, in most fonts, is character 0xEF, my wifi status glyph.
    What I don't understand is that if I simply make
    then echo -e "\x03ï\x01"
    read
    then echo -e "\x03\xEF\x01"
    the glyph will not show up.
    A 7bit character such as 'E' will show up, however, if I make the line
    then echo -e "\x03\x45\x01"
    I tested to see if
    echo -e '\xEF'
    would show up in uxterm, and it didn't. However, I could "middle click" the character right into uxterm from urxvt.
    What is going on and how can I code for the glyph in a non-unicode environment?
    TL;DR: if I run the following code in uxterm it shows boxes, if I run it in urvxt it shows glyphs
    echo -e `echo \\\x{{0..9},{a..f}}{{0..9},{a..f}}`
    Last edited by Slax (2010-09-16 22:42:25)

    Solved this a few days ago.
    I had to set my locale to ISO-8859-1 rather than Unicode, as Unicode will not print ASCII beyond 128.

  • How to query for extended ASCII characters in column value

    Hi All
    Sorry if this has been answered before. I tried searching but none of them seems to work for me.
    I am trying to search for inverted ? in my column.
    I am using the following query
    select *
    from table_name
    where regexp_like (description , '¿' )

    Did you try:
    like '%¿%'
    create table table_name (description varchar2(100))
    insert into table_name values ('jh¿¿gagd')
    insert into table_name values ('jhga345gd')
    insert into table_name values ('j1231232hgagd¿')
    select *
    from table_name
    where regexp_like (description , '¿' )
    DESCRIPTION
    jh¿¿gagd
    j1231232hgagd¿
    select * from table_name
    where description like '%¿%'
    DESCRIPTION
    jh¿¿gagd
    j1231232hgagd¿
    Edited by: user130038 on Sep 8, 2011 12:29 PM

  • Extended ASCII changing to UNICODE in Oracle9i?

    Hello,
    We're just getting to verifying support for our applications against Oracle9i database. Historically, we've been supporting Oracle8 and Oracle8i, and they work just peachy.
    On some of our tables, we have a varchar column that is 255 characters long. We often import data that is exactly 255 chars in length. With 9i, if the data is 255 chars long, and contains any extended ASCII chars (such as degree symbol or plus/minus symbol, both of which we use), that row will fail to be imported. My personal impression is that it is being converted to UNICODE, which of course means that it becomes a two-byte character, and that means that this 255 char string is now 256 chars (bytes, actually, but you know what I mean), and can't be loaded into a varchar(255).
    We are willing to change our schema, but cannot do so until our next release. We need to get this release working on 9i, without changing the schema.
    Is it possible to import (using sqlldr) extended ASCII characters without changing them into Unicode characters?
    I have tried changing my NLS_LANG settings to US7ASCII (which is definitely wrong, it changes the extended chars into zeros) and I have tried WE8MSWIN1252, which does preserve the symbols, but does not preserve the ASCII encoding...
    I have tested the application against a changed schema (just extended the varchar(255) to varchar(265)), so I know it works, but we've already frozen this release, so I can't include the new schema...
    I am totally open to any suggestion that does not involve schema changes...
    Thank you,
    William

    My previous post is not really relevant to your problem.
    What character sets are you using in Oracle 8, Oracle 8i
    and Oracle 9i?
    For example:
    SQL> select * from nls_database_parameters
    2 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
    PARAMETER VALUE
    NLS_CHARACTERSET WE8ISO8859P15
    NLS_NCHAR_CHARACTERSET AL16UTF16
    According to Oracle's documentation,
    up to three character set conversions may be required for data definition language
    (DDL) during an export/import operation:
    1. Export writes export files using the character set specified in the NLS_LANG
    environment variable for the user session. A character set conversion is
    performed if the value of NLS_LANG differs from the database character set.
    2. If the export file's character set is different than the import user session
    character set, then Import converts the character set to its user session character
    set. Import can only perform this conversion for single-byte character sets. This
    means that for multibyte character sets, the import file's character set must be
    identical to the export file's character set.
    3. A final character set conversion may be performed if the target database's
    character set is different from the character set used by the import user session.
    To minimize data loss due to character set conversions, ensure that the export
    database, the export user session, the import user session, and the import database
    all use the same character set.

  • Convert smart quotes and other high ascii characters to HTML

    I'd like to set up Dreamweaver CS4 Mac to automatically convert smart quotes and other high ASCII characters (em dashes, accent marks, etc.) pasted from MS Word into HTML code. Dreamweaver 8 used to do this by default, but I can't find a way to set up a similar auto-conversion in CS4. Is this possible? If not, it really should be a preference option. I code a lot of HTML emails, and it is very time-consuming to convert every curly quote and dash.
    Thanks,
    Robert
    Digital Arts

    I too am having a related problem with Dreamweaver CS5 (running under Windows XP), having just upgraded from CS4 (which works fine for me) this week.
    In my case, I like to convert to typographic quotes etc. in my text editor, where I can use macros I've written to speed the conversion process. So my preferred method is to key in typographic letters & symbols by hand (using ALT + ASCII key codes typed in on the numeric keypad) in my text editor, and then I copy and paste my *plain* ASCII text (no formatting other than line feeds & carriage returns) into DW's DESIGN view. DW displays my high-ASCII characters just fine in DESIGN view, and writes the proper HTML code for the character into the source code (which is where I mostly work in DW).
    I've been doing it this way for years (first with GoLive, and then with DW CS4) and never encountered any problems until this week, when I upgraded to DW CS5.
    But the problem I'm having may be somewhat different than what others have complained of here.
    In my case, some high-ASCII (above 128) characters convert to HTML just fine, while others do not.
    E.g., en and em dashes in my cut-and-paste text show as such in DESIGN mode, and the right entries
        &ndash;
        &mdash;
    turn up in the source code. Same is true for the ampersand
        &amp;
    and the copyright symbol
        &copy;
    and for such foreign letters as the e with acute accent (ALT+0233)
        &eacute;
    What does NOT display or code correctly are the typographic quotes. E.g., when I paste in (or special paste; it doesn't seem to make any difference which I use for this) text with typographic double quotes (ALT+0147 for open quote mark and ALT+0148 for close quote mark), which should appear in source code as
        &ldquo;[...]&rdquo;
    DW strips out the ASCII encoding, displaying the inch marks in DESIGN mode, and putting this
        &quot;[...]&quot;
    in my source code.
    The typographic apostrophe (ALT+0146) is treated differently still. The text I copy & paste into DW should appear as
        [...]&rsquo;[...]
    in the source code, but instead I get the foot mark (both in DESIGN and CODE views):
    I've tried adjusting the various DW settings for "encoding"
        MODIFY > PAGE PROPERTIES > TITLE/ENCODING > Encoding:
    and for fonts
        EDIT > PREFERENCES > FONTS
    but switching from "Unicode (UTF-8)" to "Western European" hasn't solved the problem (probably because in my case many of the higher ASCII characters convert just fine). So I don't think it's the encoding scheme I use that's the problem.
    Whatever the problem is, it's caused me enough headaches and time lost troubleshooting that I'm planning to revert to CS4 as soon as I post this.
    Deborah

  • How can I convert ASCII characters to ISO8859?

    Hi All,
    I have written a little application that renames a TV episode by scraping a TV listing site for the episode name. It is written in SWT and works great apart from one small problem. When getting the HTML back from the site, it sometimes contains special characters that are not in the ISO8859 (Windows filesystem) character set.
    For example, this is the line that I have to parse:
    <td style='padding-left: 6px;' class='b2'><a href='/Prison_Break/episodes/569183/03x01'>Orientaci��n</a></td>
    When viewing it in a browser, it is:
    <td style="padding-left: 6px;" class="b2"><a href="/Prison_Break/episodes/569183/03x01">Orientaci�n</a></td>
    Notice that the o in the title has an accent on it. While researching this problem I stumbled across the 'HTML Entities to ISO 8859-1 Converter' at http://www.inweb.de/chetan/English/Resources/Java/HTML%202%20ISO.html. This open source project takes in an HTML entity like &amp; and returns '&'.
    So that is not quite what I want, as my BufferedReader is converting the HTML entity into the character representation already. I need a way of detecting a non-ISO8859 character within a string, and hopefully replacing it with its natural 'equivalent' (which would be o in this case).
    Does anyone know how I could do it without having to check for every special char and replacing (not really an option unless someone has done it before!!)
    If not that then, perhaps another way to attack the problem?
    Any help greatly appreciated ;)
    Dave

    Hi,
    NZ_Dave wrote:
    For example, this is the line that I have to parse:
    <td style='padding-left: 6px;' class='b2'><a href='/Prison_Break/episodes/569183/03x01'>Orientaci��n</a></td>
    This is coded in UTF-8. If you convert the bytes to a String using the UTF-8 encoding, then you will have the correct characters "Orientaci�n" in the string.
    Check your parser where it converts the bytes (coming from e.g. an InputStream) to characters. Use UTF-8 as the charset when doing that conversion.
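A sketch of that conversion, with a byte array standing in for the HTTP stream (the networking code is omitted):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class Utf8Read {
    public static void main(String[] args) throws Exception {
        // Simulate the response body: "Orientación" encoded as UTF-8,
        // where the o-acute is the two-byte sequence C3 B3
        byte[] body = "Orientaci\u00f3n".getBytes(StandardCharsets.UTF_8);
        InputStream in = new ByteArrayInputStream(body);

        // Wrong: new InputStreamReader(in) uses the platform default charset,
        // which on an ISO-8859-1 system turns the C3 B3 pair into two chars.
        // Right: name the charset the server actually sent.
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8));
        System.out.println(reader.readLine()); // Orientación
    }
}
```

With the charset named explicitly, the BufferedReader hands back the accented title intact instead of the two mojibake characters a default Latin-1 decode would produce.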
