Printed output is randomly garbled with nonsense characters and symbols

When printing from what we believe to be a PDF display, the printed output is often, but unpredictably, garbled. It may be the second and subsequent pages, only one page, the first one, etc. This happens often, but not exclusively, when printing a bank-generated PDF from their website, but it also occurs with documents opened or created locally on our machine. I will try to attach a sample page using your 'link' symbol above: I can't attach a scanned document (PDF) from my system (now why would I have thought this would be the case?.....) Can anyone help me get this document attached to this email?.......Okay! I've jumped through every hoop and there is no way to 'attach' a scanned document from my computer. Are you people real, or is this a nightmare? Totally nonfunctional.
If anyone (human) gets this, please contact me at [email protected] or my cell 480-235-1868, and thank you for trying. I am wasting paper and ink by the dozens of dollars every week on this problem, so any help would be appreciated. bill matney

For others who might have the same problem: you can solve it by quoting the device name and/or the printer name.
!p """BÆR""" * """\\print\bærum""/q" * 50 *
This, however, causes the printer name to be '"BÆR"' with the double quotes included in the name. Thus, the job card must contain the unfortunate triple quotes as well: -z"""BÆR""".
Vegard

Similar Messages

  • Problems with non-ASCII characters on Linux Unit Test Import

    I found a problem with non-ASCII characters in the Unit Test Import for Linux. This problem does not appear in the Unit Test Import for Windows.
    I have attached a Unit Test export called PROC1.XML. It tests a procedure that is included in another attachment called PROC1.txt. The unit test includes 2 implementations. Both implementations pass non-ASCII characters to the procedure and return them unchanged.
    In Linux, the unit test import changes the non-ASCII characters in the XML file to U+FFFD (the Unicode replacement character). If I copy/paste the non-ASCII characters into the Unit Test after the import, they are stored and executed correctly.
    Amazon Ubuntu 3.13.0-45-generic / lubuntu-core
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
    In Windows, the unit test will import the non-ASCII characters unchanged from the XML file.
    Windows 7 Home Premium, Service Pack 1
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
    If SQL*Developer is coded the same on Windows and Linux, the JVM must be causing the problem.
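    (For anyone hitting similar U+FFFD corruption: the JVM's default charset depends on platform and locale, so reading a UTF-8 file without naming the charset replaces unmappable input with U+FFFD. Below is a minimal, hypothetical Java sketch; PROC1.XML stands in for any UTF-8 file. If the defaults really do differ between the two machines above, launching with -Dfile.encoding=UTF-8 is one way to test the theory.)
        import java.io.BufferedReader;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.charset.Charset;
        import java.nio.charset.StandardCharsets;

        public class CharsetDemo {
            public static void main(String[] args) throws IOException {
                // Differs between JVMs and locales (e.g. UTF-8 vs POSIX/ASCII).
                System.out.println("Default charset: " + Charset.defaultCharset());
                // No charset given: bytes are decoded with the platform default,
                // and unmappable sequences become U+FFFD.
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(new FileInputStream("PROC1.XML")))) {
                    System.out.println(r.readLine());
                }
                // Charset named explicitly: the non-ASCII characters survive.
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(new FileInputStream("PROC1.XML"),
                                StandardCharsets.UTF_8))) {
                    System.out.println(r.readLine());
                }
            }
        }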


  • Playlists containing songs with titles starting with non-alphabetical characters do not get transferred at all

    I was really happy about how the desktop manager lets me choose the songs from my iTunes library by playlists! This is smart! I'm also happy to see the PlayBook render Asian characters properly (unlike my experience on the Bold!)
    However, I have a lot of songs whose titles begin with non-alphabetical characters, i.e. Asian characters. For every playlist that contains such songs, those songs do not get transferred onto the PlayBook AT ALL. For example, I have one playlist that contains only songs with titles beginning with Asian characters; this playlist was transferred, but it is shown as containing zero songs! When I browse my song library by artist/album, I can find all the songs that should be in those playlists.
    Can this be fixed?
    thanks.

    As it turns out, it kind of fixed itself. What I did (I think, since I have not definitively verified it) was go to browse "all songs", where I noticed that all those songs starting with Asian characters seemed to show correctly. Then I went back to the playlists in question, and voilà!
    Clearly there is a bug or two in the music player somewhere regarding these title characters. But RIM has more urgent things to resolve, I guess - like my auto screen lock, which is still not working!

  • Publishing pages with non-ASCII characters to folder

    There is a weirdness/anachronism in the file names exported from iWeb '08.
    I have a page "Mökin katto" (that is Finnish).
    "ls" in Terminal shows the exported folder ok, like
    Mökin_katto.html
    Mökin_katto_files/
    Tarring the folder and uploading it to my Apache web server did not work so well, though. I changed the server to operate in a UTF-8 locale and also to use UTF-8 as the default character set. Still no success. At this point I wanted to check the exported folder at the source (my MacBook). Tab completion didn't work:
    % Mö<TAB>
    didn't give any results. At this point, I wrote a simple Python script to dump the filenames. I created pages with different Scandinavian characters and capitalizations to demonstrate (I have cleaned the output a bit to remove unnecessary filenames):
    % ls
    MäkiÄn_katto.html MökiÖn_katto.html feed.xml
    MäkiÄn_katto_files/ MökiÖn_katto_files/ index.html
    MåkiÅn_katto.html Mökin_katto.html
    MåkiÅn_katto_files/ Mökin_katto_files/
    % ~/repos/scripts/misc/dirdump.py
    MäkiÄn_katto.html 'M(4d)' 'a(61)' cc88 'k(6b)' 'i(69)' 'A(41)' cc88 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '.(2e)' 'h(68)' 't(74)' 'm(6d)' 'l(6c)'
    MäkiÄn_katto_files 'M(4d)' 'a(61)' cc88 'k(6b)' 'i(69)' 'A(41)' cc88 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '_(5f)' 'f(66)' 'i(69)' 'l(6c)' 'e(65)' 's(73)'
    MåkiÅn_katto.html 'M(4d)' 'a(61)' cc8a 'k(6b)' 'i(69)' 'A(41)' cc8a 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '.(2e)' 'h(68)' 't(74)' 'm(6d)' 'l(6c)'
    MåkiÅn_katto_files 'M(4d)' 'a(61)' cc8a 'k(6b)' 'i(69)' 'A(41)' cc8a 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '_(5f)' 'f(66)' 'i(69)' 'l(6c)' 'e(65)' 's(73)'
    Media 'M(4d)' 'e(65)' 'd(64)' 'i(69)' 'a(61)'
    Mökin_katto.html 'M(4d)' 'o(6f)' cc88 'k(6b)' 'i(69)' 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '.(2e)' 'h(68)' 't(74)' 'm(6d)' 'l(6c)'
    Mökin_katto_files 'M(4d)' 'o(6f)' cc88 'k(6b)' 'i(69)' 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '_(5f)' 'f(66)' 'i(69)' 'l(6c)' 'e(65)' 's(73)'
    MökiÖn_katto.html 'M(4d)' 'o(6f)' cc88 'k(6b)' 'i(69)' 'O(4f)' cc88 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '.(2e)' 'h(68)' 't(74)' 'm(6d)' 'l(6c)'
    MökiÖn_katto_files 'M(4d)' 'o(6f)' cc88 'k(6b)' 'i(69)' 'O(4f)' cc88 'n(6e)' '_(5f)' 'k(6b)' 'a(61)' 't(74)' 't(74)' 'o(6f)' '_(5f)' 'f(66)' 'i(69)' 'l(6c)' 'e(65)' 's(73)'
    Scripts 'S(53)' 'c(63)' 'r(72)' 'i(69)' 'p(70)' 't(74)' 's(73)'
    Apparently Ö and ö are translated to the sequences '"O" 0xcc 0x88' and '"o" 0xcc 0x88', Ä and ä to '"A" 0xcc 0x88' and '"a" 0xcc 0x88', and finally Å and å to the sequences '"A" 0xcc 0x8a' and '"a" 0xcc 0x8a'. Looking into this a bit more, those byte pairs ("0xcc 0x88" and "0xcc 0x8a") are the UTF-8 encodings of COMBINING DIAERESIS (U+0308) (http://www.fileformat.info/info/unicode/char/0308/index.htm) and COMBINING RING ABOVE (U+030A) (http://www.fileformat.info/info/unicode/char/030a/index.htm).
    The generated links in the pages are in the short (Latin-1 equivalent, precomposed) form, but the filenames are in this decomposed format. The meaning of the strings is the same, but e.g. Apache doesn't canonicalize the paths internally, which results in broken URLs. I think iWeb should export the filenames and the URLs as identical UTF-8 strings (which quite likely should be the precomposed, Latin-1-equivalent Unicode code points), especially as e.g. Terminal and bash only work with the short forms.
    Is there an option to make iWeb behave this way?

    I tried checking whether changing my keyboard layout (I've created the current one with Ukelele) to one which writes decomposed characters changed things; it didn't. The filenames and the generated HTML files contained identical entries; I could use tab completion for the generated filenames, though. (Bash was otherwise a bit confused about the change, so I can't recommend this.)
    By canonicalizing the filenames with a script on the server end (a Debian GNU/Linux system) I can now make it work; the root cause of the discrepancy between the file names and the URLs in the generated HTML is still a mystery. This would also have worked if iWeb wrote decomposed UTF-8 characters to the URLs.
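    (For anyone writing that server-side script: the exported names are in decomposed form, a variant of NFD as stored by HFS+, while the URLs use the precomposed NFC form. Below is a minimal Java sketch of the canonicalization; the class name is mine and the file name is taken from the post above.)
        import java.text.Normalizer;

        public class NormalizeDemo {
            public static void main(String[] args) {
                // Precomposed form (NFC): "ö" is the single code point U+00F6.
                String nfc = "M\u00f6kin_katto.html";
                // Decomposed form (NFD): "o" + COMBINING DIAERESIS (U+0308),
                // which is what the exported file names contain.
                String nfd = "Mo\u0308kin_katto.html";
                System.out.println(nfc.equals(nfd)); // false: different code points
                // Renaming each file to its NFC form makes names match the URLs:
                String canonical = Normalizer.normalize(nfd, Normalizer.Form.NFC);
                System.out.println(canonical.equals(nfc)); // true
            }
        }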

  • Filling a CLOB with non-ASCII characters

    Hello,
    I have had some problems with CLOBs and German umlauts. I wasn't able to insert or update strings containing umlauts in combination with string binding. After inserting or updating, the umlaut characters were replaced by strange (Spanish) upside-down question marks.
    However, it was working when I did not use string binding.
    I tried various things; after some time I tracked the problem down to oracle.toplink.queryframework.SQLCall.java. In prepareStatement(...) you find something like:
        ByteArrayInputStream inputStream = new ByteArrayInputStream(((String) parameter).getBytes());
        // Binding starts with a 1 not 0.
        statement.setAsciiStream(index + 1, inputStream, ((String) parameter).getBytes().length);
    I replaced the usage of ByteArrayInputStream with a CharArrayReader:
        // TH changed, 26.11.2003, Umlaut will not work with this.
        CharArrayReader reader = new CharArrayReader(((String) parameter).toCharArray());
        statement.setCharacterStream(index + 1, reader, ((String) parameter).length());
    and this worked.
    Is there any other way of achieving this? Did anyone get CLOBs with non-ASCII characters to work?
    Regards -- Tobias
    (TopLink 9.0.3, the CLOB was mapped to String, the driver was Oracle OCI)
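    (For readers outside TopLink: a minimal standalone JDBC sketch of the same fix. The connection string, table and column names are hypothetical; the point is only the setCharacterStream call.)
        import java.io.CharArrayReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class ClobBindDemo {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:oci:@XE", "scott", "tiger"); // hypothetical
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE docs SET body = ? WHERE id = ?"); // hypothetical
                String text = "Grüße aus München"; // contains umlauts
                // setAsciiStream squeezes the text through a 7-bit channel and
                // mangles umlauts; setCharacterStream preserves the characters.
                ps.setCharacterStream(1, new CharArrayReader(text.toCharArray()),
                        text.length());
                ps.setInt(2, 42);
                ps.executeUpdate();
                ps.close();
                conn.close();
            }
        }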

    I don't think the console font is the problem. I use Lat2-Terminus16 because I read the Beginner's Guide on the wiki while installing the system.
    My /etc/vconsole.conf:
    KEYMAP=de
    FONT=Lat2-Terminus16
    showconsolefont even shows me the characters missing in the file names; e.g.: Ö, Ä, Ü

  • Hyperlink with the following characters: ù, ò, è, à and ì

    Hello All,
    I have problems with PDF files and Adobe Reader 11.0.2: hyperlinks containing the following characters: ù, ò, è, à and ì are not working.
    This problem looks like the "CR in file name" issue with version X (http://forums.adobe.com/message/3791868); that's a strange mystery... :-(
    With older versions of Adobe Reader the links work correctly.
    Thanks in advance for the support,
    Andrea

    These characters are not permitted in a URL. This rule is not made by Adobe, but by the people who invented URLs. They might work as you wish, but the behaviour is undefined (that is to say, there is no correct behaviour, and giving an error or rejecting the link is one possibility). I suspect that recent versions of Adobe Reader forbid the forbidden characters for security reasons.
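    (The portable fix is to percent-encode the non-ASCII characters as UTF-8 octets before putting them in a link, per RFC 3986. A minimal Java sketch; the host and file name are invented.)
        import java.net.URI;

        public class UrlEncodeDemo {
            public static void main(String[] args) throws Exception {
                // toASCIIString() percent-encodes everything outside US-ASCII.
                URI uri = new URI("http", "example.com", "/perché.pdf", null);
                System.out.println(uri.toASCIIString());
                // -> http://example.com/perch%C3%A9.pdf
            }
        }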

  • Cannot rename file with non-ASCII characters when using the

    My application moves files from one directory to another by calling File[] srcFiles = srcDir.listFiles() to get a list of files in the source directory, and then calling srcFiles[i].renameTo(destFile) to rename each file.
    This does not work (renameTo returns false and the file is not moved) under the following circumstances:
    - the file's leaf name contains non-ASCII characters, for example "�"
    - the OS is Solaris 9
    - the LANG and LC_* environment variables are unset, i.e. the C locale is being used
    If I set the LANG environment variable to, for example, en_GB.UTF-8, then the rename succeeds.
    I have tried calling srcFiles[index].getName().getBytes("UTF-8"), and the non-ASCII characters are being replaced with '?' (0x3f) characters when LANG is unset.
    Is this a bug in the JRE? I would argue that since my code does not actually manipulate the filename (I just use the File object that File.listFiles() gives me), the rename should succeed. Of course, I would not expect the file name to be displayed correctly if I printed it out.
    I have reproduced this behaviour with JDK 1.4.2_05 and 1.5.0_04 on Solaris 9.
    Francis
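    (A minimal sketch of the move loop described above, with hypothetical directory names. In the C locale the JRE maps file names through a 7-bit charset, so a non-ASCII character degrades to '?' and the rename target no longer refers to the actual file.)
        import java.io.File;

        public class MoveFiles {
            public static void main(String[] args) {
                File srcDir = new File("/data/in");   // hypothetical path
                File destDir = new File("/data/out"); // hypothetical path
                File[] srcFiles = srcDir.listFiles(); // assumes srcDir exists
                for (int i = 0; i < srcFiles.length; i++) {
                    File destFile = new File(destDir, srcFiles[i].getName());
                    // Returns false on Solaris 9 in the C locale when the
                    // name contains non-ASCII characters.
                    if (!srcFiles[i].renameTo(destFile)) {
                        System.err.println("Could not move " + srcFiles[i]);
                    }
                }
            }
        }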

    Thanks for the info Alan.
    I considered setting the locale in the environment (this sounds like the "correct" fix to me and we might implement it later), but this application shares a WebLogic server with many other applications, so we would have to do a huge amount of testing to make sure that the locale change wouldn't break the other apps. In the end I worked around the problem by making the code that generates the filenames in the first place strip out any non-ASCII characters (the names of the files are not critically important).
    Looking forward to JSR 203; in the meantime, perhaps a note about this behaviour in the java.io.File javadoc would be useful.
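    (A sketch of the stripping workaround mentioned above; the regex approach and the replacement character are my assumption, not the poster's actual code.)
        public class SafeNames {
            /** Replace anything outside 7-bit ASCII with '_'. */
            static String toAscii(String rawName) {
                return rawName.replaceAll("[^\\x00-\\x7F]", "_");
            }

            public static void main(String[] args) {
                System.out.println(toAscii("Mökin_katto.html")); // M_kin_katto.html
            }
        }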

  • Initial Print jobs 'communication error with device'. Pause and restart job

    So, I have an odd variant of the Leopard print problem, on 3 separate systems.
    When I print to an HP Photosmart C7180, which set up fine and which I can scan to fine, the print job throws an error: 'error communicating with device, switch on and off' and so forth. It does that from any app, and consistently. I am printing via Bonjour; the printer is networked, but on a static IP.
    To fix this, EVERY TIME I pause the printer, then start the printer again. It immediately starts working, consistently, until I sleep, log out, or don't print for a while.
    This seems to be some timeout issue or such. I tried IP printing, but no difference.
    Resetting the print system also did not help.
    It's not catastrophic but sooo annoying. Any input appreciated!
    Thx,
    Dan

    Hello SunnygirlQ. Welcome to the Apple Discussions!
    Unfortunately, not all USB printers are compatible with AirPort base stations. In addition, the AirPort's USB port does not support the "advanced" printer functions, like scanning, copying or faxing, of multi-function printers.
    To see if your printer is compatible, take a look at this iFelix Unofficial AirPort Printer Compatibility link.
    If your printer isn't listed, it doesn't necessarily mean it won't work, but simply that it has not been verified. iFelix also provides the following workaround for printers not on the list that would certainly be worth a try.
    Also you can try this Apple Tech Support article to see if it will help:
    o Printer troubleshooting for AirPort Extreme and AirPort Express
    I assume that this printer works just fine when it is connected directly to your Mac ... correct? If so, has it worked when connected to the AX in the past ... or has it always had this problem? If it did work correctly before, did you do any updates, especially to the AX, recently?

  • Can't get the attachment filename out of a Part (with non-ASCII characters)

    Hello all, and happy new year :)
    My issue is with non-ASCII filenames in attachments... Yes, I've read the FAQ: http://www.oracle.com/technetwork/java/faq-135477.html#encodefilename
    I can't get the filename out of the BodyPart for those kinds of attachments.
    Here's my unit test:
        // Imports needed by the test class:
        import static org.junit.Assert.assertEquals;
        import java.io.ByteArrayInputStream;
        import java.io.UnsupportedEncodingException;
        import javax.mail.MessagingException;
        import javax.mail.Part;
        import javax.mail.internet.MimeBodyPart;
        import javax.mail.internet.MimeUtility;
        import org.junit.Test;

        /** Contains various parts from various mailers, encoded in different ways. */
        private enum EncodedFileNamePart {
            OUTLOOK("Content-Type: text/plain;\n name=\"=?iso-8859-1?Q?c'estd=E9j=E0no=EBl=E7ac'estcool.txt?=\" \nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment;\n filename=\"=?iso-8859-1?Q?c'estd=E9j=E0no=EBl=E7ac'estcool.txt?=\" \n\nnoel 2010\n", "c'estdéjànoëlçac'estcool.txt"),
            GMAIL("Content-Type: text/plain; charset=US-ASCII; name=\"=?ISO-8859-1?B?ZOlq4G5v62znYWNlc3Rjb29sLnR4dA==?=\"\nContent-Disposition: attachment; filename=\"=?ISO-8859-1?B?ZOlq4G5v62znYWNlc3Rjb29sLnR4dA==?=\"\nContent-Transfer-Encoding: base64\nX-Attachment-Id: f_giityr5r0\n\namluZ2xlIGJlbGxzIQo=\n", "déjànoëlçacestcool.txt"),
            THUNDERBIRD("Content-Type: text/plain;\n name=\"=?ISO-8859-1?Q?d=E9j=E0no=EBl=E7acestcool=2Etxt?=\"\nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment;\n filename*0*=ISO-8859-1''%64%E9%6A%E0%6E%6F%EB%6C%E7%61%63%65%73%74%63%6F;\n filename*1*=%6F%6C%2E%74%78%74\n\njingle bells!\n", "déjànoëlçacestcool.txt"),
            EVOLUTION("Content-Disposition: attachment; filename*=ISO-8859-1''d%E9j%E0no%EBl.txt\nContent-Type: text/plain; name*=ISO-8859-1''d%E9j%E0no%EBl.txt; charset=\"UTF-8\" \nContent-Transfer-Encoding: 7bit\n\njingle bells\n", "déjànoël.txt");

            private final String content;
            private final String target;

            private EncodedFileNamePart(String content, String target) {
                this.content = content;
                this.target = target;
            }

            public Part get() {
                try {
                    ByteArrayInputStream bis = new ByteArrayInputStream(this.content.getBytes());
                    Part part = new MimeBodyPart(bis);
                    bis.close();
                    return part;
                } catch (Throwable e) {
                    return null;
                }
            }

            public String getTarget() {
                return this.target;
            }
        }

        @Test
        public void testJavamailDecode() throws MessagingException, UnsupportedEncodingException {
            System.setProperty("mail.mime.encodefilename", "true");
            System.setProperty("mail.mime.decodefilename", "true");
            for (EncodedFileNamePart part : EncodedFileNamePart.values())
                assertEquals(part.name(), MimeUtility.decodeText(part.get().getFileName()), part.getTarget());
        }
    I get a NullPointerException in decodeText because getFileName() returns null for the EVOLUTION case; it works well with OUTLOOK, THUNDERBIRD and GMAIL.
    Evolution's header is "Content-Disposition: attachment; filename*=ISO-8859-1''d%E9j%E0no%EBl.txt", which doesn't look like the others (it appears to follow the RFC 2231 / RFC 5987 style of parameter encoding).
    How can I handle this situation except by writing my own decoder?
    Thanks for your answers!
    Edited by: user13619058 on 4 Jan 2011 07:44

    Set the System property "mail.mime.decodeparameters" to "true" to enable the RFC 2231 support.
    See the javadocs for the javax.mail.internet package for the list of properties.
    Yes, the FAQ entry should contain those details as well.
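    (A minimal sketch of that suggestion applied to the EVOLUTION case from the question; once the property is set before the part is parsed, getFileName() should come back already decoded.)
        import java.io.ByteArrayInputStream;
        import javax.mail.Part;
        import javax.mail.internet.MimeBodyPart;

        public class Rfc2231Demo {
            public static void main(String[] args) throws Exception {
                // Enable RFC 2231 decoding of parameters such as filename*=
                System.setProperty("mail.mime.decodeparameters", "true");
                String evolution = "Content-Disposition: attachment; "
                        + "filename*=ISO-8859-1''d%E9j%E0no%EBl.txt\n"
                        + "\njingle bells\n";
                Part part = new MimeBodyPart(
                        new ByteArrayInputStream(evolution.getBytes("ISO-8859-1")));
                System.out.println(part.getFileName()); // expected: déjànoël.txt
            }
        }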

  • How to print the HEX values of non-displayable characters!

    I am trying to tokenize a string that contains non-displayable characters, i.e. EBCDIC or ASCII control characters. How can I print the hex values of the non-displayable characters? e.g.
    "StartTEST1TEST2TEST3TEST4TEST5TEST6TEST7TEST812END"
    (the separators between the tokens are non-displayable characters and so do not show up). How can I print their hex values in the above string? Any help is appreciated.
    Thanks

        char ch = 28; // or whatever character you want to look at.
        String hexString = Integer.toHexString(ch);
        System.out.println(hexString);
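    (Extending that answer to a whole string: a small sketch that echoes printable ASCII as-is and prints everything else as hex. The 0x1c separator is just an example value, not taken from the original data.)
        public class HexDump {
            public static void main(String[] args) {
                String s = "Start\u001cTEST1\u001cEND"; // 0x1c as a sample separator
                for (int i = 0; i < s.length(); i++) {
                    char ch = s.charAt(i);
                    if (ch >= 0x20 && ch <= 0x7e) {
                        System.out.print(ch);                    // printable ASCII
                    } else {
                        System.out.printf("[0x%02x]", (int) ch); // everything else as hex
                    }
                }
                System.out.println();
            }
        }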

  • Upload text files with non-English characters

    I use an APEX page to upload text files. Then I retrieve the contents of the files from wwv_flow_files.blob_content and convert them to VARCHAR2 with utl_raw.cast_to_varchar2, but characters like ò, à, ù become garbage.
    What could be the problem? Are the characters lost when the files are stored in wwv_flow_files, or when I do the conversion?
    Some other info:
    * I see wwv_flow_files.DAD_CHARSET is set to "ascii", wwv_flow_files.FILE_CHARSET is null.
    * Trying utl_raw.cast_to_varchar2( utl_raw.cast_to_raw('àòèù') ) returns 'àòèù' correctly;
    * NLS_CHARACTERSET parameter is AL32UTF8 (not just english ASCII)

    Hi
    Have a look at csv upload -- suggestion needed with non-English character in csv file it might help you.
    Thanks,
    Manish

  • [AS] Problem with non-English characters in file path

    I wrote a script that exports a PDF file from ID, rasterizes it in PS, applies an action, saves it as another PDF file, and finally creates a Mail message and attaches the file to it (the last part is written in AppleScript).
    The problem is that it doesn't work when the path to this file contains non-English characters.
    This works:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Tetard/Test.pdf"}
    but this doesn't:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Têtard /Test.pdf"}
    I vaguely remember reading somewhere that AppleScript can work with Unicode, in other words with such characters, starting from some version; I don't remember which exactly, but it seems to me it was Leopard.
    I am on Mac OS X 10.4.11 right now. Will updating solve this problem? Does anybody know any solution: a scripting addition, some hidden setting, etc.?
    I made a little test: I used a Russian character (ё) and it works, but when I use ê (Dutch) it doesn't. Might it have something to do with the Region setting in the International panel?
    Thanks in advance,
    Kasyan

    Kasyan, as of Leopard AppleScript treats all text as Unicode; before that you can specify 'as Unicode text'. Try a test with these.
    -- Leopard
    set x to POSIX path of (path to desktop)
    -- Pre Leopard
    set x to POSIX path of (path to desktop as Unicode text)
    -- Leopard
    set x to POSIX path of (choose file without invisibles)
    -- Pre Leopard
    set x to POSIX path of ((choose file without invisibles) as Unicode text)

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) in files stored on my USB drive.
    On Windows they're created with the correct names. But on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drive using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that, and haven't found any good solution.
    I tried building a custom kernel with the default charset set to UTF-8, and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3, and my locale is pt-BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] section.
    Last edited by Renan Birck (2009-11-28 20:54:23)
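    (For reference, the relevant fragment of /etc/xdg/xfce4/mount.rc after that edit; any keys already present in the [vfat] section stay as they are.)
        [vfat]
        iocharset=utf8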

    I don't use Thunar presently, but I looked in the Thunar Volume Manager docs and didn't find anything for changing the mount options of removable drives. I am not quite sure whether it's possible or not; maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and use something else more configurable to manage the automount function.
    Personally I use the halevt package from the AUR, which uses configuration files in XML format.
    It's not so easy to use, but it is highly configurable.
    Other tools exist as well.
    I can help you with halevt if you choose that way...

  • Printing Adobe Reader files with a DeskJet 6540 and OS X.6

    Ever since I upgraded to Mac OS X.6, I've had problems printing .pdf files. No matter what I tell the printer, it only prints one copy of one page of the document. I've upgraded to Adobe Reader 9.2.0. Is this a driver problem? OS X.6 is supposed to take care of that. Adobe Reader is the only software that causes this problem, and of course Adobe does not provide support. Any ideas?

    Hi there... I'm having the same problem, although I'm using XP Pro with Service Pack 3 and an OfficeJet 6500 (e709n)... and it was working until this past couple of weeks. I can't print ANY PDF; it doesn't matter if it's old or new... If I do a preview, all I see is a line of print across the top of the page.
    Anything will print other than PDFs, so of course it's driving me nuts! I've even reinstalled the driver for the printer, but it didn't make any difference except for my time and frustration. I hope someone can give us an answer soon... there are important docs I need to print out.

  • Naming files with non-English characters.

    I'm using FileMaker to create PDFs through Acrobat 10.1.12. I need to use Polish, Hungarian, Czech and Slovak characters in the file name, but the characters are not recognised and so the file name will not be created. This happens on Windows; the problem does not occur on a Mac.

