Funky Characters

We convert many Word 2003 documents to PDF every month.
Lately, strange characters have been appearing in the PDF docs,
like #, &, or @. These characters are not present in the Word
document, so I don't think the problem is font related.
Is there a way in Adobe Acrobat 8 to find all the #
characters in one document and replace them with nothing?
Thank you
Liz

I have been having the same problems. There are several painful solutions, but Tom has the best... or maybe second best. I found out from two different providers that they hesitate to change the 'legacy' settings, which would be the best solution.
The second and ultimately easiest solution is to create a text file with the following single line in it:
AddDefaultCharset utf-8
That's it, nothing more. NO EXTENSION!
Save the file with no extension (I created mine in BBEdit). FTP this file into the root directory on your server. This is the directory which CONTAINS your public_html folder. You may also see other system files such as .bash_profile, .login, et al. (don't mess with them). If you see the public_html folder and nothing else, you may need to turn on the 'show hidden files' feature.
This will solve your problem and you should be able to do this without 1) hacking any code (my first solution) and 2) talking to service providers that don't necessarily even know what's going on.
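If you want to confirm the directive actually took effect, rather than eyeballing pages, you can check the Content-Type response header your server sends. Here's a minimal Java sketch (www.example.com is a placeholder for your own site):

import java.net.HttpURLConnection;
import java.net.URL;

public class CharsetCheck {
    public static void main(String[] args) throws Exception {
        // Fetch only the headers of the home page.
        URL url = new URL("http://www.example.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");
        // Once AddDefaultCharset utf-8 is in effect, this should print
        // something like "text/html; charset=utf-8".
        System.out.println(conn.getHeaderField("Content-Type"));
        conn.disconnect();
    }
}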
This solution was suggested by Tom, but I still had to research the details of how and where to create and put the file on the server. I will soon have an actual .htaccess file posted and announced on my blog that you can download if you need to, but it's really simple...
feed://rss.mac.com/gene.deal/iWeb/Gene%20Deal/Blog/rss.xml
I should have this posted later tonight.

Similar Messages

  • Funky characters replacing ligatures during import from PM files

    I inherited four long book files (meant for one book) originally laid out in PageMaker, to be updated and repurposed for PDF. I opened the file(s) in InDesign CS3 and substituted a standard body copy serif font (Bembo) for the body copy, since I didn't have the original font on my system. Even though ligatures were enabled in the style sheet, I still have single-character capital Vs and Xs in place of typical ligatures, like 'ff' and 'ffi'....
    I thought I could eradicate them with find and replace, but it's become a colossal problem: I now have instances where a single word has both Bembo normal and Bembo Oldstyle fonts in the same word...
    When I search for the capital letters, i.e. W, which has replaced the 'fi' ligature, I can't really figure out how to assign the correct ligature to it; moreover, I can't figure out how to 'find' a capital letter within a word using the 'whole word' function..
    anyone have any ideas? I would be most grateful...!
    todd

    PageMaker didn't (doesn't) support Unicode or OpenType, and while some ligatures were accessible in the ISO character set under Mac (but not under Windows), the person who created the file obviously just used the associated expert set. (Expert and small caps sets were separate fonts that contained the small caps, ligatures, old style figures, etc. that there wasn't room for in the standard font.) In Adobe expert sets, entering capital V gave you the ff ligature; cap W, fi; cap X, fl; cap Y, ffi; and cap Z, ffl.
    So, search for these capitals in Bembo expert and replace them with ff, fi, fl, ffi, and ffl, and ID will turn them into ligatures (as long as they're enabled and in the OT font you have).
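    To see the mapping at a glance, here's the same substitution table as a little Java sketch (illustrative only -- in InDesign you'd do this with Find/Change restricted to the expert font, not with code; assumes Java 9+ for Map.of):

    import java.util.Map;

    public class ExpertSetLigatures {
        // Adobe expert-set capitals and the ligatures they stood in for.
        static final Map<String, String> LIGATURES = Map.of(
                "V", "ff",
                "W", "fi",
                "X", "fl",
                "Y", "ffi",
                "Z", "ffl");

        public static void main(String[] args) {
            String word = "oYce"; // expert-set Y standing in for "ffi"
            for (Map.Entry<String, String> e : LIGATURES.entrySet()) {
                word = word.replace(e.getKey(), e.getValue());
            }
            System.out.println(word); // office
        }
    }

    (A blind replace like this would also clobber legitimate capitals -- "Xavier" would become "flavier" -- which is exactly why the search has to be limited to text set in the expert font.)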

  • IMPDP SQLFILE: multibyte characters in constraint_name lead to ORA-00972

    Hi,
    I'm actually dealing with constraint names made of multibyte characters (for example: constraint_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
    Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem....)
    The scenario is as follows :
    0 - I'm supposed to do a "remap_schema". Everything in the schema SCOTT should now be in a schema NEW_SCOTT.
    1 - The scott schema is exported via datapump
    2 - I do an impdp with SQLFILE in order to get all the DDL (table, packages, synonyms, etc...)
    3 - I do some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK)
    4 - Once the modified sqlfile is executed, I do an impdp with DATA_ONLY.
    (The scenario was imagined from this thread : {message:id=10628419} )
    I'm getting some ORA-00972: identifier is too long at step 4 when executing the sqlfile.
    I see that some DDL for constraint creation in the file (generated at step #2) is written as follows:
    ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
    Obviously, the original name of the constraint with cedilla and tilde gets translated to something else which is longer than 30 char/byte...
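    My guess at the mechanism (just a sketch, assuming the classic round trip where UTF-8 bytes get re-read as ISO-8859-1 somewhere in the chain -- the Java below only demonstrates the byte arithmetic, datapump itself is obviously not Java; assumes Java 7+ for StandardCharsets):

    import java.nio.charset.StandardCharsets;

    public class MojibakeLength {
        public static void main(String[] args) {
            String name = "VALIDA_CONFIRMAÇÃO_PREÇO13";
            // 26 characters, 3 of them 2-byte in UTF-8 => 29 bytes: fits in 30.
            System.out.println(name.getBytes(StandardCharsets.UTF_8).length); // 29
            // Re-read those UTF-8 bytes as if they were ISO-8859-1...
            String misread = new String(name.getBytes(StandardCharsets.UTF_8),
                    StandardCharsets.ISO_8859_1);
            // ...and each 2-byte sequence becomes two separate characters, so
            // the re-encoded name is now 35 bytes: ORA-00972 territory.
            System.out.println(misread.getBytes(StandardCharsets.UTF_8).length); // 35
        }
    }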
    As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
    In order to create a testcase for this thread, I tried to reproduce on my sandbox database... but, there, I don't have the issue. :-(
    The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
    My sandbox database is a (non-RAC) 11.2.0.1 on RHEL4, also AL32UTF8.
    The constraint_name is the same on both systems: I checked byte by byte using DUMP() on the constraint_name.
    Feel free to shed any light and/or ask for clarification if needed.
    Thanks in advance to those who'll take the time to read all this.
    I decided to include my testcase from my sandbox database, even if it does NOT reproduce the issue +(maybe I'm missing something obvious...)+
    I use the following files.
    - createTable.sql:
    $ cat createTable.sql
    drop table test purge;
    create table test
    (id integer,
    val varchar2(30));
    alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
    select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
    from user_constraints where table_name='TEST';
    - expdpTest.sh:
    $ cat expdpTest.sh
    expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
    - impdpTest.sh:
    $ cat impdpTest.sh
    impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    This is the run:
    [oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> @createTable
    Table dropped.
    Table created.
    Table altered.
    CONSTRAINT_NAME                          LB         LC
    DMP
    VALIDA_CONFIRMAÇÃO_PREÇO13               29         26
    Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95,80,82,69,195,135,79,49,51
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
    Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    . . exported "SCOTT"."TEST"                                  0 KB       0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      /home/oracle/scott_dir/testNonAscii.dmp
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
    Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
    [oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
    -- CONNECT SCOTT
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "SCOTT"."TEST"
       (     "ID" NUMBER(*,0),
         "VAL" VARCHAR2(30 BYTE)
       ) SEGMENT CREATION DEFERRED
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
      TABLESPACE "MYTBSCOMP" ;
    -- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;
    I was expecting to have the cedilla and tilde characters displayed incorrectly....
    Edited by: Nicosa on Feb 12, 2013 7:13 PM

    Srini Chavali wrote:
    If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?
    Hi Srini,
    On the real system, the schema move will be to and from different machines (but the same DB version).
    I'm not doing the real move for the moment, just trying to validate a way to do it, but I guess it's important to say that the dump being used for the moment comes from the same database (the long story being that a column using an object datatype caused an error in the remap, so I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object datatype...).
    So Yes, the file will have to move, but in the current test, it doesn't.
    Srini Chavali wrote:
    Obviously something is different in production than test - can you post the output of this command from both databases ?
    SQL> select * from NLS_DATABASE_PARAMETERS;
    Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in the impdp, as datapump is supposed to be NLS_LANG/CHARSET-proof +(where traditional imp/exp was really sensitive on those points)+
    The result on the Exadata where I have the issue:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.3.0
    The result on my sandbox DB:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.1.0
    ------
    Richard Harrison .  wrote:
    Hi,
    Did you set NLS_LANG also when you did the import?
    Yes, that is one of the differences between the Exadata and my sandbox.
    My environment in the sandbox has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, whereas the Exadata doesn't have the variable set.
    I tried to add it, but it didn't change anything.
    Richard Harrison .  wrote:
    Also not sure why you are doing the sed part? Do you have hard coded schema references inside some of the plsql?
    Yes, that is why I chose sed. The (ugly) code has:
    - Procedures inside the same package that references one another with the schema prepended
    - Triggers with PL/SQL codes referencing tables with schema prepended
    - Dynamic SQL that "builds" queries with schema prepended
    - Object Type that does some %ROWTYPE on tables with schema prepended (that will be solved by dropping the column based on those types as they obviously are not needed...)
    - Data model with object whose names uses non-ascii characters
    +(In France we call this a "gas plant" as a way of saying what a mess it is: pipes everywhere going who-knows-where...)+
    The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to actually consolidate several existing databases on the Exadata :
    One schema for each country, hence the rename of the schemas to include country-code.
    I actually have a workaround already: rename the objects that have funky characters in their names before doing the export.
    But I was curious to understand why the SQLFILE messed up the constraint_name on one system when it doesn't on another...

  • Can't parse "bad" characters - need help

    Hey everyone, this should be simple. I am trying to parse (DOM parser) an XML string that has some funky characters in it. I am getting the following error:
    Illegal XML character: & # x 1 d ; (had to use spaces so that it would print out properly... otherwise: &#x1d;)
    I have a findAndReplace method that I can use to replace the bad characters with their proper values (copy-pasting single quotes from Word to Notepad really screws things up), but I can't seem to get it to work. What should I pass in to be replaced? I know that this is the hexadecimal for decimal 29, but I can't seem to get my findAndReplace to work by passing in &#x1d, & # x 1 d ;, or java.lang.String.valueOf(29). Please help, thanks!
    FYI...here's my findAndReplace method.
    public static String findAndReplace(String original, String replaceThis, String withThis) {
        if (original == null || replaceThis == null || withThis == null) {
            return null;
        }
        int i = original.indexOf(replaceThis);
        if (i == -1) {
            return original;
        }
        int replaceThisSize = replaceThis.length();
        int withThisSize = withThis.length();
        while (i != -1) {
            String beforeString = original.substring(0, i);
            String afterString = original.substring(i + replaceThisSize);
            original = beforeString + withThis + afterString;
            i = original.indexOf(replaceThis, i + withThisSize);
        }
        return original;
    }
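    A side note on what to pass in: to remove this one character with findAndReplace you'd pass the actual character -- "\u001d" or String.valueOf((char) 29) -- not the entity text "&#x1d;" and not String.valueOf(29), which is the two-character string "29". And if the goal is simply to delete everything XML 1.0 forbids rather than hunting characters one at a time, a regex pass may be easier. A small sketch (stripIllegalXmlChars is my own name, not a standard API):

    // Removes the control characters that XML 1.0 does not allow:
    // everything below 0x20 except tab (0x09), LF (0x0A) and CR (0x0D).
    // 0x1D falls squarely in this range.
    public static String stripIllegalXmlChars(String s) {
        if (s == null) {
            return null;
        }
        return s.replaceAll("[\\x00-\\x08\\x0B\\x0C\\x0E-\\x1F]", "");
    }

    // Usage: xml = stripIllegalXmlChars(xml); before handing it to the parser.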

    warnerja, you're right. That character should not be there. However, I can not guarantee that it will not be there, as it comes from a different system and people are pasting Word docs into this system all the time. You can train people all you want, but they don't always listen. So, I need to make sure my code doesn't blow up when they do this. Hence my issue.
    I'd follow the GIGO (garbage in, garbage out) principle. Are you also going to need to code it to read their minds when they provide no sensible input whatsoever, like:
    <hey, figure this>junk<out/>man</hey, figure this>

  • Garbage characters appearing in help topics converted from RH6 to RH7

    We have a couple of FlashHelp projects that we're converting to RoboHelp 7. Documents were imported from Word to create topics, and although the true code showed an ASCII character that looked like a thick vertical bar for dashes (em dashes), ellipses, and smart quotes, everything displayed fine in RoboHelp 6.
    We just started converting these same RH6 projects to RoboHelp 7 and now we're getting garbage characters in place of smart quotes, em dashes, etc. when we display the compiled help topics. What's weird is that the characters look fine when you're in WYSIWYG mode -- you have to display the topic to see the funky characters.
    With 900 topics in one project and 400+ in another, going through the true code to find these funky characters is going to be time consuming and painful. I did it for a couple of small projects (less than 100 topics) and it took me a couple of hours -- switching back & forth between WYSIWYG and true code.
    I tried using the search and replace tool to fix the smart quotes throughout the project and it caught some but not all. It seemed to capture the opening quote but not the end quote.
    Any suggestions on how to quickly fix these problems without going topic by topic through the project file checking the true code?

    See Item 21 on this page.
    http://www.grainge.org/pages/authoring/rh7/using_rh7.htm

  • XSQL exception displaying funky chars: Cannot map Unicode to Oracle character

    Hi.
    I'm using XSQL Servlet to serve XML from a 9.2 database. The varchar columns I'm trying to display have non-ASCII characters in them (Spanish enye, curly quotes, etc.). The database's character encoding is WE8ISO8859P1, which handles these characters fine. Running a simple "select * from..." query, I get this error:
    oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character
    which seems odd considering it ought to be mapping an Oracle character to a Unicode character, not the other way around. Additionally, what's the problem? Unicode supports a large superset of WE8ISO8859P1.
    Any idea how I can get XSQL Servlet to play nice with these funky characters?
    Thanks,
    Andrew

    Update: still stuck...

  • .xml file not recreating

    I'm trying to use iPhoto and sync to an iTunes playlist, but it turns out that my iTunes Music Library.xml file hasn't been modified since December. That means it no longer matches my real library and iPhoto can't use it. I dragged the iTunes Music Library.xml file to my desktop, restarted iTunes, made a new playlist and added a song. When I quit iTunes, I briefly saw a temp file pop up in the iTunes folder, but no new .xml file was written. The same thing happens if I try to export my library to the desktop. There's a little blip of a file and then it's gone. Since the .xml file is old, I don't want to use it as a starting point to rebuild my real library, especially since it has all my iPad info on it.
    I trashed iTunes and reinstalled it and repaired permissions.
    Any other ideas? Thanks! - j

    Found an old post that helped me out.
    I looked at the creation time of the last .xml file and it was Dec. 6 @ 10:50. I checked my music files by "date added" and found a number of Xmas albums added at that time. There didn't appear to be any funky characters in their names, so I just deleted the 3 that I added right then. Nothing I listened to anyways. I looked back in the Finder and "kaching!", a new .xml file was created.
    Turns out one bad apple did spoil the whole bunch. Problem solved! - j

  • Can not read or write to mac hard drive error

    When attempting to transfer music files from an external USB hard drive formatted in Windows into iTunes, I can only successfully import a small fraction of the music, and then it stops with an error that the Mac HD can not read or write files.

    It's possible the file names have some 'funky' characters in them that the Mac doesn't like?
    What format is the external drive?

  • Scanning of 2D barcodes

    Hi,
    I have created a 2D barcode via a SMARTFORM by concatenating several fields into one barcode field. Each individual field is suffixed by '\&'. The barcode appears to print OK but will not scan. I am using the same scanner which I use for scanning linear barcodes and it works fine. Do I need a specific scanner to scan 2D barcodes, please? The 2D symbology used is PDF417. Also, is the method of concatenating the fields correct or do I need to include other control characters, please?
    Your help is much appreciated.
    Kind regards
    Ken Harrison

    I posted the PDF to show the funky characters; here's the text.  I'm wondering if there's not something different when you paste it in versus when it's scanned in.  Keep in mind that this crashes LabVIEW even if the VI isn't running.  This makes me think that something changed between versions 8.5 and 8.5.
    Another interesting thing to consider is that I see question marks where you show decimal points.
    Message Edited by jcarmody on 03-11-2009 10:21 AM
    Jim
    You're entirely bonkers. But I'll tell you a secret. All the best people are. ~ Alice
    Attachments:
    barcode.txt 1 KB
    Example_VI_FP.png 3 KB

  • How to insert Unicode characters from SQL*Plus

    Hi all,
    My problem is following :
    I want to store Unicode characters in a column of a table, so I created the table with the command: CREATE TABLE PERSON( ID NUMBER(4) NOT NULL, NAME NVARCHAR2(64) NOT NULL).
    The NLS parameters set in the DB:
    SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               WE8ISO8859P1
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    PARAMETER                      VALUE
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_NCHAR_CHARACTERSET         UTF8
    NLS_RDBMS_VERSION              8.1.7.0.0
    18 rows selected.
    When I insert data into the above table (PERSON) with non-Unicode characters it is OK, but it fails with Unicode characters.
    Can anyone help me solve this problem?
    Thank you so much for any hints
    Pls email me at : [email protected]

    NLS_CHARACTERSET WE8ISO8859P1 is a Western European character set (without the Euro symbol and a number of other funky characters being supported - use NLS_CHARACTERSET WE8ISO8859P15 if you want the Euro symbol, sorry, just a sidenote). Anyway, this will need to be changed to UTF8 before it can store Unicode characters. There are various ways to do this, but the best option is probably to export the entire database, create scripts to recreate all tablespaces and datafiles, drop the database, recreate the database in UTF8, then import the export file. There are a lot of little 'gotchas' involved, so be prepared for it not to work the first time. But if it doesn't work, you can always create the database again with NLS_CHARACTERSET WE8ISO8859P1 and reimport so everything is back to how it was.
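    One more thought: since the NAME column is NVARCHAR2 and NLS_NCHAR_CHARACTERSET is already UTF8, you may not need the full charset migration just to get Unicode into that one column. A JDBC client can insert it directly, because Java strings are UTF-16 and the driver does the conversion rather than relying on the client's NLS_LANG the way SQL*Plus does. A hedged sketch -- the connection details are placeholders, setNString needs a JDBC 4.0 (Java 6+) driver, and older Oracle drivers used OraclePreparedStatement.setFormOfUse instead:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertUnicodeName {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO PERSON (ID, NAME) VALUES (?, ?)");
            ps.setInt(1, 1);
            ps.setNString(2, "Nguyễn Văn A"); // any Unicode string
            ps.executeUpdate();
            ps.close();
            conn.close();
        }
    }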

  • A-B driver problem, still unsolved

    This problem got marked as solved, but is not. Any NI techs familiar with this driver and/or the A-B KF2?
    I am having a problem with the Allen Bradley driver. The help file says it "enables serial comm via a KE, KF or KF2 module". I am using a KF2 device. I can verify everything with RSLinx, see my program, RS Who shows everything, but when I try lookout I always get the error "STS error 10 reading xxx illegal command or format". Any help would be appreciated, thanks.
    Other details: DF1, half duplex, 19200 N81. It seems like it wants to momentarily read device data in Lookout before I get the above error message. 

    Below is some pasted text from the serial log file. I am trying to communicate with a SLC 5/04 through a KF-2, half duplex. I get the same results when trying a DataLink DL3500 (which is supposed to be the equivalent of the KF-2). RSLinx works fine through both devices, Lookout does not. Notice the funky characters inserted in the lines between the "[   ]" data.
    10:56:29.2 - AB1 ->
    [10][01][04][10][02][04][00][0F][00][0B][00][A2][02][07][89][00][00][10][03][17][D4]
    10:56:29.2 - AB1 <-
    [10][06]
    10:56:29.2 - AB1 ->
    [10][05][04][FC]
    10:56:29.3 - AB1 <-
    [10][02][04][04]O[10][10][0B][00][10][03]$[89]
    10:56:29.3 - AB1 ->
    [10][06]
    10:56:35.2 - AB1 ->
    [10][01][04][10][02][04][00][0F][00][0C][00][A2][02][07][89][00][00][10][03]1[E4]
    10:56:35.2 - AB1 <-
    [10][06]
    10:56:35.2 - AB1 ->
    [10][05][04][FC]
    10:56:35.3 - AB1 <-
    [10][02][04][04]O[10][10][0C][00][10][03][95]H
    10:56:35.3 - AB1 ->
    [10][06]
    10:56:41.2 - AB1 ->
    [10][01][04][10][02][04][00][0F][00][0D][00][A2][02][07][89][00][00][10][03]<t

  • Motion won't read my FCP clips

    I am trying to send my sequence from FCP to Motion. Every time, the checkered red boxes show up and a message comes up saying it can not locate the original footage. My footage is all on an external HD. FCP reads it off of the HD just fine. But every time I go to Motion it loses it. When I try to search for it manually after the warning pops up, it won't let me select the clips from the HD even though it is the exact same path it says the footage used to be on.

    Don't have any funky characters in the file path - "/~#@*%" - etc...
    post back,
    Patrick

  • Solved my Timecapsule problem.

    To recap:
    Short name with no funky characters.
    Backed up G4 Tower wirelessly. After that...wireless no longer works.
    Backed up G4 Titanium Powerbook using ethernet cable connected to Timecapsule LAN. After that....no more wireless connection.
    MacBook Pro. Cannot back up to Timecapsule using Time Machine wirelessly. Cannot back up using ethernet cable connected to LAN. Cannot back up with ethernet connected to ethernet port. Cannot back up at all.
    I don't need a $500 airport extreme that only works with my MacBook. If I have to back up to an external harddrive, I don't need the Timecapsule at all.
    My solution? It's being returned. I don't need this POS I've wasted a week of my life on and it still doesn't work.
    I've had to reinstall 10.3.9 on my Powerbook. I've lost my accounts so now I have to restore that. I'm hoping when I'm done I'll have wireless again.
    I've re-connected my old snow airport basestation again.
    After dumping the Timecapsule's Airport Utility, reinstalling Leopard and fiddling for an hour I'm finally back online again.
    Never again. Never again. If I wanted crap I'd buy Microsoft and GM.

    I wanted to reply to your question about other sound cards. It seems Razer is coming out with a sound card soon, and as soon as it does I'll be buying it. I've been dealing with CL now for over a year and still no solution to the "SCP" issue, and I've done everything they asked me to and now my warranty is over. Razer's new sound card looks promising and they make a great mouse too. Best of luck!

  • "SQL Query in HTTP Request" (5474:0)

    Hi,
    The IDS signature "SQL Query in HTTP Request" (5474:0) does not recognize all malicious SQL selects. Currently, the regexp looks like ([%]20|[=]|[+])[Ss][Ee][Ll][Ee][Cc][Tt]([%]20|[+])[^\r\n\x00-\x19\x7F-\xFF]+([%]20|[+])[Ff][Rr][Oo][Mm]([%]20|[+]). We noticed that subselects do not trigger the signature. For example, "...(select%20something%20from%20somethingmore%20where%20variable%20=%20(select%20....." which could be malicious. Is there any possibility to include "(" in the regexp to detect subselects?
    Regards,
    /Ola

    hmmm...That should actually match just fine. Let's break it down:
    ([%]20|[=]|[+]) <--"%20","=",or "+"
    [Ss][Ee][Ll][Ee][Cc][Tt] <-- "SELECT"
    ([%]20|[+]) <--"%20" or "+"
    [^\r\n\x00-\x19\x7F-\xFF]+ <-- one or more chars that are NOT ASCII control or extended chars
    ([%]20|[+]) <-- "%20" or "+"
    [Ff][Rr][Oo][Mm] <-- "FROM"
    ([%]20|[+]) <-- "%20" or "+"
    The only reason I can think that it wouldn't match is if there are some funky characters between the first SELECT and the first FROM (i.e. carriage return/line feed, etc.). Also remember that a %20 or = or + must immediately precede the SELECT and that a %20 or + must follow the FROM.
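    If anyone wants to check this at home, the posted pattern drops straight into java.util.regex. A quick harness -- the URI string is Ola's example, abbreviated:

    import java.util.regex.Pattern;

    public class SigTest {
        public static void main(String[] args) {
            Pattern sig = Pattern.compile(
                    "([%]20|[=]|[+])[Ss][Ee][Ll][Ee][Cc][Tt]([%]20|[+])"
                    + "[^\\r\\n\\x00-\\x19\\x7F-\\xFF]+"
                    + "([%]20|[+])[Ff][Rr][Oo][Mm]([%]20|[+])");
            String uri = "(select%20something%20from%20somethingmore%20where%20"
                    + "variable%20=%20(select%20more%20from%20elsewhere%20";
            // Prints false: in the subselect the character right before
            // "select" is "(", which the leading ([%]20|[=]|[+]) group
            // does not match -- exactly the gap Ola is describing.
            System.out.println(sig.matcher(uri).find());
        }
    }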

  • Problems with RandomAccessFile

    Hello!
    I have a problem with reading data from a file which was written with RandomAccessFile.
    I write data with the writeChars() method and then try to read what I wrote with the readLine() method.
    But if I wrote e.g. 3333,
    when I read it I have *3*3*3*3
    (* looks like a small square; it seems to me that it is just a free space)
    Does anybody know why this happens?
    Look please at this code with my explanations :
    public static void main(String[] args){
       try{
            RandomAccessFile outputStream = new  RandomAccessFile("a/new.data", "rw");
            outputStream.seek(0);
            outputStream.writeChars("3333" + "\n");
            outputStream.writeChars("test");
            outputStream.seek(0);
            String s = outputStream.readLine();
            String s1 = outputStream.readLine();
            System.out.println(s + " s");// here i receive instead 3333     *3*3*3*3  ( * is a small square)
            System.out.println(s1  + " s1" ); // the same thing
            outputStream.close();
        }catch(Exception exc){System.out.println("exception");}
    }
    Thank you for any ideas.
    Timur.

    The funky characters you are seeing are bytes with value of 0 read by readLine(), converted into Java chars.
    Your problem is as follows: you write data with writeChars() which writes every character using writeChar() [which in turn writes every char as a 2-byte value, high byte first] but you read data using readLine() which uses read() in a loop. read() reads every single byte of data from the file and converts it into a 16-bit Java char.
    Thus, you double the length of your "string" by inserting zeros (for an ASCII string) when you read it back in your code.
    There are several ways to make sure you read back correct data:
    - write it using writeBytes() -- but be aware that this will discard the high byte of every char and this is only good for ASCII etc [i.e. your code will not be Unicode-friendly]
    - write the string using writeUTF() and read it via readUTF(). This will be Unicode-friendly but will not allow you to use readLine(). However, readLine() is documented as not being Unicode-friendly anyway; see
    http://java.sun.com/j2se/1.3/docs/api/java/io/RandomAccessFile.html#readLine()
    - do not use a RandomAccessFile at all. Instead, use a FileWriter to write your data and a BufferedReader wrapping a FileReader to read your data.
    Vlad.
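    For the writeUTF()/readUTF() route, a minimal sketch (same a/new.data path as the original post, so the a/ directory must already exist):

    import java.io.RandomAccessFile;

    public class UtfRoundTrip {
        public static void main(String[] args) throws Exception {
            RandomAccessFile file = new RandomAccessFile("a/new.data", "rw");
            file.setLength(0); // start with an empty file
            // writeUTF() stores each string with a 2-byte length prefix,
            // followed by its modified-UTF-8 bytes -- no stray zero bytes.
            file.writeUTF("3333");
            file.writeUTF("test");
            file.seek(0);
            System.out.println(file.readUTF()); // 3333
            System.out.println(file.readUTF()); // test
            file.close();
        }
    }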
