Double character escaping???

My problem is the following:
The value of a column in the database called 'Titel' is: 'Oracle & J2EE'
When doing a query using the oracle.jbo.html.databeans.XmlData object, the output appears as follows in the XML stream: <Titel>Oracle &amp;amp; J2EE</Titel>
I'm not able to get this back to the original text again.
<xsl:value-of select="Titel" /> gives 'Oracle &amp;amp; J2EE' in the HTML source, which displays as 'Oracle &amp; J2EE'.
<xsl:value-of select="Titel" disable-output-escaping="yes" /> gives 'Oracle &amp; J2EE' in the HTML source, which displays as 'Oracle & J2EE'.
Does anybody know how to get this to display correctly?
The same applies when HTML tags are stored in database fields. They get escaped over and over.
(The database field is a VARCHAR2(n).)
Thanks in advance,
Robert Willems of Brilman

Sorry... the examples are mixed up a little.... I had the wrong data at hand:
The Database value is:
'Bla bla bla<br>Bla bla bla'
We want to get this data out of the database using the XMLData object and transform it to HTML using an XSL stylesheet. However we cannot seem to get the tags to the browser.
With disable-output-escaping="yes" the browser displays 'Bla bla bla<br>Bla bla bla' as literal text (the tag is visible, there is no line break).
With disable-output-escaping="no" the browser displays 'Bla bla bla&lt;br&gt;Bla bla bla'.
How can we get <br> to the browser?
Thank you in advance,
Robert Willems of Brilman
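One possible workaround (a rough sketch, assuming the XmlData bean really does escape the column value once more before it reaches the stylesheet): undo that one extra level in Java before or after the transform, and only then let disable-output-escaping="yes" pass the real markup through. The class and method names below are only illustrative.
public final class PreEscapedText {
    // Undo exactly one level of XML escaping that was applied to the stored value.
    static String unescapeOnce(String s) {
        return s.replace("&lt;", "<")
                .replace("&gt;", ">")
                .replace("&quot;", "\"")
                .replace("&apos;", "'")
                .replace("&amp;", "&");   // must come last so only one level is undone
    }
    public static void main(String[] args) {
        System.out.println(unescapeOnce("Oracle &amp; J2EE"));           // Oracle & J2EE
        System.out.println(unescapeOnce("Bla bla bla&lt;br&gt;Bla"));    // Bla bla bla<br>Bla
    }
}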

Similar Messages

  • Double character input error when entering data in numbers using Bluetooth keyboard.

    Using a bluetooth keyboard (Logitech) sometimes causes a double character input when entering data in numbers.
    For example if I wish to enter the number "22" or the word "in" they become "2222" and "iinn". Pressing enter or tab also causes a double jump.
    This problem is not always solved by turning keyboard off and on or by closing down and restarting numbers.
    This problem does not occur in any other app.
    I have used 2 different Logitech keyboards (ultra-slim and the recently new folio keyboard) and both caused the same problem.
    The error is sporadic but frequent and has led to me often having to switch back to touch input.
    Very frustrating considering these excellent keyboards are ideal for using in numbers when working correctly!
    Any advice would be much appreciated.

    Zombie,
    Not much help I'm afraid, but I just saw your post & wanted to let you know that I have the same problem, but for me it manifests itself in many apps, and it happens with the Apple bluetooth keyboard.
    The only thing I've noticed is that it only seems to happen when I first start typing after not using the keyboard for a while (I'm not sure how long a while is, but maybe 30 seconds or so). Then I'll type & the first few letters will be fine, and then one letter double-types, and then it's fine again until I stop using it for whatever the threshold amount of time is. I'm guessing that it has something to do with the iPad's battery management & that it's shutting off the connection to the keyboard or something & once it picks it back up, then it double-types that letter.
    At least in Pages, it isn't as big of a deal because I know it's coming, I can correct it & then type for a long time. In Numbers, my keyboard use is much less consistent, so I need to be more cognisant of it.

  • XML Character Escaping Issue

    Hello all,
    At my current project we are interfacing between SAP and PeopleSoft.
    The target system, PeopleSoft, expects an XML message formed exactly as follows.
    So within the CDATA area "<" and ">" are expected, NOT "&lt;" and "&gt;" and NOT "&#60;" and "&#62;".
    <?xml version="1.0"?>
    <IBRequest>
         <ContentSections>
              <ContentSection>
                   <Data>
    <![CDATA[<?xml version="1.0"?><SNS_UPDATE_DEB><MsgData><Transaction>
    <EXTERNAL_SYSTEM class="R"><EMPLID IsChanged="Y">500000005</EMPLID>
    <EFFDT IsChanged="Y">2008-01-22</EFFDT><EXTERNAL_SYSTEM_ID>SAP99003</EXTERNAL_SYSTEM_ID>
    </EXTERNAL_SYSTEM></Transaction></MsgData></SNS_UPDATE_DEB>]]>
                                                    </Data>
              </ContentSection>
         </ContentSections>
    </IBRequest>
    However the sending system does send the "<" and ">" within the CDATA area in 'escaped mode'.
    Like below.
    <?xml version="1.0"?>
    <IBRequest>
         <ContentSections>
              <ContentSection>
                   <Data>
    <![CDATA[&lt;SNS_UPDATE_DEB&gt;&lt;MsgData&gt;&lt;Transaction&gt;]]>
    </Data>
              </ContentSection>
         </ContentSections>
    </IBRequest>
    Trying an XSLT with: disable-output-escaping="yes" gave no results. Furthermore a UDF with a FindAndReplace function also doesn't work.
    Does anyone have an idea how to fix this problem?

    Hi Ramon,
    I assume that the UDF is no help because XI escapes the XML after it's passed through the mapping again. Have you tried a Module in the adapter engine that reverts the escaping right at the end of processing? This at least should make sure that no further modifications are done by anybody afterwards.
    On the other hand, this is technically more challenging and a bit harder to monitor.
    What adapter are you using to communicate with your target system?
    regards,
    Peter
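    For what it's worth, here is a rough sketch in plain Java of that "revert the escaping at the very end" idea (the class is hypothetical, not the actual module API): take the payload after the mapping, decode the escaped markup inside <Data>, and wrap it in a CDATA section so the receiver gets literal < and >.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    public class CdataRestorer {
        private static final Pattern DATA = Pattern.compile("<Data>(.*?)</Data>", Pattern.DOTALL);
        static String restore(String payload) {
            Matcher m = DATA.matcher(payload);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                // Undo the escaping the sender applied inside <Data>.
                String inner = m.group(1)
                        .replace("&lt;", "<")
                        .replace("&gt;", ">")
                        .replace("&amp;", "&");
                m.appendReplacement(out,
                        Matcher.quoteReplacement("<Data><![CDATA[" + inner + "]]></Data>"));
            }
            m.appendTail(out);
            return out.toString();
        }
        public static void main(String[] args) {
            String payload = "<Data>&lt;SNS_UPDATE_DEB&gt;&lt;MsgData/&gt;&lt;/SNS_UPDATE_DEB&gt;</Data>";
            System.out.println(restore(payload));
            // <Data><![CDATA[<SNS_UPDATE_DEB><MsgData/></SNS_UPDATE_DEB>]]></Data>
        }
    }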

  • About downloading double character file

    Hello experts:
    I use the
      CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          filename                = a_full_filename
          filetype                = a_filetype
         TRUNC_TRAILING_BLANKS   = SPACE
         DAT_MODE                = SPACE
         CONFIRM_OVERWRITE       = SPACE
         NO_AUTH_CHECK           = SPACE
         CODEPAGE                = SPACE
         IGNORE_CERR             = ABAP_TRUE
         REPLACEMENT             = '#'
        IMPORTING
          filelength              = l_file_length
        CHANGING
          data_tab                = at_stream.
    to download the file.
    The parameter a_filetype = 'ASC', and the content of at_stream
    looks like the following:
    1     [VARIANT]                     #[DESCRIPTION]#I_LAND1#I_MOLGA#I_SPRAS1#I_SPRAS2#I_WAERS#I_KTEXT_1#I_KTEXT_2#
    2     *                             ###################
    3     *ECATTDEFAULT                 ###################
    4     HOME                          #Home (ZH)#CN##28#EN#ZH#CNY#Yuan#人民币#Chinese Yuan (international)#中国人民币#
    Why can the Chinese words in the downloaded file not be displayed?
    Best Regards,
    Kevin

    Hi,
    Since the Chinese language is not installed on your application server, it is unable to recognise the Chinese characters and outputs them as '#'. Check with your Basis guy in order to resolve this problem.
    Rgds.,
    subash
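    For illustration, the same effect can be reproduced outside ABAP (a Java analogy, not the SAP mechanism): encoding text with a codepage that has no Chinese characters silently substitutes a replacement character, which is the same idea as the '#' REPLACEMENT parameter above, while a Unicode codepage keeps the characters.
    import java.nio.charset.StandardCharsets;
    public class CodepageDemo {
        public static void main(String[] args) {
            String s = "Yuan 人民币";
            // A single-byte Western codepage has no Chinese characters, so they
            // are replaced (here with '?', in the download above with '#').
            byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(latin1, StandardCharsets.ISO_8859_1));   // Yuan ???
            // A Unicode encoding keeps them intact.
            byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
            System.out.println(new String(utf8, StandardCharsets.UTF_8));          // Yuan 人民币
        }
    }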

  • Include-owa and character escaping

    Hi.
    We are using include-owa to return user-entered database text within well-formatted tags.
    Sometimes text contains & and other XML characters.
    We then get an "XML returned from PLSQL agent was not well-formed" error.
    How can we easily escape all text values returned?
    Thanks
    Lionel

    Normally you can enclose the text in the CDATA element.
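    For example, a minimal Java sketch (the helper name is made up): wrap the user-entered text in a CDATA section so & and < need no escaping; the only sequence CDATA itself cannot contain is "]]>", which has to be split across two sections.
    public class Cdata {
        static String wrap(String text) {
            // Split any "]]>" in the data so the CDATA section stays well-formed.
            return "<![CDATA[" + text.replace("]]>", "]]]]><![CDATA[>") + "]]>";
        }
        public static void main(String[] args) {
            System.out.println(wrap("Fish & Chips <small>"));
            // <![CDATA[Fish & Chips <small>]]>
        }
    }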

  • Character escaping

    I'm trying to escape all apostrophes in a String so that it can be used inside a javascript function as a parameter. However to replace all "'" with "\'" it seems I need to write the following:
    str = str.replaceAll("'", "\\\\'");
    I don't understand why I need 4 backslashes to get one in the output. Thanks

    The replacement parameter goes through two rounds of un-escaping: the Java compiler turns each \\ in the source literal into a single backslash, and in the replacement string (as in a regex) the backslash is a special char and needs to be escaped by another backslash. Hence four backslashes in the source for one backslash in the output.
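    For example:
    public class ApostropheEscape {
        public static void main(String[] args) {
            String str = "it's Bob's";
            // Source literal "\\\\'" -> runtime string \\' -> replacement text \'
            // (the compiler eats one level, the replacement syntax eats the other).
            System.out.println(str.replaceAll("'", "\\\\'"));   // it\'s Bob\'s
            // Equivalent without the regex machinery:
            System.out.println(str.replace("'", "\\'"));        // it\'s Bob\'s
        }
    }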

  • Double charactering on Q10 keyboard

    I have a Q10 keyboard that often produces double characters. This seems to be an issue that I have seen in the past on other phones, but I didn't see a sensible solution posted here. A search here also doesn't produce much more than 'rants'. Is there a known solution to this problem? I think my Q10 has been experiencing this problem almost from the first time I bought it (18 months ago) but it's never been much of a problem. Recently though, it seems to have gotten a bit worse. Is there any known solution?

    Here is some video of it.
    I don't think it's a faulty sensor/screen as it's consistent and everything else works perfectly.
    A single finger won't do it - nor do three - it has to be two fingers at exactly the same time.
    https://onedrive.live.com/redir?resid=322862D7399BE469!8710&authkey=!AOJbCo7JSv5euO8&ithint=video%2c.wmv
    Dave

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items
    1.Unicode does not solve this issue for us (yet).
    2.Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character – in their text editor, using the codepage for their region, they insert a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that is now the first byte of a 2 byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
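    A tiny Java illustration of that trap (assuming the editor saved the file as ISO-8859-1 and the reading program expects UTF-8):
    import java.nio.charset.StandardCharsets;
    public class EncodingMismatch {
        public static void main(String[] args) {
            // "ß" saved by an editor that uses a single-byte Western codepage...
            byte[] asLatin1 = "Straße".getBytes(StandardCharsets.ISO_8859_1);
            // ...read back by a program that assumes UTF-8: the lone 0xDF byte is not
            // a valid UTF-8 sequence, so the ß comes back as the U+FFFD replacement character.
            System.out.println(new String(asLatin1, StandardCharsets.UTF_8));
        }
    }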
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
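    For example, in Java that just means passing the charset explicitly instead of relying on the platform default (the file name here is made up):
    import java.io.*;
    public class ExplicitEncoding {
        public static void main(String[] args) throws IOException {
            File f = new File("notes.txt");   // hypothetical file
            // Write with the encoding spelled out...
            Writer w = new OutputStreamWriter(new FileOutputStream(f), "UTF-8");
            w.write("naïve café\n");
            w.close();
            // ...and read it back with the same explicit encoding.
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(new FileInputStream(f), "UTF-8"));
            System.out.println(r.readLine());
            r.close();
        }
    }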
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding, it's when they ignore the issue that they get into trouble.
    Edited by: Darryl Burke -- link removed

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items
    1.Unicode does not solve this issue for us (yet).
    2.Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts. Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that but at that time I wasn't even aware that character sets existed.
    They might only use that range but that is a different issue, especially since that range is exactly the same as the UTF8 character set anyways.
    >
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 values for each character. There of course were numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second 128 were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years then a column with a size of 8 bytes is significantly different than one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a series of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivalent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of unicode, all of unicode, are based on ASCII. The representational format of UTF8 is required to implement unicode, thus it must represent those characters. It uses the idiom supported by variable width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character – in their text editor, using the codepage for their region, they insert a character like ß – and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that is now the first byte of a 2 byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it then it invalid. End of story. It has nothing to do with html/xml.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in java with escaped unicode characters which will fail to compile.
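    For example (the escape is translated before the parser ever sees the quotes, so the literal below is really "a" + "b"):
    public class UnicodeEscapeGotcha {
        public static void main(String[] args) {
            // \u0022 is the double quote, so the next line compiles as  "a" + "b".
            String s = "a\u0022 + \u0022b";
            System.out.println(s + " / length " + s.length());   // ab / length 2
            // For the same reason a backslash-u000A escape inside a string literal
            // does not compile at all: it becomes a raw line terminator first.
        }
    }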
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business and create solutions that are appropriate to it. Thus there is absolutely no point for someone who is creating an inventory system for a stand-alone store to craft a solution that supports multiple languages.
    And another example: with high-volume systems, moving/storing bytes is relevant. As such one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems incremental savings impact operating costs and the marketing advantage that comes with speed.

  • Is the . character a reserved character?

    Hi,
    I would like to search for the following information in a full text index: 10.1139/f91-051. I then submit the following query:
    select * from mytable where contains(my_column,'"10.1139/f91-051" WITHIN doi') > 0
    but it does not return any data. However, when I replace the '.' character with a '_', I can retrieve my document:
    select * from mytable where contains(my_column,'"10_1139/f91-051" WITHIN doi') > 0
    Is the '.' character a reserved character? But when I try to escape it, it does not work either:
    select * from mytable where contains(my_column,'"10\.1139/f91-051" WITHIN doi') > 0
    What did I do wrong?
    In the same way, what query should I submit to find all the I.B.M. words from my index, for example?
    Kind regards,
    Fred
    Edited by: user503159 on 31 juil. 2009 02:29

    I'm unable to reproduce this exactly as you've stated.
    A few things to note:
    - The double quotes in your query have no effect. There is no "phrase" operator in Oracle Text, tokens that follow each other are a phrase by default
    - The "-" character is a special operator, and needs to be escaped to be treated as a literal (you can precede it with a backslash or put braces (curly brackets) round the whole search term)
    - "\" and "/" are break characters by default.
    - "." is a join character ONLY within numbers. It's a break character within normal alphabetic text.
    Run the following:
    -- drop table t;
    create table t (x varchar2(2000));
    insert into t values ('<doi>10.1139/f91-051</doi>');
    create index ti on t(x) indextype is ctxsys.context parameters ('section group ctxsys.auto_section_group');
    select token_text, decode(token_type, 0, 'Normal', 2, 'Zone Section') as "Token Type" from dr$ti$i order by token_type;
    The last query shows us what tokens are indexed. We should see:
    TOKEN_TEXT                                                       Token Type
    051                                                              Normal
    F91                                                              Normal
    10.1139                                                          Normal
    DOI                                                              Zone Section
    So we can run a query without the break characters:
    SQL> select * from t where contains (x, '10.1139 f91 051 within doi') > 0;
    X
    <doi>10.1139/f91-051</doi>
    Or we can run a query with the break characters in it, but with the "-" character escaped:
    SQL> select * from t where contains (x, '10.1139/f91\-051 within doi') > 0;
    X
    <doi>10.1139/f91-051</doi>
    The query that you claimed worked does not work for me because the "-" operator is not escaped (I'm on 11g - it's possible the behaviour is different in earlier versions, but unlikely):
    SQL> select x from t where contains(x,'"10_1139/f91-051" WITHIN doi') > 0;
    no rows selected
    As for your question about searching for "I.B.M.", you can search for "I.B.M." or "I B M" and it will find it. You can also set "." to be a SKIPJOINS character, in which case you can search for "I.B.M." or "IBM" (but not "I B M").
    Note there is a gray area for tokens which are partly numeric - "1.23ABC" will probably be indexed as a single token, whereas "ABC1.23" will probably be indexed as two tokens "ABC1" and "23". This is because for performance reasons the lexer does not backtrack - when it hits the "." in the first example it thinks it is handling a pure number and therefore treats the "." as a numeric join character.

  • Keyword Escape Sequences: Big Problem

    Is there any character escape sequence for LR 1.1 that will allow the use of restricted characters in keywords?
    In PSE/CS, if you want to use a comma as part of a keyword, you put the whole phrase inside double quotes.
    "Harmon, Anthony" shows up in cataloging apps and others as:
    Harmon, Anthony
    Well, under LR 1.1 that no longer works and thousands of keywords/files and cross-references need to be redone unless I can find a solution.
    Any über-experts out there?

    Preferences will allow you to use . as a Keyword separator.
    Richard Earney
    http://inside-lightroom.com

  • Double space after apostrophe in iMessage

    Ever since I updated to Mavericks, iMessage on my MacBook Pro (mid 2012) adds a double space after I use an apostrophe! It also doesn't save my background color preference (but it does save font color, font, and other user's font and color). It only shows up for me, not for the person I send the message to.
    It looks like this:

    Hi,
    The Original Poster newer came back with any more info on the subject.
    This also left her Balloon (or Background) colour unresolved as well.
    In Messages 8 (Mavericks) the Smiley function now uses the Emoji font to create the Smilies.
    In iChat versions and Messages 7 you could show the Smiley drop down and mousing over the Smilies would reveal the text keystrokes that would be needed to create it if typing them.
    This does not happen with the Emoji Font.
    However most of the icons are created by double character info which may mean a keystroke might just be invoking it.
    Playing with these settings may provide something.
    9:04 pm      Saturday; December 21, 2013
      iMac 2.5Ghz 5i 2011 (Mavericks 10.9)
     G4/1GhzDual MDD (Leopard 10.5.8)
     MacBookPro 2Gb (Snow Leopard 10.6.8)
     Mac OS X (10.6.8),
     Couple of iPhones and an iPad

  • Double-byte parser

    Hey,
    I need to load a 400-character string of data into text using the FM SAVE_TEXT, in chunks of 60 characters. If the user logs on in Japanese, they could be loading a mix of single-byte and double-byte characters. I need to be able to check whether breaking my string of data at the 60th character would corrupt a double-byte character.
    Does anyone know how to perform this test in ABAP?
    Thanks for the feedback! Bill

    Hi Bill,
    If you are on version 6.2+ you could try using the NUMOFCHAR( ) command.
    I guess you could do something like:
    * First determine the number of actual
    * characters in 60th and 61st bytes.
    w_num_of_chars = NUMOFCHAR( w_text+60(2) ).
    IF w_num_of_chars = 1.
    * If there is only 1, then split on 59.
    ELSE.
    * Otherwise, we have two characters, so we can split at 60.
    ENDIF.
    I must admit I'm a complete novice when it comes to double byte languages and their handling, so that may just be complete gobbledegook.
    Hope that helps.
    Cheers,
    Brad
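    Not ABAP, but for comparison the same idea in Java, whose own "double character" is the surrogate pair: before cutting at a fixed position, check that the cut does not fall between the two halves.
    public class SafeSplit {
        // Move the split point back one unit if it would land inside a surrogate pair.
        static int safeCut(String s, int pos) {
            if (pos > 0 && pos < s.length()
                    && Character.isHighSurrogate(s.charAt(pos - 1))
                    && Character.isLowSurrogate(s.charAt(pos))) {
                return pos - 1;
            }
            return pos;
        }
        public static void main(String[] args) {
            String s = "ab\uD83D\uDE00cd";       // "ab😀cd": the emoji takes two chars
            System.out.println(safeCut(s, 3));   // 2: don't cut the pair in half
            System.out.println(safeCut(s, 4));   // 4: already a safe boundary
        }
    }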

  • Curious double charaters while typing

    As soon as my iMac loaded and I began looking at all the cool stuff, I started to send Mail (eMail) and noticed a strange problem as I typed. Foor some reason, I experience random double character strikes like the one at the beginning of this sentence. At first I thought it might be a preference setting gone wild, but after dialing the setting upp to a faster repeat rate or down for the slower one, there doesn''t seem to be a difference. Sometime punctuation is affected too. Do I have a bad keyboard? Is the keyboard software confused perhaps, or is this a ham-fisted user problem? FYI- this post is mild compared to email text I type -- which is usually full of double strikes. This happens everywhere i.e. it involves all uses of thee keybooard in all applications.. In my 17+ years of using a Mac, I've never seen this one before.
    Does this look familiar to anybody?
    iMac   Mac OS X (10.4.9)  

    "I've beeen having this exaact same problem. Did any of the suggestions here work for you?"
    After I moved, I also had the exact same problem, and discovered it had to do with the new location of my computer. Since I simultaneously began to have sharp pains in both wrists, I simply could not tolerate the situation for any length of time at all, and continually made changes until I discovered the solution - at least for me.
    http://www.apple.com/about/ergonomics/index.html
    Search "ergonomics" in Support, or google "typing ergonomics" for more information on how to properly position oneself for better productivity and less risk of pain and/or injury when using a typewriter or computer keyboard.
    I originally was sitting at a chair seat height that was more than 12" below the height at which the keyboard sat. As long as I sit no more than 9" below the height at which the keyboard sits, I have no typing accuracy problem, and no wrist pain. Measurements are taken while seated, so as to include any compression of the chair seat height.
    When I now return myself to that original faulty height configuration, my typing once again makes it look like I'm just learning to spell. I originally got myself into that bad situation by quickly setting up my computer at the first empty desk, rather than first giving any thought to the positioning.
    Unfortunately, those measurements are only what works for me, based on my height/arm length/keyboard height/seat height. Each person using a keyboard needs to properly fit themselves into their individual best position. Since correcting desktop height can be nearly impossible, it might be easier to set the keyboard at a different level than the display, or use a different chair.

  • Escaping ' and " characters in XML message

    Hello,
    I have the following issue with XML character escaping. The following characters need to be escaped: < > & ' and ".
    When I receive the message in the integration engine, &lt; and &gt; are already escaped as expected. However ' and " are not.
    I have implemented a Java mapping to replace " and ' with their escape characters &quot; and &apos;.
    The problem however is that ALL " are escaped (including those in the XML prolog <?xml version="1.0"?>).
    The XML then results in a parsing error when executing a test for the mapping.
    Do you have any idea how I can resolve this?
    Thank you very much!
    Edited by: Florian Guppenberger on Jan 20, 2010 10:49 PM
    Edited by: Florian Guppenberger on Jan 20, 2010 10:49 PM

    When the message arrives everything is escaped, however after executing the XSLT mapping &apos; and &quot; are de-escaped again.
    I think both the SAX parser and the regex are a good approach to solve the problem.
    Christophe, do you have any sample code available for the replacement with a regex? I guess that I have to find the relevant substrings first, and then have a replace-all run on that. But it would be very helpful to see a code sample there....
    Thank you!
    Edited by: Florian Guppenberger on Jan 21, 2010 2:20 PM
    Edited by: Florian Guppenberger on Jan 21, 2010 4:02 PM
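    Since sample code was asked for, here is a rough sketch of the regex approach (method names are made up): escape ' and " only in the stretches between '>' and '<', i.e. in element text, so the XML prolog and the attribute quotes of the tags themselves are left untouched.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    public class QuoteEscaper {
        private static final Pattern TEXT = Pattern.compile(">([^<]+)<");
        static String escapeTextNodes(String xml) {
            Matcher m = TEXT.matcher(xml);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                // Only the text between a '>' and the next '<' gets escaped.
                String escaped = m.group(1)
                        .replace("\"", "&quot;")
                        .replace("'", "&apos;");
                m.appendReplacement(out, Matcher.quoteReplacement(">" + escaped + "<"));
            }
            m.appendTail(out);
            return out.toString();
        }
        public static void main(String[] args) {
            String in = "<?xml version=\"1.0\"?><Name>O'Brien \"junior\"</Name>";
            System.out.println(escapeTextNodes(in));
            // <?xml version="1.0"?><Name>O&apos;Brien &quot;junior&quot;</Name>
        }
    }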

  • Regular expressions in Workshop 8.1

    Hello,
    I'm posting this question here because I don't see a "jdk" subcategory in this
    newsgroup and it might be a problem peculiar to Workshop.
    I'm trying to use the Pattern and Matcher classes in java.util.regex (JDK 1.4.2)
    in BEA Workshop 8.1, but I'm getting "ERROR: Unknown escape code" (red squiggly
    line appears under the regex and this message is the screen tip) whenever I try
    to use the backslash to escape a special character in the Pattern.compile() and
    the Pattern.matches() methods.
    For example, it doesn't allow "\d" to mean "any digit". For this particular one,
    I can get around the problem by specifying "[0-9]", but in the case of the period
    character, I'm stuck. I cannot use "\." However, the JDK API doc (http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html)
    says the backslash is to be used for this purpose, if I'm reading it correctly.
    Is this a problem with Workshop, and is there a workaround? I need to specify
    that I require one and exactly one period.
    Any help would be most appreciated!
    Thanks.

    Yes, I had read the Java doc, but I guess I hadn't fully understood it. Now I do! Thanks!!
    David
    Josh Eckels <[email protected]> wrote:
    This isn't particular to Workshop, but you'll need to use two
    backslashes in your source code. Inside a string, backslash is used to
    escape the next character so that you can enter special characters like
    newlines ('\n'), tabs ('\t'), etc.
    So, in order to enter a backslash character into your string, you need
    to escape it, like '\\'.
    There's a small section on this in the java.util.regex.Pattern JavaDoc,
    under the "Backslashes, escapes, and quoting" header:
    Backslashes within string literals in Java source code are interpreted
    as required by the Java Language Specification as either Unicode escapes
    or other character escapes. It is therefore necessary to double
    backslashes in string literals that represent regular expressions to
    protect them from interpretation by the Java bytecode compiler. The
    string literal "\b", for example, matches a single backspace character
    when interpreted as a regular expression, while "\\b" matches a word
    boundary. The string literal "\(hello\)" is illegal and leads to a
    compile-time error; in order to match the string (hello) the string
    literal "\\(hello\\)" must be used.
    Josh
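    For completeness, a quick example that shows the doubled backslash at work:
    import java.util.regex.Pattern;
    public class LiteralDot {
        public static void main(String[] args) {
            // "\\." in source becomes the two characters \. at runtime, which the
            // regex engine reads as a literal period. A single "\." in the source
            // would be a compile-time error (illegal escape character).
            System.out.println("10.5".matches("\\d+\\.\\d+"));   // true
            System.out.println("10x5".matches("\\d+\\.\\d+"));   // false
            // Pattern.quote() is an alternative when a whole fragment is literal:
            System.out.println(Pattern.quote("."));              // \Q.\E
        }
    }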

Maybe you are looking for

  • Problems reading my backed up data on hard drive after using Time Machine

    I have a MacBookPro and had to take it in for servicing. I backed up all my data using Time machine. They had to replace my battery, hard drive, and logic board, basically wiping my slate clean. When I got my computer back and plugged in my hard driv

  • Flat panel monitor failure

    My G4 moitor has failed. Does anyone know how I can use a networked mac to see both partitions on the G4 hard drive and and what software is available & necessary to copy the old hard drive contents to a new machine. If there is no way, I am in deep

  • Top up wipes out previous credit

    Hi all So I have noticed this a few times, but was not entirely sure if it was happening. Ok, I jumped the gun, I will explain the problem first. I only use prepaid top up cards on my account, why this is. Well my Xbox account was hacked and £900 wen

  • My iTouch can only be restored thru iTunes after update

    I updated my iTunes to 10.5.1.42 last night, and then updated the iTouch software to IOS 5.0.1. Now, when I plug my iTouch into my computer to sync and open up iTunes, the iTouch is found and recognized, but the only option I have is to set up as a n

  • Where can I buy more synths?

    Are there synth packs for Garageband besides the Jam Packs? I've been looking on the web and there's just too much information to sort through. I don't want to have to download different programs and export and blah blah blah. Thanks!