CF8 MySQL character encoding problems

I have worked with a MySQL 5 database (latin1 charset) under
CF5 for a long time with no problems.
After migrating to CF8 I get question marks ("?") instead of
extended characters (such as the umlauts ÄÖÜ) when printing
database content to my web pages.
The following commands are set in my Application.cfm:
<cfset SetEncoding("form", "ISO-8859-1")>
<cfset SetEncoding("url", "ISO-8859-1")>
<cfcontent type="text/html; charset=iso-8859-1">
My HTML content starts as follows:
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="de" xml:lang="de" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type"
content="text/html;charset=iso-8859-1" />
The environment is WinXP, CF8, the default JVM, and the MySQL3 driver
that ships with CF8. Tables are in MyISAM format with latin1 character set and
collation.
I cannot use the MySQL4/5 driver so far, since that driver
seems to enforce the STRICT database mode, which collides with some
database content (NULL values got saved as 0000-00-00 etc., as this
project started as a MySQL3 database many years ago). (Note: the
MySQL4/5 driver does seem to work fine with the settings above.)
I've already tried several combinations of
useUnicode=false;characterEncoding=iso-8859-1;characterSetResults=iso-8859-1
parameters in the "connection string" box of the data source (with
either ";" or "&" as separator) - to no avail.
Is there any chance I can get this running with the MySQL3 driver?
Thanks for any help,
Bernhard

quote:
Originally posted by:
brahms_x01
We have had the same problem since upgrading to CF 8. I have
found that it seems to depend on the encoding type of the
form: <cfform action="rezensent_update.cfm" method="post">
works fine, but as soon as we use <cfform
action="rezensent_update.cfm" method="post"
enctype="multipart/form-data"> we get the same problem.
While searching for a solution to my own problem I came across a
possible solution to yours: Adobe writes somewhere that you
have to add some encoding information to the enctype parameter. I
no longer know the URL of the article, but IIRC it was an
official Adobe bulletin on internationalization of ColdFusion
projects. Have a google for it.
Bernhard

Similar Messages

  • About mysql character encoding problem

    I have a system with a MySQL database, and everything in the system is encoded as UTF-8. The client uses UTF-8 encoding and sends form data to a server-side process, which then does something like inserting the data into the database. If I do not set the MySQL encoding, the default is latin1. When the data travels from the client to the server and is then inserted into the database, will the data in the database be UTF-8 or something else? Thanks

    If you do not change the MySQL encoding, MySQL will use the default encoding, which is latin1. All data inserted into the database will then be stored as latin1.
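    A minimal JDBC sketch of keeping the path UTF-8 end to end; this is only an illustration under assumptions (hypothetical table, host and credentials), since both the connection character set and the table/column character set have to be utf8 for the stored data to actually be UTF-8:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class Utf8InsertTest {
        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver");
            // Connection declared as UTF-8 so the driver does not fall back to latin1.
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb"
                    + "?useUnicode=true&characterEncoding=UTF-8",
                    "user", "secret");
            // Table assumed created as:
            //   CREATE TABLE notes (txt VARCHAR(200)) CHARACTER SET utf8;
            PreparedStatement ps =
                    con.prepareStatement("INSERT INTO notes (txt) VALUES (?)");
            ps.setString(1, "UTF-8 test: ÄÖÜ");
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }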

  • Character encoding problem by Gmail set up as exchange account on IOS mail

    It was maybe asked before but I could not find any solution. If someone can help I would appreciate it...
    If I set up my Gmail account, which I use very heavily, as "Gmail" in iOS Mail, it is configured as IMAP and does not push the emails. If I instead set it up as an Exchange account via m.google.com, everything is OK and I get my mail instantly. But the Exchange route is a nuisance because of a character encoding problem: iOS Mail does not show the Turkish or some German characters like "ü", and the mail is almost unreadable... With the IMAP setup there is no problem with the characters, but I have to open iOS Mail every time to see whether I have new mail...
    Does anyone know a solution for it?
    Thanks...
    Mel

    But I cannot set the Gmail app as the standard email program, can I? I mean, if I want to send a page from Safari and tap "send via email", Mail.app starts and not the Gmail app... I have been using an iPhone for only a few weeks, and as far as I know such a customisation is not possible.

  • Character encoding problem using XSLT and Tomcat on Linux

    Hi,
    I have an application running on a Tomcat 4.1.18 application server that applies XSLT transformations to DOM objects using standard calls to javax.xml.transform API. The JDK is J2SE 1.4.1_01.
    It is all OK while running in the development environment (a Windows NT 4.0 workstation), but when I put it in the production environment (Red Hat Linux 8.0), it comes up with some kind of encoding problem: the extended characters (in Spanish) are not shown as they should be.
    The XSL stylesheets are using the ISO-8859-1 encoding, but I have also tried with UTF-8.
    These stylesheets are dynamically generated from a JSP:
    // opens a connection to a JSP that generates the XSL
    URLConnection urlConn = (new URL( xxxxxx )).openConnection();
    Reader readerJsp = new BufferedReader(new InputStreamReader( urlConn.getInputStream() ));
    // Gets the object that represents the XSL
    Templates translet = tFactory.newTemplates( new StreamSource( readerJsp ));
    Transformer transformer = translet.newTransformer();
    // applies transformations
    // the output is sent to the HttpServletResponse's outputStream object
    transformer.transform(myDOMObject, new StreamResult(response.getOutputStream()));
    Any help would be appreciated.

    Probably you need to set your OS-specific LANG environment variable to Spanish.
    Try adding this line:
    export LANG=es_ES
    to your tomcat/bin/catalina.sh, and restart tomcat.
    It should look like this:
    # OS specific support.  $var _must_ be set to either true or false.
    cygwin=false
    case "`uname`" in
    CYGWIN*) cygwin=true;;
    esac
    export LANG=es_ES
    # resolve links - $0 may be a softlink
    PRG="$0"
    (BTW, keep using the ISO-8859-1 encoding for your XSL.)
    HTH
    Un Saludo!
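    Another way to take the platform default out of the picture entirely, shown here as a sketch under the assumption that the JSP generating the XSL really serves ISO-8859-1 (class and method names are made up for illustration):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.net.URL;
    import java.net.URLConnection;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamSource;

    public class PinnedEncodingTransform {
        public static Transformer buildTransformer(String xslUrl) throws Exception {
            URLConnection conn = new URL(xslUrl).openConnection();
            // Decode the stylesheet with an explicit charset instead of the JVM default.
            Reader xsl = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "ISO-8859-1"));
            Transformer t = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(xsl))
                    .newTransformer();
            // Force the serializer to emit ISO-8859-1 regardless of the platform default.
            t.setOutputProperty(OutputKeys.ENCODING, "ISO-8859-1");
            return t;
        }
    }
    Setting the servlet response content type to "text/html; charset=ISO-8859-1" then keeps the browser in agreement with what the serializer writes.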

  • c:import character encoding problem (utf-8)

    Aloha @ all,
    I am currently importing a file using the <c:import> functionality (<c:import url="module/item.jsp" charEncoding="UTF-8">), but it seems that the returned data is not encoded as UTF-8 and hence is not displayed correctly. The overall file header is:
    HTTP/1.1 200 OK
    Server: Apache-Coyote/1.1
    Set-Cookie: JSESSIONID=E67F9DAF44C7F96C0725652BEA1713D8;
    Content-Type: text/html;charset=UTF-8
    Content-Length: 6861
    Date: Thu, 05 Jul 2007 04:18:39 GMT
    Connection: close
    I've set the file-encoding on all pages to :
    <%@ page contentType="text/html;charset=UTF-8" %>
    <%@ page pageEncoding="UTF-8"%>
    but the error remains... is this a known bug and is there a workaround?

    Partially, yes. It turns out that I had created the documents in Eclipse with a different character encoding, so the documents were actually not UTF-8 encoded...
    I changed each document's encoding in Eclipse to UTF-8 and got it working just fine...

  • Character encode problem

    How can I encode Turkish characters in J2ME? I can connect to my XML file via an IIS server using the InputStream class, but when I use the readUTF() method of the DataInputStream class to read the file, a NullPointerException occurs; the read() method of DataInputStream works. Thanks in advance.

    here is my code;
    StreamConnection c = (StreamConnection) Connector.open(URL);
    InputStream is = c.openInputStream();
    StringBuffer buffer = new StringBuffer();
    int ch;
    while ((ch = is.read()) != -1) {
        buffer.append((char) ch);
    }
    is.close();
    wordsXML = buffer.toString();
    I cannot read Turkish characters correctly. Can you give sample code to overcome this problem?
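    A minimal variant of the snippet above that decodes the bytes explicitly as UTF-8 instead of casting single bytes to char; this is only a sketch and assumes the server really serves the XML as UTF-8 (it reuses the URL and wordsXML names from the post and needs the javax.microedition.io.* and java.io.* imports):
    StreamConnection c = (StreamConnection) Connector.open(URL);
    InputStream is = c.openInputStream();
    // InputStreamReader with an explicit encoding turns the multi-byte UTF-8
    // sequences into proper chars, which a plain is.read() cast cannot do.
    Reader reader = new InputStreamReader(is, "UTF-8");
    StringBuffer buffer = new StringBuffer();
    int ch;
    while ((ch = reader.read()) != -1) {
        buffer.append((char) ch);
    }
    reader.close();
    c.close();
    wordsXML = buffer.toString();
    Note that readUTF() expects the length-prefixed modified-UTF format written by DataOutputStream.writeUTF(), not an arbitrary UTF-8 file, which would explain why it misbehaves here.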

  • Character encoding problem in XSLT

    I'm transforming dynamically created XMLs (data read from a MySql database) together with static XSLs (read directly from file system) into HTMLs with Xalan.
    Everything works great on the development machine, but as soon as I put everything up on the production server, all special characters in the XSLs are replaced by '?'. The same characters in the XMLs work fine; only the ones directly in the XSLs are badly encoded.
    I've stated encoding "ISO-8859-1" in the <?xml> as well as the <xsl:output> tags of the XSLs. Development and production machines should run the same J2SDK, as well as the same Xalan et al. libraries.
    Does anyone have any ideas what I'm missing here? Are there any other places where I have to specify encoding? Or is the problem somewhere else? Grateful for any help.
    Tommy Sedin

    I'm sending the HTML directly to the browser.
    There is difference between the output from development and production machines. For instance, when development produces "&auml;" the production system outputs "&#65533;" instead.
    I've been looking through stuff, and the only difference between the two systems that I can find is that production runs a slightly older version of Tomcat. Also, the configuration files of the Tomcats aren't the same, but I couldn't find anything in there that should affect this (though I could be mistaken; I'm not very used to messing about with Tomcat).
    It feels like there's just a tiny problem somewhere, like somewhere where I should set the encoding of the output or some such. I'm not sure where that problem is however. Any ideas?
    Thank you,
    Tommy Sedin
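    One thing worth checking, sketched below under the assumption that the XSLs are loaded from the file system: if the stylesheet is opened through a Reader created with the platform default charset (e.g. FileReader), the encoding="ISO-8859-1" declared in its prolog is ignored, whereas handing the parser the file or raw stream lets it honour that declaration (class name and path are hypothetical):
    import java.io.File;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamSource;

    public class LoadStylesheet {
        public static Transformer load(String xslPath) throws Exception {
            // StreamSource(File) lets the XML parser read the raw bytes and apply
            // the encoding declared in the stylesheet itself.
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File(xslPath)));
            // Also pin the output encoding so the serializer does not fall back
            // to a platform default when writing the HTML.
            t.setOutputProperty(OutputKeys.ENCODING, "ISO-8859-1");
            return t;
        }
    }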

  • XML special character/encoding problem

    Hi
    I would like to store XML in a MSSQL database into a column with the datatype xml.
    It seems that the xml datatype in an xMII transaction is always stored with encoding type UTF-8,
    while the MSSQL xml datatype is UTF-16. This gives me problems with special characters when inserting into the MSSQL database (in the example below the target column is of the MSSQL xml datatype):
    INSERT INTO
         VALUES
    The error returned is this:
    "com.microsoft.sqlserver.jdbc.SQLServerException: XML parsing: line 1, character 62, illegal xml character"
    If I replace the 'ä' with a normal 'a' the command executes ok.
    I am currently using a workaround that looks like this, when setting the parameter in my transaction:
    stringreplace(Local.test, " encoding=" & doublequote & "UTF-8" & doublequote, "")
    But I was hoping I could get rid of the stringreplace.
    Is there a solution / recommended way of doing this?
    Best Regards
    Simon Bruun
    Edited by: Simon Bruun on Mar 4, 2011 10:43 AM

    I solved this by converting to Unicode UTF-8.
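    For anyone hitting the same error outside xMII, here is the underlying issue in plain JDBC terms, as a sketch only (table name, column and connection details are hypothetical): a Java String reaches SQL Server as UTF-16, so an encoding="UTF-8" declaration in the prolog contradicts the actual encoding, and stripping the declaration (essentially what the stringreplace workaround does) lets the xml column parse it:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertXmlColumn {
        public static void main(String[] args) throws Exception {
            String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?><doc>ä</doc>";
            // Drop the XML declaration so the string's own encoding (UTF-16 on
            // the wire) is not contradicted by the prolog.
            String cleaned = xml.replaceFirst("^<\\?xml[^>]*\\?>", "");
            Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://host;databaseName=mydb", "user", "password");
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO my_xml_table (payload) VALUES (?)");
            ps.setString(1, cleaned);
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }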

  • Oracle to MySql character set problem

    Dear Gurus,
    My database is Oracle 11g R2 (11.2.0.1.0) on Sun Solaris 10. To get data from a MySQL database for reporting purposes, I used DG4ODBC and strictly followed the MOS note "Detailed Overview of Connecting Oracle to MySQL Using DG4ODBC Database Link [ID 1320645.1]". Here are the main configuration steps:
    - Check DG4ODBC 32/64-bit
    - Install and configure ODBC Driver Manager unixodbc-2.2.14
    - Install and configure MyODBC 5.1.8
    - Configure tnsnames.ora and listener.ora
    - Create db links
    Oracle character set is AL32UTF8
    MySQL character set is utf8
    $ODBC_HOME/etc/odbc.ini
    [ODBC Data Sources]
    myodbc5 = MyODBC 5.1 Driver DSN
    [myodbc5]
    Driver = /opt/mysql/myodbc5/lib/libmyodbc5.so
    Description = Connector/ODBC 5.1 Driver DSN
    SERVER = <mysql server ip>
    PORT = 3306
    USER = <mysql_user>
    PASSWORD = ****
    DATABASE = <mysql db name>
    OPTION = 0
    TRACE = OFF
    $ORACLE_HOME/hs/admin/initmyodbc5.ora
    # HS init parameters
    HS_FDS_CONNECT_INFO=myodbc5 # Data source name in odbc.ini
    HS_FDS_TRACE_LEVEL=OFF
    HS_FDS_SHAREABLE_NAME=/opt/unixodbc-2.2.14/lib/libodbc.so
    HS_FDS_SUPPORT_STATISTICS=FALSE
    HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1
    # ODBC env variables
    set ODBCINI=$ODBC_HOME/etc/odbc.ini
    My issue is that I can query data from the MySQL database tables, but the output is incorrect in character-type columns (VARCHAR columns): only the first character of such columns is shown. I tried to read through some MOS notes but none was useful. If you have experience with such issues, please share your ideas / help me resolve it.
    Thanks much in advance,
    Hieu

    S. Wolicki, Oracle wrote:
    I have little experience with MySQL and the ODBC gateway, but this setting looks weird to me: HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1. Why do you configure WE8ISO8859P1 when both databases are Unicode UTF-8? Shouldn't the setting be AMERICAN_AMERICA.AL32UTF8 instead?
    -- Sergiusz
    If I set HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8, or leave HS_LANGUAGE unset, the following error occurs:
    SQL> select count(*) from "nicenum_reserve"@ussd;
    select count(*) from "nicenum_reserve"@ussd
    ERROR at line 1:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    I followed the metalink note "Error Ora-28500 and Sqlstate I Issuing Selects From a Unicode Oracle RDBMS With Dg4odbc To Mysql or SQL*Server [ID 756186.1]" to resolve the above error. The note advises setting HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P1, which resolved it.
    The following are the output from original database (MySql) and Oracle via SQLPLUS and TOAD.
    On the MySQL database (sorry about the output format):
    SQL> select ID, source_msisdn, target_msisdn, comment from nicenum_reserve where ID=91;
    +----+---------------+---------------+---------------------------------------------+
    | ID | source_msisdn | target_msisdn | comment                                     |
    +----+---------------+---------------+---------------------------------------------+
    | 91 | 841998444408  | 84996444188   | Close reservation becasue of swap activity  |
    +----+---------------+---------------+---------------------------------------------+
    SQLRowCount returns 1
    1 rows fetched
    Via Sqlplus on Oracle server:
    SQL> select "ID","source_msisdn","target_msisdn","comment" from "nicenum_reserve"@ussd where "ID"=91;
    ID
    source_msisdn
    target_msisdn
    comment
    91
    8 4 1 9 9 8 4 4 4 4 0 8
    8 4 9 9 6 4 4 4 1 8 8
    C l o s e r e s e r v a t i o n b e c a s u e o f s w a p a c t i v i
    t y
    Via TOAD connected to Oracle server:
    ID source_msisdn target_msisdn comment
    91     8     8     C
    This issue is likely related to character set settings, but I don't know how to set them properly.
    Brgds,
    Hieu

  • Character encoding problem in deployed version

    Hello,
    I developed an ADF/jspx application in JDeveloper. When I'm running it in the embedded OC4J or the stand-alone OC4J, everything works well.
    But when I deploy it to OAS 10.1.3 and run it, the special characters (in my German case, for example "ü", "ö" and so on) are not shown. Instead some kind of placeholder is displayed.
    I checked the project's settings and the parameters in web.xml and the jspx files, and all are set to windows-1252. The NLS_LANG parameter in the registry of the server is also set to WE8MSWIN1252 (which seems OK in my opinion).
    Is there any further "place" for me to check the used encoding of OAS when displaying deployed pages?
    Thank you very much!
    Sebastian

    I have just solved the problem.
    In my servlet I had to add:
    request.setCharacterEncoding("UTF-8");
    And that's it.
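    The same fix can be applied application-wide with a servlet filter instead of touching each servlet; here is a minimal sketch using the standard javax.servlet API (the class name is made up, and the filter still has to be mapped in web.xml):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class Utf8EncodingFilter implements Filter {
        public void init(FilterConfig config) {
        }

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain) throws IOException, ServletException {
            // Must run before any request parameter is read; otherwise the
            // container has already decoded the request with its default charset.
            request.setCharacterEncoding("UTF-8");
            response.setCharacterEncoding("UTF-8");
            chain.doFilter(request, response);
        }

        public void destroy() {
        }
    }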

  • Character encoding problem

    We are using WebLogic 7.0 SP4 on Solaris 8 w/ jdk131_10. By default we don't specify
    the "encoding" jsp-param of the jsp-descriptor in the weblogic.xml. The BEA documentation
    says "If not set, this parameter defaults to the encoding for your platform."
    Without setting this value, certain characters in our JSPs are converted just
    fine in our test environment but become "weird" characters in our production environment.
    Question 1: Where exactly is the encoding of the "platform" specified?
    So I figured we just had to specify the encoding to be UTF8, right? Well when
    we do that, everything is still fine in test but in production we get the following
    exception when WebLogic tries to compile a JSP.
    weblogic.utils.ParsingException: nested TokenStreamException: antlr.TokenStreamIOException
    at weblogic.servlet.jsp.JspLexer.parse(JspLexer.java:964)
    at weblogic.servlet.jsp.JspParser.doit(JspParser.java:90)
    at weblogic.servlet.jsp.JspParser.parse(JspParser.java:213)
    at weblogic.servlet.jsp.Jsp2Java.outputs(Jsp2Java.java:119)
    at weblogic.utils.compiler.CodeGenerator.generate(CodeGenerator.java:258)
    at weblogic.servlet.jsp.JspStub.compilePage(JspStub.java:356)
    at weblogic.servlet.jsp.JspStub.prepareServlet(JspStub.java:214)
    at weblogic.servlet.jsp.JspStub.prepareServlet(JspStub.java:164)
    at weblogic.servlet.internal.ServletStubImpl.getServlet(ServletStubImpl.java:534)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:364)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:462)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:306)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:5517)
    at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:685)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3156)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2506)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:234)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:210)
    Question 2: Any ideas why this exception is happening?

    Conrad Armstrong wrote:
    > <snip - full text of the question above>
    The platform encoding is the one returned by the system property
    file.encoding => System.getProperty("file.encoding")
    Probably you are developing on Windows and deploying ('production') on
    Solaris? In that case the default file encodings on each OS are different,
    and that may be what is causing the problem here. (On Windows the default is
    cp1252, I think.)
    Typically, are all your JSPs more or less ISO-8859-1? Then you can set this
    encoding in weblogic.xml. I believe the default encoding on Solaris is
    "ISO-8859-1". (You can check this by writing a simple JSP which prints out:
    Encoding : <%= System.getProperty("file.encoding") %>)
    Hope that helps.
    Do let me know what you find.
    --Nagesh
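    For Question 1, the same check can also be done outside WebLogic with a tiny stand-alone class; run it with the same JVM and the same -D options as the server in each environment and compare the output. This is just a sketch with a made-up class name:
    import java.io.ByteArrayOutputStream;
    import java.io.OutputStreamWriter;

    public class ShowPlatformEncoding {
        public static void main(String[] args) {
            // The value the JSP compiler falls back to when no encoding is configured.
            System.out.println("file.encoding = " + System.getProperty("file.encoding"));
            // What the runtime actually uses when it opens a writer without a charset.
            System.out.println("default writer encoding = "
                    + new OutputStreamWriter(new ByteArrayOutputStream()).getEncoding());
        }
    }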
              

  • Character encoding problems with weblogic stax implementation?

    Hello all,
    While using Stax to parse some XML, we encounter the following exception when the processor reaches the UTF-8 character C3 B1, ('ñ'):
    Caused by: Error at Line:1, token:[CLOSETAGBEGIN]Unbalanced ELEMENT got:StudentRegistration expected:LastName
    at weblogic.xml.babel.baseparser.BaseParser.parseSome(BaseParser.java:374)
    at weblogic.xml.stax.XMLStreamReaderBase.advance(XMLStreamReaderBase.java:199)
    We suspect that the processor's encoding might somehow be set to ANSI instead of UTF-8. I have read, in other posts, of a startup property related to web services:
    -Dweblogic.webservice.i18n.charset=utf-8
    However, this XML is not a web service request, but rather a file being read from disk after an MDB's onMessage() method is called.
    Could this setting be affecting stax parsing outside of webservices? Any other ideas?
    Thanks!

    As far as I know, we don't support changing outbound message encoding charset in 9.x. Both 8.x and 10.x support it. Check here: http://docs-stage/wls/docs100/webserv/client.html#wp230016
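    For the read-a-file-from-disk case in the original question, here is a minimal sketch that hands the parser a Reader with an explicit UTF-8 charset, so the bytes C3 B1 ('ñ') are decoded before tokenizing; this uses the standard StAX/JDK API rather than anything WebLogic-specific, and the file name is hypothetical:
    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class ReadUtf8Xml {
        public static void main(String[] args) throws Exception {
            // Decode the bytes as UTF-8 ourselves instead of letting the parser guess.
            Reader in = new InputStreamReader(
                    new FileInputStream("registration.xml"), "UTF-8");
            XMLStreamReader reader =
                    XMLInputFactory.newInstance().createXMLStreamReader(in);
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    System.out.println(reader.getLocalName());
                }
            }
            reader.close();
            in.close();
        }
    }
    If parsing still breaks on the same byte sequence with an explicit Reader, the file itself was probably not written as UTF-8 in the first place.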

  • Character encoding problem (I think)

    Hello everyone! I hope you can help me out with this problem:
    I have noticed just today that my RSS feeds in Safari come out as gibberish, almost like the font Helvetica Fractions, and changing the encoding does not fix it.
    I thought I could live with that for now, but I noticed the same problem in Mail as well, though only for characters in the message window, when I receive a mail or when I compose one. Any ideas of what to reset? I ran a permissions repair and it did not fix it. I'm totally stumped!
    Thanks

    UPDATE: It is using Helvetica Fractions! The setting in Mail preferences is the default and OK, but I pressed Command+T and noticed it had Helvetica Fractions selected. I changed it, and composing messages is fine now, but that has not fixed the text in some of my inbox messages.
    I use Linotype FontExplorer X but have not used it since this problem occurred. My RSS feeds are still unreadable, as are some web pages. I have cleared out the old feeds but that did nothing.

  • Character encoding problem. Question mark in a diamond??

    I have gotten this a few times before, but it usually gets fixed when I switch the encoding. This time absolutely nothing is working. In browsers like Safari you can see what the black diamond and question mark really is: it's just supposed to be an alt-bullet. If I copy and paste it, it turns out fine. I included the URL of the site affected. My encoding is currently on Western (ISO-8859-1) and auto-detect is on Universal. Does anyone have a solution?

    I assume that you mean this character: • (&#x2022;)
    http://en.wikipedia.org/wiki/Bullet_%28typography%29
    If you see a Question mark in a diamond then that means that the font that Firefox uses doesn't map that character.
    It is in a code block with: font-family: courier new, Verdana, Arial;
    You can check those fonts.

  • Character encoding problems when using javascript client-sdk for remoting

    Hi,
    I have recently downloaded LCDS to try.  I was interested in using Javascript for remoting.  I have a Java-based web application on the server side, and use HTML + Javascript (dataservices-client.js) to send/receive messages asynchronously in AMF format.
    I can both send and receive data (not only simple types, but objects with several attributes); however, when I receive data from the server side that contains special chars (e.g. á, ï), I get gibberish in my JavaScript objects. The same does not happen when I send content to the server: all special characters are received (printed) correctly in Java (server side).
    I based my code on the simple example shown at https://blogs.adobe.com/LiveCycleHelp/2012/07/creating-web-applications-using-html5javascript-remoting-client-sdk-with-livecycle-data-services.html
    Is there any bug on the serialization?
    My software version is Adobe LiveCycle es_data_services_JEE_4_7_all_win
    Java container is WebLogic 11g.
    Thanks
    =======
    Edited Apr 11 2014
    In my attempts, I tried using AMFX serialization so that I could see the message in a more comprehensible format inside my browser (eg using firebug).  After configuring an HTTP channel and destination in the server side, and adjusting accordingly in the client side code, the Javascript API still sends binary!
    Sadly, I concluded that client-SDK isn't mature enough...
    By the way, if you send a String like "&aacute" from the server, in the client you get "á"... instead of the raw "&aacute"... they forgot the escaping.

    hey,
    I had a similar experience. I was interfacing between 4.6 (RFC), PI and ECC 6.0 (ABAP proxy). When data was passed from ECC to 4.6, RFC received it incorrectly, so I had to send trimmed strings from ECC and receive them as strings in RFC (especially for CURR and QUAN fields). Also, the receiver communication channel in PI (between PI and RFC) had to be set as non-Unicode. This helped a bit, but I am still getting two issues: truncation of values and some additional digits!! The above changes did resolve the unwanted-characters problem (things like "<" and "#"). You can find a related post under my ID. Hope this info helps.
