Handling multibyte characters

Hi,
I have created a procedure which sends e-mail using UTL_SMTP.
The procedure has a part in which we add the attachments to the e-mail.
Now, the issue is that when I am adding an attachment which contains multibyte characters, these characters are replaced with '?'.
Can anyone provide any guidance on this?

First, you should not append 'charset="us-ascii"' in this line:
  UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: ' || IN_ATT_MIME_TYPE ||'charset="us-ascii"'||'; name="' || IN_ATT_FILE_NAME || '"' || UTL_TCP.CRLF);
The default IN_ATT_MIME_TYPE has this clause already, hence you would have a duplicate. Moreover, you add it without the required preceding semicolon. Further, in the Content-Type, you should pass the original character set of the file, not "us-ascii". This character set must support the characters included in the file.
Second, the NCLOB is not written correctly either. UTL_ENCODE.BASE64_ENCODE expects a RAW value. If you give it an NVARCHAR2 value returned by DBMS_LOB.SUBSTR, then PL/SQL will implicitly apply HEXTORAW to the value. HEXTORAW fails if the NCLOB content is not a valid sequence of hex digits, and treating the content of the NCLOB as a string of hex digits is obviously not your goal. You should use UTL_I18N.STRING_TO_RAW to convert the NVARCHAR2 from DBMS_LOB.SUBSTR to the desired target encoding (the one specified in Content-Type) and cast it to RAW at the same time. UTF-8 (i.e. AL32UTF8) is usually the best choice for the target encoding. You should then apply UTL_RAW.CAST_TO_VARCHAR2 to change the RAW representation of the base64-encoded value to the VARCHAR2 expected by UTL_SMTP.WRITE_DATA.
Of course, passing the DBMS_LOB.SUBSTR result directly to UTL_ENCODE.BASE64_ENCODE would make sense for a BLOB attachment. However, even then the encoded result should be passed to UTL_RAW.CAST_TO_VARCHAR2, not UTL_RAW.CAST_TO_RAW.
Third, if you use UTF-8 as the Content-Type encoding, you may want to prepend three bytes (0xEF 0xBB 0xBF) to the NCLOB value before base64 encoding. These three bytes are the UTF-8 encoding of the Byte Order Mark. It helps some editors, such as Notepad, recognize the file as encoded in UTF-8.
Fourth, if the target encoding is UTF-8, l_step should be no more than 8191, to avoid intermediate values exceeding 32767 bytes.
Fifth, the whole procedure will not work well on an EBCDIC platform. Contrary to what the documentation says, UTL_SMTP.WRITE_DATA does not seem to convert data to US7ASCII before sending (unless the package is ported separately by platform vendors). I guess this is not your worry, but I thought I would mention it, just in case.
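To pull the second, third, and fourth points together, here is a minimal sketch of the attachment-writing part. The names L_MAIL_CONN, IN_ATT_CONTENT, and IN_ATT_FILE_NAME are illustrative, and text/plain stands in for your IN_ATT_MIME_TYPE; adapt it to your procedure:
  DECLARE
    l_step   PLS_INTEGER := 4000;  -- at most 8191 (fourth point); a smaller step also keeps
                                   -- each base64-encoded chunk well under 32767 bytes
    l_offset PLS_INTEGER := 1;
    l_raw    RAW(32767);
  BEGIN
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: text/plain; charset="utf-8"; name="' || IN_ATT_FILE_NAME || '"' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Transfer-Encoding: base64' || UTL_TCP.CRLF);
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, UTL_TCP.CRLF);
    -- optional UTF-8 BOM (third point); 3 bytes, so it base64-encodes without padding
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(HEXTORAW('EFBBBF'))));
    WHILE l_offset <= DBMS_LOB.GETLENGTH(IN_ATT_CONTENT) LOOP
      -- NVARCHAR2 chunk -> UTF-8 bytes (RAW) -> base64 (RAW) -> VARCHAR2 for WRITE_DATA
      l_raw := UTL_I18N.STRING_TO_RAW(DBMS_LOB.SUBSTR(IN_ATT_CONTENT, l_step, l_offset), 'AL32UTF8');
      UTL_SMTP.WRITE_DATA(L_MAIL_CONN, UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(l_raw)));
      l_offset := l_offset + l_step;
    END LOOP;
    UTL_SMTP.WRITE_DATA(L_MAIL_CONN, UTL_TCP.CRLF);
  END;
Note that UTL_ENCODE.BASE64_ENCODE already inserts CRLF line breaks into its output, and that encoding chunk by chunk can emit '=' padding in the middle of the stream when a chunk's byte length is not a multiple of 3; most decoders tolerate this, but test with your mail clients.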
Thanks,
Sergiusz

Similar Messages

  • Time-dependent Vendor Master & Handling Special Characters

    Hi,
    I need to extract time-dependent Vendor Master.
    1. The data source for <b>0VENDOR</b> does not have fields to hold the valid date range.
    2. Will the master data in R/3 for vendors hold the valid date range?
    3. The text for <b>0VENDOR</b> is time-dependent, but how do I map the <b>valid from</b> and <b>valid to</b> fields?
    Handling Special Characters:
    We are trying to extract data from a legacy system via DB Connect. The item text field contains special characters. Of course, in BW customization we can specify all the special characters to consider. But the special character we observed is the 'square' symbol, i.e. the 'new line' character in Oracle. We are updating this to an ODS object. The error log shows a green light for the number of records transferred and updated, but when the load into the ODS object finally activates, it pops up the error 'could not recognize special character'.
    Please help me get these 2 issues resolved.
    Thanks in advance.
    Regards,
    Sudhakar.

    Hi Everyone,
    Thanks for the inputs on the special characters issue...
    Finally resolved it with the below piece of code in the start routine:
    DATA: FLAG,
          OFF TYPE I,
          LEN TYPE I VALUE 1,
          ALLOWED_CHAR(95) VALUE
    '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ`~!@#$%^&*()-_=+ ' &
    'abcdefghijklmnopqrstuvwxyz:;<>,.?/|\{}[]"'''.
    CONSTANTS: C_CHAR VALUE '-'.
      LOOP AT DATA_PACKAGE WHERE NOT /BIC/ZI_DESC IS INITIAL .
        DO.
    *     CN ("contains not only") is true while the field still holds a character
    *     outside ALLOWED_CHAR; SY-FDPOS is the offset of the first such character.
          IF DATA_PACKAGE-/BIC/ZI_DESC CN  ALLOWED_CHAR.
            REPLACE SECTION OFFSET SY-FDPOS LENGTH LEN OF
                    DATA_PACKAGE-/BIC/ZI_DESC WITH C_CHAR.
            FLAG = SPACE.
          ELSE.
            FLAG = 'X'.
          ENDIF.
          IF FLAG = 'X'.
            EXIT.
          ENDIF.
        ENDDO.
        MODIFY DATA_PACKAGE.
      ENDLOOP.
    * If ABORT is not equal to zero, the update process will be cancelled.
      ABORT = 0.
    I have seen the link sent by 'Eugene Khusainov' today. Thought I'd put my piece of code here; it may help others...
    Regards,
    Sudhakar.

  • Multibyte characters not displaying on report

    Hi there,
    I am having a problem displaying multibyte characters in my report (Oracle Reports 6i). These characters are needed for barcode encoding, e.g. chr(203), but when I run my report they are missing.
    Also, when I run the following SQL on Oracle 11g (the version of the db the report is working against):
    select chr(203) from dual;
    I get the error:
    ORA-29275: partial multibyte character
    though it works fine on Oracle 8.
    Any help much appreciated.

    Everything depends on your NLS parameters: CHR(203) returns the single byte 0xCB, which is a complete character (Ë) in a single-byte character set but only a partial multibyte sequence in AL32UTF8, hence the ORA-29275. If I do this on Oracle 11g I get:
    select chr(203) from dual;
    C
    Ë
    For bar coding you should use a special bar code font, e.g.:
    http://www.idautomation.com/font-encoders/oracle-reports/
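    As an aside, if you need the Ë character regardless of the database character set, you can spell it by its Unicode code point with UNISTR:
    select unistr('\00CB') from dual;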

  • IMPDP SQLFILE : multibyte characters in constraint_name leads to ORA-00972

    Hi,
    I'm actually dealing with constraint_name made of multibyte characters (for example : constrain_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
    Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem....)
    The scenario is as follows :
    0 - I'm supposed to do a "remap_schema". Everything in the schema SCOTT should now be in a schema NEW_SCOTT.
    1 - The scott schema is exported via datapump
    2 - I do an impdp with SQLFILE in order to get all the DDL (table, packages, synonyms, etc...)
    3 - I do some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK)
    4 - Once the modified sqlfile is executed, I do an impdp with DATA_ONLY.
    (The scenario was imagined from this thread : {message:id=10628419} )
    I'm getting some ORA-00972: identifier is too long at step 4 when executing the sqlfile.
    I see that some DDL for constraint creation in the file (generated at step 2) is written as follows:
    ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
    Obviously, the original name of the constraint, with cedilla and tilde, gets translated to something else which is longer than 30 bytes...
    As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
    In order to create a testcase for this thread, I tried to reproduce on my sandbox database... but, there, I don't have the issue. :-(
    The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
    My sandbox database is a (non-RAC) 11.2.0.1 on RHEL4, also AL32UTF8.
    The constraint_name is the same on both systems: I checked byte by byte using DUMP() on the constraint_name.
    Feel free to shed any light and/or ask for clarification if needed.
    Thanks in advance for those who'll take on their time to read all this.
    I decided to include my testcase from my sandbox database, even if it does NOT reproduce the issue +(maybe I'm missing something obvious...)+
    I use the following files.
    - createTable.sql:
    $ cat createTable.sql
    drop table test purge;
    create table test
    (id integer,
    val varchar2(30));
    alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
    select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
    from user_constraints where table_name='TEST';
    - expdpTest.sh:
    $ cat expdpTest.sh
    expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
    - impdpTest.sh:
    $ cat impdpTest.sh
    impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    This is the run:
    [oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> @createTable
    Table dropped.
    Table created.
    Table altered.
    CONSTRAINT_NAME                  LB       LC
    DMP
    VALIDA_CONFIRMAÇÃO_PREÇO13             29         26
    Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
    ,80,82,69,195,135,79,49,51
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
    Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    . . exported "SCOTT"."TEST"                                  0 KB       0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      /home/oracle/scott_dir/testNonAscii.dmp
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
    Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
    [oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
    -- CONNECT SCOTT
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "SCOTT"."TEST"
       (     "ID" NUMBER(*,0),
         "VAL" VARCHAR2(30 BYTE)
       ) SEGMENT CREATION DEFERRED
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
      TABLESPACE "MYTBSCOMP" ;
    -- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;I was expecting to have the cedilla and tilde characters displayed incorrectly....

    Srini Chavali wrote:
    If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?
    Hi Srini,
    On the real system, the schema move will be to and from different machines (but same DBversion).
    I'm not doing the real move for the moment, just trying to validate a way to do it, but I guess it's important to say that the dump being used for the moment comes from the same database. (The long story: due to some columns using an object datatype, which caused an error in the remap, I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object datatype...)
    So Yes, the file will have to move, but in the current test, it doesn't.
    Srini Chavali wrote:
    Obviously something is different in production than test - can you post the output of this command from both databases ?
    SQL> select * from NLS_DATABASE_PARAMETERS;
    Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in impdp, as Data Pump is supposed to be NLS_LANG/charset-proof +(when traditional imp/exp was really sensitive on those points)+.
    The result on the Exadata where I have the issue:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.3.0
    The result on my sandbox DB:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.1.0
    ------
    Richard Harrison .  wrote:
    Hi,
    Did you set NLS_LANG also when you did the import?
    Yes, that is one of the differences between the Exadata and my sandbox.
    My environment in the sandbox has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, whereas the Exadata doesn't have the variable set.
    I tried to add it, but it didn't change anything.
    Richard Harrison .  wrote:
    Also not sure why you are doing the sed part? Do you have hard coded schema references inside some of the plsql?
    Yes, that is why I chose to sed. The (ugly) code has:
    - Procedures inside the same package that references one another with the schema prepended
    - Triggers with PL/SQL codes referencing tables with schema prepended
    - Dynamic SQL that "builds" queries with schema prepended
    - Object Types that do some %ROWTYPE on tables with the schema prepended (this will be solved by dropping the columns based on those types, as they obviously are not needed...)
    - Data model with object whose names uses non-ascii characters
    +(In France we usually call this a "gas factory", to describe what a mess it is: pipes everywhere, going who-knows-where...)+
    The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to actually consolidate several existing databases on the Exadata :
    One schema for each country, hence the rename of the schemas to include country-code.
    I actually have a workaround already: rename the objects that have funky characters in their names before doing the export.
    But I was curious to understand why the SQLFILE messed up the constraint_name on one system when it doesn't on another...
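    For what it's worth, the ÃÃ pattern is what you get when the UTF-8 bytes of Ç (0xC3 0x87) are reinterpreted as single-byte characters and re-encoded into UTF-8. A hedged illustration in Oracle SQL of that double encoding:
    SELECT CONVERT('VALIDA_CONFIRMAÇÃO_PREÇO13', 'AL32UTF8', 'WE8ISO8859P1') FROM dual;
    Each original byte becomes its own character, so the 29-byte name grows past the 30-byte identifier limit, which is consistent with the ORA-00972.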

  • Handling special characters in XML

    Hi,
    I am using the Oracle 10g 'XMLType' datatype to store XML files. Before storing, I parse the XML document using the Java Xerces parser. If it parses successfully, then I perform some business rule execution based on the XML file which was parsed. Up to this stage there are no problems. But when the XML file contains some special characters, such as a copy-paste of some description from an MS Word document into XML tags, the Xerces parser will parse such characters without any exceptions, but while inserting the XML document the Oracle database just throws an exception saying it is unable to handle special characters. How can I avoid or silence such exceptions with specific settings for the XMLType datatype in the 10g DB?
    Please advise!
    Arvind Patil - IN

    Monica--
    In XI 2.0, we've noticed a number of issues processing special characters, primarily caused by the version of JCO that we're running.  It sounds like SAP has spent some time in the past few months focusing on these errors, so make sure you're on the most recent patchlevels of all your middleware components, including any of the middleware libraries that BC uses. In XI, we had to update the 3 files that make up the RFC library and JCO library.  SDM couldn't update the libraries for us -- we had to manually move the files to the right place.
    Escaped XML characters like "&amp;" "&#34;" "&quot;" were fixed as of JCO 2.0.10 (the current patchlevel on AIX/UNIX), the special character "&apos;" is fixed in the next release, JCO 2.0.11, due out in a few weeks (hotfixes are available).  I don't know the equivalent versions on other platforms.  By default, XI 2.0 appears to have shipped with JCO 2.0.5.  I would expect many XI 3.0 users to also be affected.
    This may or may not apply to BC, because I don't know what BC uses to talk to SAP under the covers.
    --Dan King
    Capgemini

  • Oracle Receiver JDBC Adapter - Handling Unicode Characters

    We have an IDOC to JDBC scenario.
    The IDOC is sending data like 10/14’/P7 - after the 4 there is a special character (U+2019, not the plain single quote) coming from SAP.
    The mapping goes through OK, and the data is getting saved in the Oracle database as 10/14&#x19;/P7, i.e. with a literal &#x19;.
    I came across the following solution in the forums and an SAP Note,
    but I am not sure how to modify the Oracle JDBC URL to handle Unicode characters properly.
    Or is there any other approach we can follow to achieve this?
    Any input is really appreciated.
    Q: I am inserting Unicode data into a database table or selecting Unicode data from a table. However, the data inserted into or retrieved from the table appears garbled. Why doesn't the JDBC Adapter handle Unicode correctly?
    A: While the JDBC Adapter is Unicode-aware, many JDBC drivers and/or database management systems aren't by default and need a codepage or Unicode-awareness to be configured explicitly. For the respective JDBC drivers, this codepage setting is often configured via the driver URL. For details, refer to the documentation of your JDBC driver or database management system.
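    When diagnosing this kind of corruption, it can also help to check on the database side whether the bytes were already damaged at insert time. A hedged probe in Oracle SQL (table and column names are placeholders):
    SELECT DUMP(text_col, 1016) FROM target_table WHERE doc_id = 42;
    The 1016 format prints the stored bytes in hex together with the column's character set, so you can tell whether the special character was replaced before or after it reached the database.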

    Hi Simona,
    1.To start the visual admin, execute "go" file:
    On Windows: Run \usr\sap\<SAPSID>\JC<xx>\j2ee\admin\go.bat
    On UNIX: Run /usr/sap/<SAPSID>/JC<xx>/j2ee/admin/go
    2.supply the credentials to login into visual admin
    3.under "cluster" tab select "server node"
    4.you will find "log viewer" under "services"
    Since you are new, I recommend you take help from your BASIS team.
    Hope it helps !
    Hi Alwin,
    Just a quick clarification.
    I used the URL you mentioned when we were on SP5. After that we upgraded to SP9.
    From SP9, if you try to use the URL http://XISERVER:50000/AdapterFramework, it automatically redirects to a new webpage with the link to the URL I have mentioned.
    Regards,
    Sridhar

  • To Handle Special Characters (Guide™) in MATMAS IDOC fields

    We need to handle special characters like Guide™, with the trademark sign as an attached superscript, in the MATMAS02/05 IDOC field. The field name is TDLINE in the E1MTXLM segment.
    As a trial run, when these special characters are pasted into the TDLINE field, it throws an error saying "the input field contains prohibited characters".
    Please let me know if there is any workaround for this.

    Hi,
    Go through these links; I hope they'll help you solve your problem.
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CAGTFADMLO/CAGTFADMLO.pdf
    http://www.erphome.net/wdb/upload/forum14_f_2908.doc
    thanks
    mrutyun^

  • Handle special characters in the attribute name

    Hi,
    I am generating different view elements in a WD application dynamically. How do I handle special characters other than '-/ABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789' in the attribute name dynamically?
    Thank you, in advance.
    Trupti

    Going with the obvious response - don't use them?
    if you're using dynamic code, there is no reason (other than debug support) to give your created elements any meaningful name.
    Just generate a GUID for each new element and use that.
    If you need to be able to later search for and update the element a simple lookup table of GUID to reference string should work reasonably well.
    Cheers,
    Chris

  • How to handle Multibyte character in Java urgent requirement

    Hi Friends,
    I'm fetching data from the database (MS SQL Server 2000) and writing it into a file.
    This is the snippet of code I'm using to write into the file:
    File outputFile = new File("C:\\Temp\\SAMPLE.txt");
    // FileWriter always uses the platform default encoding (e.g. windows-1252),
    // which cannot represent all multibyte characters.
    BufferedWriter fileWriter = new BufferedWriter(new FileWriter(outputFile));
    // Fetching the values from the database
    while (rs.next()) {
        // colData contains the value from the database which will go into the file
        String colData = rs.getString(fieldNames);
        fileWriter.write(colData);
    }
    The problem is that when I'm writing a multibyte character it gets converted into a double quote, i.e. M?L INC. is written as M?"L INC. (the ? standing in for the multibyte character).
    I need to know how I can write the same value that I'm fetching from the database (irrespective of whether it contains single-byte or multibyte characters).
    Thanks in advance.
    Kind Regards,
    Pallavi

    Hi,
    I changed the default encoding, windows-1252, to UTF-16 using the OutputStreamWriter class and it worked!
    OutputStreamWriter out = new OutputStreamWriter(new BufferedOutputStream(new FileOutputStream("C:\\abc.txt")), "UTF-16");
    Thanks for the suggestions. :)

  • Adobe AIR help; breadcrumb navigation doesn't work with multibyte characters?

    Hi there,
    I created an Adobe AIR application with RoboHelp 9 (using FrameMaker files,
    which are written in Japanese and English), only to find that the breadcrumb navigation at the top doesn't work.
    Is it a known limitation that breadcrumb navigation doesn't support multibyte characters? Or is there any workaround?
    Many thanks for your kind support in advance,

    See my reply to your other post. You can also test this in the new project and raise it with Adobe Support at the same time as the other problem.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Adobe AIR help; question regarding search criteria with multibyte characters

    Hi,
    I created an Adobe AIR application with RoboHelp 9 (using FM 10 files as source; the texts are written in Japanese and English),
    and happened to find that the search function in the AIR application doesn't catch keywords correctly.
    For example:
    1. If you type "文字" and "スタイル" separated by a single-byte space in the search window, results appear for both "文字" and "スタイル".
    2. If you type "文字" and "スタイル" separated by a double-byte space in the search window, nothing matches.
    3. If you type "文字スタイル" (as one word) in the search window, nothing matches.
    The same thing happens for "文字種" (literally "文字" + "種"; the meaning is almost the same).
    But if the search words are all in Katakana, the results seem to be fine.
    Is there any limitation on multibyte character support? Or is this behaviour a feature?
    If so, how can I make the AIR application hit the correct words?
    Thank you very much for your kind help in advance!

    On this one your best course of action is to contact Adobe Support. They will likely require your project and there is one thing I would suggest you do first. Create a new project with just a few topics to prove the problem exists there as well. If it does it will be a simpler upload and you will know the problem is repeatable.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • JSPs, UTF-8 & multibyte characters

    In our project we have a situation where we must output some multibyte characters to a JSP page. The data is retrieved from an Oracle database using BEA ELink and XML (don't ask why). The XML data is UTF-8 encoded, and the data seems to be OK down to the JSP level, because I can output it to a file and it's properly UTF-8 encoded.
    But when I try to write the data to the final reply (using <%=dataObject.getData()%>), the results definitely are not UTF-8 encoded. On the client browser they show up as garbage, occupying more than twice the actual length of the data. The response headers and META tags are all set to UTF-8 encoding, and the browser is set to use UTF-8.
    The funny part is that the string seems to be encoded twice or something similar, as shown by the next example.
    This is the correct UTF-8 byte sequence for the first two characters (they are just generated data for debugging purposes):
    C3 89 C3 A5
    which translates to Unicode characters 00C9 and 00E5.
    But on the final page that is sent to the client, this sequence has been changed to:
    C3 83 E2 80 B0 C3 83 C2 A5
    which just doesn't make sense, since it shows up as five different garbage characters.
    Does anyone have any ideas what is causing the problem, and any suggestions? What are those extra characters in the final encoding?
    .Pete.

    It sounds like the Object.toString is coming back already encoded in UTF-8, and thus the JSP writer encodes that UTF-8 using UTF-8 again, which is what you see. (Re-encoding the bytes 0xC3 0x89 as if they were the Windows-1252 characters 'Ã' and '‰' yields exactly C3 83 and E2 80 B0.) Try making the String value be:
    > ... characters 00C9 and 00E5.
    ... instead of:
    > C3 89 C3 A5
    Then it will be encoded correctly.
    Peace,
    Cameron Purdy
    Tangosol Inc.

  • Can ActionScript handle special characters / Han or Chinese characters?

    Hi,
    I am having an issue with my Flash movie: it can't handle Chinese characters. Is there some way I can handle this through code? Or should some font or language pack be installed?
    Thanks so much for the help.

    Hi,
    I already embedded the fonts, and I changed the encoding of my XML to GB2312
    and placed Chinese characters in the node. It did not render any Chinese characters; instead, the movie is not rendered properly.
    Thanks.

  • Is there any other way to handle special characters other than using CDATA?

    I have the Xerces parser, with which I am trying to parse data containing special characters, including other ASCII control characters. I tried using a CDATA section, but the problem persists.
    It would be really helpful if anyone can help me in solving this problem.
    Error encountered :
    org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0xf) was found in the CDATA section.
    The XML which I use also contains junk characters. Have a look at the following:
    <?xml version='1.0' encoding='UTF-8' ?>
    <IMAGE_RESPONSE xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:noNamespaceSchemaLocation='ImageResponse.xsd'>
    <IMG_TYPE>PNG</IMG_TYPE>
    <IMG_WIDTH>650</IMG_WIDTH>
    <IMG_HEIGHT>250</IMG_HEIGHT>
    <IMAGE_DATA>
    <IMGKEY>20020827:00000000:100000000010:02:</IMGKEY>
    <IMAGE_INFO>This is image info</IMAGE_INFO>
    <IMG_SOURCE>DCE_CIMS</IMG_SOURCE>
    <FRONT_IMG_FBW><![CDATA[�����&�J�Z�R��]]></FRONT_IMG_FBW>
    <FBW_ERROR>B</FBW_ERROR>
    <FRONT_IMG_FGS>C</FRONT_IMG_FGS>
    <FGS_ERROR>D</FGS_ERROR>
    <BACK_IMG_BBW>E</BACK_IMG_BBW>
    <BBW_ERROR>D</BBW_ERROR>
    <BACK_IMG_BGS>A</BACK_IMG_BGS>
    <BGS_ERROR>Unable to retrieve Back Gray-Scale image</BGS_ERROR>
    </IMAGE_DATA>
    </IMAGE_RESPONSE>

    java.net.URLEncoder.encode( text )
    I've found this to be a pretty easy way to handle invalid characters...
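    If the documents end up in Oracle anyway (as in the XMLType question above), another option is to strip the control characters that XML 1.0 forbids (everything below 0x20 except tab, LF and CR) before parsing. A hedged sketch in Oracle SQL, with staging_docs and doc_text as placeholder names:
    SELECT XMLTYPE(REGEXP_REPLACE(doc_text,
             '[' || CHR(1) || '-' || CHR(8) || CHR(11) || CHR(12)
                 || CHR(14) || '-' || CHR(31) || ']', ''))
    FROM   staging_docs;
    This only helps with stray control characters; truly binary content (like the image bytes in the CDATA above) should be base64-encoded instead.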

  • How to Handle Special Characters in PI7.1

    Hi Team,
    I need to handle some special characters like <, >, & etc. coming from the WS Adapter to CRM in PI 7.1.
    http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/9420 [original link is broken]
    Using the above blog, I implemented the Java code as:
    public void execute(InputStream in, OutputStream out) {
        try {
            int read_data;
            while ((read_data = in.read()) != -1) {
                if (read_data == '&') {
                    out.write("&amp;".getBytes());
                } else if (read_data == '>') {
                    out.write("&gt;".getBytes());
                } else if (read_data == '<') {
                    out.write("&lt;".getBytes());
                } else if (read_data == '/') {
                    out.write("&frasl;".getBytes());
                } else if (read_data == '\'') {
                    out.write("&apos;".getBytes());
                } else {
                    out.write(read_data);
                }
            }
            out.flush();
        } catch (Exception e) {}
    }
    I added this class file in the Operation Mapping.
    It only works if we have a single IF condition, for &, in the while loop.
    Any suggestions?
    Thanks
    Sriram

    Hi Ramesh,
    Thanks for your inputs. I have tried your code, but it is not working; the error message stays the same.
    Dear Stephane,
    To describe the error more: the payload coming from the source through the WS Adapter contains the special characters <, > and &, which are part of basic XML syntax. I need PI to process this payload with the special characters and successfully transfer it to the target CRM system. I created the Java class with the code (ref: blog 9420) as stated earlier and added it to my existing Operation Mapping. I am expecting the Java mapping to replace the special characters in the payload, e.g. < with "&lt;", and likewise for the other characters >, &, '.
    After activating my Operation Mapping, I triggered a test message with the SoapUI client, and I could only get a successful mapping when I put in the logic for the ampersand symbol alone. However, when I try to add the logic for >, < or ', the mapping fails. I am using UTF-8 encoding across both the source and PI environments.
    Sample SOAP message :
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:abcabca.com">
       <soapenv:Header/>
       <soapenv:Body>
          <urn:MT_ABCDEFG_Req>
         <activity>
              <id/>
              <type>ZEMA</type>
              <actionType>C</actionType>
              <firewall>10000003</firewall>
              <subject>small &gt; &lt; attachment test</subject>
              <location/>
              <startDate>2010-07-08T10:53:31.000Z</startDate>
              <endDate>2010-07-08T10:53:31.000Z</endDate>
              <mainClient>1000319</mainClient>
              <mainContact>1000003</mainContact>
              <isConfidential>false</isConfidential>
              <summary/>
              <fullText>test body  - small.txt</fullText>
              <owner>1000021</owner>
              <from>ABCDEDF</from>
              <sendTo>emailaddress</sendTo>
              <copyTo/>
              <keywords/>
              <referenceId/>
              <createdBy>1000021</createdBy>
              <additionalContacts/>
              <additionalClients/>
              <additionalParticipants/>
              <status>A0008</status>
              <attachments>
                   <fileUrl>20100708110053-XXXXXXXXX</fileUrl>
                   <fileName>small.txt</fileName>
              </attachments>
              <attachments>
                   <fileUrl>20100708110053-XXXXXXXXX</fileUrl>
                   <fileName>EMail 2010-07-08.pdf</fileName>
              </attachments>
         </activity>
          </urn:MT_ABCDEFG_Req>
       </soapenv:Body>
    </soapenv:Envelope>
    Output on the SOAP UI  client for the above request:
    <!--see the documentation-->
    <SOAP:Envelope xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
       <SOAP:Body>
          <SOAP:Fault>
             <faultcode>SOAP:Server</faultcode>
             <faultstring>Server Error</faultstring>
             <detail>
                <s:SystemError xmlns:s="http://sap.com/xi/WebService/xi2.0">
                   <context>XIAdapter</context>
                   <code>ADAPTER.JAVA_EXCEPTION</code>
                   <text>com.sap.engine.interfaces.messaging.api.exception.MessagingException: com.sap.engine.interfaces.messaging.api.exception.MessagingException: XIServer:NO_MAPPINGPROGRAM_FOUND:
         at com.sap.aii.adapter.soap.ejb.XISOAPAdapterBean.process(XISOAPAdapterBean.java:1160)
         at sun.reflect.GeneratedMethodAccessor342.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.sap.engine.services.ejb3.runtime.impl.RequestInvocationContext.proceedFinal(
    Where do you think I am going wrong?
    Sriram
