UTF-8 to CESU-8 conversion

Hi, all.
What is the easiest way to convert UTF-8 data into CESU-8?
I'd like to use the bulk loader (LOAD command in hdbsql) to load my Japanese data into the HANA table.
The iconv utility on SLES 11 SP1 does not seem to support CESU-8.
Thank you,
-mamoru

Hi, colleagues,
The following works well for me. You can also try Python, which is
easy to implement and test. If you use the unicode function and represent the CESU-8 encoded string as a byte stream that is already encoded with UTF-8, this will work fine. The problem with CESU-8 only arises for Unicode code points U+10000 and higher. For those code points you have to use a surrogate pair (see the UTF-16 article on Wikipedia). Here is the algorithm to get the UTF-16 representation of a code point above U+FFFF, using U+64321 as an example:
v  = 0x64321
v′ = v - 0x10000
   = 0x54321
   = 0101 0100 0011 0010 0001
vh = v′ >> 10
   = 01 0101 0000 // higher 10 bits of v′
vl = v′ & 0x3FF
   = 11 0010 0001 // lower  10 bits of v′
w1 = 0xD800 + vh
   = 1101 1000 0000 0000
   +        01 0101 0000
   = 1101 1001 0101 0000
   = 0xD950 // first code unit of UTF-16 encoding
w2 = 0xDC00 + vl
   = 1101 1100 0000 0000
   +        11 0010 0001
   = 1101 1111 0010 0001
   = 0xDF21 // second code unit of UTF-16 encoding
In other words, by encoding each of the two surrogate code units as a separate 3-byte sequence, you get a byte stream that HANA understands, and you can store the information correctly by using your own codec that is compliant with CESU-8.
To learn more about UTF-8 encoding you can refer to the utfcpp.sourceforge.net library; the algorithm above can be used to extend it for CESU-8 compatibility.
You do not need to produce UTF-16 output from Python; that will not work for HANA.
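Putting the pieces together, here is a minimal Python sketch of a CESU-8 encoder (my own illustration, not official SAP code; the function name is made up):

```python
def cesu8_encode(text):
    """Encode a string as CESU-8: BMP code points are encoded as in
    standard UTF-8; supplementary code points (U+10000 and up) are
    first split into a UTF-16 surrogate pair, and each surrogate is
    then encoded as a 3-byte UTF-8-style sequence."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp < 0x10000:
            out += ch.encode("utf-8")      # BMP: identical to UTF-8
        else:
            v = cp - 0x10000               # the algorithm shown above
            for s in (0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)):
                out += bytes([0xE0 | (s >> 12),
                              0x80 | ((s >> 6) & 0x3F),
                              0x80 | (s & 0x3F)])
    return bytes(out)

print(cesu8_encode(chr(0x64321)).hex())  # → eda590edbca1
```

For U+64321 this yields the two 3-byte groups for 0xD950 and 0xDF21 computed above. Note that all Japanese BMP characters pass through unchanged, so for typical Japanese text the output is identical to plain UTF-8.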
Regards,
Vasily Sukhanov

Similar Messages

  • File_To_File: UTF-8 to ASCII format conversion.

    Hi Experts,
    I have a requirement for a File-to-File scenario: the source file is in UTF-8 format and we need to convert it into ASCII format. No mapping is required in this scenario. Can you please help me out? We are using PI 7.0 with SP 21.
    Regards,
    Prabhakar.A

    In the communication channel, define ASCII as the encoding.
    Processing Tab Page
    Processing Parameters
       File Type
    Specify the document data type.
    ○       Binary
    ○       Text
    Under File Encoding, specify a code page.
    The default setting is to use the system code page that is specific to the configuration of the installed operating system. The file content is converted to the UTF-8 code page before it is sent.
    Permitted values for the code page are the existing Charsets of the Java runtime. According to the SUN specification for the Java runtime, at least the following standard character sets must be supported:
    ■       US-ASCII
    Seven-bit ASCII, also known as ISO646-US, or Basic Latin block of the Unicode character set
    ■       ISO-8859-1
    ISO character set for Western European languages (Latin Alphabet No. 1), also known as ISO-LATIN-1
    ■       UTF-8
    8-bit Unicode character format
    ■       UTF-16BE
    16-bit Unicode character format, big-endian byte order
    ■       UTF-16LE
    16-bit Unicode character format, little-endian byte order
    ■       UTF-16
    16-bit Unicode character format, byte order identified by an optional byte-order mark
    Note
    Check which other character sets are supported in the documentation for your Java runtime implementation.

  • UTF-8 encoding problem in HTTP adapter

    Hi Guys,
    I am facing a problem with the UTF-8 multi-byte character conversion.
    Problem:
    I am posting data from SAP CRM to third party system using XI as middle ware. I am using HTTP adapter to communicate XI to third party system.
    In the HTTP configuration I have set the XML code page to UTF-8 in the XI payload manipulation block.
    I am trying to post Chinese characters from SAP CRM to the third-party system, but junk characters arrive there. My assumption is that they are being double-encoded.
    I have checked the XML messages in Message Monitoring in XI, and I can see the Chinese characters in the XML files, but the third-party system shows them as junk characters.
    Can anyone please help me with this issue?
    Please let me know if you need more info.
    Regards,
    Srini

    Srinivas
    Can you please go through SAP Note 856597, Question No. 3, which may resolve your issue? Also, have you checked SAP Notes 761608, 639882, 666574, 913116 and 779981, which might help?
    ---Satish

  • Idoc to file(TXT) scenario

    Hi all,
    I'm working on an IDoc-to-file scenario. It is working fine and the file is written to the target directory of the receiver, but it is in XML format and I need to store it in TXT format.
    In the receiver communication channel configuration I used the File Content Conversion message protocol, set FILETYPE to text and FILEENCODING to UTF-8, and maintained the content conversion parameters as well.
    But I'm still getting the file in XML format. It would be very helpful if anyone could solve this.
    Thanks,
    Hari

    Hi,
    It's really surprising to see XML output even after maintaining all the parameters.
    Can you verify the receiver-end structure of the message type, and refer to:
    and refer
    /people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter - FCC
    http://help.sap.com/saphelp_nw04/helpdata/en/ee/c9f0b4925af54cb17c454788d8e466/frameset.htm - cc
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/95/bb623c6369f454e10000000a114084/content.htm - FCC counter
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/da1e7c16-0c01-0010-278a-eaed5eae5a5f - conversion agent
    /people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1
    /people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2
    /people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30
    /people/arpit.seth/blog/2005/06/02/file-receiver-with-content-conversion
    /people/anish.abraham2/blog/2005/06/08/content-conversion-patternrandom-content-in-input-file
    /people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
    /people/jeyakumar.muthu2/blog/2005/11/29/file-content-conversion-for-unequal-number-of-columns
    /people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
    Thanks
    Swarup
    Edited by: Swarup Sawant on Feb 22, 2008 1:15 PM

  • EURO sign not displayed correctly after unicode migration

    All,
    Don't know where exactly to post this, in the BI forum or the NetWeaver Platform forum.
    But here is my question:
    We have migrated our SAP BW 3.5 (NW04, SP20) system from codepage 1100 (using general fallback codepage 1160) to Unicode UTF-8. After the conversion, graphic reports (viewed in Internet Explorer) do not show the euro sign correctly in the Unicode system. Instead of the euro sign, a # is shown in the Internet Explorer window. When looking at the source of the same web page, however, the euro sign is shown correctly.
    When looking at the encoding of the web page, it says UTF-8 is used...
    Any ideas...?
    Thanks,
    Regards, Bart

    Hi Manoj,
    It is when displaying graphical data that the error occurs. Text data (tables) on the same web page is shown normally, with the correct euro sign.
    When I look at the source of the web page, it appears that a JavaScript file is being called for the graphic:
    /sap/bw/Mime/BEx/JavaScript/JSBW_C_Std.js
    Within the coding I see the euro sign normally; in the graphical display, however, it is converted to #.
    I have run the suggested query. The database nls_characterset is set to UTF8.
    Thanks,
    Regards,
    Bart

  • Handling SOAP HEADER using SOAP Receiver Adapter

    Hi Experts,
    I need to implement a SOAP receiver scenario, passing the header fields User, Password and token. I have seen a lot of scenarios that use an XSLT mapping to handle the SOAP header, so I need some help understanding the details:
    Suppose I need to implement the SOAP header below. In this case I need to add only the fields "Username", "Password" and "AuthenticationToken". I created the XSLT transform; where do I put the source XML? Or can I create it in the message interface?
    Is it possible to do this in Java Mapping?
    Thanks!
    Best Regards
    Fábio Ferri
    <soap:Header>
       <v1:ExecutionHintHeader>
          <v1:Name></v1:Name>
          <!--Optional:-->
          <v1:Arguments>
             <!--1 or more repetitions:-->
             <v1:Argument Name="?" Value="?"/>
          </v1:Arguments>
       </v1:ExecutionHintHeader>
       <v1:CredentialsHeader>
          <!--Optional:-->
          <v1:Username>pi</v1:Username>
          <!--Optional:-->
          <v1:Password>jhjhjjhjhjius</v1:Password>
          <!--Optional:-->
          <v1:AuthenticationToken></v1:AuthenticationToken>
       </v1:CredentialsHeader>
    </soap:Header>

    You need to put the XSLT mapping as the second step of the request mapping:
    SourceMessageRequest -> Message Mapping -> XSLT Mapping -> DestinationMessageRequest
    If you test the mapping program, this should work fine.
    As the adapter module, add localejbs/AF_Modules/MessageTransformBean with the parameter value text/xml; charset=utf-8, and make sure the conversion parameter "Do Not Use SOAP Envelope" is set correctly.
    XSLT Request Mapping
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Header>
            <ServiceAuthHeader xmlns="http://WIND.WEBSERVICES.DMS/">
              <Username>USER</Username>
              <Password>PASSWORD</Password>
            </ServiceAuthHeader>
          </soap:Header>
          <soap:Body>
            <xsl:copy-of select="*"/>
          </soap:Body>
        </soap:Envelope>
      </xsl:template>
    </xsl:stylesheet>

  • Use of UTF8 and AL32UTF8 for database character set

    I will be implementing Unicode on a 10g database, and am considering using AL32UTF8 as the database character set, as opposed to AL16UTF16 as the national character set, primarily to economize storage requirements for primarily English-based string data.
    Is anyone aware of any issues, or tradeoffs, for implementing AL32UTF8 as the database character set, as opposed to using the national character set for storing Unicode data? I am aware of the fact that UTF-8 may require 3 bytes where UTF-16 would only require 2, so my question is more specific to the use of the database character set vs. the national character set, as opposed to differences between the encoding itself. (I realize that I could use UTF8 as the national character set, but don't want to lose the ability to store supplementary characters, which UTF8 does not support, as this Oracle character set supports up to Unicode 3.0 only.)
    Thanks in advance for any counsel.

    I don't have a lot of experience with SQL Server, but my belief is that a fair number of tools that handle SQL Server NCHAR/NVARCHAR columns do not handle Oracle NCHAR/NVARCHAR2 columns. I'm not sure if that's because of differences in the provided drivers, because of architectural differences, or because I don't have enough data points on the SQL Server side.
    I've not run into any barriers, no. The two most common speedbumps I've seen are
    - I generally prefer in Unicode databases to set NLS_LENGTH_SEMANTICS to CHAR so that a VARCHAR2(100) holds 100 characters rather than 100 bytes (the default). You could also declare the fields as VARCHAR2(100 CHAR), but I'm generally lazy.
    - Making sure that the client NLS_LANG properly identifies the character set of the data going into the database (and the character set of the data that the client wants to come out) so that Oracle's character set conversion libraries will work. If this is set incorrectly, all manner of grief can befall you. If your client NLS_LANG matches your database character set, for example, Oracle doesn't do a character set conversion, so if you have an application that is passing in Windows-1252 data, Oracle will store it using the same binary representation. If another application thinks that data is really UTF-8, the character set conversion will fail, causing it to display garbage, and then you get to go through the database to figure out which rows in which tables are affected and do a major cleanup. If you have multiple character sets inadvertently stored in the database (e.g. a few rows of Windows-1252, a few of Shift-JIS, and a few of UTF8), you'll have a gigantic mess to clean up. This is a concern whether you're using CHAR/VARCHAR2 or NCHAR/NVARCHAR2, and it's actually slightly harder with the N data types, but it's something to be very aware of.
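    The mislabelling scenario described above can be reproduced in a few lines of Python (a hedged illustration only; the string is made up):

    ```python
    # An application writes Windows-1252 bytes; a reader that believes the
    # data is UTF-8 fails on the lone 0xE9 byte.
    stored = "café".encode("windows-1252")   # b'caf\xe9' in the column
    try:
        stored.decode("utf-8")
    except UnicodeDecodeError as exc:
        print("conversion failed:", exc)
    ```

    The same decode either fails like this or, with a lenient reader, silently produces replacement garbage, which is exactly the cleanup problem described above.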
    Justin

  • Insert date from YUI calendar into mysql table

    Is there a way to insert the date selected from the YUI calendar into a mysql table?

    I have been in trouble for the last 2 days... I have tried everything possible and explored all related topics, but I still cannot solve the problem.
    Problem 1:
    I have a simple jsp form which passed a bengali word.
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    </head>
    <body>
    <form action="Test2.jsp" method="post" >
    <input type="text" value="&#2453;&#2494;&#2453;&#2494;" name="word">
    <input type="submit" value="Next">
    </form>
    </body>
    </html>
    When this page is posted, a Java program (TestWord) is called by Test2.jsp. TestWord inserts the passed Bengali word into a MySQL table, then retrieves the inserted Bengali word again and displays it in Test2.jsp.
    Test2.jsp:----
    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <jsp:useBean id="get2" class="dictionary.TestWord" scope="session"/>
    <jsp:setProperty name="get2" property="*"/>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    </head>
    <body>
    <form action="" method="post">
    <%
    String s=get2.getWord();
    out.write("Before conversion = "+s);
    s = new String(s.getBytes("ISO-8859-1"),"UTF-8");
    out.write(" After conversion = "+s);
    %>
    </form>
    </body>
    </html>
    It gives the output something like:
    Before conversion = &#65533;?&#2494;&#65533;?&#2494; After conversion = ??????
    Problem 2:
    The record in the MySQL table is inserted in Bengali script (e.g. &#2453;&#2494;&#2453;&#2494;). When I retrieve the record and display it in a JSP page, I can see the Bengali word (&#2453;&#2494;&#2453;&#2494;) properly. But if I insert the Bengali word again into the MySQL table, then I see a string like "&#65533;?&#2494;&#65533;?&#2494;" stored in the table.
    Please help me out..
    Thanks in advance.
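    For what it's worth, the `new String(s.getBytes("ISO-8859-1"), "UTF-8")` line in Test2.jsp is the classic band-aid for a container that decoded UTF-8 request bytes as ISO-8859-1. The round trip can be sketched in Python (an illustration of the mechanism, not a recommended fix; the usual proper fix is to call request.setCharacterEncoding("UTF-8") before reading any parameters):

    ```python
    original = "\u0995\u09be\u0995\u09be"   # the Bengali word from the post
    # What the container produced: UTF-8 bytes wrongly decoded as ISO-8859-1
    mojibake = original.encode("utf-8").decode("iso-8859-1")
    # The JSP's trick: re-encode as ISO-8859-1 to recover the original
    # bytes, then decode them correctly as UTF-8
    repaired = mojibake.encode("iso-8859-1").decode("utf-8")
    print(repaired == original)  # → True
    ```

    The trick works because ISO-8859-1 maps every byte value 0x00-0xFF to a character, so the round trip is lossless; it breaks down once the mojibake string has itself been stored and re-encoded in yet another character set.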

  • Please support Surrogate Pair String data receive.

    Dear Sir,
    I would like to report a problem: when data which contains a surrogate pair string is posted, the BlazeDS servlet does not respond.
    A good example is the UTF-8 byte sequence { 0xf0, 0xa9, 0xb8, 0xbd }.
    These bytes encode the Japanese character 'hokke' (U+29E3D).
    What's going on is that when a surrogate pair string is posted, the flex.messaging.io.amf.Amf3Input::readUTF() method cannot handle the UTF-8 bytes-to-String conversion, and a UTFDataFormatException, which is an IOException, is thrown.
    The exception is handled in the flex.messaging.endpoints::service() method as "This happens when client closes the connection, log it at info level".
    I would like you to improve the flex.messaging.io.amf.Amf3Input::readUTF() method so that it IS able to handle surrogate pair strings.
    A quick fix for my test purposes is the following:
         * @exclude
        protected String readUTF(int utflen) throws IOException
        {
            byte[] bytearr = getTempByteArray(utflen);
            in.readFully(bytearr, 0, utflen);
            // Use the String class constructor for UTF-8 deserialization,
            // which supports surrogate pairs.
            String s = new String(bytearr, 0, utflen, "utf-8");
            return s;
        }
    I, a Japanese user, and other Asian users have been confronted with the surrogate pair issue in IT solutions.
    Please consider officially supporting this.
    I would like to have some comment on this.
    Best Regards.

    We need to understand what you mean by "store surrogate pair". What do you want to do? Say you want to store the Unicode supplementary character represented by U+10300. This "internally" involves two surrogate code points, in pair (in the reserved U+D800 - U+DFFF range).
    Surrogate Pair. A representation for a single abstract character that consists of a sequence of two 16-bit code units, where the first value of the pair is a high-surrogate code unit, and the second is a low-surrogate code unit. (See definition D75 in Section 3.8, Surrogates.)
    Surrogate Code Point. A Unicode code point in the range U+D800..U+DFFF. Reserved for use by UTF-16, where a pair of surrogate code units (a high surrogate followed by a low surrogate) “stand in” for a supplementary code point.
    From the Unicode glossary at:
    http://unicode.org/glossary/
    Note that it says "reserved for use by UTF-16".
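    To make the glossary entries concrete, here is a short Python sketch (mine, not from the glossary) that splits U+10300 into its surrogate pair:

    ```python
    def to_surrogate_pair(cp):
        """Split a supplementary code point (>= U+10000) into its UTF-16
        high and low surrogate code units."""
        v = cp - 0x10000
        return 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)

    hi, lo = to_surrogate_pair(0x10300)
    print(hex(hi), hex(lo))  # → 0xd800 0xdf00
    ```

    This matches Python's own UTF-16 encoder: "\U00010300".encode("utf-16-be") yields the bytes D8 00 DF 00.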

  • LNK2019: unresolved external symbol _RfcOpenEx

    Hi,
    i downloaded the SAP RFC SDK 7.20 unicode (Windows Server on IA32 32bit)
    opened VisualStudio 2010
    created new MFC dialog based project
    added additional include directories (linked to SDK folder)
    added additional library folders (also linked to SDK directory)
    added librfc32u.lib;libsapucum.lib as additional dependency
    added #include "saprfc.h"
    put some code into the project
    void CsaprfcsdkDlg::OnBnClickedButton1()
    {
         RFC_HANDLE rfc_handle;
         RFC_ERROR_INFO_EX rfc_err_inf_ex;
         rfc_handle = RfcOpenEx("",&rfc_err_inf_ex);
    }
    and tried to compile ... the result is an error that the external symbol RfcOpenEx is unresolved.
    1>------ Build started: Project: saprfcsdk, Configuration: Debug Win32 -
    1>  saprfcsdkDlg.cpp
    1>saprfcsdkDlg.obj : error LNK2019: unresolved external symbol _RfcOpenEx@8 referenced in function "public: void __thiscall CsaprfcsdkDlg::OnBnClickedButton1(void)" (?OnBnClickedButton1@CsaprfcsdkDlg@@QAEXXZ)
    1>...\saprfcsdk\Debug\saprfcsdk.exe : fatal error LNK1120: 1 unresolved externals
    ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
    Does anyone have an idea what I am doing wrong?
    I am running Win7 64-bit, but I also tried it with VS2003 on WinXP... and got the same error.
    Thanks for your hints.

    Hey Bernhard,
    I made a test and it was successful. It is more or less a workaround, but maybe it helps.
    I used the non-Unicode SDK, opened an RFC connection, and called a self-written function to retrieve data.
    After calling RfcCallReceiveEx I used MultiByteToWideChar to convert all the received UTF-8 data.
    Before the conversion i got this:
    el. Nr.: 00xxxxxxxxxxx
    *Person:
         Herr Frank é«u02DCæu2013°åu0152ºæ¨ªå±±è·¯106号 Wüldemann
    *Firma:
         CRM-TEST Name 2
         64293 Darmstadt
         DE
    *Extras:
         Rating:     Ba
         Firma-Nr.:     0003010048
         Person-Nr:     0003010069
    After the conversion I got this (don't be surprised, the first name is a test with Asian characters):
    http://www.l2server.de/unicode.jpg
    I would imagine that if you apply WideCharToMultiByte before sending information to the SAP system, you could send correct input as well.
    Maybe this will help you too.
    Regards Frank
    *** snip code ***
    CString CCTICallHandler::ReadDataFromSAP(CString sANum, CString sHost, CString sSysNr, CString sUser, CString sPwd)
    {
         try
         {
              if(sHost==_T("") || sSysNr==_T("") || sUser==_T("") || sPwd==_T(""))
                   return _T("");
              RFC_HANDLE rfc_handle;
              RFC_ERROR_INFO_EX rfc_err_inf_ex;
              CString sConnStr;
              sConnStr.Format(_T("ASHOST=%s CLIENT=100 USER=%s PASSWD=%s LANG=E SYSNR=%s TRACE=0"),sHost,sUser,sPwd,sSysNr);
              rfc_char_t * conn_str = (rfc_char_t*)malloc(sConnStr.GetLength()+1);
              WideCharToMultiByte(CP_ACP,0,sConnStr,sConnStr.GetLength()+1,conn_str,sConnStr.GetLength()+1,"",FALSE);
              rfc_handle = RfcOpenEx(conn_str,&rfc_err_inf_ex);
              if(rfc_handle != 0)
              {
                   rfc_char_t function_name[] = "Z_HBM_CTI_FIND_BP_BY_NUMBER";
                   rfc_char_t * exception = NULL;
                   RFC_RC rfc_rc;
                   char * chANum = (char*)malloc(sANum.GetLength()+1);
                   int sz = sANum.GetLength() + 1;
                   WideCharToMultiByte(CP_ACP,0,sANum,sz,chANum,sz,"",FALSE);
                   RFC_PARAMETER importing[2],exporting[2];
                   RFC_STRING value1 = (RFC_STRING)chANum; //m_sANumber;
                   RFC_STRING value2;
                   exporting[0].name = "TEL_NUMBER";
                   exporting[0].nlen = strlen(exporting[0].name);
                   exporting[0].type = RFCTYPE_STRING;
                   exporting[0].addr = &value1;
                   exporting[0].leng = sizeof(value1);
                   exporting[1].name = NULL;
                   importing[0].name = "RESULT";
                   importing[0].nlen = strlen(importing[0].name);
                   importing[0].addr = &value2;
                   importing[0].type = RFCTYPE_STRING;
                   importing[1].name = NULL;
                   rfc_rc = RfcCallReceiveEx(rfc_handle, function_name, exporting, importing, NULL, NULL, &exception);
                   RfcClose(rfc_handle);
                   free(chANum);
                   free(conn_str);
                   CString sRetVal;
                   MultiByteToWideChar(CP_UTF8, 0, (LPCSTR)value2, -1, sRetVal.GetBuffer(strlen((const char *)value2)), strlen((const char *)value2));
                   sRetVal.ReleaseBuffer(strlen((const char *)value2));
                   return sRetVal;
              }
              else
                   return _T("Connection to CRM failed.");
         }
         CRML_CATCH_UNI_RET_EMPTY
    }
    *** snip code end ***

  • Trying to connect to Samba share

    Alright, so I have a Samba server set up on a Fedora Core 5 box.
    I am trying to mount the share on OS X. I go to the Finder, then to the Network, and I see the server. I then click Connect and sign in, and it says "The alias Server could not be opened, because the original item cannot be found". So what does that mean? I then try to connect through the terminal using smbclient: I type "smbclient //192.168.1.108/joe" and it prints:
    init_iconv: Conversion from UTF-16LE to CP0 not supported
    init_iconv: Attempting to replace with conversion from UTF-16LE to ASCII
    init_iconv: Conversion from UTF-8-MAC to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from UTF-8-MAC to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF-16LE not supported
    init_iconv: Attempting to replace with conversion from ASCII to UTF-16LE
    init_iconv: Conversion from CP0 to UTF-8-MAC not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF-8-MAC not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF8 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from UTF8 to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from UTF-16LE to CP0 not supported
    init_iconv: Attempting to replace with conversion from UTF-16LE to ASCII
    init_iconv: Conversion from UTF-8-MAC to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from UTF-8-MAC to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF-16LE not supported
    init_iconv: Attempting to replace with conversion from ASCII to UTF-16LE
    init_iconv: Conversion from CP0 to UTF-8-MAC not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF-8-MAC not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from CP0 to UTF8 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    init_iconv: Conversion from UTF8 to CP0 not supported
    init_iconv: Attempting to replace with conversion from ASCII to ASCII
    Password:
    I then type in my password, press Enter, and I am in.
    I can move around and everything is good. So why can't I mount the share? I know I am typing in the correct info.

    You could try creating a new share point on your PC, an empty folder for example, and then connecting to it from your Mac. That way you can rule out a problem with the contents of your original share point.

  • CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

    Hi All,
    I have some issues converting an ANSI-encoded file to a UTF-8-encoded file. Let me explain in detail.
    I have installed language support for Thai on my operating system.
    Now, when I open Notepad, type Thai characters into a file and save it with ANSI encoding, it saves perfectly and I can see the text when I open the file again.
    This file needs to be read by my application, stored in the database, and the Thai characters should be displayed on a JSP after the data is fetched from the database. Currently the JSP shows junk characters, the reason being that my database (a UTF-8 compliant database) contains junk data; it contains junk data because my application cannot read the file correctly.
    If I save the file with UTF-8 encoding, it works fine. But my business requirement is that the file is system-generated and by default encoded in ANSI format, so I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
    Regards
    Gaurav Nigam

    Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
    As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voil&#xE1;" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.
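    If Java is not a hard requirement, the same read-then-rewrite step can be sketched in Python (the file names and the TIS-620 guess are assumptions, not facts from the original post):

    ```python
    def transcode(src_path, dst_path, src_enc="tis-620", dst_enc="utf-8"):
        """Read a text file in one encoding and rewrite it in another."""
        with open(src_path, encoding=src_enc) as src, \
             open(dst_path, "w", encoding=dst_enc) as dst:
            dst.write(src.read())
    ```

    As in the Java version, the hard part is getting the source encoding right; the rewrite itself is mechanical.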

  • Perform unicode to UTF-8 conversion on F110 bacs payment file in ABAP

    Hi,
    I am facing a conversion issue for the UK BACS payment files.
    The payment run transaction F110 creates a payment file, but the file, when created on the application server, has some sort of code conversion. If I remove the # values, I can read most of the data.
    The data example is as below-
    #V#O#L#1#0#0#1#5#8#8# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #2#4#3#3#0#9#
    #H#D#R#1#A#2#4#3#3#0#9#S# # #1#2#4#3#3#0#9#0#0#0#0#0#2#0#0#0#1#0#0#0#1# # # # # # # #1#0#1#1#2#
    #H#D#R#2#F#0#2#0#0#0#0#0#1#0#0# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #U#H#L#1# #1#0#1#1#3#9#9#9#9#9#9# # # # #0#0#0#0#0#0#0#0#1# #D#A#I#L#Y# # #0#0#0# # # # # # # #
    This is then transferred to the bank via the FTP UNIX script, but only after a conversion, which happens as follows:
    #Perform unicode to UTF-8 conversion on bacs file
    $a = "iconv -f UNICODE -t UTF-8 $tmpUNI > $tmpASC";
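    The # characters in the dump are almost certainly NUL bytes, i.e. the file is UTF-16 encoded, which is exactly what this iconv call assumes. A small Python illustration (the byte string is hypothetical, reconstructed from the dump above):

    ```python
    raw = b"\x00V\x00O\x00L\x001"        # "#V#O#L#1" as shown in the dump
    text = raw.decode("utf-16-be")       # the "#" bytes are UTF-16 NULs
    print(text)                          # → VOL1
    print(text.encode("utf-8"))          # → b'VOL1'
    ```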
    The need going forward is to bring the details in via the interface and then upload them.
    The ABAP code should be able to perform the conversion, remove the additional characters, and then send the file across.
    I have searched everywhere, but I am not able to find out how to do the same conversion in ABAP.
    We are on ECC6.
    Can someone please help me?
    Regards,
    Archana

    Hi Archana,
    Can you please check SAP Notes 1064779 and 1365764 (including the attachment) and see if this helps you?
    Best regards,
    Nils Buerckel
    SAP AG

  • File/FTP adapter, outbound channel, content conversion, UTF-8 (Unicode)?

    We would like to send "delimited" files to another application (tab-delimited, CSV, ... - the other application does not support XML-based interfaces). Obviously we will have an outbound channel that uses the file/FTP adapter and the data will be subjected to "content conversion".
    The data contains names in many languages; not all of this can be represented in ISO Latin-1, much less in US-ASCII. I suppose UTF-8 would work. The question is: how is this handled by the FTP protocol? (considering that the FTP client is part of the SAP PI file/FTP adapter and the FTP server is something on the "other" machine)

    Hi Peter,
    you can maintain the file encoding in the outbound adapter. See [Configuring the Receiver File/FTP Adapter|http://help.sap.com/saphelp_nw2004s/helpdata/en/bc/bb79d6061007419a081e58cbeaaf28/content.htm]
    For your requirements "utf-8" sounds pretty fitting.
    Regards,
    Udo

  • HTTP-Receiver: Code page conversion error from UTF-8 to ISO-8859-1

    Hello experts,
    In one of our interfaces we are using the payload manipulation of the HTTP receiver channel to change the payload code page from UTF-8 to ISO-8859-1. And from time to time we are facing the following error:
    "Code page conversion error UTF-8 from system code page to code page ISO-8859-1"
    I'm quite sure that this error occurs because of non-ISO-8859-1 characters in the processed message. And here comes my question:
    Is it possible to change the error behaviour of the code page converter so that the error is ignored?
    Perhaps the converter could replace the offending character with e.g. "#"?
    Thank you in advance.
    Best regards,
    Thomas
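    What is being asked for here is essentially an encoder with a replacement error handler. In Python terms (a hedged analogy only, not the XI converter's actual API; the payload string is made up):

    ```python
    text = "10 \u20ac"  # hypothetical payload: '€' has no ISO-8859-1 mapping
    try:
        text.encode("iso-8859-1")        # strict mode: this is what fails
    except UnicodeEncodeError as exc:
        print("strict:", exc.reason)
    print(text.encode("iso-8859-1", errors="replace"))  # → b'10 ?'
    ```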

    Hello.
    I'm not 100% sure if this will help, but it's good reading material on the subject (:
    [How to Work with Character Encodings in Process Integration (NW7.0)|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42]
    The part of the XSLT / Java mapping might come in handy in your situation.
    You can check for problematic characters in the code.
    Good luck,
    Imanuel Rahamim.
