File decoding under Unicode

Hi,
I have attachments in xstrings, which I receive through web services from a non-SAP system.
I need to decode them before putting them into one of the Workplace folders in SAP.
The problem is that I can't do it on a Unicode system.
On a non-Unicode system I use:
CONV = CL_ABAP_CONV_IN_CE=>CREATE( INPUT = ATTCH_WA-CONTENT ).
CONV->READ( IMPORTING DATA = CONTENT LEN = CLEN ).
But on a Unicode system I get:
<b>CX_SY_CONVERSION_CODEPAGE</b>
<i>An error occurred during the conversion of a text from codepage '4110' to codepage '4103'</i>
The attachments are files of many types: txt, doc, xls, jpg and many others.
None of them is decoded correctly.
I have tried many different codepages and function modules, but it just doesn't work...
Please help if you have any ideas.

Hi,
Avoid the data types X and XSTRING if your system is a Unicode system.
Because characters have different encoded lengths in ASCII and Unicode, a direct conversion is not possible.
You should try to use other data types to solve your problem.
Hope this helps.
Regards,
Nicolas.
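The codepage exception above is what any strict decoder raises when asked to interpret arbitrary binary content (a .doc, .jpg, etc.) as text. A minimal Java sketch of the same effect, using a strict UTF-8 decoder (the byte values are arbitrary examples, the first bytes of a JPEG):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictDecodeDemo {
    public static void main(String[] args) {
        // Arbitrary binary content, e.g. the start of a JPEG file.
        byte[] binary = { (byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xE0 };

        // A strict decoder refuses to treat invalid byte sequences as text,
        // much as the ABAP converter raises CX_SY_CONVERSION_CODEPAGE.
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            decoder.decode(ByteBuffer.wrap(binary));
            System.out.println("decoded");
        } catch (CharacterCodingException e) {
            System.out.println("conversion error: " + e);
        }
    }
}
```

The practical consequence matches Nicolas's advice: binary attachments should stay as raw bytes end to end rather than being routed through a character conversion.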

Similar Messages

  • CS2/CS3/CS4: Cannot get file path in Unicode of the current document on Windows

    Hi All,
    In my automation plugin I need to have full absolute path of the opened document with any possible non-English letters. Using SDK examples Listener and Getter that come with Photoshop SDK the full absolute path which I obtain is in the default ANSI code page (CP_ACP) and I can convert it to Unicode using MultiByteToWideChar() API. However this works well when I have corresponding to document name language set in the "Control Panel -> Regional and Language Options -> Advanced -> Select a language to match the language version of the non-Unicode programs you want to use." For example if name of the document has Russian letters and chosen language in "Regional and Language Options" is also Russian the described conversion works well. If I change "Regional and Language Options" to English for example, full path returned by Photoshop SDK API (AliasToFullPath in PIUFile.cpp) for the document with Russian letters will contain "????????.psd" symbols.
    So I need to have an ability to get absolute file path in Unicode. Is it possible in Photoshop CS2/CS3/CS4 for Windows? I have searched forum and SDK but could not find info on it.
    Is it possible to have native HANDLE of the opened file to get file info using Windows API?
    Please advise.
    Below is slightly modified code from the Photoshop CS3 SDK which I use to get the absolute file path of the open document.
    Thanks and regards,
    Sergey
    std::string outFilePath;
    int32 theID = 0;
    SPErr error = kSPNoError;
    error = PIUGetInfo(classDocument, keyDocumentID, &theID, NULL);
    if (error == kSPNoError)
    {
        Handle theFileHandle = NULL;
        error = PIUGetInfoByID(theID, classDocument, keyFileReference, &theFileHandle, NULL);
        if (error == kSPNoError)
        {
            int32 length = sPSHandle->GetSize(theFileHandle);
            Boolean oldLock = FALSE;
            Ptr pointer = NULL;
            sPSHandle->SetLock(theFileHandle, true, &pointer, &oldLock);
            if (pointer != NULL)
            {
                outFilePath = (char*)pointer;
            }
            sPSHandle->SetLock(theFileHandle, oldLock, &pointer, &oldLock);
        }
    }

    Hi All,
    Does anybody know, whether it is possible to get Unicode file path of the current document in Photoshop via Photoshop SDK API or without them?
    Thanks,
    Serhiy

  • How to Determine Whether a Text File's Encoding Is Unicode

    Hi Gurus,
    How can I determine whether a file is in a Unicode format or not?
    I have the file stored as a BLOB column in a table.
    Thanks,
    Sombit

    That's a rather hard problem. Realistically, you would either have to make a bunch of simplifying assumptions based on the data, or buy a commercial tool that does character-set detection.
    There are a number of different ways to encode Unicode (UTF-8, UTF-16, UTF-32, UCS-2, etc.) and a number of different versions of the Unicode standard. UTF-8 is one of the more common ways to encode Unicode. But it is popular precisely because the first 127 characters (which cover the majority of English text) are encoded identically to 7-bit ASCII. Depending on the size and contents of the document, it may not be possible to determine whether the data is encoded in 7-bit ASCII, UTF-8, or one of the various single-byte character sets that are built off of 7-bit ASCII (ISO 8859-15, Windows-1252, ISO 8859-1, etc.).
    Depending on how many different character sets you are trying to distinguish between, you'd have to look for binary values that are valid in one character set and not in another.
    Justin
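    One common set of simplifying assumptions Justin mentions can be sketched in a few lines: check for a byte-order mark first, then test whether the bytes decode as well-formed UTF-8. This is a heuristic, not a definitive detector, and the method name is mine:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class EncodingSniffer {
    // Hypothetical helper: returns a best guess, not a definitive answer.
    static String guessEncoding(byte[] data) {
        if (data.length >= 3 && (data[0] & 0xFF) == 0xEF
                && (data[1] & 0xFF) == 0xBB && (data[2] & 0xFF) == 0xBF) {
            return "UTF-8 (BOM)";
        }
        if (data.length >= 2 && (data[0] & 0xFF) == 0xFF && (data[1] & 0xFF) == 0xFE) {
            return "UTF-16LE (BOM)";
        }
        if (data.length >= 2 && (data[0] & 0xFF) == 0xFE && (data[1] & 0xFF) == 0xFF) {
            return "UTF-16BE (BOM)";
        }
        try {
            // Strict decode: succeeds for 7-bit ASCII and well-formed UTF-8 alike,
            // which is exactly the ambiguity Justin describes.
            StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(data));
            return "ASCII or UTF-8";
        } catch (CharacterCodingException e) {
            return "some single-byte codepage (undetermined)";
        }
    }

    public static void main(String[] args) {
        System.out.println(guessEncoding(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i' }));
        System.out.println(guessEncoding("plain text".getBytes(StandardCharsets.US_ASCII)));
    }
}
```

    Note that a BLOB with no BOM that happens to decode as UTF-8 could still be any of the ASCII-compatible codepages, so the "ASCII or UTF-8" answer is genuinely ambiguous.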

  • SIK transport files and non-Unicode SAP systems

    Dear all,
    I have a question about the SIK transport files.
    As you know, when we install the BOE SIK, we need to transport some files into the SAP system.
    There is a TXT file describing how to use the SIK transport files in an SAP system.
    I found that this TXT file has no details about non-Unicode SAP systems.
    All of it is about Unicode.
    If your SAP system is running on a BASIS system earlier than 6.20, you must use the files listed below:
    (These files are ANSI.)
    Open SQL Connectivity transport (K900084.r22 and R900084.r22)
    Info Set Connectivity transport (K900085.r22 and R900085.r22)
    Row-level Security Definition transport (K900086.r22 and R900086.r22)
    Cluster Definition transport (K900093.r22 and R900093.r22)
    Authentication Helpers transport (K900088.r22 and R900088.r22)
    If your SAP system is running on a 6.20 BASIS system or later, you must use the files listed below:
    (These files are Unicode enabled.)
    Open SQL Connectivity transport (K900574.r21 and R900574.r21)
    Info Set Connectivity transport (K900575.r21 and R900575.r21)
    Row-level Security Definition transport (K900576.r21 and R900576.r21)
    Cluster Definition transport (K900585.r21 and R900585.r21)
    Authentication Helpers transport (K900578.r21 and R900578.r21)
    The following files must be used on an SAP BW system:
    (These files are Unicode enabled.)
    Content Administration transport (K900579.r21 and R900579.r21)
    Personalization transport (K900580.r21 and R900580.r21)
    MDX Query Connectivity transport (K900581.r21 and R900581.r21)
    ODS Connectivity transport (K900582.r21 and R900582.r21)
    Our SAP BASIS system is later than 6.20, but it is not a Unicode system.
    Could we use these transport files on a non-Unicode SAP system?
    Thanks!
    Wayne

    Hi Wayne,
    the text and the installation guide clearly advise based on the version of your underlying BASIS system and differentiate between 620 and 640 (or higher).
    So, since your system is a BI 7 system, you fall into the category of a 640 (or higher) BASIS system and therefore have to use the Unicode-enabled transports.
    Ingo

  • Create unicode file and read unicode file

    Hi
        How can create a unicode file and open unicode file in LV
    Regards
    Madhu

    gmadhu wrote:
    Hi
        How can create a unicode file and open unicode file in LV
    Regards
    Madhu
    In principle you can't. LabVIEW does not support Unicode (yet)! When it will officially support it is a question I can't answer, since I don't know, and as far as I know NI doesn't want to answer it.
    So the real question you have to ask first is where and why you want to read and write a Unicode file. And what type of Unicode? Unicode is definitely not just Unicode, as Windows has a different notion of Unicode (16-bit characters) than Unix has (32-bit characters). The 16-bit Unicode from Windows is able to cover most languages on this globe, but definitely not all without code-expansion techniques.
    If you want to do this on Windows and have decided that there is no other way to do what you want, you will probably have to call the WideCharToMultiByte() and MultiByteToWideChar() Windows APIs through the Call Library Node in order to convert between the 8-bit multibyte strings used in LabVIEW and the Unicode format necessary in your file.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • File constructor and unicode filename issue

    I have an issue with a Unicode file name.
    I need to create a File object from the Unicode file name "Юга.mp3" in order to use it in the code:
    File in = new File("Юга.mp3");
    AudioFile f = AudioFileIO.read(in);
    Tag tag = f.getTag();
    Does anybody know how to deal with a file or directory that is named using Unicode characters?

    1) Have you tried just reading the file as a file using a FileInputStream?
    2) What operating system?
    3) What Java version?
    4) What are the \uxxxx values of some of the problem characters?
    Edit: I have just run
           String filename = System.getProperty("user.home") + "/\u1070\u1075\u1072.mp3";
           File file = new File(filename);
           OutputStream os = new FileOutputStream(file);
           os.write("Hello World".getBytes("utf-8"));
           os.close();
           DataInputStream dis = new DataInputStream(new FileInputStream(file));
           byte[] buffer = new byte[(int)file.length()];
           dis.readFully(buffer);
           dis.close();
           System.out.println(new String(buffer,"utf-8"));
    on Ubuntu 7.10 using JRE/JDK 1.6.0_03, without any problems other than not being able to see the \u1070\u1075\u1072 characters in my console, because I don't have fonts installed for displaying them.
    Edited by: sabre150 on Dec 28, 2007 8:52 AM
    Further edit: It also runs on Windows XP using 1.6.0_03.
    Edited by: sabre150 on Dec 28, 2007 9:00 AM
    I don't think the file is located where you think it is!
    Edited by: sabre150 on Dec 28, 2007 9:02 AM
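    Sabre150's point can be reproduced as a self-contained sketch: java.io.File itself has no problem with Unicode names, provided the platform's filename encoding supports them. The Cyrillic escapes below spell the "Юга" name from the question; the temp directory is my own addition so the sketch does not depend on the user's home directory:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.file.Files;

public class UnicodeNameDemo {
    public static void main(String[] args) throws Exception {
        // "\u042E\u0433\u0430" is "Юга", the name from the question.
        File dir = Files.createTempDirectory("unicode-demo").toFile();
        File in = new File(dir, "\u042E\u0433\u0430.mp3");

        // Write a few placeholder bytes; a real MP3 is not needed to show
        // that the File API copes with the Unicode name.
        try (OutputStream os = new FileOutputStream(in)) {
            os.write(new byte[] { 0x49, 0x44, 0x33 }); // "ID3"
        }

        System.out.println(in.exists());  // true
        System.out.println(in.getName()); // Юга.mp3
    }
}
```

    If this works but the original AudioFileIO call does not, that supports sabre150's diagnosis: the file is simply not at the path the code is looking in.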

  • How do I tell if a File is ANSI, unicode or UTF8?

    I have a jumble of file types - they should all be the same, but they are not.
    How do I tell which type a file has been saved in?
    (and how do I tell a file to save in a certain type?)

    > "unicode or UTF-8"?? UTF-8 is unicode!
    NO! UTF-8 is not UNICODE.
    > Yes it is!!
    No it is not. And to prove it I refer to your links.
    > You simply cannot say "unicode or UTF-8" just because UTF is Unicode Transformation Format.
    UTF is a transformation of UNICODE but it is not UNICODE. This is not playing with words. One of the big problems I see on these forums is people saying that Java uses UTF-8 to represent Strings, but it does not; it uses UNICODE code point values.
    > You can say "UTF-8 or UTF-16BE or UTF-16LE" because all three are different Unicode representations. But all three are unicode.
    No! They are UNICODE transformations but not UNICODE.
    > So please don't play on words, I wanted to notify the original poster that "unicode or UTF-8" is meaningless, he/she would probably have said: "unicode (as UTF-8 or UTF-16 or...)"
    You are playing with words, not me. UTF-8 is not UNICODE, it is a transformation of UNICODE to a multibyte representation - http://www.unicode.org/faq/utf_bom.html#14 .

  • java.io.File and non-Unicode characters in file names

    Unix filesystem object names are byte sequences. These byte sequences are not required to correspond to any character sequence in the current or any locale. How do I open a file if its name contains bytes that do not correspond to a valid Unicode encoding in the current locale? Unless I am missing something, if I list a parent directory that contains some file names like this, those file names do not get added to the list. Hmmm....
    R.

    OK, create.c is a program that will create a file whose name is not a character in the 'ja' locale.
    Lister.java defines a class that lists files in the current directory. For each file, it spits out the 'toString()' version of the file, the char array of the name as hex, and the 'getBytes' byte array of the name.
    So, what you can do is compile and run create.c, which will create a file whose name is a single byte whose hex value is 99. Then compile and run Lister.java, which will give you the following output (shown for two different locales):
    $ export LANG=
    $ java Lister
    name:?; chars:99,; bytes:99,
    $ export LANG=ja
    $ java Lister
    name:?; chars:fffd,; bytes:3f,
    ---------------------------------------------
    Note that when running in the ja locale, there is no character corresponding to byte value 0x99. So Java uses the replacement character 0xFFFD, and the '?' character 0x3F, as replacements.
    The point is that there are files which Java cannot uniquely represent as a straight String. I suppose we could get the filename via JNI, do the conversion ourselves, and then use the private-use area of Unicode to encode all our strings, but ugh.
    //create.c
    #include <stdio.h>
    int main()
    {
       const char* name = "\x99";
       FILE* file = fopen( name, "w" );
       if( file == NULL )
       {
          printf( "could not open file %s\n", name );
          return 1;
       }
       fclose( file );
       return 0;
    }
    // Lister.java
    import java.io.*;
    public class Lister
    {
        public static void main( String[] args )
        {
            new Lister().run();
        }
        public void run()
        {
            try
            {
                doRun();
            }
            catch( Exception e )
            {
                System.out.println( "Encountered exception: " + e );
            }
        }
        private void doRun() throws Exception
        {
            File cwd = new File( "." );
            String[] children = cwd.list();
            for( int i = 0; i < children.length; ++i )
            {
                printName( children[ i ] );
            }
        }
        private void printName( String s )
        {
            System.out.print( "name:" );
            System.out.print( s );
            System.out.print( "; chars:" );
            printCharsAsHex( s );
            System.out.print( "; bytes:" );
            printBytesAsHex( s );
            System.out.println();
        }
        private void printCharsAsHex( String s )
        {
            for( int i = 0; i < s.length(); ++i )
            {
                char ch = s.charAt( i );
                System.out.print( Integer.toHexString( ch ) + "," );
            }
        }
        private void printBytesAsHex( String s )
        {
            byte[] bytes = s.getBytes();
            for( int i = 0; i < bytes.length; ++i )
            {
                byte b = bytes[ i ];
                System.out.print( Integer.toHexString( unsignedExtension( b ) ) + "," );
            }
        }
        private int unsignedExtension( byte b )
        {
            return (int)b & 0xFF;
        }
    }
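    The replacement behaviour described above can be reproduced in a couple of lines without touching the filesystem: decoding a lone 0x99 byte as UTF-8 yields U+FFFD, and encoding U+FFFD to ASCII yields the '?' byte 0x3F. A minimal sketch:

```java
import java.nio.charset.StandardCharsets;

public class ReplacementDemo {
    public static void main(String[] args) {
        // 0x99 is a continuation byte with no lead byte, so it is not valid
        // UTF-8 on its own; the default (replacing) decoder substitutes U+FFFD.
        String decoded = new String(new byte[] { (byte) 0x99 }, StandardCharsets.UTF_8);
        System.out.println(Integer.toHexString(decoded.charAt(0))); // fffd

        // U+FFFD has no ASCII mapping, so encoding substitutes '?' (0x3f).
        byte[] encoded = "\uFFFD".getBytes(StandardCharsets.US_ASCII);
        System.out.println(Integer.toHexString(encoded[0] & 0xFF)); // 3f
    }
}
```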

  • Scanning files for non-unicode characters.

    Question: I have a web application that allows users to take data, enter it into a webapp, and generate an XML file on the server's filesystem containing the entered data. The code of this application cannot be altered (outside vendor). I have a second webapp, written by yours truly, that has to parse through these XML files to build a dataset used elsewhere.
    Unfortunately I'm having a serious problem. Many of the web application's users are apparently cutting and pasting their information from other sources (frequently MS Word) and in the process are embedding non-Unicode characters in the XML files. When my application attempts to open these files (using DocumentBuilder), I get a SAXParseException "Document root element is missing".
    I'm sure others have run into this sort of thing, so I'm trying to figure out the best way to tackle this problem. Obviously I'm going to have to start pre-scanning the files for invalid characters, but finding an efficient method for doing so has proven to be a challenge. I can load the file into a String array and search it character by character, but that is both extremely slow (we're talking thousands of long XML files) and would require that I predefine the invalid characters (so anything new would slip through).
    I'm hoping there's a faster, easier way to do this that I'm just not familiar with or haven't found elsewhere.

    > require that I predefine the invalid characters
    This isn't hard to do, and it isn't subject to change. The XML recommendation tells you exactly which characters are valid in XML documents.
    However if your problems extend to the sort of case where users paste code including the "&" character into a text node without escaping it properly, or they drop in MS Word "smart quotes" in the incorrect encoding, then I think you'll just have to face up to the fact that allowing naive users to generate uncontrolled wannabe-XML documents is not really a viable idea.
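    The character ranges the reply points to come straight from the XML 1.0 recommendation and can be checked directly; a sketch of such a pre-scan (the method names are mine, and note this only catches invalid characters, not the unescaped "&" or wrong-encoding problems the reply also mentions):

```java
public class XmlCharScanner {
    // Valid XML 1.0 characters per the recommendation:
    // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    static boolean isValidXmlChar(int cp) {
        return cp == 0x9 || cp == 0xA || cp == 0xD
                || (cp >= 0x20 && cp <= 0xD7FF)
                || (cp >= 0xE000 && cp <= 0xFFFD)
                || (cp >= 0x10000 && cp <= 0x10FFFF);
    }

    // Returns the index of the first invalid code point, or -1 if clean.
    static int firstInvalid(String s) {
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            if (!isValidXmlChar(cp)) {
                return i;
            }
            i += Character.charCount(cp);
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstInvalid("ok\ttext"));      // -1
        System.out.println(firstInvalid("bad\u0008char")); // 3
    }
}
```

    Because the check is a fixed set of range comparisons per code point, it is a single linear pass and needs no predefined blacklist that could go stale.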

  • File Transfer non-unicode - unicode via client

    Hello.
    I downloaded a binary file from an SAP 4.0 system to my client (Win2k) with OPEN DATASET (to read the file on the app server) and WS_DOWNLOAD (to save it to the client).
    Now I want to upload this file from my client to a 6.40 Unicode system. Therefore I do the following:
    - GUI_UPLOAD to get the file from the client into an internal table
    - OPEN DATASET dsn FOR OUTPUT IN BINARY MODE to save the contents of the internal table to the file system of the app server
    This works pretty well on non-Unicode systems but does not work properly on Unicode systems.
    Which options do I have to use? Anything with code pages?
    THX
    --MIKE

    Check out the <b>OPEN DATASET - Mode - {TEXT MODE ENCODING {DEFAULT|UTF-8|NON-UNICODE}}</b> option.
    The additions after ENCODING determine in which character representation the content of the file is handled.
    DEFAULT
    In a Unicode system, the designation DEFAULT corresponds to the designation UTF-8; in a non-Unicode system, it corresponds to NON-UNICODE.
    UTF-8
    The characters in the file are handled according to the Unicode character representation UTF-8.
    NON-UNICODE
    In a non-Unicode system, the data is read or written without being converted. In a Unicode system, the characters in the file are handled according to the non-Unicode codepage that would be assigned to the current text environment, according to database table TCP0C, at the time of reading or writing in a non-Unicode system.
    Check out the ABAP keyword documentation.
    Regards
    Raja

  • Java files and the Unicode!!!

    How do I read from a Unicode (UTF) file in a Java program?
    (Please give me an example.)

    Use InputStreamReader and pass in the appropriate constructor arguments.
    Drake
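    Drake's suggestion spelled out: pass the charset as a constructor argument to InputStreamReader. A minimal sketch that writes and then reads back a UTF-8 file (the temp file and sample text are my own examples):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class ReadUnicodeFile {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");

        // Write some non-ASCII text as UTF-8.
        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8)) {
            w.write("caf\u00E9 \u4E16\u754C");
        }

        // Read it back: InputStreamReader with an explicit charset is the
        // constructor-argument approach the reply refers to.
        StringBuilder sb = new StringBuilder();
        try (Reader r = new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8)) {
            int ch;
            while ((ch = r.read()) != -1) {
                sb.append((char) ch);
            }
        }
        System.out.println(sb); // café 世界
    }
}
```

    The key point is never to rely on the platform default charset when the file's encoding is known; state it explicitly on both the writer and the reader.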

  • Downloading a File with a Unicode File-name

    Hello!
    I have a program under Tomcat.
    This program has a JSP file with links to different files that can be downloaded by the user.
    On clicking a certain link, a Save As dialog pops up.
    The Unicode file name appears correctly in the Save As dialog box.
    The Save As dialog box contains the buttons Open, Save and Cancel.
    The result is:
    Dialog display of file name - OK
    Open - NG
    Save - OK
    I'm using IE 6 SP1.
    Here's my code for the forced download:
         response.setContentType("application/octet-stream;charset=UTF-8");
            response.setContentType("application/force-download");
         response.setHeader("Content-Disposition", "attachment;filename="
                    + downloadFileName  );

    Already solved.
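    The original poster never said how it was solved, but a common approach for Unicode names in a Content-Disposition header is to percent-encode the file name as UTF-8 and, for modern clients, also supply the RFC 5987 filename* parameter. A sketch of building such a header value (the helper name is mine):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class DispositionHeader {
    // Hypothetical helper: builds a Content-Disposition value with both the
    // legacy filename= form (percent-encoded, which older IE understands)
    // and the RFC 5987 filename*= form for standards-compliant browsers.
    static String attachment(String fileName) throws UnsupportedEncodingException {
        String encoded = URLEncoder.encode(fileName, "UTF-8").replace("+", "%20");
        return "attachment;filename=" + encoded + ";filename*=UTF-8''" + encoded;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(attachment("\u042E\u0433\u0430.mp3"));
        // attachment;filename=%D0%AE%D0%B3%D0%B0.mp3;filename*=UTF-8''%D0%AE%D0%B3%D0%B0.mp3
    }
}
```

    This is a sketch, not a full RFC 5987 encoder: URLEncoder's escaping is close to, but not identical to, the ext-value encoding the RFC specifies, so edge-case characters may need dedicated handling.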

  • Unable to properly read a binary .xls file in a unicode system.

    Hello,
    We are currently upgrading from a non-Unicode ECC 5 system to Unicode ECC 6, and we have found several problems reading files for Excel-in-place (SAP Office integration).
    We have an Excel file with macros scripted within it, and parts of the file are in Hebrew.
    We use the OPEN DATASET ... FOR INPUT command to provide the data to the SAP integration methods.
    In ECC 5, no additional parameters were needed for the command.
    In the Unicode system, the file isn't read properly even when providing a codepage for the file in legacy binary mode (I tried '1800' and '1810' for Hebrew).
    Compared to a non-Unicode system, it seems that some of the binary data is missing and some characters are missing and instead appear as squares.
    This leads to the file not being read at all.
    Is there any method to read the file with the characters in it properly, or to force it to use the non-Unicode read method?
    Best regards, Yoav

    Try the Acrobat forum; this is the Adobe Reader forum.

  • Put "BigInt" value to a file in a unicode system

    Hi there,
    does anyone have an idea how I can put a "BigInt" value (I suppose this is an ABAP INT4) into an interface file if the system is a Unicode system?
    I know the class cl_abap_char_utilities for inserting e.g. a tab character into a file under Unicode conditions, but I don't know how to move an INT4 value to a string or char variable and prevent the system from converting anything when writing to an ASCII file system.
    Thanks in advance to any ideas!
    Axel

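    The question went unanswered in the thread. The distinction Axel is after is between writing the number as text (its decimal digits, subject to codepage conversion) and writing its four raw bytes (no conversion at all). A Java sketch of the same idea, since the thread's ABAP answer is missing:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class RawIntDemo {
    public static void main(String[] args) throws IOException {
        int value = 305419896; // 0x12345678

        // As text: the decimal digits, which pass through character conversion.
        byte[] asText = Integer.toString(value).getBytes("US-ASCII");

        // As raw binary: exactly four bytes, no codepage conversion involved.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeInt(value); // big-endian
        byte[] asBinary = buf.toByteArray();

        System.out.println(asText.length);             // 9
        System.out.println(Arrays.toString(asBinary)); // [18, 52, 86, 120]
    }
}
```

    In ABAP terms, the analogous choice is between OPEN DATASET in text mode (with ENCODING, hence conversion) and binary mode, where the bytes are written untouched.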

  • Does photoshop cs5 sdk support file names with unicode characters?

    Hi,
    For the export module, does the windows sdk have support for filenames with unicode characters?
    The ExportRecord struct in PIExport.h has an attribute filename declared as char array.
    So I am not sure whether file names with Unicode characters will be supported in the export module.
    Thanks in advance,
    Senthil.

    SPPlatformFileSpecificationW is not available in ReadImageDocumentDesc.
    It has SPPlatformFileSpecification_t, which comes back as null for the export module.
    I am using the Photoshop CS5 SDK for Windows.
    Thanks,
    Senthil.
