File encoding issue on import step

Dear all,
I am experiencing an import issue with CSV files generated from the MS Dynamics AX ERP, a Reporting Services query I guess.
Looking at the files concerned in Notepad++, I can see that their encoding is UTF-8.
As a workaround, if I open the files in Excel and re-save them as CSV, they are encoded as ANSI instead of UTF-8, and in that case the FDMEE import process works.
Regarding our settings (FDMEE v.11.1.2.3.530):
- System level: file charset is set to UTF-8
- Application level: no specified file charset
- User level: no specified file charset
My questions are:
- Is this normal behavior? Why are UTF-8 files not supported if the system-level configuration says UTF-8?
- Is it related to the system encoding (OS, DB, ...)?
- Is it related to the file itself, i.e. do I have to force the encoding through a BefImport script?
I had a deep look at this article http://akafdmee.blogspot.fr/2014/11/importing-files-with-different-file.html, which is great, but still no magic solution comes to mind! I'll keep on testing.
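Given that re-saving from Excel fixes the import, one common culprit (an assumption on my part, not confirmed in the thread) is a UTF-8 byte-order mark at the start of the exported file, which some consumers reject even when the declared charset is UTF-8. A minimal Java sketch of a pre-import cleanup step that strips such a BOM; the file handling is generic and not tied to FDMEE's BefImport API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class StripBom {
    // Remove a leading UTF-8 BOM (bytes EF BB BF), which some tools emit
    // and others reject; the rest of the file is left untouched.
    static byte[] stripUtf8Bom(byte[] data) {
        if (data.length >= 3
                && data[0] == (byte) 0xEF
                && data[1] == (byte) 0xBB
                && data[2] == (byte) 0xBF) {
            return Arrays.copyOfRange(data, 3, data.length);
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        // Rewrite the file in place without the BOM before handing it to the import.
        Path in = Path.of(args[0]);
        Files.write(in, stripUtf8Bom(Files.readAllBytes(in)));
    }
}
```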

Hello Dora,
I had the same problem, even with the most simple JavaBean class. I found out that the export-jar function of Developer Studio 2.0.13 creates slightly different jars than the jar tool from the command line does. You can check the jar structure by using the command
jar tf file.jar
JAR file created by Developer Studio (did not work):
D:\temp>jar tf test.jar
META-INF/MANIFEST.MF
de/test/bean/SimpleBean.class
JAR created by the jar tool from the SDK, from the top directory, with
jar cf test.jar de
D:\temp>jar tf test.jar
META-INF/
META-INF/MANIFEST.MF
de/
de/test/
de/test/bean/
de/test/bean/SimpleBean.class
The second jar file worked for me even without any special manifest file, other than the one that is automatically created.
I hope that helps!
Jari
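The `jar tf` structure check above can also be scripted; a small Java sketch using java.util.jar (my sketch, not part of the original reply), which prints every entry including the directory entries (ending in '/') that the working jar contains:

```java
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ListJar {
    public static void main(String[] args) throws IOException {
        // Equivalent of `jar tf file.jar`: list every entry in the archive.
        try (JarFile jar = new JarFile(args[0])) {
            jar.stream().map(JarEntry::getName).forEach(System.out::println);
        }
    }
}
```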

Similar Messages

  • File encoding issues with Mac OSX?

    I really love Dreamweaver and I think it's the best editor out there, except for one thing: the file encoding!
    I tried setting the default encoding to UTF-8 under Preferences -> New Document, but that won't cut it.
    Dreamweaver always creates/saves new files in Western/ISO format, so I always have to open them up with another text editor and overwrite the file in UTF-8 format.
    Is there a fix for this, or an extension to download?
    I'm using CS4 on Mac OSX Leopard.

    CS4 should automatically save files as UTF-8. It's the default setting. You shouldn't need to do anything special.
    One thing you could try is removing the Prefs file. On a Mac, it's located at <username>:Library:Preferences:Adobe Dreamweaver CS4 Prefs.

  • Character Encoding and File Encoding issue

    Hi,
    I have a file whose data is encoded using the default locale.
    I start the JVM in the same default locale and try to read the file.
    I took 2 approaches:
    1. Read the file using InputStreamReader() without specifying the encoding, so that the default one based on the locale is picked up.
    -- This approach worked fine.
    -- I also printed the system property "file.encoding", which matched the current locale's encoding (the Unix command to get this is "locale charmap").
    2. In this approach, I read the file using an InputStream as an array of raw bytes, and passed it to the String constructor to convert the bytes to a String.
    -- The String contained garbled data, meaning the encoding failed.
    I tried printing the encoding used by the JVM using an internal class, and the "file.encoding" property as well.
    These two values do not match; there is a weird difference.
    For e.g. for locale ja_JP.eucjp on linux box :
    byte-character uses EUC_JP_LINUX encoding
    file.encoding system property is EUC-JP-LINUX
    To get byte to character encoding, I used following methods (sun.io.*):
    ByteToCharConverter btc = ByteToCharConverter.getDefault();
    System.out.println("BTC uses " + btc.getCharacterEncoding());
    Do you have any idea why it is failing?
    My understanding was that the file encoding and the character encoding should always be the same by default.
    But because of this behaviour, I am a little perplexed.

    But there's no character encoding set for this operation: baos.write("���".getBytes());
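    The point of that reply can be made concrete: getBytes() and new String(byte[]) both use the platform default charset, which is why approach 2 only works when the data's encoding happens to match the locale. A minimal sketch contrasting the implicit and explicit decode (the example character is mine):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DefaultCharsetDemo {
    public static void main(String[] args) {
        // U+20AC (euro sign) is 3 bytes in UTF-8: E2 82 AC.
        byte[] utf8 = "\u20AC".getBytes(StandardCharsets.UTF_8);

        // Implicit decode: uses the platform default charset (locale dependent),
        // so the result is only correct when the default happens to be UTF-8.
        String implicitDecode = new String(utf8);

        // Explicit decode: deterministic, independent of the locale.
        String explicitDecode = new String(utf8, StandardCharsets.UTF_8);

        System.out.println("default charset:  " + Charset.defaultCharset());
        System.out.println("implicit matches: " + implicitDecode.equals("\u20AC"));
        System.out.println("explicit matches: " + explicitDecode.equals("\u20AC"));
    }
}
```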

  • Encoding issue in importing file using myfaces

    Hi,
    We are trying to upload an XML file (with UTF-8 encoding) in our JSF UI using the myfaces "inputFileUpload" option. But when I try to read the file using the org.apache.myfaces.custom.fileupload.UploadedFile.getInputStream() API, it looks like the encoding is not being preserved and I am getting Scandinavian characters as '?'. I have checked this by writing the read contents via a stream to a file which is UTF-8 encoded.
    How can I preserve the character encoding while reading from the uploaded file?
    Thanks.

    Since UploadedFile.getInputStream() returns an InputStream, you can impose the character encoding when you wrap it with a Reader.
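    A sketch of that suggestion; the ByteArrayInputStream here stands in for whatever UploadedFile.getInputStream() returns, and the surrounding class and method names are mine, not myfaces API:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadUploadedXml {
    // Impose UTF-8 when turning the raw upload stream into characters.
    // An InputStreamReader without an explicit charset would use the platform
    // default, which is what turns Scandinavian characters into '?'.
    static String readUtf8(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            int c;
            while ((c = r.read()) != -1) {
                sb.append((char) c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        byte[] xml = "<name>\u00E5\u00E4\u00F6</name>".getBytes(StandardCharsets.UTF_8);
        System.out.println(readUtf8(new ByteArrayInputStream(xml)));
    }
}
```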

  • UTF-8 file encoding issues within Java?

    I'm working on an application that takes data from an IBM mainframe (z/OS), converts it from IBM-1047 encoding to UTF-8 (via the iconv utility) and FTPs it in binary mode to a Unix box, where we process the file with our Java app and return the processed file.
    Within our Java app on the Unix platform, we stream the file into a byte array and then create a new String from the byte array, specifying "UTF-8" as the encoding parameter.
    The problem is that Java appears to be taking certain 2-byte UTF-8 characters and converting them to a single char.
    E.g. I have a \uC3A6 char in the input file; I can view the bytes in the byte array that's read in, and it's still \uC3A6, but as soon as I create the new String with UTF-8 encoding and view the bytes, those 2 bytes are now shown as a single byte (0xE6). The code I have that looks for the char \uC3A6 then fails.
    Can anyone explain what's happening here? Sorry for the long message.

    The encodings which convert the character (char)0xC3A6 to the 2-entry byte array {0xC3, 0xA6} (unsigned) are "UTF-16BE", "UnicodeBigUnmarked", and "UnicodeBig." These are essentially identical except for the use of byte-order mark. As was said above UTF-8 converts (char)0xC3A6 to the 3-entry byte array {0xEC, 0x8E, 0xA6} (unsigned).
    http://java.sun.com/j2se/1.4.1/docs/guide/intl/encoding.doc.html
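    To restate the reply in code: the byte pair 0xC3 0xA6 is the UTF-8 encoding of the single character U+00E6 ('æ'), so the observed "collapse" is the decoder working correctly, not a bug. A minimal demonstration:

```java
import java.nio.charset.StandardCharsets;

public class Utf8DecodeDemo {
    public static void main(String[] args) {
        // The two raw bytes 0xC3 0xA6 are the UTF-8 encoding of U+00E6 ('æ');
        // they are not the character U+C3A6.
        byte[] raw = { (byte) 0xC3, (byte) 0xA6 };
        String s = new String(raw, StandardCharsets.UTF_8);
        System.out.println(s.length());                    // 1
        System.out.printf("U+%04X%n", (int) s.charAt(0));  // U+00E6
    }
}
```

    Code that needs to find this character after decoding should search for '\u00E6', not for the raw byte pattern.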

  • Media Encoder won't import MXF files from Canon XF100

    I'm running Media Encoder CC on two laptops: a MacBook Pro and a MacBook Air. Both systems are running the most recent version of Media Encoder CC (including updates), but only one will import MXF files from a Canon XF100. The MacBook Air imports and encodes the files without issue; the MacBook Pro won't import the same MXF files and throws the following error:
    The file /Users/.../.../.../AA0106.MXF could not be imported. Could not read from source. Please check the settings and try again.
    I could understand if both computers errored, but with the same file one will import and the other will not. Any help would be greatly appreciated. Thanks.

    Just try this and see if it will work for you.
    Open iPhoto > Preferences > General and deselect 'Connecting camera opens: iPhoto'.
    Change it to 'No Application' and close iPhoto.
    Now connect your camera to your computer and turn it on or better still use a card reader.
    If iMovie tries to open, close it.
    Your camera/card should now appear as an external drive.
    Copy/paste your video files into a new folder on your desktop.
    Eject and disconnect your camera/card.
    Now open iMovie and go File > Import > Movies... and navigate to that folder.
    Z.

  • FAQ: How do I troubleshoot audio issues when importing .MTS files?

    Hello. The link "FAQ: How do I troubleshoot audio issues when importing .MTS files?"
    does not work, and I need the answer, please. My imported MTS clips are not playing back audio; it seems that no audio was imported with the video. Thanks, Don.

    Hi there
    Audio plays when the object with the audio assigned appears. This is why you hear it when the Button appears.
    Try assigning the audio instead to the object that appears when you click the button. Perhaps the caption.
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks

  • XI File Adapter Custom File Encoding for  issues between SJIS and CP932

    Dear SAP Forum,
    Has anybody found a solution for the difference between the JVM (IANA) SJIS and MS SJIS implementation ?
    When users enter characters in SAPGUI, the MS SJIS implementation is used, but when the XI file adapter writes SJIS, the JVM SJIS implementation is used, which causes issues for 7 characters:
    1. FULLWIDTH TILDE / EFBD9E: SJIS 8160, ~ vs 〜
    2. PARALLEL TO / E288A5: SJIS 8161, ∥ vs ‖
    3. FULLWIDTH HYPHEN-MINUS / EFBC8D: SJIS 817C, - vs −
    4. FULLWIDTH CENT SIGN / EFBFA0: SJIS 8191, ¢ vs \u00A2
    5. FULLWIDTH POUND SIGN / EFBFA1: SJIS 8192, £ vs \u00A3
    6. FULLWIDTH NOT SIGN / EFBFA2: SJIS 81CA, ¬ vs \u00AC
    7. REVERSE SOLIDUS: SJIS 815F, \ vs \u005C
    The following line of code can solve the problem (either in an individual mapping or in a module)
    String sOUT = myString.replace('~','〜').replace('∥','‖').replace('-','−').replace('¢','\u00A2').replace('£','\u00A3').replace('¬','\u00AC');
    But I would prefer to add a custom character set to the file encoding. Has anybody tried this?

  • [SOLVED] File name encoding issue

    Hi all,
    I have a large series of files with accented characters, they were all displayed nicely, but at some point, when I copied them to another computer, the characters were replaced by codes, for instance: "ó" --> "ó".
    + Renaming, e.g. "Pasó" (bad encoding of "Pasó") --> Pasó: while typing it, it shows the correct character, but when pressing Enter the name remains "Pasó".
    + If I rename the file to something else and then to the correct name, it will accept it: Pasó --> Pas --> Pasó will display correctly.
    I don't know if it's a system-wide encoding issue, because new files are displayed correctly, but I would like to know whether I have to change the file names manually to fix them.
    PS: When copying badly encoded files to another FS (like a USB drive), nautilus and bash refuse to copy them.
    Last edited by Wasser (2012-09-17 21:10:52)

    My fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    tmpfs /tmp tmpfs nodev,nosuid 0 0
    # /dev/sda2 LABEL=ROOT
    UUID=d2243d9c-b8e7-442a-8446-5a43a4d9221b / ext4 rw,relatime,data=ordered 0 1
    # /dev/sda5 LABEL=HOME
    UUID=e67f5cfa-3ec3-4c06-9c2c-62c4cc188ffe /home ext4 rw,relatime,data=ordered 0 2
    # /dev/sda3 LABEL=VAR
    UUID=caac4924-2a13-4c97-9926-668ac0595ba3 /var reiserfs rw,relatime 0 2
    # /dev/sda1 LABEL=UEFI
    UUID=1E70-6485 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
    # /dev/sda4
    UUID=14993c2e-4bc4-42e4-b2e5-9dbc286abb4c none swap defaults 0 0
    Files in question are in /dev/sda5 (HOME)
    Last edited by Wasser (2012-09-16 08:37:52)

  • Import files encoded in H264 or DVCPro?

    Does Final Cut Express import files encoded in H264?
    Apple’s Technical Specifications page http://support.apple.com/kb/SP536 is silent about supported codecs and makes no mention of supported file formats at all. It notes some CAMERAS supported, but not files or codecs.
    From the FCP Express manual:
    *Video Formats Supported by Final Cut Express
    Final Cut Express supports any video format that uses an installed QuickTime codec.*
    On the same page, a table appears listing
    *D-5 HD
    D-6
    HDCAM
    HDCAM SR
    DVCPRO HD
    XDCAM HD
    HDV
    RGB video 1080p30 720p60*
    Can this really be? Does anyone have experience editing in FCP Express in DVCPro HD, or H264, for example? Or is the manual just misleading?

    FCE does not support H.264 for editing. Neither does FCP BTW.
    The manual is mistaken. DVCPRO HD is only available with FCP.

  • Encoding issue for file manager

    I am using the ditto command to duplicate a file. This file has a Unicode filename, and as per http://developer.apple.com/qa/qa2001/qa1173.html I first normalize the name to kCFStringNormalizationFormD and then convert it to UTF-8 before calling ditto on it. This all works smoothly, but when I try to get the FSRef using the original Unicode name I get fnfErr. Doesn't the CFURLGetFSRef API convert the string to kCFStringNormalizationFormD? Or is there an alternative to ditto on Tiger?

    No encoding issues if I use an XML (xlf or xliff) bundle, as XML supports UTF-8 encoding.
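    The normalization mismatch the question describes can be illustrated in Java; java.text.Normalizer's Form.NFD corresponds to kCFStringNormalizationFormD (canonical decomposition, the form HFS+ stores filenames in). A sketch, using a filename from the [SOLVED] thread above as the example:

```java
import java.text.Normalizer;

public class NfdDemo {
    public static void main(String[] args) {
        // Precomposed form (NFC): 'ó' is a single code point, U+00F3.
        String nfc = "Pas\u00F3";
        // Decomposed form (NFD): 'o' followed by U+0301 COMBINING ACUTE ACCENT.
        String nfd = Normalizer.normalize(nfc, Normalizer.Form.NFD);

        System.out.println(nfc.length());       // 4
        System.out.println(nfd.length());       // 5
        System.out.println(nfc.equals(nfd));    // false, though they render alike
    }
}
```

    A byte-wise filename lookup fails exactly when one side uses the NFC bytes and the other the NFD bytes, which is consistent with the fnfErr described.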

  • Anyone having issues importing CR2 files into Lightroom 5? An error message comes up saying "Some import operations were not performed". Please advise on a solution.

    Urgent, please.
    Is anyone having issues importing CR2 files into Lightroom 5? An error message comes up saying "Some import operations were not performed". Please advise: what is the solution?

    Sounds like the folder Write permissions issue described here with a solution:
    "Some import operations were not performed" from camera import

  • New issue: when importing a new paid-app spreadsheet into Configurator, iTunes attempts to download an .ipa file for each redeem code. If you have 100 codes, 100 downloads will start. Call enterprise support if you are seeing this issue.

    New issue: when importing a new paid-app spreadsheet into Configurator, iTunes attempts to download an .ipa file for each redeem code. If you have 100 codes, 100 downloads will start. Call enterprise support if you are seeing this issue.

  • Receiver File Adapter - Encoding issue.

    Hi Everybody,
    The file format (encoding) is different from the format we generally receive.
    Currently we get the flat files in DOS format. When we download the current file, we get it in UNIX or another format.
    For example: 20 has been changed to 0D in the file.
    Can somebody help me with this?
    Thanks,
    Zabiulla

    Hi,
    Check on this for file adapters
    Text
    Under File Encoding, specify a code page.
    The default setting is to use the system code page that is specific to the configuration of the installed operating system. The file content is converted to the UTF-8 code page before it is sent.
    Permitted values for the code page are the existing Charsets of the Java runtime. According to the SUN specification for the Java runtime, at least the following standard character sets must be supported:
    ■ US-ASCII: seven-bit ASCII, also known as ISO646-US, or the Basic Latin block of the Unicode character set
    ■ ISO-8859-1: ISO character set for Western European languages (Latin Alphabet No. 1), also known as ISO-LATIN-1
    ■ UTF-8: 8-bit Unicode character format
    ■ UTF-16BE: 16-bit Unicode character format, big-endian byte order
    ■ UTF-16LE: 16-bit Unicode character format, little-endian byte order
    ■ UTF-16: 16-bit Unicode character format, byte order identified by an optional byte-order mark
    Regards
    Vijaya
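    The code page entered under File Encoding must be a charset the Java runtime actually knows. A quick sketch (my addition) to verify the guaranteed standard charsets, plus any other candidate, on a given JVM:

```java
import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // The six charsets the Java specification guarantees on every runtime;
        // any other code page may or may not be available on a given install.
        String[] names = {
            "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-16"
        };
        for (String name : names) {
            System.out.println(name + " supported: " + Charset.isSupported(name));
        }
        System.out.println("default: " + Charset.defaultCharset());
    }
}
```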

  • File sender adapter - Encoding issue

    Hi,
    On my customer's site, we have an interface taking a file and sending an IDoc to the non-Unicode ERP system. Unfortunately, when we have Cyrillic characters in the file, the processing fails with the error:
    com.sap.aii.utilxi.misc.api.BaseRuntimeException: Fatal Error: com.sap.engine.lib.xml.parser.ParserException: Invalid char #0x6(:main:, row:17776, col:893)
    This is of course the result of using an invalid encoding in the communication channel. Until now it was left blank, so UTF-8 was used. I want to improve this interface so that we never have this error again, because fixing it involves some manual work and it's getting annoying to see it in production once a month.
    What I want to do next is find out the encoding from the guys delivering the file and then place it in the communication channel. Pretty straightforward, right? On SAP, I think the Cyrillic, non-ASCII characters will be replaced by #, but this is acceptable to the business. Not acceptable is this constant error.
    Because I want to be sure of my assessment before I ask for approval to do this modification, with the associated testing, communication and everything, my question to you is: have you experienced this before in PI? Are all my conclusions accurate? How would you solve the problem?
    Thanks in advance and best regards,
    George

    Did you try giving the encoding as ISO-8859-5 in the file adapter?
    File Type
    Specify the document data type.
    ○ Binary
    ○ Text
    Under File Encoding, specify a code page.
    The default setting is to use the system code page that is specific to the configuration of the installed operating system. The file content is converted to the UTF-8 code page before it is sent.
    also ref: http://en.wikipedia.org/wiki/Cyrillic_alphabet#Computer_encoding
    http://help.sap.com/saphelp_nw04/helpdata/en/e3/94007075cae04f930cc4c034e411e1/content.htm
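    To illustrate why the declared code page matters here: the same single byte decodes to a Cyrillic letter under ISO-8859-5, while under UTF-8 it is not a valid character on its own, which is how bytes end up reported as invalid by the XML parser. A minimal sketch (the example byte is mine):

```java
import java.nio.charset.Charset;

public class CyrillicDecode {
    public static void main(String[] args) {
        // 0xB0 is CYRILLIC CAPITAL LETTER A (U+0410) in ISO-8859-5;
        // the same lone byte is malformed input when decoded as UTF-8.
        byte[] raw = { (byte) 0xB0 };
        String s = new String(raw, Charset.forName("ISO-8859-5"));
        System.out.printf("U+%04X%n", (int) s.charAt(0)); // U+0410
    }
}
```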
