Byte order and file transfer

I need some help deciding how an application that transfers files from one computer to another should handle the issue of byte order.
I am using NIO.
Say Alice is sending a file to Bob. Say Alice's computer is little endian, but Bob's is big endian. I have heard that "well mannered" applications should transmit in network order. However, the simplest solution would be for the sending application to transmit, before the transfer itself, a code representing the byte order of its machine, and then send the file content in that byte order. Bob's machine would only convert the data if the received byte order does not match its own.
In this way we avoid a conversion on the sending machine, and probably on most receiving machines too, given that most computers are based on little-endian processors.
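For illustration, here is a rough sketch of what I have in mind using java.nio (the one-byte header format and the helper names are my own invention, not an established protocol):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.SocketChannel;

public class OrderNegotiation {

    // Hypothetical header: a single byte, 0 = big endian, 1 = little endian.
    static final byte BIG = 0, LITTLE = 1;

    // Sender: announce the native order, then write ints without swapping.
    static void sendInts(SocketChannel ch, int[] data) throws IOException {
        ByteOrder mine = ByteOrder.nativeOrder();
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 * data.length).order(mine);
        buf.put(mine == ByteOrder.BIG_ENDIAN ? BIG : LITTLE);
        for (int i = 0; i < data.length; i++) {
            buf.putInt(data[i]);
        }
        buf.flip();
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
    }

    // Receiver: read the order byte first, then let ByteBuffer swap only if needed.
    static int[] receiveInts(SocketChannel ch, int count) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 * count);
        while (buf.hasRemaining() && ch.read(buf) >= 0) {
            // keep filling until the expected number of bytes has arrived
        }
        buf.flip();
        ByteOrder theirs = (buf.get() == BIG) ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN;
        buf.order(theirs); // conversion happens only when the orders differ
        int[] data = new int[count];
        for (int i = 0; i < count; i++) {
            data[i] = buf.getInt();
        }
        return data;
    }
}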
Do you think it is OK to contravene the network order rule?
TIA
-----

I vaguely recall some protocol (SunRPC & NFS?) using the method you describe. Sure, if you need to, no law against it.
Then again, my tired old laptop can decode more than twenty million host-independent ints per second. How much data are you sending for byte order to be an issue? That's 800 Mbit/s, enough to saturate gigabit Ethernet; HDTV streaming video, for example, is 0.2% of that.
Can you decode host-dependent ints significantly faster than host-independent ones? Here is a measurement program; put your host-dependent decoder in there.
public class TimeTest {
    public static void main(String args[]) {
        byte buf[] = new byte[1000];
        int total = 0;
        for (int m = 0; m < 10; m++) {
            long start = System.currentTimeMillis();
            for (int n = 0; n < 10000000; n++) {
                int pos = 123;
                // Host-independent (network order) decode; swap in your
                // host-dependent decoder here to compare.
                int x = (buf[pos++] & 0xff) << 24;
                x += (buf[pos++] & 0xff) << 16;
                x += (buf[pos++] & 0xff) << 8;
                x += buf[pos++] & 0xff;
                total += x; // keep the result live so the JIT can't discard the loop
            }
            long end = System.currentTimeMillis();
            System.out.println("time " + (end - start) + " ms");
        }
        System.out.println("total " + total);
    }
}

Similar Messages

  • CS4 won't open TIFF with IBM PC Byte order

    When our team tries to open images saved as TIFFs with a Byte order for IBM PC with Photoshop CS4, we get an error message that we do not have enough RAM to open the image.
    We then opened the image on a machine running Photoshop CS2 with no problem, resaved the TIFF with a Macintosh Byte order and that seemed to fix the problem.
    Is this a glitch that Adobe is aware of and will it be fixed in any upcoming updates?

    I know of one bug like that, reading PackBits compressed images (the bug is very rare).
    But if you could send me a copy of the file that won't open, I'll take a look and see if it might be a problem we don't know about.

  • Will this work to determine the system byte order?

I need to be able to determine, preferably without relying on conditional disable symbols, whether the system on which a VI is run uses big-endian byte order.  What I came up with was to flatten a U32 containing 0xDDCCBBAA to a string using the unknown system byte order, and unflatten it using big-endian byte order.  If the pre- and post-flattening U32s are equal, the system must use big-endian byte order.  Three questions:
    1. Will this work?  It recognizes, correctly, that my PC does not use big-endian byte order, but I don't have any big-endian targets to test it out on (see #3).
    2. Is there a better way?  Perhaps something already built into LabVIEW?
    3. Would somebody with a big-endian target verify that my VI detects it as big-endian?
    Thanks,
    Mark Moss
    Attachments:
    Detect Big-Endian System.vi ‏8 KB

    You can simplify your VI a little.  Since you know what the result of the string flattening should be in big endian, just compare the string.
    Attachments:
    Detect Big-Endian System 2.vi ‏7 KB
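    For comparison, outside LabVIEW the same check is usually a one-liner; a minimal Java sketch (not a LabVIEW answer, just an illustration of the idea):

    import java.nio.ByteOrder;

    public class DetectByteOrder {
        public static void main(String[] args) {
            // Prints BIG_ENDIAN or LITTLE_ENDIAN for the host the JVM runs on.
            System.out.println("Native byte order: " + ByteOrder.nativeOrder());
        }
    }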

  • File Creation in the Application Server With UTF-8 and Byte-Order Mark

    Hi Guys,
    I have a requirement to create a file on the application server with the data.
    The data format should be UTF-8 with a Byte-Order Mark.
    I need to supply this data from SAP to PRMS.
    I'm able to create a file in Unicode, but if any of you have worked with Unicode and a Byte-Order Mark, please let me know.
    Thanks,
    Adi.

    Hi Mathieu,
    If you haven't found an answer yet, you can check in transaction SE24 the class CL_ABAP_FILE_UTILITIES, method CREATE_UTF8_FILE_WITH_BOM. You can check the code of the method (it's very short) so you can understand how it works. It's also a static method, so you can call it directly in your program.
    Ex:
    CALL METHOD cl_abap_file_utilities=>create_utf8_file_with_bom(your_file_name).
    I hope this helps.
    Pax Vobiscum.
    ~ Eric
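    If the same thing is ever needed outside ABAP, the BOM is simply the three bytes EF BB BF written before the UTF-8 text; a minimal Java sketch (the file name is only an example):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class Utf8BomFile {
        public static void main(String[] args) throws Exception {
            FileOutputStream out = new FileOutputStream("example.txt"); // example path
            out.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF }); // UTF-8 BOM
            Writer w = new OutputStreamWriter(out, "UTF-8");
            w.write("Hello, world");
            w.close();
        }
    }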

  • Weblogic and byte order mark in files

    We have a web application with static content - HTML files, JS files, images, etc.
    There is a byte order mark at the beginning of all the HTML files.
    These files were generated by some tool, so I cannot modify them.
    We deploy this application on Weblogic.
    When I try to access this web application via a direct link to Weblogic, I get a lot of JavaScript errors.
    But when I access the page via an Apache proxy, everything is OK.
    Yet Apache forwards all requests directly to Weblogic.
    And I do not have such errors when the application is deployed on JBoss.
    In that case I can access the application both via a direct link to JBoss and via the proxy.
    Does anybody have an idea why I cannot access the application via a direct link to Weblogic?

    I have found a solution for this problem.
    I've just added the following MIME mappings to web.xml:
    <mime-mapping>
        <extension>xml</extension>
        <mime-type>text/xml</mime-type>
    </mime-mapping>
    <mime-mapping>
        <extension>js</extension>
        <mime-type>text/javascript</mime-type>
    </mime-mapping>

  • How can I create files in unicode format without "byte order mark"?

    Hello together,
    I have to export files in UTF-8 format and have to send them to another partner system which works with linux as operating system.
    I have tried the different possibilities for creating files with ABAP, but nothing works 100% the way I want.
    Some examples:
    1.)
    OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    If I create a file in this way and download it from the application server to the local system, the result for the file format in a Unicode text editor like Notepad is "ANSI as UTF-8". This means I have no BYTE ORDER MARK inside.
    But it is also possible that the file format is only ANSI if the file contains no "special characters", isn't it?
    In my test cases I create 3 files. 2 of them have the format "ANSI as UTF-8", and one only "ANSI".
    After transfer to Linux, the reported file formats are UTF-8 twice and ASCII once.
    2.)
    OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE ORDER MARK.
    With this syntax the result in the local editor looks OK; the format really is "UTF-8".
    But I get problems with the system that receives the files.
    All files have the file format UTF-8 on Linux, but the interface/script cannot read a file with a BYTE ORDER MARK.
    This is a very big problem for me.
    Does anybody know whether it is possible to force creation in UTF-8 without a BYTE ORDER MARK?
    This means more or less the first example, but all files should have UTF-8 format!
    Thanks in advance
    Christian

    So it is not possible to create a pure Unicode file without the byte order mark?
    You wouldn't happen to know how a file with a byte order mark should be read on a Linux system?
    Or whether this is possible at all?
    Regards
    Christian

  • Byte Order Mark (BOM) not found in UTF-8 file download from XI

    Hi Guys,
    Facing difficulty downloading a file from XI in UTF-8 format with a byte order mark.
    The receiver file adapter has been configured to download the file in UTF-8 format, but the byte order mark is missing. The same works well for UTF-16: we can see the byte order mark "FEFF" for UTF-16BE (Unicode big endian) at the beginning of the file.
    As per SAP help, UTF-8 is supposed to be the default encoding for the TEXT file type.
    Configuring the Receiver File/FTP Adapter in the SAP help link:
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/frameset.htm
    Could you please advise how to get a BOM into the UTF-8 file, as it is very important for the outbound file to be loaded into our vendor system.
    Thanks.
    Best Regards
    Thiru

    Hi!

    Had the same problem. But here we create a "CSV" file which must have the BOM, otherwise it will not be recognized as UTF-8.

    Therefore I've done the following:
    Created a simple destination structure which represents the CSV and did the mapping with the graphical mapper. The destination structure looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <ONLYLINES>
         <LINE>
              <ENTRY>Hello I'm line 1</ENTRY>
         </LINE>
         <LINE>
              <ENTRY>and I'm line 2</ENTRY>
         </LINE>
    </ONLYLINES>

    As you can see, the "ENTRY" element holds the data.

    Now I've created the following Java mapping and added it within the interface mapping as a second step after the graphical mapping:
    ---cut---
    package sfs.biz.xi.global;

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    import com.sap.aii.mapping.api.StreamTransformation;
    import com.sap.aii.mapping.api.StreamTransformationException;

    public class OnlyLineConvertAddingBOM implements StreamTransformation {

        public void execute(InputStream in, OutputStream out) throws StreamTransformationException {
            try {
                byte BOM[] = new byte[3];
                BOM[0] = (byte) 0xEF;
                BOM[1] = (byte) 0xBB;
                BOM[2] = (byte) 0xBF;
                String retString = new String(BOM, "UTF-8");
                Element ServerElement;
                NodeList Server;

                DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
                Document doc = docBuilder.parse(in);
                doc.getDocumentElement().normalize();
                NodeList ConnectionList = doc.getElementsByTagName("ENTRY");
                int count = ConnectionList.getLength();
                for (int i = 0; i < count; i++) {
                    ServerElement = (Element) ConnectionList.item(i);
                    Server = ServerElement.getChildNodes();
                    retString += Server.item(0).getNodeValue().trim() + "\r\n";
                }

                out.write(retString.getBytes("UTF-8"));

            } catch (Throwable t) {
                throw new StreamTransformationException(t.toString());
            }
        }

        public void setParameter(Map arg0) {
            // TODO Auto-generated method stub
        }

        /*
        public static void main(String[] args) {
            File testfile = new File("c:\\instance.xml");
            File testout = new File("C:\\testout.txt");
            FileInputStream fis = null;
            FileOutputStream fos = null;
            OnlyLineConvertAddingBOM myFI = new OnlyLineConvertAddingBOM();
            try {
                fis = new FileInputStream(testfile);
                fos = new FileOutputStream(testout);
                myFI.setParameter(null);
                myFI.execute(fis, fos);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        */

    }
    ---cut---
    This mapping searches for all "ENTRY" tags within the XML structure and builds one big string which starts with the UTF-8 BOM and then appends each ENTRY element, separated by CR/LF.

    We use this as the payload for a mail adapter (sending via SMTP), but it should also work with the file adapter.

    Hope it helps.
    Rene

    Besides: could someone tell SAP that this editor is the worst editor I've ever seen. Maybe these guys should copy something from Wikipedia :-((
    Edited by: Rene Pilz on Oct 8, 2009 5:06 PM

  • UNICODE Byte Order Marker at beginning of text files

    Hi,
    I'm running into problems when reading text from a number of text files, some of which are plain US-ASCII text and others which are also plain US-ASCII content but contain a Unicode UTF-8 Byte Order Mark at the beginning of the file, i.e. the bytes 0xEF 0xBB 0xBF.
    I open each file using standard :
    InputStream fis = new FileInputStream(fileName);
    Reader fileReader = new InputStreamReader(fis);
    However, in those cases where a BOM is present, the first 3 characters of my stream are the BOM bytes above, which I would have expected to be stripped automatically. When I set the encoding in the InputStreamReader I still get a single garbage character, whereas when I do the same with UTF-16 I get only the file's characters, as expected.
    Do I need to open the file as a byte stream, check the BOM myself, and then derive the encoding I should open the file with? And if so, for UTF-8 must I then also discard the first 3 bytes?
    Please help, as I don't want to have to do this if possible, and I hope someone can understand my problem.
    My JVM environment is 1.4.2 on XP.
    Many thanks,
    Henry

    Probably not. Some of the Unicode encoding types... the list is here:
    http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html
    support the BOM, so you'd only need to know that it's UnicodeBig or UnicodeLittle or whatever it really is.
    Of course, if you don't know what it is, that is a problem. You can probably assume the BOM bytes are actually that, but technically, you can't generally infer any particular encoding type by just reading the file. I mean, who's to say that a file is UTF-8 encoded or ISO8859-1? Yes, if it is UTF-8, and it includes chars that are of multi-byte sets (Chinese, for example), then many characters, if read as ISO8859-1, would look on screen like gibberish. But from the standpoint of reading a file at the character level, Java doesn't care and can't know.
    So to really know, you would either have to know ahead of time what the encoding is, or do some analysis of the data to see if it's likely one or the other, which is probably hard to do because it would require some sort of natural-language knowledge.
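    If you do end up checking the BOM yourself, a minimal sketch (the class and method names are just for illustration) is to peek at the first three bytes and push them back if they are not EF BB BF:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PushbackInputStream;
    import java.io.Reader;

    public class BomAwareReader {
        // Open a file as UTF-8, silently skipping a UTF-8 BOM if one is present.
        public static Reader open(String fileName) throws IOException {
            PushbackInputStream in = new PushbackInputStream(new FileInputStream(fileName), 3);
            byte[] head = new byte[3];
            int n = in.read(head, 0, 3);
            boolean hasBom = n == 3
                    && (head[0] & 0xff) == 0xEF
                    && (head[1] & 0xff) == 0xBB
                    && (head[2] & 0xff) == 0xBF;
            if (!hasBom && n > 0) {
                in.unread(head, 0, n); // not a BOM: give the bytes back to the stream
            }
            return new InputStreamReader(in, "UTF-8");
        }
    }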

  • How to read binary files wrt specific BYTE size and length??

    Hello Everyone,
                            I have a project I want to accomplish. I have a binary file, and I would like to read the data and plot it on a waveform in a specific order and size.
    The data is 16-bit binary and needs to be read in chunks of 2 bytes.
    I have 30 bytes of sample 1,
    followed by 2 bytes of sample 2,
    followed by another 2 bytes of sample 3.
    Steps 2-4 should be repeated 10 times, and then I should read sample 4, which is 2 bytes.
    How should I do it? I don't have any VI built; all I have is the example VI.
    Can anyone please help me?
    Now on LabVIEW 10.0 on Win7

    smercurio_fc, sorry for the confusion, I will try my best to explain...
    1. No, I don't have to read the file again. Once it has been read, I used a while loop just to see the data updating (I press run, and before I can visualize it I have the waveforms; I can get rid of the while loop).
    2. I have 30 different values of 1 sample. Actually, the data is coming from a tri-axial accelerometer; each axis is 10 bytes (hence 3*10 = 30 bytes).
    3. I am repeating steps 2-4 10 times because the data was written into the binary file after sampling the sensors 10 times (if the first 3 samples are read at 1000 Hz, sample 4 was read at 1000/10 = 100 Hz).
    4. I am using the graphs to interpret the values, that's it. The values were already scaled when they were written to the binary file; I simply have to interpret them.
    I have made some changes in the VI: now I am reading only the first 30 bytes, in chunks of 10-10-10 bytes, and plotting the 3 samples simultaneously on a waveform chart (I will approach 1 sensor/sample at a time), running the loop 10 times. I have changed I8 to I16 now.
    Please let me know if it makes sense to you now.
    P.S. Each sample is one sensor's data.
    Now on LabVIEW 10.0 on Win7
    Attachments:
    data_read.vi ‏24 KB
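    Not LabVIEW, but as an illustration of the record layout described above, here is a rough Java sketch (the file name, the little-endian byte order, and treating values as signed 16-bit are assumptions to confirm):

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class ReadSamples {
        public static void main(String[] args) throws Exception {
            DataInputStream in = new DataInputStream(new FileInputStream("data.bin")); // example file
            // One record: 10 x (30 bytes of sample 1 + 2 bytes of sample 2 + 2 bytes of sample 3),
            // followed by 2 bytes of sample 4.
            byte[] record = new byte[10 * (30 + 2 + 2) + 2];
            in.readFully(record);
            ByteBuffer buf = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN); // byte order assumed
            for (int rep = 0; rep < 10; rep++) {
                short[] sample1 = new short[15]; // 30 bytes = 15 x 16-bit values (3 axes x 5 values)
                for (int i = 0; i < sample1.length; i++) {
                    sample1[i] = buf.getShort();
                }
                short sample2 = buf.getShort();
                short sample3 = buf.getShort();
                // ... hand the samples to whatever does the plotting ...
            }
            short sample4 = buf.getShort();
            in.close();
        }
    }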

  • TIF file byte order

    Are the TIF files LR writes MAC or IBM PC byte order?

    I think it is always what Photoshop, for historical reasons, calls IBM PC. This means little-endian byte order. In the early days Macs used Motorola processors that were big-endian (hence the outdated reference to Mac in the Photoshop dialog). Of course it is completely irrelevant nowadays, as a byte order swap is trivial and I don't think there is any software out there that cannot deal with either format just as well.
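    To see why the swap is trivial, here is a tiny Java example using nothing but the standard library (illustrative only):

    public class SwapDemo {
        public static void main(String[] args) {
            // Swapping the byte order of a 32-bit value is a single library call.
            int big = 0x11223344;
            int little = Integer.reverseBytes(big); // 0x44332211
            System.out.printf("%08X -> %08X%n", big, little);
        }
    }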

  • Select Byte Order when Savin a DAT file

    Hi,
    I need to save a DAT file selecting "Low -> High" as the byte order, but nothing works. No matter what I do, it is always stored as "High -> Low".
    I tried assigning the value "Low -> High" to the variables FileHdByteOrder & DataSetByteOrder, but when saving, the values revert to "High -> Low" and are ignored.
    Is there a way to do it?
    Thanks in advance.
    Marc.

    Hi Brad,
    Thanks for the clarification.
    We generate the DAT files with LabVIEW (any byte order can be selected) and then use a custom Java application to manage a database of results from the DAT files.
    Sometimes we need to work on the files using DIAdem, but that gives us many problems, as our Java application only supports big-endian DAT files. I think we will have to implement that feature in our application.
    Anyway, it would be nice to be able to select the byte order in DIAdem.
    Best regards,
    Marc.

  • I operate a Windows 8, 64-bit computer. I have ordered and downloaded the following product for AU$129: Adobe Photoshop Elements 13 (Windows, English), on the understanding that it would convert PDF files to Word. It is not doing so. Is this a wrong order?


    if you're trying to convert pdf files to word files, you want acrobat pro or acrobat standard, Buying guide | Adobe Acrobat XI Standard
    http://helpx.adobe.com/x-productkb/policy-pricing/return-cancel-or-change-order.html
    http://helpx.adobe.com/x-productkb/global/phone-support-orders.html

  • I want to order files (e.g., photos) in a particular order (not necessarily an order that is an option in macOS), i.e., my own custom order, and I want them to STAY in that order until I change the order. Is this possible?


    Custom sort order is not a typical option in the list view of any desktop computing systems I know of. The way you're seeing it work is also how it works in Windows and other systems.
    There is no "None" sort order. What you're seeing is a "None" option under "Arrange By." Arranging is not the same as sorting; arranging is how the files are grouped in the window. For example, you can arrange (group) files by Kind while sorting by Name. You'll notice that if you are in List view and you choose Arrange By "None," in the window there is still a sort triangle next to the column name that it's sorting by.
    If Arrange is set to None, the files appear in one big group (the traditional view) that is controlled by the Sort order (where there is no None option).
    I didn't even know the difference between Sort and Arrange until I saw your question and did a little research! This Macworld article helped:
    How to arrange and sort files in Lion Finder (also applies to later OSs like Mavericks)
    As for your main question, how to do a custom sort order in the Finder, you can only rearrange files manually in Icon view. As with other operating systems, if you want to create a custom sort order for photos there are other ways:
    Use a photo organizer application like iPhoto or Lightroom. Some let you have custom sort orders in folders, but others only let you have a custom sort inside a different container like an "album" or "collection". Having those additional container types is part of why photos are often best managed in an organizer application and not on the desktop.
    Rename the files. You can use a renaming utility program to rename the photos with numbers that will enforce your sort order when sorted by name on the desktop, like
    01 Photo.jpg
    02 Photo.jpg
    03 Photo.jpg
    or
    Photo 01.jpg
    Photo 02.jpg
    Photo 03.jpg

  • Order of directories and files listed by the ls command

    By default, the "ls" command lists directories and non-directories separately in lexicographical order.
    Is there a possibility to change that order of listing and sorting?
    For example, if I want "ls" to list my directories and files not by name but by change date by default, without using any additional options, what will I have to do?
    Is there a command or a setting which influences "ls"'s order of sorting?

    I don't think the defaults for ls can be changed as it is a compiled binary located in /bin. Besides compiling your own version of ls, one solution is to add lines such as the following to /etc/profile or (presuming you're using BASH) ~/.bash_profile and opening a new Terminal window:
    alias l="ls -l"
    alias ll="ls -la"
    I have two such lines in my own profile so that when I type "l" followed by the return key I get the output of "ls -l" and issuing "ll" gives me a listing of dot-files, too.
    Probably not exactly what you were looking for, but it does simplify the command issued at the command line and avoids compiling your own.
    hth,
    Johnnie Wilcox
    aka mistersquid

  • ConvertToClob and byte order mark for UTF-8

    We are converting a blob to a clob. The blob contains the UTF-8 byte representation (including the 3-byte byte order mark) of an XML document. The clob is then passed as a parameter to xmlparser.parseClob. This works when the database character set is AL32UTF8, but on a database with character set WE8ISO8859P1 the clob contains a '¿' before the '<'.
    I would assume that the ConvertToClob function would understand the byte order mark for UTF-8 in the blob and not include any part of it in the clob. The byte order mark for UTF-8 consists of the byte sequence EF BB BF. The last byte, BF, corresponds to the upside-down question mark '¿' in ISO-8859-1. To me, it seems as if ConvertToClob is not converting correctly.
    Am I missing something?
    code snippets:
    l_lang_context number := 1;
    dbms_lob.createtemporary(l_file_clob, TRUE);
    dbms_lob.convertToClob(l_file_clob, l_file_blob,l_file_size, l_dest_offset,
                                       l_src_offset, l_blob_csid, l_lang_context, l_warning);
    procedure fetch_xmldoc(p_xmlclob in out nocopy clob,
                                       o_xmldoc out xmldom.DOMDocument) is
    parser xmlparser.Parser;
    begin
      parser := xmlparser.newParser;
      xmlparser.parseClob(p => parser, doc => p_xmlclob);
      o_xmldoc := xmlparser.getDocument(parser);
      xmlparser.freeParser(parser);
    end;
    The database version is 10.2.0.3 on Solaris 10 x86_64.
    Eyðun
    Edited by: Eyðun E. Jacobsen on Apr 24, 2009 8:58 PM

    can this be of some help? http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions027.htm#SQLRF00620
    Regards
    Etbin
