Converting 'little endian bytes' to short

I need to convert two bytes in little endian format (low-order byte first) into a short. I wrote an algorithm that I thought would solve this problem. It seems to work most of the time; however, occasionally it produces the wrong value. I was hoping someone out there could help me solve this problem.
public static short ByteToShort(byte one, byte two) {
    // little endian format
    // remember LO then HO bytes
    short result = 0;
    result = (short) ((two << 8) + one);
    return result;
}

Bytes one and two are promoted to int in the expression (before being cast back to short in the assignment). We must be extra careful here because bytes in Java are signed.
public static short ByteToShort(byte one, byte two) {
    // little endian format
    // remember LO then HO bytes
    short result = 0;
    result = (short) (((two << 8) & 0xFF00) | (one & 0x00FF));
    return result;
}
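
To see the failure mode concretely, here is a minimal, self-contained comparison (the class name, method names, and sample values are mine, for illustration only): the unmasked version corrupts the high byte whenever the low byte is negative.

public class ByteToShortDemo {
    // Original version: fails when the low byte is negative, because
    // (byte) 0xFF widens to the int 0xFFFFFFFF before the addition.
    static short broken(byte one, byte two) {
        return (short) ((two << 8) + one);
    }

    // Masked version: & 0xFF strips the sign extension first.
    static short fixed(byte one, byte two) {
        return (short) (((two << 8) & 0xFF00) | (one & 0x00FF));
    }

    public static void main(String[] args) {
        byte lo = (byte) 0xFF, hi = 0x12;  // little endian pair, expect 0x12FF
        System.out.printf("broken: 0x%04X%n", broken(lo, hi) & 0xFFFF); // 0x11FF
        System.out.printf("fixed:  0x%04X%n", fixed(lo, hi) & 0xFFFF);  // 0x12FF
    }
}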

Similar Messages

  • Converting bytes to shorts using arrays and NIO buffers - is this a bug?

    I'm benchmarking the various methods for converting a sequence of bytes into shorts. I tried copying a byte array, a direct NIO buffer and a non-direct NIO buffer to a short array (using little-endian byte order). My results were a little confusing to me. As I understand it, direct buffers are basically native buffers - allocated by the OS instead of on the runtime's managed heap, hence their higher allocation cost and ability to perform quicker IO with channels that can use them directly. I also got the impression that non-direct buffers (which have a backing array) are basically just regular Java arrays wrapped in a ByteBuffer.
    I didn't expect a huge difference between the results of my three tests, though I thought the two NIO methods might perform a bit quicker if the virtual machine was smart enough to notice that little-endian is my system's native byte order (hence allowing the chunk of memory to simply be copied, rather than applying a bunch of shifts and ORs to the bytes). Conversion with the direct NIO buffer was indeed faster than my byte array method, however the non-direct NIO buffer performed over ten times slower than even my array method. I can see it maybe performing a millisecond or two slower than the array method, what with virtual function call overhead and a few if checks to see what byte order should be used and what not, but shouldn't that method basically be doing exactly the same thing as my array method? Why is it so horribly slow?
    Snippet:
    import java.nio.*;

    public class Program {
        public static final int LOOPS = 500;
        public static final int BUFFER_SIZE = 1024 * 1024;

        public static void copyByteArrayToShortArrayLE(byte[] buffer8, short[] buffer16) {
            for (int i = 0; i < buffer16.length; i += 2) {
                buffer16[i >>> 1] = (short) ((buffer8[i] & 0xff) | ((buffer8[i + 1] & 0xff) << 8));
            }
        }

        public static void main(String[] args) {
            short[] shorts = new short[BUFFER_SIZE >>> 1];
            byte[] arrayBuffer = new byte[BUFFER_SIZE];
            ByteBuffer directBuffer = ByteBuffer.allocateDirect(BUFFER_SIZE).order(ByteOrder.LITTLE_ENDIAN);
            ByteBuffer nonDirectBuffer = ByteBuffer.allocate(BUFFER_SIZE).order(ByteOrder.LITTLE_ENDIAN);
            long start;
            start = System.currentTimeMillis();
            for (int i = 0; i < LOOPS; ++i) copyByteArrayToShortArrayLE(arrayBuffer, shorts);
            long timeArray = System.currentTimeMillis() - start;
            start = System.currentTimeMillis();
            for (int i = 0; i < LOOPS; ++i) directBuffer.asShortBuffer().get(shorts);
            long timeDirect = System.currentTimeMillis() - start;
            start = System.currentTimeMillis();
            for (int i = 0; i < LOOPS; ++i) nonDirectBuffer.asShortBuffer().get(shorts);
            long timeNonDirect = System.currentTimeMillis() - start;
            System.out.println("Array:      " + timeArray);
            System.out.println("Direct:     " + timeDirect);
            System.out.println("Non-direct: " + timeNonDirect);
        }
    }
    Result:
    Array:      328
    Direct:     234
    Non-direct: 4860

    Using JDK1.6.0_18 on Ubuntu 9.1 I typically get
    Array: 396
    Direct: 550
    Non-direct: 789
    I think your tests are a little too short for accurate timings because they are likely to be significantly influenced by JIT compilation.
    If I change to use a JIT warmup, i.e.
    public static final int LOOPS = 500;
    public static final int WARMUP_LOOPS = 500;
    public static final int BUFFER_SIZE = 1024 * 1024;

    public static void copyByteArrayToShortArrayLE(byte[] buffer8, short[] buffer16) {
        for (int i = 0; i < buffer16.length; i += 2) {
            buffer16[i >>> 1] = (short) ((buffer8[i] & 0xff) | ((buffer8[i + 1] & 0xff) << 8));
        }
    }

    public static void main(String[] args) {
        short[] shorts = new short[BUFFER_SIZE >>> 1];
        byte[] arrayBuffer = new byte[BUFFER_SIZE];
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(BUFFER_SIZE).order(ByteOrder.LITTLE_ENDIAN);
        ByteBuffer nonDirectBuffer = ByteBuffer.allocate(BUFFER_SIZE).order(ByteOrder.LITTLE_ENDIAN);
        long start = 0;
        for (int i = -WARMUP_LOOPS; i < LOOPS; ++i) {
            if (i == 0)
                start = System.currentTimeMillis();
            copyByteArrayToShortArrayLE(arrayBuffer, shorts);
        }
        long timeArray = System.currentTimeMillis() - start;
        for (int i = -WARMUP_LOOPS; i < LOOPS; ++i) {
            if (i == 0)
                start = System.currentTimeMillis();
            directBuffer.asShortBuffer().get(shorts);
        }
        long timeDirect = System.currentTimeMillis() - start;
        for (int i = -WARMUP_LOOPS; i < LOOPS; ++i) {
            if (i == 0)
                start = System.currentTimeMillis();
            nonDirectBuffer.asShortBuffer().get(shorts);
        }
        long timeNonDirect = System.currentTimeMillis() - start;
        System.out.println("Array:      " + timeArray);
        System.out.println("Direct:     " + timeDirect);
        System.out.println("Non-direct: " + timeNonDirect);
    }
    then I typically get
    Array: 365
    Direct: 528
    Non-direct: 750
    and if I change to 50,000 loops I typically get
    Array: 37511
    Direct: 57199
    Non-direct: 73913
    You also seem to have a bug in your copyByteArrayToShortArrayLE() method. Should it not be
    public static void copyByteArrayToShortArrayLE(byte[] buffer8, short[] buffer16) {
        for (int i = 0; i < buffer8.length; i += 2) {
            buffer16[i >>> 1] = (short) ((buffer8[i] & 0xff) | ((buffer8[i + 1] & 0xff) << 8));
        }
    }
    Edited by: sabre150 on Jan 27, 2010 9:07 AM

  • Swapping little endian short to big endian short

    I am trying to read an unsigned short value that a C program generates in my Java MIDlet. I am reading the data using a ByteArrayInputStream in conjunction with a DataInputStream. I tried calling readShort but it was returning bogus values. Here's the problem:
    The short I am trying to read in is 0xb500 (it was generated on a little endian machine); the value should be 181. I realized my mistake using readShort (it was sign-extending the value), so I called readUnsignedShort instead. Now the value of the int after the call to readUnsignedShort is 0xb500 (46336). I need to swap this value back to 0x00b5 (181) on the Java side. I have used the following function, but it returns 0xb50000 and I'm not exactly sure why.
    The swap function is like so:
    public static int swapInt(int i) {
        return ((i << 24)
              | ((i & 0x0000ff00) << 8)
              | ((i & 0x00ff0000) >>> 8)
              | (i >>> 24));
    }
    I do not have any idea what >>> is in Java. I found this function on Google Groups. If anyone could help me find a solution to my problem it would be much appreciated.

    An int is a signed 4 byte value (as I am sure you know)
    You are using an int to represent an unsigned 2 byte value.
    Now you want to swap the 2 bytes of your unsigned 2 byte value.
    This means that you only want to swap the two least significant bytes of your int.
    Oh, by the way, USE CODE TAGS TO FORMAT CODE!!!
    public static int swapUnsignedShort(int i) {
        return ((i & 0x00FF) << 8) | ((i & 0xFF00) >> 8);
    }
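
    A quick sanity check against the value from the question (the wrapper class and main are mine, added so the snippet runs standalone):

    public class SwapDemo {
        public static int swapUnsignedShort(int i) {
            return ((i & 0x00FF) << 8) | ((i & 0xFF00) >> 8);
        }
        public static void main(String[] args) {
            System.out.println(swapUnsignedShort(0xb500)); // prints 181, i.e. 0x00b5
        }
    }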

  • Big endian, little endian, and converting to another datatype.

    Hi all,
    I'm working on a sound visualization program. I'm putting the sound into byte arrays, and I then need to convert those bytes into ints to draw onto the screen. This is easy if the sounds are 8-bit, because you don't have the endian issue. How do I take a byte[] and pull ints from it that represent the waveform that the bytes were pulled from? Is it something like this (where soundBytes is the byte[] pulled from a sound file):
    int sampleFirstPart = (int)soundBytes[0];
    int sampleSecondPart = (int)soundBytes[1];
    int putTogether = sampleFirstPart + sampleSecondPart * 128;
    That's just a wild guess, and I'm not really sure what order these things should go in. How do you do it if it's big-endian? What about if it's little-endian? Is there a preference of one over the other? Does the fact that I'm working with wav files make a difference?
    I'm just full of questions, but any amount of help you can give is very very much appreciated.
    thanks,
    Matt

    int value = soundBytes[x] + (soundBytes[x+1] << 8); // LE
    or
    int value = (soundBytes[x] << 8) + soundBytes[x+1]; // BE
    I think the LE and BE labels are right, but in any case each calculation is correct for one byte order or the other.
    However, you need to know the format of the file to know if it's one or the other. You can't really tell from the bytes whether it's one or the other. There should, presumably, be some way to know from the audio format header. Either there'd be a flag to indicate which, or it would be assumed one way or the other because it's of format X. I think that wav files would be LE, but don't quote me on that.
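
    Note that both lines above will misbehave for negative byte values because of sign extension; a masked sketch (the method names are mine, assuming 16-bit signed samples) looks like this:

    // Assemble a signed 16-bit sample from two bytes; the & 0xff masks
    // prevent sign extension from corrupting the other byte, and the
    // cast to short restores the sample's sign.
    static int sampleLE(byte[] soundBytes, int x) {
        return (short) ((soundBytes[x] & 0xff) | ((soundBytes[x + 1] & 0xff) << 8));
    }

    static int sampleBE(byte[] soundBytes, int x) {
        return (short) (((soundBytes[x] & 0xff) << 8) | (soundBytes[x + 1] & 0xff));
    }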

  • Converting byte[] to short, long, etc.

    I need to packetize the data with my own header embedded in the data. The header field is composed of data types: short, byte, and long. Since the header will be transported over the network using UDP socket, it will have to be converted to type byte[].
    Is there a clean way to convert from byte[] to short, and long data types? Or do I have to concatenate bytes and type cast them? How do I even type cast byte[] to short? Thanks.

    Have a look at the ByteBuffer class. You can wrap a byte array in such a buffer and write other types (like ints or longs etc.) to the byte array through the buffer. Of course you can read those values back again using the buffer.
    kind regards,
    Jos
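
    A minimal sketch of that approach (the field widths and values are hypothetical, not from the original question):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class HeaderDemo {
        public static void main(String[] args) {
            // Write a short, a byte and a long into a byte[] via a ByteBuffer.
            ByteBuffer out = ByteBuffer.allocate(2 + 1 + 8).order(ByteOrder.BIG_ENDIAN); // network order
            out.putShort((short) 42).put((byte) 7).putLong(123456789L);
            byte[] packet = out.array(); // ready to hand to a DatagramPacket

            // Read the fields back by wrapping the received bytes.
            ByteBuffer in = ByteBuffer.wrap(packet).order(ByteOrder.BIG_ENDIAN);
            System.out.println(in.getShort() + " " + in.get() + " " + in.getLong());
        }
    }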

  • How to extract a double from a little endian stream?

    I want to support Java 1.3.1, which means that I cannot use class ByteBuffer.
    I would need ByteBuffer with the methods order and getDouble.
    I read the socket in little endian format (thus I cannot use the readDouble method of the class DataInputStream). Is there any solution in Java 1.3.1 to extract a double from 8 bytes?
    In other words, if I want to keep my code compatible with Java 1.3.1, do I have to change the order of the bytes manually and then also extract the double value manually (extracting the sign bit, the exponent and the significand to calculate the double value is possible but quite stupid if there is another way...)
    It would also help to have the release notes for Java 1.4.2 (the link on the SUN pages points to nowhere). I am especially interested in the history of the development of ByteBuffer. Did the functionality of order (LITTLE_ENDIAN / BIG_ENDIAN) exist in any other way in Java 1.3.1?
    P.S. I put another case to this forum earlier today, and even added it to my watch list, and even got an answer, BUT after relogin there was not even a sign of that case... I sent feedback to SUN but I'm sorry for not being able to get back to the person who answered my case.

    Thank you for your help, meloro77!
    I modified your masks and shifts and then finally created the proper method for the double values.
    There was one more trick needed: each byte MUST be converted into a long value before running the shift! Otherwise the byte and the result are treated as int (32 bits), and any bits higher than that would be discarded.
    For your and everybody else's fun and joy, I include here the code which I used:
    byte[] doubleData = new byte[8];
    // ... read the 8 bytes into doubleData ...
    double myDouble = 0.0;
    // Create a long value of the bytes
    long doublebits;
    short MASK_TO_BYTE = 0xFF;
    doublebits = ((long) doubleData[7] & MASK_TO_BYTE)
               | ((long) (doubleData[6] & MASK_TO_BYTE) << 8)
               | ((long) (doubleData[5] & MASK_TO_BYTE) << 16)
               | ((long) (doubleData[4] & MASK_TO_BYTE) << 24)
               | ((long) (doubleData[3] & MASK_TO_BYTE) << 32)
               | ((long) (doubleData[2] & MASK_TO_BYTE) << 40)
               | ((long) (doubleData[1] & MASK_TO_BYTE) << 48)
               | ((long) (doubleData[0] & MASK_TO_BYTE) << 56);
    // Let Double.longBitsToDouble convert the value to a double
    myDouble = Double.longBitsToDouble(doublebits);
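
    The "convert to long first" warning is easy to demonstrate (a small sketch with values of my own choosing): an int shift distance is taken modulo 32, so shifting an unwidened byte by 48 silently becomes a shift by 16.

    public class ShiftPitfall {
        public static void main(String[] args) {
            byte b = (byte) 0xAB;
            // int shift: the distance 48 is reduced mod 32, so this is really << 16
            long wrong = (b & 0xFF) << 48;
            // widening to long first gives the intended 64-bit shift
            long right = ((long) (b & 0xFF)) << 48;
            System.out.printf("wrong: 0x%016X%nright: 0x%016X%n", wrong, right);
        }
    }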

  • DataInput, DataOutput , unsigned, and little endian

    Why are readUnsignedInt(), readUnsignedLong(), etc missing from DataInput?
    Also, DataOutput doesn't even have writeUnsignedShort() or writeUnsignedByte()
    Another thing, Little Endian input/output streams are completely missing.
    I've found about 900 implementations of this missing functionality floating around the net. So the question is, why isn't it part of the standard api?
    --Stephen

    I realize this. There are hundreds of implementations floating around the net that implement it. Obviously a lot of people want it. Why is it not part of standard java?
    --Stephen
    Java does not support unsigned byte/short/int/long. If you need these, you need to read the right width and convert to an int, long, or BigDecimal to process the number you need.
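
    In practice "read the right width and convert" looks something like this (a sketch; the sample data is mine):

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.util.Arrays;

    public class UnsignedReads {
        public static void main(String[] args) throws IOException {
            byte[] data = new byte[7];
            Arrays.fill(data, (byte) 0xFF);
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            int u8   = in.readUnsignedByte();        // 255
            int u16  = in.readUnsignedShort();       // 65535
            long u32 = in.readInt() & 0xFFFFFFFFL;   // 4294967295; no readUnsignedInt()
            System.out.println(u8 + " " + u16 + " " + u32);
        }
    }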

  • How do I swap 64-bit and 32-bit floats from little-endian to big-endian

    I am trying to read a file that could contain a list of 64-bit floats or 32-bit floats that were written on a PC so they are little-endian.
    I need to convert the float values to big-endian so that I can process the values. I know that straight swapping each byte with the adjacent byte doesn't work (especially since they're floating-point values). I've tried swapping them end for end (i.e., byte 15 from the file becomes byte 0 in my array) and that didn't work either.
    I know that if I were to read the little-endian float into the big-endian float type (float or double) the format is pretty much lost (from the little I understand about how floating-point values are stored in memory).
    So, what I need is a way to read in a series of little-endian floating point values (64-bit and 32-bit) into a big-endian array of floating point values.
    Anyone have any ideas on how to do this? Any help would be much appreciated.

    A 64-bit double is represented by the sign bit, an 11-bit (biased) exponent
    followed by a 52-bit mantissa. Both x86 and SPARC use the exact same
    representation for 64-bit double. The only difference is the endianness
    when stored to memory, as you observed.
    A 32-bit float is represented by the sign bit, an 8-bit (biased) exponent
    followed by a 23-bit mantissa. Again, both x86 and SPARC use the exact
    same representation for 32-bit float modulo endianness.
    As you can see, a 64-bit double is not merely a pair of 32-bit floats.
    You need to know exactly how the floating-point data was written:
    if a 32-bit float was written, you must endian-swap it as a 32-bit float;
    if a 64-bit double was written, you must endian-swap it as a 64-bit double.
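
    One way to do the per-width swap in Java (a sketch; it assumes Java 5+ for reverseBytes and that the raw bytes are pulled off the stream big-endian by a DataInput):

    import java.io.DataInput;
    import java.io.IOException;

    public class EndianFloats {
        // Read a little-endian 32-bit float: swap the raw bits, then reinterpret.
        static float readFloatLE(DataInput in) throws IOException {
            return Float.intBitsToFloat(Integer.reverseBytes(in.readInt()));
        }

        // Read a little-endian 64-bit double the same way, at 64-bit width.
        static double readDoubleLE(DataInput in) throws IOException {
            return Double.longBitsToDouble(Long.reverseBytes(in.readLong()));
        }
    }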

  • How to read data which is written in little endian using LabVIEW

    Dear all,
    My program in C++ creates a binary file with data as
    struct variable {
        int a;
        double b;
        double c;
        float d[30];
        float e[50];
        char time_date[30];
    };
    30 such records are stored in one .b file.
    When I read the file back using LabVIEW the data is not the same as stored.
    I think that my C++ program stores data in little endian format while LabVIEW retrieves it in big endian format.
    I checked and found that when I store a=2, retrieving it back LabVIEW displays it as HEX of 02 00 00 00.
    The same is the problem with double and float, so can somebody tell me how I can change LabVIEW to read in little endian format?
    I am using LabVIEW 7.1.
    Abhimaniu

    Yes, you are correct. LabVIEW and C use different endian formats. Do not worry, you have the toolbox in LabVIEW. The most important is the typecast function. With this you can convert anything into everything. And as long as you keep the number of bytes and the internal order intact, you will always be able to convert things back. I have made an example for you regarding this topic. It is in LV7.1. I have kept this version since the new butt ugly graph cursor introduced in LV8 is a gigantic bug, and should never have seen the light of day. Well, enough of my frustration.
    In the data manipulation palette you will find the typecast and tools for byte/integer swapping and byte/integer splitting and merging. I have done a similar thing before, but I do not remember exactly how to convert a C double into a LabVIEW DBL. Take a look at the number 123.123: in this number all the internal bytes are different, so by comparing the C version and the LabVIEW version byte by byte you will be able to convert it correctly. In your case the C struct is 4+8+8+(4x30)+(4x50)+30 = 370 bytes long (ignoring any padding the compiler may add), so this is the number of bytes you read each time. In your case I think it will be more convenient to read it as an array of 185 I16 (words), but this is up to you.
    Attachments:
    sample.vi 59 KB

  • Big-endian or little-endian

    Hello,
    Is JVM big-endian or little-endian? Or is it platform dependent?
    How does one check this?
    Regards.

    I also saw on the internet that Java is big-endian. But when you run this small piece of code,
    class Test {
        public static void main(String[] args) {
            short x = 10;
            byte high = (byte) (x >>> 8);
            byte low = (byte) x; /* cast implies & 0xff */
            System.out.println("x=" + x + " high=" + high + " low=" + low);
        }
    }
    the output is:
    x=10 high=0 low=10
    This sample code I have taken from "mindprod.com" site only.
    Is storage different from display? Or am I missing something here?
    Regards.

  • Little endian & big endian format system

    Hi,
    I have code which works fine on a little endian format system; can somebody please tell me how to get data from a big endian format system?
    Or does SAP have a setting I can turn on and then run my code on that system?
    Thanks,
    John.

    Hi, check this:
    OPEN DATASET dset IN LEGACY TEXT MODE [(BIG|LITTLE) ENDIAN] [CODE PAGE cp]
    Effect
    Data is read or written in a form which is compatible to BINARY MODE in Releases <= 4.6. This addition is primarily used to convert a file into the code page format specified already when it is opened. At runtime, the system uses the format of the system code page of the application server. The system saves the file then again in the code page specified. This procedure is important if data is exchanged between systems using different code pages. For more information, see READ DATASET and TRANSFER.
    Notes
    on BIG ENDIAN, LITTLE ENDIAN
    These additions specify the byte sequence in which to store numbers (ABAP types I, F, and INT2) in the file.
    These additions may only be used in combination with the additions IN LEGACY BINARY MODE and IN LEGACY TEXT MODE. If these are not specified, the system assumes that the byte sequence determined by the hardware of the application server is used in the file.
    If the byte sequence specified differs from that determined by the hardware of the application server, READ DATASET and TRANSFER make the corresponding conversions.
    These additions replace the language element TRANSLATE ... NUMBER FORMAT ... which must not be used in Unicode programs.
    on CODE PAGE cp
    This addition specifies the code page which is used to represent texts in the file.
    This addition may only be used in combination with the additions IN LEGACY BINARY MODE and IN LEGACY TEXT MODE. If this addition is not specified, the system uses the code page defined by the text environment current at the time a READ or TRANSFER command is executed (see SET LOCALE LANGUAGE).
    This addition replaces the language element TRANSLATE ... CODE PAGE ... which must not be used in Unicode programs.
    OPEN DATASET ... IN LEGACY BINARY MODE [(BIG|LITTLE) ENDIAN] [CODE PAGE cp]
    Effect
    Data is read or written in a form which is compatible to BINARY MODE in Releases <= 4.6. This addition is primarily used to convert a file into the code page format specified already when it is opened. At runtime, the system uses the format of the system code page of the application server. The system saves the file then again in the code page specified. This procedure is important if data is exchanged between systems using different code pages. For more information, see READ DATASET and TRANSFER.

  • Big endian - Little endian conversion

    Hi all, I'm a new user!
    I'm working with As400 and java using jtopen.
    With this code:
    PrintObjectTransformedInputStream is = spoolFile.getTransformedInputStream(printParms);
    BufferedInputStream bufIn = new BufferedInputStream(is);
    BufferedOutputStream bufOut = new BufferedOutputStream(new FileOutputStream("c:/x.tif"));
    int c;
    while ((c = bufIn.read()) != -1) {
        bufOut.write(c);
    }
    bufIn.close();
    bufOut.close();
    I'm getting a tiff stream and then writing it to a file. The tiff stream is in Big endian format, but I need a tiff in Little endian format.
    How can I convert the stream from Big endian to Little endian?
    Thanks

    TIFF is a complex format, containing fields of different lengths, so you aren't likely to succeed just by reversing every pair of bytes in the stream without following the TIFF format.
    You could look at the JAI (Java Advanced Imaging), which can read and write TIFF files in a variety of modes.

  • What is Little Endian / Big Endian in Sound settings for Apple Intermediate

    Hi.
    In Final Cut Express, I am trying to splice together multiple video clips, combining footage from:
    1) HDV camcorder imported into iMovie 08 as 960 x 540 (the "lower resolution" option from iMovie '08 import), 16-bit @ 48 KHz (Big Endian), 25 FPS
    2) AVI files (DiVX 512 x 384, MP3 at 44.1 KHz, 23.98 FPS)
    3) Canon Camera "movie" files (Apple OpenDML JPEG 640 x 480, 16-bit Little Endian, Mono @ 44.1 KHz, 30 FPS)
    Questions
    1) What is this little endian / big endian thing?
    2) What is the best codec for me to edit in? My targets are NTSC DVD and also a HD version served via iPod connected to HDTV.
    I am not sure what codec to convert everything to, so that I can edit without having to RENDER every time I do something. I tried to export using QuickTime Pro to convert to Apple Intermediate Codec but am not sure about the option for "Little Endian" (I am using an Intel Mac; I assume I do NOT use little endian? Can someone help clarify?)
    Many thanks!!!

    1. They're compression formats. Different codecs use different compression schemes.
    2. You should convert your material to QuickTime using the appropriate DV codec and frame rate.
    None of your material is HD. Some of it is low resolution, lower than even DV. There is no good way to make this material HD.

  • Any need for conversion from big endian to little endian?

    Hi,
    I am planning to migrate an Oracle 9i database on AIX 5.3 to Oracle 11g R2 on Windows 2008, and have planned to use transportable tablespaces. But prior to that task, is conversion from big endian to little endian required using RMAN?
    Appreciate any suggestions, comments and hints
    Thanks

    Hi,
    Check V$TRANSPORTABLE_PLATFORM; it shows the endianness of each supported platform. Given the results on my 11g, I suspect that you'll have to convert the tablespaces...
    SYSTEM@oracle11 SQL> select *
      2  from V$TRANSPORTABLE_PLATFORM
      3  ;
    PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
              7 Microsoft Windows IA (32-bit)      Little
              6 AIX-Based Systems (64-bit)         Big
              8 Microsoft Windows IA (64-bit)      Little
             12 Microsoft Windows x86 64-bit       Little
    HtH,
    Johan

  • TCP Programming / Why do we not need to worry about Big and Little Endian?

    Please help, I do not understand this concept please explain.
    The architecture of a CPU is either little-endian or big-endian; some modern CPUs allow a choice via software.
    The TCP/IP protocol standard specifies that all the bytes that make up an item must be sent in "network order", which happens to be big-endian. Intel Pentium CPUs are little-endian.
    This implies that on an Intel machine the TCP software will have to chop an int into bytes and then reverse the bytes before transmitting them.
    Why does the Java TCP software not need to perform the reversal?
    Thanks,
    Alex

    > But why would I need to use the DataOutputStream?
    You don't have to. But that's what the Java API provides for sending Java primitives over a stream. You wouldn't have to use that. You could chop the int into bytes yourself and send the bytes, and your Java code still wouldn't have to worry about the endianness of it, because the VMs handle that.
    DataOutputStream just does the chopping and reassembling for you, so it's easier than doing it yourself.
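
    For example, DataOutputStream is specified to emit the bytes of an int high byte first (network order) on every platform, which a small sketch can confirm:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class NetworkOrderDemo {
        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(0x12345678);
            for (byte b : bytes.toByteArray())
                System.out.printf("%02X ", b);   // 12 34 56 78, regardless of CPU
            System.out.println();
        }
    }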
