Bit bytes....bitten

OK, I'm used to VB, where bytes are unsigned (0-255). Is this possible in Java?
If not, how are bytes stored in Java?
eg in vb:
0000 0000 = 0
0000 0001 = 1
1111 1110 = 254
1111 1111 = 255
in java is it:
0000 0000 = -128
0000 0001 = -127
1111 1110 = 126
1111 1111 = 127

Bytes are signed, but that doesn't stop you from treating them as unsigned...
Byte values are like they usually are: 0000 0000 = 0, 0000 0001 = 1, ..., 0111 1111 = 127, 1000 0000 = -128, 1000 0001 = -127, ..., 1111 1111 = -1.
That is, values up to 2^7-1 behave normally. If the "sign bit" is 1, then the value displayed is -(~x+1), but the same algorithms for addition and multiplication that apply to unsigned numbers can still be used.
If you want the "unsigned" value of a byte you can do this:
System.out.println(b & 0xFF);
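A minimal sketch showing both views of the same bit pattern:

```java
public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 254;           // bit pattern 1111 1110
        System.out.println(b);         // signed view: prints -2
        System.out.println(b & 0xFF);  // unsigned view: prints 254
    }
}
```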

Similar Messages

  • Addition in Java bits, bytes and shift

    I have a byte array consisting of 8 bit values in each cell.
    When I do an addition of two cells - which gives me a 16 bit result I use the following:
    Integer a = (int) (arr[value] & 0xff);
    a = a << 8;
    Integer b = (int) (arr[secondValue] & 0xff);
    Integer c = a + b;
    This seems to work fine. My question is how I would add 3 (24-bit result) or 4 (32-bit result) cells.
    Would the following work: 24 bit result
    Integer a = (int) (arr[value] & 0xff);
    a = a << 16;
    Integer b = (int) (arr[secondValue] & 0xff);
    Integer c = (int) (arr[thirdValue] & 0xff);
    Integer d = a + b + c;
    I am not sure if I have got the shift (<<16) usage correct, or if it should be used in order to obtain the variables b or c.
    All help appreciated.

    khublaikhan wrote:
    Just to confirm for 32 bit it would be:
    // 32-bit
    int a = (int)(arr[value] & 0xff);
    int b = (int)(arr[secondValue] & 0xff);
    int c = (int)(arr[thirdValue] & 0xff);
    int d = (int)(arr[fourthValue] & 0xff);
    int e = (a<<24) + (b<<16) + (c<<8) + d;
    Integer eInt = Integer.valueOf(e);
    Actually, the more I think about it, you may need to use longs instead of ints for 32-bit (not 16- or 24-bit though). It depends on what your data actually is. If you're expecting your 32-bit number to be an unsigned integer, then you'd better go with long. This is because in Java, ints utilize two's complement, so if the high-order byte you read in for a 32-bit value is very large, the resulting int will be interpreted by Java as a negative number instead of the large positive value you expect.
    I'm probably not being very clear. Check out the Wikipedia article on two's complement for the details.
    In other words, if you wanted to get 32-bit values in this way, and the computed values are expected to be non-negative integers, use this:
    // 32-bit
    int a = (int)(arr[value] & 0xff);
    int b = (int)(arr[secondValue] & 0xff);
    int c = (int)(arr[thirdValue] & 0xff);
    int d = (int)(arr[fourthValue] & 0xff);
    long e = ((long) a << 24) + (b << 16) + (c << 8) + d; // cast a to long before shifting so a large first byte can't produce a negative int
    Long l = Long.valueOf(e);
    If you're actually reading in a 32-bit, two's complement value, then keep using int like you originally posted.
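To see the difference, here is a minimal sketch (the byte values FF 00 00 01 are made up for illustration): combining them in plain int arithmetic yields a negative number, while masking into a long keeps the unsigned value.

```java
public class Unsigned32Demo {
    public static void main(String[] args) {
        int a = 0xFF, b = 0x00, c = 0x00, d = 0x01; // four "bytes": FF 00 00 01

        int asInt = (a << 24) | (b << 16) | (c << 8) | d;
        System.out.println(asInt);           // -16777215 (two's-complement int)

        long asLong = asInt & 0xFFFFFFFFL;   // mask off the sign extension
        System.out.println(asLong);          // 4278190081 (the unsigned value)
    }
}
```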

  • Concat 4 8-bit bytes into 1 32-bit word

    I am just getting started up with LabView. I have an FPGA that is sending a signed (2's Comp) 32 bit vector to the PC serial port in four 8-bit chunks. I am having trouble figuring out how to recombine the four bytes into the original 32-bit word. It is a simple task in any programming language using concatenation but I am really having problems with this.

    Two ways come to mind immediately.
    1) Go from a string (VISA returns a string from a read) to a byte array then loop and shift each byte the correct amount while OR'ing the result with that from each loop iteration.
    2) Use the join numbers vi to combine multiple bytes into a single 32-bit word.
    I've attached a vi that demonstrates these two solutions.
    Attachments: 28 KB
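For readers doing this in text-based code rather than LabVIEW, option 1 (shift each byte and OR it in) might look like this Java sketch; big-endian order (most significant byte first) is an assumption here:

```java
public class JoinBytes {
    // Recombine four 8-bit chunks (most significant first) into one signed 32-bit word.
    static int join(byte[] chunks) {
        int word = 0;
        for (byte chunk : chunks) {
            word = (word << 8) | (chunk & 0xFF); // mask so sign extension can't leak in
        }
        return word;
    }

    public static void main(String[] args) {
        byte[] chunks = { (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFE };
        System.out.println(join(chunks)); // prints -2: the original two's-complement value
    }
}
```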

  • Bits & Bytes

    I'm trying to read a binary switch file. The file consists of bytes, each of which I need to treat as two 4-bit values; e.g., in my first byte position I have 02. I'm trying to read this binary file and write to another file in decoded ASCII/Hex format. I'm facing the following problems:
    1. If the value is less than 0xF, I'm not able to process the byte.
    1.1) The byte is always being read in reverse order, i.e., 20 instead of 02. This problem occurs when I read any byte, whether greater than or less than 0xF. But for values greater than 0xF, I'm taking them in a String array and writing them in reverse order.
    So, when I read the first bit, control comes out of the function where I'm doing the processing, into main, and when control goes back into the function, it reads the byte read earlier again. If I try to take the values in a 2D array, the value is overwritten.
    2. How do I check for EOF in a binary file?
    So, how should I handle this? Any answers??? My code is shown below:
    Thanks & regards,
    public class fileinputstream1_3 {
        static FileInputStream inFile;
        static FileOutputStream outFile;
        static FileWriter fileWriter;
        static DataInputStream inData;
        static PrintWriter outData;

        public static void toHex(byte v) throws IOException {
            double tempVal1 = 0;
            double tempVal2 = 0;
            double tempVal3 = 0;
            double tempVal4 = 0;
            int tempVal5 = 0;
            int tempVal8 = 0;
            int testVal1 = 0;
            int j = 0;
            String testVal;
            String tempVal6 = "";
            String tempVal7[] = new String[2];
            tempVal7[1] = "";
            try {
                File f = new File(System.getProperty("Programs"), "cdr6.txt"); // find path to o/p file
                fileWriter = new FileWriter(f, true);
                outData = new PrintWriter(fileWriter);
                // Convert a 4-bit value into a hex digit char
                if (v >= 0x0 && v <= 0xF)
                    fileWriter.write((char) (v < 0xA ? v + '0' : v - 0xA + 'A'));
                if (v > 0xF) {
                    tempVal1 = (double) v;
                    while (tempVal1 > 1.0) {
                        tempVal1 = tempVal1 / 16.0;
                        tempVal2 = (int) tempVal1;
                        tempVal3 = tempVal1 - tempVal2;
                        tempVal4 = tempVal3 * 16;
                        if (tempVal4 >= 10.0) {
                            tempVal5 = (int) tempVal4;
                            tempVal6 = Integer.toHexString(0x10000 | tempVal5).substring(4).toUpperCase();
                        }
                        tempVal5 = (int) tempVal4;
                        if (tempVal6 == "") {
                            tempVal6 = Integer.toString(tempVal5);
                            testVal = Double.toString(tempVal4);
                            testVal1 = (int) tempVal4;
                            tempVal1 = tempVal2;
                            tempVal7[0] = Integer.toString(testVal1);
                            tempVal7[1] = tempVal6;
                        }
                    }
                    for (int i = 0; i <= 1; i++)
                        fileWriter.write(tempVal7[i]); // (reconstructed) write the two digit strings
                }
            } catch (IOException ioe) {
                System.out.println("Could not open the files. " + ioe);
            }
            System.out.println("Exiting toHex.");
        }

        public static void main(String[] args) throws Exception, IOException {
            int x;
            int size = 0;
            try {
                File f = new File(System.getProperty("Programs"), "cdr1");
                inFile = new FileInputStream(f);      // open the input binary file
                inData = new DataInputStream(inFile); // associate an input stream to the file
                f = new File(System.getProperty("Programs"), "cdr3.txt"); // find path to o/p file
                // this is just to reproduce the input file - not much practical use
                outFile = new FileOutputStream(f);
                outData = new PrintWriter(outFile, true);
            } catch (IOException ioe) {
                System.out.println("Could not open the files. " + ioe);
            }
            size = inFile.available(); // (reconstructed) bytes remaining in the input file
            byte bytearray[] = new byte[size + 1];
            try {
                for (int loop_index = 0; loop_index < bytearray.length; loop_index++) {
                    bytearray[loop_index] = inData.readByte(); // (reconstructed) read one byte
                    toHex(bytearray[loop_index]);              // (reconstructed) decode it to hex
                }
            } catch (IOException ioe) {
                System.out.println(" IO error " + ioe + " occured.");
            }
            System.out.println("Exiting main...");
        }
    }

    I don't understand the question. A byte is always a byte, and a byte is always 8 bits, and there's no ... /Kaj

    Not necessarily. While currently more or less standardised, there is hardware that has wider or narrower bytes.
    Whether Java is supported on those platforms, and how it would handle such situations if it were, I don't know.

    But a byte is still 8 bits, even if the hardware uses 9 bits internally (let's say that the hardware uses one bit for CRC)?
    I might make a slight change to that: in Java, a byte is always 8 bits :)

  • Bits, bytes, and all the rest

    I need clarification on what some stuff really represents.
    My project is to build a Huffman tree (no problem). However, all tutorials and examples that I can find on the net take from a text file with the format <letter> <frequency>, such as:
    a 12
    b 3
    c 4
    d 8
    Mine has to read any kind of file, such as a text file.
    For example, if myDoc.txt contains:
    This is my document.
    I have to have my program read the bytes from the infile, count the frequency of each byte from 0 through 255. Then the frequencies must be placed on a list of single node Huffman trees, and build the tree from this list.
    I think I am having trouble because I just cannot get straight what a byte "looks like". My project says ensure you are not counting the "letters" of the infile, but the "bytes".
    So what does a byte look like? When I read in the file as above, what is a "byte" representation of the letter T, i,h, etc?
    Any ideas?

    OK, here is a little history lesson that you should have learned, or that should have been explained to you by your instructor, before he/she gave you the assignment to construct a Huffman tree.
    A bit is a binary digit which is either 0 or 1 -- it is the only thing that a computer truly understands. Think about it this way, when you turn on the light switch, the light comes on (the computer sees it as a 1), when you turned the switch off, the light goes out (the computer sees it as a 0).
    There are 8 bits to a byte -- you can think of it as a mouthful. In a binary number system, 2 to the power of 8 gives you 256. So, computer scientists decided to use this as a basis for assigning a numerical value to each letter of the English alphabet, the digits 0 to 9, some special symbols, as well as invisible characters. For example, 0 is equivalent to what is known as null, 1 is CTRL-A, 2 is CTRL-B, etc. (this may vary depending on the computer manufacturer).
    Now, what was your question again (make sure that you count the byte and not the letters)?
    As you can see, when you read data from a file, there may be characters that you don't see such as a carriage return, line feed, tab, etc.....
    And like they said, the rest is up to the students!
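A minimal sketch of the counting step, assuming the input is read with a plain `FileInputStream` (the filename is just an example):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ByteFrequency {
    public static void main(String[] args) throws IOException {
        int[] freq = new int[256];               // one counter per possible byte value
        try (InputStream in = new FileInputStream("myDoc.txt")) {
            int b;
            while ((b = != -1) {      // read() returns 0-255, or -1 at EOF
                freq[b]++;                       // count bytes, not "letters"
            }
        }
        System.out.println("frequency of byte 'T': " + freq['T']);
    }
}
```

For a text file, the "byte" for the letter T is simply its character code (84), so `freq['T']` is the count of that byte value.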

  • Bits, bytes and megabits

    In telecommunications (data rates) 1,000,000 bits in a megabit, apparently.
    In computing (file size)  1,048,576 bytes in a megabyte, apparently (1,048,576 bits in a megabit)
    There are different names available to distinguish them (e.g. kibibit/kilobit) but apparently, these and similar are often not used. Result? Confusion.
    I suspect that quite a few people don't realise this - I didn't. Makes a difference when you are trying to check your upload and download speeds and verify BT's claims about their speeds. 

    The way I do it is to multiply the results from speedtesters that give results in kbps [something like 74500/15500] by 1024.
    The fake results above would be 74500 x 1024 = 76,288,000 bps, which = 76.29 Mbps [approx 76.3 Mbps], which is very near the line max.
    74954 kbps = 76,752,896 bps = 76.75 Mbps
    15207 kbps = 15,571,968 bps = 15.57 Mbps
    These are my actual readings.
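Spelling the arithmetic out (assuming, as above, that one speedtester "kbps" is 1024 bits/s while one telecom Mbps is 1,000,000 bits/s):

```java
public class SpeedUnits {
    public static void main(String[] args) {
        long kbpsReading = 74954;                   // speedtester result in kbps
        long bitsPerSecond = kbpsReading * 1024;    // 76,752,896 bits per second
        double mbps = bitsPerSecond / 1_000_000.0;  // telecom megabit = 1,000,000 bits
        System.out.printf("%.2f Mbps%n", mbps);     // prints 76.75 Mbps
    }
}
```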

  • Int (32 bits) - byte (8 bits) using bitwise operators

    I am storing a 16-bit value into a 32-bit integer and trying to extract the most significant 8 bits and least significant 8 bits as follows:
    int n = 129;
    byte msb = (byte)(((byte)(n>>8))&0xff);
    byte lsb = (byte)(n&0xff) ;
    System.out.println ("msb = "+msb+" lsb = "+lsb);
    My msb is "0" and my lsb becomes "-127".
    The above code works correctly for positive numbers, but for values whose low byte is in the range 129-255, my lsb comes out as a negative number.
    How should the above code be to handle the lsb?

    My msb is "0" and my lsb becomes "-127".
    The above code works correctly for positive numbers,
    but for values in the range 129-255, my lsb comes out as
    a negative number.
    How should the above code be to handle the lsb?
    Your code works fine. Are you wondering why an int value of 129 shows up as a msb of 0 and a lsb of -127? It's because byte values are signed and a bit pattern of 10000001 is equal to -127.
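If you want to display the 0-255 values rather than the signed ones, mask each byte when printing; the same masks also rebuild the original value:

```java
public class MsbLsbDemo {
    public static void main(String[] args) {
        int n = 129;
        byte msb = (byte) (n >> 8);
        byte lsb = (byte) n;

        // & 0xFF converts each signed byte to its unsigned 0-255 value for display.
        System.out.println("msb = " + (msb & 0xFF) + " lsb = " + (lsb & 0xFF)); // msb = 0 lsb = 129

        int rebuilt = ((msb & 0xFF) << 8) | (lsb & 0xFF);
        System.out.println(rebuilt); // prints 129
    }
}
```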

  • Breaking Specific Bits/Bytes off Hexadecimal LIN Signal

    Hello All,
    I am currently working on implementing a LIN-Analog Converter in Labview, and in order to do so I must break off a specific set of bits and convert these to decimal format.  The VI currently breaks off the first 8 bits and converts them to decimal, however I am a bit confused on how to modify this for my present needs.  I need to break off the "3rd and 4th byte pairs" (i.e. bits 17-32).  This point may be illustrated more clearly in the attachment entitled "Bits of Interest".  Furthermore, my current VI is attached to this message.  Any assistance would be greatly appreciated!
    Attachments: Bits of Interest.jpg (15 KB), current VI (86 KB)

    In your case, you only care about specific bytes.  This makes life really easy.  Here are two possible solutions for this.
    Get U16 from String Data.png 13 KB

  • Bit permutation inside a byte array

    Hi everyone,
    I'm trying to implement the DES algorithm; I want to permute specified bits inside an 8-byte array.
    I thought about extracting every bit from the byte array into a 64-byte array and then doing the permutation.
    But I don't see how to put these bits back into an 8-byte array to return the result of encryption/decryption.
    This is what I've done so far (inspired by sources I found on the internet):
    // extends 8 byte array into a 64 byte array
    public static byte[] extendsByteArray(byte[] bytes) {
        byte[] tab = bytes;
        short len = (short)(desBlockSize * (short)8);
        byte[] bits = new byte[len];
        short i, j;
        for (i = (short)(desBlockSize - (short)1); i >= 0; i--) {
            for (j = (short)0; j < desBlockSize; j++)
                bits[--len] = (byte)((tab[i] >> j) & 0x01);
        }
        return bits;
    }

    // 64 byte array to 8 byte array
    public static byte[] transformByteArray(byte[] bits) {
        byte[] tab = bits;
        byte[] bytes = new byte[desBlockSize];
        short i, j = (short)0;
        for (i = (short)0; i < (short)8; i++) {
            bytes[i] = (byte)((bits[j] * (short)2^7) + (bits[j+(short)1] * (short)2^6)
                     + (bits[j+(short)2] * (short)2^5) + (bits[j+(short)3] * (short)2^4)
                     + (bits[j+(short)4] * (short)2^3) + (bits[j+(short)5] * (short)2^2)
                     + (bits[j+(short)6] * (short)2) + (bits[j+(short)7]));
            j += (short)7;
        }
        return bits;
    }
    This implementation doesn't seem to work. Do you have any ideas why it doesn't work, or is there an easier solution to permute bits? I'm stuck on this for the moment.
    Thanks in advance for your answers,

    If you are using JavaCard please do not use multiplications as you used in your code. Integer multiplications are very slow in any platform that does not implement them in hardware, and the virtual machine that runs in JavaCards usually does not do any optimizations or Just-In-Time compiling - simply interprets the codes. Memory and CPU power are at a premium in Java Cards. Even if your Java Card has a hardware multiplier, usually it is reserved for RSA operations - it is not available for general use for Java programs.
    Instead of using x * 2 raised to the sixth power, use x << 6. It is faster and generates fewer bytecodes.
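One more pitfall in the posted code: `^` in Java is bitwise XOR, not exponentiation, so `(short)2^7` evaluates to 5, not 128. A shift-based sketch of packing eight 0/1 bits (most significant first) back into one byte:

```java
public class PackBits {
    // Pack 8 bits (each 0 or 1, MSB first) into one byte using shifts only.
    static byte pack(byte[] bits, int offset) {
        int b = 0;
        for (int i = 0; i < 8; i++) {
            b = (b << 1) | bits[offset + i]; // shift left, OR in the next bit
        }
        return (byte) b;
    }

    public static void main(String[] args) {
        byte[] bits = {1, 0, 0, 0, 0, 0, 0, 1}; // 1000 0001
        System.out.println(pack(bits, 0));      // prints -127 (0x81 as a signed byte)
    }
}
```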

  • "show process mem" cisco doesnt specify bits? or bytes? or kbytes?

    Cisco doesn't specify bits/bytes/kbytes.
    Perhaps Cisco thinks people are not dumb, so they can work out whether it is bytes or kilobytes. :)
    output of "#show processes memory" on my 4500:
    Total: 230487506, Used: 94774080, Free: 135713426
    I need to know free space so I can assign it to log file.
    This free "135713426" is supposed to be bytes, perhaps?
    tx for helping

    allocated and freed are in bytes. See the following link for additional details on the output from this command.
    Hope this helps

  • Bits in a Carrier file

    What would I need to do to write a program to display the bits in a carrier file?

    Do you want to scrape off the LSBs (Least Significant Bits)? Have a look at this for starters:
    public class LSBInputStream extends InputStream {
       private InputStream carrier; // the carrier input
       public LSBInputStream(InputStream carrier) { this.carrier = carrier; }
       public void close() throws IOException { carrier.close(); }
       public int read() throws IOException {
          int bits = 0;
          for (int i = 0; i < 8; i++) { // read 8 carrier bytes, one LSB from each
             int b =;
             if (b == -1) return (i == 0) ? -1 : bits; // eof reached in the middle?
             bits <<= 1;    // roll in the next bit
             bits |= b & 1;
          }
          return bits;
       }
    }
    This was from the top of my head, but I hope you get the picture
    kind regards,

  • Converting 8 bits to unicode?

    how do i convert 8 bit chars to unicode?
    thnaks in advance

    You have to prevent sign extension on the (signed) 8-bit byte:
    byte b = ...;
    char c = (char)(b & 0xff);
    kind regards,
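For a whole array in a known 8-bit encoding, the standard library can do the same mapping (ISO-8859-1 here is just an example; use whatever charset your data is actually in):

```java
import java.nio.charset.StandardCharsets;

public class BytesToUnicode {
    public static void main(String[] args) {
        byte[] raw = { 0x48, 0x69, (byte) 0xE9 };  // "Hi" plus 0xE9 ('é' in ISO-8859-1)
        String s = new String(raw, StandardCharsets.ISO_8859_1);
        System.out.println(s);                     // prints "Hié"
    }
}
```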

  • Bit operations, base64

    I have 4 bytes with 6 significant bits each (base64) which should be concatenated into 3 bytes, shifted to the left.
    I'm just wondering if this is right, or maybe if there's some better solution:
    oldbyte1 << 2;
    tempbyte2 = oldbyte2;
    tempbyte2 >> 4;
    newbyte1 = oldbyte1 | tempbyte2;
    oldbyte2 << 4;
    tempbyte3 = oldbyte3;
    tempbyte3 >> 2;
    newbyte2 = oldbyte2 | tempbyte3;
    oldbyte3 << 6;
    newbyte3 = oldbyte3 | oldbyte4;
    Stig.

    This is part of a decoding algorithm for Base-64 encoded content (RFC 1421). This encoding technique allows a binary file or message to be transmitted through applications or protocols that traditionally only work with human-readable data, such as News or Mail.
    Each group of 3 8-bit byte values, 24 bits, are split into a group of 4 6-bit values. Each of these values (0-63) is then mapped to a printable ASCII character [A-Z,a-z,0-9,+,/]. The process is reversed to recover the original binary values.
    Except for a few syntax errors, it looks "right". You need to turn those bit-shift operators into bit-shift assignments, and the result of your bitwise OR needs to be cast back to byte (the OR implicitly promotes both values to ints):
    tempbyte3 >>= 2;
    newbyte2 = (byte)(oldbyte2 | tempbyte3);
    oldbyte3 <<= 6;
    I typically have used an algorithm that first recombines the 6-bit values back into a 24-bit int and then splits them back into 8-bit values. I found that it works well when I'm passing the byte values into a method by array and want to avoid modifying the values:
        /**
         * decodes a group of 4 base-64 digital values, stored as
         * individual bytes, into 3 base-256 digital values.
         * @param src       input bytes
         * @param srcOffset index of the first byte to decode in src
         * @param dst       output bytes
         * @param dstOffset where in dst to start writing bytes to
         * @return the 3 output bytes rendered as a single int
         */
        static public int decodeBase64(byte[] src, int srcOffset, byte[] dst, int dstOffset) {
            int retval;
            int register = src[srcOffset];
            int i;
            for (i = 1; i < 4; i++) {
                register = (register << 6) | src[srcOffset + i];
            }
            retval = register;
            for (i = 2; i >= 0; i--) {
                dst[dstOffset + i] = (byte) register;
                register >>>= 8;
            }
            return retval;
        }
    I've found that your algorithm works well with individual bytes.

  • Transmitting a byte command via UDP

    I hope you'll forgive the newbie question, but my education is in aerospace engineering and not computer science...
    I have a verified specification document which tells me that in order to command a data source to begin transmitting data, I need to send it the following command via UDP:
    1 byte        1 byte               2 bytes                              n bytes
    Command Code =
    Where the data field is ignored for this command.
    My best understanding of this situation would be to use the Byte Array to String function on an array of U8s which looks like this:
    where the network order is big endian.  However, I have some functioning, inherited code which interfaces with the same data source and issues a start data command using an array which looks like this:
    In classic style, the author of this code has retired and I'm the only other LabVIEW programmer.  I'm not savvy enough to tell whether this array accomplishes the same task and it's my limited understanding of bytes that is causing the confusion, or if this command is combined with another command, or what.
    Any insight, leading questions, or valuable commentary is appreciated.

    I think I'm starting to understand how bits, bytes, and such behave with LabVIEW.  Kind of confusing without a solid background in computing...
    According to the documentation, when using the startup command there is no data payload required; it is ignored by the data source.  My only guess for the elements remaining after the 4th element is that they are simply placeholders so that the data source knows this is a completed command, or that the original programmer copied the array from one of the other commands (which required the data payload) and those are just residual values he knew were ignored and never bothered to delete.  It would be in keeping with the just-get-it-done approach this programmer favored.
    Thanks for the assistance.  I think I can interpret the other commands for this data source now.

  • [SOLVED] After qemu update: KVM does not work anymore

    I used to use qemu-kvm, as in
    qemu-kvm \
    -net nic,vlan=1 -net user,vlan=1 \
    -drive file=/home/flash-vm/hdd0,if=virtio,cache=writethrough \
    -vga std -vnc localhost:2 -usbdevice tablet \
    -m 1024
    However, now qemu-kvm has been removed. If I replace "qemu-kvm" with "qemu-system-x86_64" and try to run it with an extra argument  "-machine accel=kvm" or "-enable-kvm", what I get is
    Could not access KVM kernel module: Permission denied
    failed to initialize KVM: Permission denied
    The kernel modules are there:
    # lsmod | grep kvm
    kvm_intel 124064 0
    kvm 384721 1 kvm_intel
    and the processor is
    $ lscpu
    Architecture: x86_64
    CPU op-mode(s): 32-bit, 64-bit
    Byte Order: Little Endian
    CPU(s): 4
    On-line CPU(s) list: 0-3
    Thread(s) per core: 2
    Core(s) per socket: 2
    Socket(s): 1
    NUMA node(s): 1
    Vendor ID: GenuineIntel
    CPU family: 6
    Model: 37
    Stepping: 5
    CPU MHz: 1197.000
    BogoMIPS: 5321.24
    Virtualization: VT-x
    L1d cache: 32K
    L1i cache: 32K
    L2 cache: 256K
    L3 cache: 3072K
    NUMA node0 CPU(s): 0-3
    I have added my user to the group kvm as well, as instructed by pacman. Any help would be much appreciated!
    Last edited by ffcitatos (2013-03-07 06:47:06)

    Interestingly, mine worked without being in the new group. However, I think that's because I hadn't logged out since the update, so the session didn't know that I had been removed from the group. Though I could be missing something.
