Bytes and bits in Lookout

This is for a Lookout 5.0 application. The OPC Client Object reads an integer value (between 0 and 32767, plus a sign bit) that represents a 16-bit word. What I want is, knowing the integer value, to unpack the word into its 16 bits. How can I do this in Lookout?

I needed to do this and found a quick, mathematical way. Here it is:
Let's say you have a 16-bit word with Bit #1 being the least significant bit (LSB, farthest right) and Bit #16 being the most significant bit (MSB, farthest left). As I am sure you know, each bit has a decimal equivalent: Bit #1 = 1, Bit #2 = 2, Bit #3 = 4, Bit #4 = 8 ... Bit #15 = 16384, Bit #16 = 32768. The formula for a bit's decimal value (assuming the LSB starts at 1 and not zero) is 2^(Bit# - 1).
So to parse a 16-bit word, here is the formula to use in an expression:
mod(int(Word/2^(Bit# - 1)), 2)
where, of course, the LSB is Bit# = 1.
Since the 16th bit is the sign bit, you only need to look at the word's sign for it:
If(Word<0,1,0)
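For anyone who wants to sanity-check the expression outside Lookout, here is a rough Java equivalent of the same math (plain Java, not Lookout code; the class and method names are just illustrative):

    public class UnpackWord {
        // Mirrors the Lookout expression mod(int(Word/2^(Bit# - 1)), 2); valid for bits 1-15.
        static int extractBit(int word, int bit) {
            return (word / (1 << (bit - 1))) % 2;
        }

        // Bit 16 is the sign bit, mirroring If(Word<0,1,0).
        static int signBit(int word) {
            return word < 0 ? 1 : 0;
        }

        public static void main(String[] args) {
            int word = 21845; // 0101010101010101 in binary
            for (int bit = 15; bit >= 1; bit--) {
                System.out.print(extractBit(word, bit));
            }
            System.out.println(", sign bit = " + signBit(word));
        }
    }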
Regards,
Tommy Scharmann

Similar Messages

  • Bits, bytes, and all the rest

    I need clarification on what some stuff really represents.
    My project is to build a Huffman tree (no problem). However, all tutorials and examples that I can find on the net read from a text file with the format <letter> <frequency>, such as:
    a 12
    b 3
    c 4
    d 8
    Mine has to read any kind of file, such as a text file.
    For example, if myDoc.txt contains:
    This is my document.
    I have to have my program read the bytes from the infile, count the frequency of each byte from 0 through 255. Then the frequencies must be placed on a list of single node Huffman trees, and build the tree from this list.
    I think I am having trouble because I just cannot get straight what a byte "looks like". My project says ensure you are not counting the "letters" of the infile, but the "bytes".
    So what does a byte look like? When I read in the file as above, what is the "byte" representation of the letters T, h, i, etc.?
    Any ideas?

    Ok, Roshelle....here is a little history lesson that you should have learned, or that should have been explained to you by your instructor, before he/she gave you the assignment to construct a Huffman tree.
    A bit is a binary digit which is either 0 or 1 -- it is the only thing that a computer truly understands. Think about it this way, when you turn on the light switch, the light comes on (the computer sees it as a 1), when you turned the switch off, the light goes out (the computer sees it as a 0).
    There are 8 bits to a byte -- you can think of it as a mouthful. In a binary number system, 2 to the power of 8 gives you 256. So, computer scientists decided to use this as a basis for assigning a numerical value to each letter of the English alphabet, the digits 0 to 9, some special symbols, as well as invisible characters. For example, 0 is equivalent to what is known as null, 1 is CTRL-A, 2 is CTRL-B, etc. (this may vary depending on the computer manufacturer).
    Now, what was your question again (make sure that you count the bytes and not the letters)?
    As you can see, when you read data from a file, there may be characters that you don't see such as a carriage return, line feed, tab, etc.....
    And like they said, the rest is up to the students!
    V.V.
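    To make the byte-counting concrete, here is a minimal Java sketch of the frequency-count step (the file name comes from the question; the structure is just one way you might do it):

        import java.io.FileInputStream;
        import java.io.IOException;

        public class ByteFrequency {
            public static void main(String[] args) throws IOException {
                int[] freq = new int[256]; // one counter per possible byte value 0-255
                try (FileInputStream in = new FileInputStream("myDoc.txt")) {
                    int b;
                    while ((b = in.read()) != -1) { // read() returns the next byte as 0-255, or -1 at end of file
                        freq[b]++;
                    }
                }
                for (int i = 0; i < 256; i++) {
                    if (freq[i] > 0) {
                        System.out.println("byte " + i + ": " + freq[i]);
                    }
                }
            }
        }

    Each nonzero entry of freq would then seed one single-node Huffman tree for the list.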

  • Trying to understand the details of converting an int to a byte[] and back

    I found the following code to convert ints into bytes and wanted to understand it better:
    public static final byte[] intToByteArray(int value) {
        return new byte[] {
            (byte)(value >>> 24),
            (byte)(value >> 16 & 0xff),
            (byte)(value >> 8 & 0xff),
            (byte)(value & 0xff) };
    }
    I understand that an int requires 4 bytes and that each byte holds 8 bits. But, sadly, I can't figure out how to directly translate the code above. Can someone recommend a site that explains the following:
    1. >>> and >>
    2. 0xff.
    3. Masks vs. casts.
    thanks so much.

    By the way, people often introduce this redundancy (as in your example above):
        (byte)(i >> 8 & 0xFF)
    when this suffices:
        (byte)(i >> 8)
    since by [JLS 5.1.3|http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#25363]:
        A narrowing conversion of a signed integer to an integral type T simply discards all but the n lowest order bits, where n is the number of bits used to represent type T.
    Casting to a byte is merely keeping only the lower order 8 bits, and (anything & 0xFF) is simply clearing all but the lower order 8 bits. So it's redundant. Or I could just show you the simpler proof: try absolutely every value of int as an input and see if they ever differ:
        public static void main(String[] args) {
            for (int i = Integer.MIN_VALUE;; i++) {
                if (i % 100000000 == 0) {
                    // report progress
                    System.out.println("i=" + i);
                }
                if ((byte)(i >> 8 & 0xff) != (byte)(i >> 8)) {
                    System.out.println("ANOMALY: " + i);
                }
                if (i == Integer.MAX_VALUE) {
                    break;
                }
            }
        }
    Needless to say, they don't ever differ. Where masking does matter is when you're widening (say from a byte to an int):
        public static void main(String[] args) {
            byte b = -1;
            int i = b;
            int j = b & 0xFF;
            System.out.println("i: " + i + " j: " + j);
        }
    That's because bytes are signed, and when you widen a signed negative integer to a wider integer you get 1s in the higher bits, not 0s, due to sign-extension. See [http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#5.1.2]
    Edited by: endasil on 3-Dec-2009 4:53 PM
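    On question 1, here is a short demo of the practical difference between >> (arithmetic shift, which copies the sign bit in from the left) and >>> (logical shift, which fills with zeros); the class name is just for illustration:

        public class ShiftDemo {
            public static void main(String[] args) {
                int n = -1; // all 32 bits set: 0xffffffff
                System.out.println(Integer.toHexString(n >> 4));  // ffffffff: sign bit copied in
                System.out.println(Integer.toHexString(n >>> 4)); // fffffff: zeros shifted in
            }
        }

    For non-negative values the two operators behave identically. And in intToByteArray above, the cast to byte discards the high bits anyway, so >>> versus >> makes no difference to the result there; the distinction matters when you keep the shifted int.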

  • Convert.ToByte and logical masks, from VB to LabVIEW

    Hi guys,
    I have to write a VI communicating via RS232 with a microcontroller for a PWM application. A colleague has given me the GUI he has developed in VB and I would like to integrate it into my LabVIEW programme by converting the VB code into LabVIEW code. Here's the code:
    Private Sub LoadButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles loadButton.Click
            Dim bufLen As Integer = 12      // Buffer length
            Dim freq As Integer = 0      // frequency
            Dim pWidth As Integer = 0      // pulse width
            Dim dac As Integer = 0       // Value used in oscillator setting for generating pulse frequency
            Dim addr As Integer = 0      // Address of the pulse width in the pulse generator.
            Dim rTime As Integer = 0      // duration of machining/pulse train in ms.
            Dim returnValue As Byte = 0      // A variable for storing the value returned by the microcontroller after it receives some data
            Dim buffer(bufLen) As Byte       // creates an array of bytes with 12 cells => buffer size = 8 x12 = 96 bits
    // can you explain a bit please? I know you're converting the entered values into bytes and putting them one by one in a specific order. This order is how the microcontroller expects them.
                addr = (Floor((pWidth - Tinh) / Tinc)) // Formula from hardware, calculates address setting for pulse generator to set pulse width.
                buffer(0) = Convert.ToByte(Floor(3.322 * (Log10(freq / 1039)))) // Calculates OCT value for use in setting oscillator for pulse freq.
                dac = (Round(2048 - ((2078 * (2 ^ (10 + buffer(0)))) / freq)))  // Calculates DAC value for use in setting oscillator for pulse freq.
                buffer(1) = Convert.ToByte((dac And &HF00) >> 8) // &H is the vb.net prefix that says it's hex; F00 gives the top 4 bits of a 12-bit value.
                buffer(2) = Convert.ToByte(dac And &HFF) // For values larger than 255 (8 bits) the value needs to be split across 2 bytes (16 bits); this gets the bottom 8 bits.
                buffer(3) = Convert.ToByte((addr And &HFF0000) >> 16) // This value may be so large it requires 3 bytes (24bits). This gets the top 8 bits.
                buffer(4) = Convert.ToByte((addr And &HFF00) >> 8) // This gets the middle 8 bits.
                buffer(5) = Convert.ToByte(addr And &HFF)// This gets the bottom 8 bits.
                buffer(6) = Convert.ToByte((rTime And &HFF0000) >> 16) //This value may also be 3 bytes long.
                buffer(7) = Convert.ToByte((rTime And &HFF00) >> 8)
                buffer(8) = Convert.ToByte(rTime And &HFF)
                buffer(9) = Convert.ToByte(2.56 * ocpBox.Value)  // The ocp pulse period is formed of 256 steps or counts, so if a percentage is requested, this formula gives the number of steps/counts required to set the pulse width
                buffer(10) = Convert.ToByte(tempBox.Value)
                If (tempCheck.Checked = True) Then
                    buffer(11) = 1
                Else
                    buffer(11) = 0
                End If
                freq = ((2 ^ buffer(0)) * (2078 / (2 - (dac / 1024))))
                pWidth = Tinh + ((((Convert.ToInt32(buffer(3))) << 16) Or ((Convert.ToInt32(buffer(4))) << 8) Or (Convert.ToInt32(buffer(5)))) * Tinc)
                ' Connect to device
                serialPort1.Write(1) // Sends the number 1. This tells the microcontroller we are about to start sending data. It should respond with a zero if it is ready and the connection is good.
    The line "serialPort1.Write(buffer, 0, bufLen)" sends the buffered data where the variables are: buffer =  the buffered data; 0 = the position in the buffer to start from; bufLen = the position in the buffer to stop sending data.
    What's the best way to deal with the Convert.ToByte calls and the logical And masks?
    Thanks in advance for your time and consideration,
    regards
    Alex

    Try this
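    For the masking itself, here is a rough Java rendering of the address-splitting lines, just to show the idea (the value is made up; in LabVIEW the same split is typically built from the And and Logical Shift primitives, or Split Number):

        public class PackAddr {
            public static void main(String[] args) {
                int addr = 0x01ABCD; // hypothetical 24-bit address, standing in for 'addr' in the VB code
                byte[] buffer = new byte[12];
                buffer[3] = (byte) ((addr & 0xFF0000) >> 16); // top 8 bits
                buffer[4] = (byte) ((addr & 0xFF00) >> 8);    // middle 8 bits
                buffer[5] = (byte) (addr & 0xFF);             // bottom 8 bits
                // Mask with 0xFF when printing so each byte displays as 0-255 rather than as a signed value.
                System.out.printf("%02x %02x %02x%n", buffer[3] & 0xFF, buffer[4] & 0xFF, buffer[5] & 0xFF);
            }
        }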

  • Int (32 bits) - byte (8 bits) using bitwise operators

    Hi!
    I am storing a 16-bit value in a 32-bit integer and trying to extract the most significant byte and the least significant byte as follows:
    int n = 129;
    byte msb = (byte)(((byte)(n>>8))&0xff);
    byte lsb = (byte)(n&0xff) ;
    System.out.println ("msb = "+msb+" lsb = "+lsb);
    My msb is "0" and my lsb becomes "-127".
    The above code works correctly for positive numbers, but for values in the range 128-255 my lsb comes out as a negative number.
    How should the above code be changed to handle the lsb?
    Thanks,
    x86

    Your code works fine. Are you wondering why an int value of 129 shows up as a msb of 0 and a lsb of -127? It's because byte values are signed and a bit pattern of 10000001 is equal to -127.
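    If you want the printout to show the unsigned 0-255 value, mask when widening the byte back to an int; a small sketch:

        public class UnsignedPrint {
            public static void main(String[] args) {
                int n = 129;
                byte lsb = (byte) (n & 0xff); // bit pattern 10000001
                System.out.println(lsb);        // -127, because byte is signed
                System.out.println(lsb & 0xff); // 129, the unsigned interpretation
            }
        }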

  • Is there an easier way to convert bytes into bit(boolean) arrays?

    I am currently using this method to convert bytes into bit arrays:
    /* convert byte to int such that it is between 0-255; this is the bytes[] array */
                        if ((bytes[i]/128)==1)
                             bit[c+0]=true;
                        else
                             bit[c+0]=false;
                        if ((bytes[i]-bitInt[c+0]*128)/64==1)
                             bit[c+1]=true;
                        else
                             bit[c+1]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64)/32==1)
                             bit[c+2]=true;
                        else
                             bit[c+2]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64-bitInt[c+2]*32)/16==1)
                             bit[c+3]=true;
                        else
                             bit[c+3]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64-bitInt[c+2]*32-bitInt[c+3]*16)/8==1)
                             bit[c+4]=true;
                        else
                             bit[c+4]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64-bitInt[c+2]*32-bitInt[c+3]*16-bitInt[c+4]*8)/4==1)
                             bit[c+5]=true;
                        else
                             bit[c+5]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64-bitInt[c+2]*32-bitInt[c+3]*16-bitInt[c+4]*8-bitInt[c+5]*4)/2==1)
                             bit[c+6]=true;
                        else
                             bit[c+6]= false;
                        if ((bytes[i]-bitInt[c+0]*128-bitInt[c+1]*64-bitInt[c+2]*32-bitInt[c+3]*16-bitInt[c+4]*8-bitInt[c+5]*4-bitInt[c+6]*2)==1)
                             bit[c+7]=true;
                        else
                             bit[c+7]= false;

    You can loop through and use a bitwise operator instead. Here is an example without the loop (note that & works on integers, so use integer powers of two such as 1 << n rather than Math.pow, which returns a double):
        byte b = 6;
        if ((b & (1 << 0)) != 0) {
            // the 2^0 bit is on
        }
        if ((b & (1 << 1)) != 0) {
            // the 2^1 bit is on
        }
        // etc...
    You should get something like 110 when you're done.
    If you're wondering what & does (no, it's not a boolean &&): it takes the bits of the two numbers you give it and returns a number whose bits are on only where they are on in both of them.
    For example:
    10011011 &
    11001101 =
    10001001
    So if we take
    110 (6) &
    010 (2^1, or 2) =
    010 (2 again)
    Which means that the number (6) has the 2^1 bit on.
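    Putting the loop version together, here is one way the whole conversion might look (the method and variable names are just illustrative):

        public class BytesToBits {
            // Converts a byte[] to a boolean[] of bits, most significant bit first.
            static boolean[] toBits(byte[] bytes) {
                boolean[] bits = new boolean[bytes.length * 8];
                for (int i = 0; i < bytes.length; i++) {
                    for (int j = 0; j < 8; j++) {
                        bits[i * 8 + j] = (bytes[i] & (1 << (7 - j))) != 0;
                    }
                }
                return bits;
            }

            public static void main(String[] args) {
                for (boolean bit : toBits(new byte[] { 6 })) {
                    System.out.print(bit ? 1 : 0); // prints 00000110
                }
                System.out.println();
            }
        }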

  • Working set, virtual bytes and private bytes

    Hello everyone,
    I am using Windows Server 2003 Performance Counter tool to monitor the memory consumed by my process.
    The interested terms are working set, virtual bytes and private bytes. My questions are,
    1. If I want to watch the real physical memory consumed by current process, which one should I monitor?
    2. If I want to watch the physical memory + swap file consumed by current process, which one should I monitor?
    3. Any more clear description of what these terms mean? I read the help from the Performance Counter tool, but I am still confused about which one(s) identify the real used physical memory, which one(s) identify the real used physical memory plus page swap file, and which one(s) identify the required memory (which may not be actually allocated either in physical memory or in the swap page file).
    If there are any related learning resource about the concepts, it is appreciated if you could recommend some. :-)
    thanks in advance,
    George

    And for further benefit:
    "Virtual bytes" is the total size of the not-free virtual address space of the process. It includes private committed, mapped, physically mapped (AWE), and even reserved address spaces. 
    "Private committed" is the portion of "virtual bytes" that is private to the process (as opposed to "mapped", which may be shared between processes; this is used by default for code). This normally includes program globals,
    the stacks (hence locals), heaps, thread-local storage, etc. 
    Each process's "private committed" memory is that process's contribution to the system-wide counter called "commit charge". There are also contributions to commit charge that aren't part of any one process, such as the paged pool. 
    The backing store for "private committed" v.m. is the page file, if you have one; if you have (foolishly in my opinion) deleted your page file, then all private committed v.m., along with things like the paged pool that would normally be backed
    by the page file, has to stay in RAM at all times... no matter how long ago it's been referenced (which is why I think running without a page file is foolish). 
    The "commit limit" is the size of RAM that the OS can use (this will exclude RAM used for things like video buffers) plus the total of the current size(s) of your pagefile(s). Windows will not allow allocations of private committed address space
    (e.g. VirtualAlloc calls with the COMMIT option) that would take the commit charge above the limit. If a program tries it, Windows will try to expand the page file (hence increasing the commit limit). If this doesn't succeed (maybe because page file expansion
    is disabled, maybe because there's no free disk space to do it) then the attempt to allocate committed address space fails (e.g. the VirtualAlloc call returns with a NULL result, indicating no memory was allocated). 
    The "Working set" is not quite "the subset of virtual pages that are resident in physical memory", unless by "resident" you mean "the valid bit is set in the page table entry", meaning that no page fault will occur
    when they're referenced. But this ignores the pages on the modified and standby page lists. Pages lost from a working set go to one of these lists; if the modified list, they are written to the page file and then moved to the standby list. From there they may be used for other things, such as to satisfy another process's need for pages, or (in Vista and later) for SuperFetch. But until that happens, they still contain the data from the process they came from and are still associated with that process and with that process's virtual pages. Referring to the virtual page for which the physical page is on the standby or modified list will result in a "soft" page fault that is resolved without disk I/O. Much slower than referring to a page that's
    in the working set - but much faster than having to go to disk for it. 
    So the most precise definition of "working set" is "the subset of virtual pages that can be referenced without incurring a page fault." 

  • Byte and char !!!

    Hi!
    What is the difference between a byte and a char in Java? Could I use char instead of byte, and the reverse?
    Thanks.

    TYPE   SIZE (BYTES)   BITS   SIGN       RANGE
    byte   1              8      SIGNED     -128 to 127
    char   2              16     UNSIGNED   \u0000 to \uFFFF
    Both data types can be interchanged (with an explicit cast).
    Have a look at this
    class UText {
        public static void main(String[] args) {
            byte c1;
            char c2 = 'a';
            c1 = (byte) c2;
            c2 = (char) c1;
            System.out.println(c1 + " " + c2);
        }
    }
    But it leads to confusion when interchanging, because byte is signed and its range is much smaller. So it's better to prefer char.
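    To see where the sign confusion bites, try a character whose code is above 127; a small sketch:

        public class CharByteLoss {
            public static void main(String[] args) {
                char c = 'é';         // code point 233 fits in a char but not in a signed byte
                byte b = (byte) c;    // narrowing keeps the low 8 bits: -23
                char back = (char) b; // widening sign-extends: \uFFE9 (65513), not 'é'
                System.out.println((int) c + " " + b + " " + (int) back);
                System.out.println((int) (char) (b & 0xFF)); // 233: masking recovers the value
            }
        }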

  • Maximum audio sample rate and bit depth question

    Anyone worked out what the maximum sample rates and bit depths AppleTV can output are?
    I'm digitising some old LPs and while I suspect I can get away with a 48kHz sample rate and 16-bit depth, I'm not sure about a 96kHz sample rate or 24-bit resolution.
    If I import recordings as AIFFs or WAVs to iTunes it shows the recording parameters in iTunes, but my old Yamaha processor which accepts PCM doesn't show the source data values, though I know it can handle 96kHz 24bit from DVD audio.
    It takes no more time recording at any available sample rates or bit depths, so I might as well maximise an album's recording quality for archiving to DVD/posterity as I only want to do each LP once!
    If AppleTV downsamples, however, there wouldn't be much point streaming higher rates.
    I wonder how many people out there stream uncompressed audio to AppleTV? With external drives which will hold several hundred uncompressed CD albums, is there any good reason not to these days when you are playing back via your hi-fi? (I confess most of my music is in MP3 format just because I haven't got round to ripping again uncompressed for AppleTV).
    No doubt there'll be a deluge of comments saying that recording LPs at high quality settings is a waste of time, but some of us still prefer the sound of vinyl over CD...
    AC

    I guess the answer to this question relies on someone having an external digital amp/decoder/processor that can display the source sample rate and bit depth during playback, together with some suitable 'demo' files.
    AC

  • I pull fifty-four bytes of data from a microprocessor's EEPROM using the serial port, and it works fine. I then send a request for 512 bytes and my "read" goes into a loop condition: no bytes are delivered and the system is lost


    Hello,
    You mention that you send a string to the microprocessor that tells it how many bytes to send. Instead of requesting 512 bytes, try reading 10 times and only requesting about 50 bytes at a time.
    If that doesn't help, try directly communicating with your microprocessor through HyperTerminal. If you are not on a Windows system, please let me know. Also, if you are using an NI serial board instead of your computer's serial port, let me know.
    In Windows XP, go to Start, Programs, Accessories, Communications, and select HyperTerminal.
    Enter a name for the connection and click OK.
    In the next pop-up dialog, choose the COM port you are using to communicate with your device and click OK.
    In the final pop-up dialog, set the communication settings for communicating with your device.
    Type the same commands you sent through LabVIEW and observe if you can receive the first 54 bytes you mention. Also observe if data is returned from your 512 byte request or if HyperTerminal just waits.
    If you do not receive the 512 byte request through HyperTerminal, your microprocessor is unable to communicate with your computer at a low level. LabVIEW uses the same Windows DLLs as HyperTerminal for serial communication. Double check the instrument user manual for any additional information that may be necessary to communicate.
    Please let me know the results from the above test in HyperTerminal. We can then proceed from there.
    Grant M.
    National Instruments
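    As a generic illustration of the read-in-smaller-chunks advice, here is a rough Java sketch; the InputStream is only a stand-in for whatever serial API is actually in use (the original poster is in LabVIEW):

        import java.io.IOException;
        import java.io.InputStream;

        public class ChunkedRead {
            // Reads 'total' bytes in pieces of at most 'chunk' bytes each.
            static byte[] readInChunks(InputStream in, int total, int chunk) throws IOException {
                byte[] data = new byte[total];
                int got = 0;
                while (got < total) {
                    int n = in.read(data, got, Math.min(chunk, total - got));
                    if (n == -1) {
                        throw new IOException("stream ended after " + got + " of " + total + " bytes");
                    }
                    got += n;
                }
                return data;
            }
        }

    Requesting 512 bytes as roughly ten 50-byte reads, as suggested above, would map onto calls like readInChunks(in, 512, 50).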

  • Common video dimensions and bit rates for dynamic streaming?

    I'm going to be converting my videos to flv and am trying to decide what to use for video dimensions and bit rates.  Some of my users have slow computers and connections so I'm thinking 150 on the low end. 
    Is there a common practice?  What has worked well for you in the past?

    Hello MrWizzer
    I am not sure what FMS version you are currently using.
    Well, if you look at the sample vod folder that ships with FMS 4.5, it has files encoded at 150 kbps, 500 kbps, 700 kbps, 1000 kbps, and 1500 kbps.
    I am sure that all of these files stream perfectly fine given the correct bandwidth environment.
    It totally depends upon the FMS hosting service provider and the user base of that particular provider.
    You need to judge what kind of bandwidth is available to your viewers and provide files encoded at appropriate bitrates.
    You may also look at http://www.adobe.com/devnet/flashmediaserver/articles/beginning-fms45-pt06.html for reference.
    Regards,
    Shiraz Anwar

  • What are the best data and bit rate setting for uploading from final cut express to Youtube?

    Can anyone suggest the best data rate and bit rate presets for uploading footage from final cut express 4 to Youtube? What settings will provide the best resolution, quality, and match the current youtube requirements?
    Thank you in advance for your help,
    Susan Kayne

    It depends on whether you are using aspect ratios of 4:3 or 16:9.
    Below is some simple guidance that will provide good quality with reasonably small file sizes.
    The first part is for 4:3 video:-
    1. File>Export Using QT Conversion.
    2. The "Format" window should say, "QT Movie".
    3. In "Use" select "LAN/Intranet" from the dropdown menu.
    4. Click "Save" and when it has finished encoding, upload it to YouTube.
    If you are making 16:9 video (Standard or High Definition) do steps 1 to 3 above.
    Then, when you have selected "LAN/Intranet", press the "Options" button and, in the new window that opens, press the "Size" button and change the "640x480" to "853x480".
    To do this you will have to click on the 640x480 and a dropdown menu appears.
    Select "Custom" from the bottom of the menu and, in the window that opens, you will see 2 boxes.
    Put 853 in the first box and 480 in the second.
    Click OK.
    Then save it.

  • How to find the Oracle version and bit info?

    I just want to find out the
    Oracle version
    Bit info (32 or 64....)
    and other system configuration about oracle
    Please help

    Duplicate post:
    how to find the Oracle version and bit info?

  • Color Space and Bit Depth - What Makes Sense?

    I'm constantly confused about which color space and bit depth to choose for various things.
    Examples:
    - Does it make any sense to choose sRGB and 16-bits? (I thought sRGB was 8-bit by nature, no?)
    - Likewise for AdobeRGB - are the upper 8-bits empty if you use 16-bits?
    - What is the relationship between Nikon AdobeWide RGB, and AdobeRGB? - if a software supports one, will it support the other?
    - ProPhoto/8-bits - is there ever a reason?...
    I could go on, but I think you get the idea...
    Any help?
    Rob

    So, it does not really make sense to use ProPhoto/8 for output (or for anything else I guess(?)), even if its supported, since it is optimized for an extended gamut, and if your output device does not encompass the gamut, then you've lost something since your bits will be spread thinner in the "most important" colors.
    Correct, you do not want to do prophotoRGB 8-bit anything. It is very easy to get posterization with it. Coincidentally, if you print from Lightroom and let the driver manage color and do not check 16-bit output, Lightroom outputs 8-bit prophotoRGB to the driver. This is rather annoying, as it is very easy to get posterized prints this way.
    It seems that AdobeRGB has been optimized more for "important" colors and so if you have to scrunch down into an 8-bit jpeg, then its the best choice if supported - same would hold true for an 8-bit tif I would think (?)
    Correct on both counts. If there is color management and you go 8 bits, adobeRGB is a good choice. This is only really true for print targets though, as adobeRGB encompasses more of a typical CMYK gamut than sRGB. For display targets such as the web you will be better off always using sRGB, as 99% of displays are closer to that and so you don't gain anything. Also, 80% of web browsers are still not color managed.
    On a theoretical note: I still don't understand why if image data is 12 or 14 bits and the image format uses 16 bits, why there has to be a boundary drawn around the gamut representation. But for practical purposes, maybe it doesn't really matter.
    Do realize that the original image in 12 to 14 bits is in linear gamma, as that is how the sensor reacts to light. However, formats for display are always gamma corrected for efficiency, because the human eye reacts non-linearly to light and because typical displays have a power-law gamma response of brightness/darkness. Lightroom internally uses a 16-bit linear space. This is more bits than the 12 or 14 bits simply to avoid aliasing errors and other numeric errors. Similarly, the working space is chosen larger than the gamut cameras can capture in order to have some overhead that allows for flexibility and avoids blowing out in intermediary stages of the processing pipeline. You have to choose something, and so prophotoRGB, one of the widest RGB spaces out there, is used. This is explained quite well here.
    - Is there any reason not to standardize 8-bit tif or jpg files on AdobeRGB and leave sRGB for the rare cases when legacy support is more important than color integrity?
    Actually legacy issues are rampant. Even now, color management is very spotty, even in shops oriented towards professionals. Also, arguably the largest destination for digital file output, the web, is almost not color managed. sRGB remains king unfortunately. It could be so much better if everybody used Safari or Firefox, but that clearly is not the case yet.
    - And standardize 16 bit formats on the widest gamut supported by whatever you're doing with it? - ProPhoto for editing, and maybe whatever gamut is recommended by other software or hardware vendors for special purposes...
    Yes, if you go 16 bits, there is no point not doing prophotoRGB.
    Personally, all my web photos are presented through Flash, which supports AdobeRGB even if the browser proper does not. So I don't have legacy browsers to worry about myself.
    Flash only supports non-sRGB images if you have enabled it yourself. NONE of the included flash templates in Lightroom for example enable it.
    that IE was the last browser to be upgraded for colorspace support (ie9)
    AFAIK (I don't do windows, so I have not tested IE9 myself), IE 9 still is not color managed. The only thing it does is when it encounters a jpeg with a ICC profile different than sRGB is translate it to sRGB and send that to the monitor without using the monitor profile. That is not color management at all. It is rather useless and completely contrary to what Microsoft themselves said many years ago well behaved browsers should do. It is also contrary to all of Windows 7 included utilities for image display. Really weird! Wide gamut displays are becoming more and more prevalent and this is backwards. Even if IE9 does this halfassed color transform, you can still not standardize on adobeRGB as it will take years for IE versions to really switch over. Many people still use IE6 and only recently has my website's access switched over to mostly IE8. Don't hold your breath for this.
    Amazingly, in 2010, the only correctly color managed browser on windows is still Safari as Firefox doesn't support v4 icc monitor profiles and IE9 doesn't color manage at all except for translating between spaces to sRGB which is not very useful. Chrome can be made to color manage on windows apparently with a command line switch. On Macs the situation is better since Safari, Chrome (only correctly on 10.6) and Firefox (only with v2 ICC monitor profiles) all color manage. However, on mobile platforms, not a single browser color manages!

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
    how can I check the native resolution (ppi/dpi) and bit depth of my image files (jpeg, dng and pef)?
    If it is not possible in lighroom, is there a free app for Mac that makes this possible?
    Thank you in advance!

    I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and e.g. compare it to an image with a different native bit depth.....
    I know a little bit of detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
    This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
    Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.
    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.
