OPC and 16-bit integers

Hi everybody,
I have a very time-consuming problem with OPC programming in LabVIEW: I need
to write 16-bit integers with the LabVIEW OPC client to the server, and I
don't know how to do this because of the limitation of the LabVIEW VIs to
32-bit integers.
Any suggestion is very welcome.
Bernd Szyszka

What is the limitation? LabVIEW has 8-, 16-, and 32-bit integers, both signed
and unsigned.
"Dr. Bernd Szyszka" wrote:
> Hi everybody,
> I have a very time-consuming problem with OPC programming in LabVIEW: I need
> to write 16-bit integers with the LabVIEW OPC client to the server, and I
> don't know how to do this because of the limitation of the LabVIEW VIs to
> 32-bit integers.
> Any suggestion is very welcome.
> Bernd Szyszka

Similar Messages

  • Join two 8 bit integers and send via Serial Port

    I am trying to join two 8 bit integers and send the result via the serial port.
    I have a constant of hex value A that I need to join with a value from hex 0 - F (this is based on incoming data)
    When I use the Join VI, and type cast to change from hex to string for the VISA Write, I receive hex 0A0F.
    I have been able to use the hex 0-F with a case structure and then wire the corresponding constant, e.g. A0 - AF.
    This makes the program very cumbersome and labour intensive to change. I have 22 commands I have to respond to with the address of 0-F.
    Currently, I have a Case structure that is selected with Message ID, then a case that is selected with subtype and then a case for address.
    Therefore I have to create a constant inside of each address case for each message as the responses are different.
    Thanks for any help
    Robin

    Gambin,
    As I understand it, you want to take the two bytes, put them together,
    and output the result as an ASCII string on the serial port.  This
    is easy.  Simply convert each number to an ASCII string,
    concatenate the two characters together, and send the resulting string
    to the VISA Write function.  That's it!  I have attached a VI
    (ver. 7.1) that takes two hex numbers as input, converts them to ASCII,
    concatenates the results, and outputs the 'command' string.  Feel
    free to modify this VI and use it as you see fit.  I have left
    extra terminals in case you want to add error input/output for data
    flow, or whatever.  Notice that the display for the concatenated
    string is in '/' Codes Display mode.  This is because 0A hex is
    the newline character in ASCII.  You should also check to make
    sure that your VISA serial settings are not setup so that the newline
    character is the termination character.  If it is, the second
    character may not be recognised.  Hope this helps.
    Roy
    Attachments:
    HextoCommand.vi 17 KB
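
    If the goal is a single command byte whose upper nibble is the fixed hex A and whose
    lower nibble is the 0-F address (i.e. A0-AF, rather than the two-byte 0A0F the Join VI
    produces), the address can also be combined arithmetically with a shift and an OR
    instead of one case per address.  A minimal Java sketch, purely to illustrate the bit
    manipulation (the class and method names are made up; in LabVIEW the equivalent would
    be a Logical Shift and an OR on U8 values before converting to a string):

        // Pack a fixed high nibble (0xA) and a 0x0..0xF address into one command byte.
        public class CommandByte {
            static byte makeCommand(int address) {
                return (byte) ((0xA << 4) | (address & 0x0F));
            }
            public static void main(String[] args) {
                byte cmd = makeCommand(0x5);
                // Mask to an int for printing; prints "command byte: A5".
                System.out.printf("command byte: %02X%n", cmd & 0xFF);
            }
        }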

  • DNG SDK thinks 0xFFFFFFFF is invalid white level for 32 bit integers

    As discussed in another message, the DNG SDK does not seem to be able to write floating-point DNGs. Since I need higher dynamic range than a 16-bit sample, I instead went for 32-bit integers.
    This works, and I manage to create a file, but when I run dng_validate on it, it says the white level is invalid. The white level I have of course set to 0xFFFFFFFF to make use of the full 32-bit range. Looking into the code in dng_ifd.cpp where this test is made, it seems like the default max white level is set to 65535.0 and the file's white level is compared against that, regardless of whether the sample type is 16 or 32 bit. This means that I can only make use of 16 bits of the 32-bit integer, which seems kind of strange. Looking into the DNG spec I don't see anything there that forbids using the full 32-bit range of 32-bit samples. So this looks like a bug to me.
    This is with version 1.4.
    The created file can be opened in Lightroom 4, so the only problem seems to be that dng_validate does not think it's valid.
    Message was edited by: torger76, removed clipping issue, that was a fault in my code.
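
    For reference, the white level in question is just the largest value the sample type can hold; a quick Java sketch of that arithmetic (purely illustrative, not DNG SDK code):

        // Largest representable sample value for a given bit depth.
        public class WhiteLevel {
            static long maxWhite(int bitsPerSample) {
                return (1L << bitsPerSample) - 1;
            }
            public static void main(String[] args) {
                System.out.println(maxWhite(16));   // 65535       (what the validator assumes)
                System.out.println(maxWhite(32));   // 4294967295  (0xFFFFFFFF, what the file uses)
            }
        }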

    Hello Charles,
    I would be concerned too if my MacBook Pro was running slowly.  I found a couple of articles I recommend to help isolate and troubleshoot this issue.
    I recommend reviewing this article first for possible causes of the slowness:
    OS X Mavericks: If your Mac runs slowly
    http://support.apple.com/kb/PH13895
    You can further isolate the issue by determining if it is only happening in your user account or if it is happening system-wide:
    Isolating an issue by using another user account
    http://support.apple.com/kb/TS4053
    Thank you for using Apple Support Communities.
    Best,
    Sheila M.

  • Using 32 bit integers

    I need to use a 32-bit integer in my app, but Java Card only supports 16-bit integers. What's the best way of handling this? I've been trying to do it with a 4-byte array but I'm having problems adding/subtracting. I've made the following function for adding two byte-array integers together:
     public static byte[] intAdd(byte[] firstInt, byte[] secondInt) {
          if (firstInt.length != 4 || secondInt.length != 4) ISOException.throwIt(ISO7816.SW_WRONG_DATA);
          byte[] result = new byte[4];
          byte carry = (byte)0x00;
          for (int i=3; i>=0; i--) {
               int tmp = firstInt[i] + secondInt[i] + carry;
               byte[] tmpb = intToBytes(tmp);
               result[i] = tmpb[3];
               carry = tmpb[2];
          }
          return result;
     }
    It doesn't work properly because java treats each individual byte as a signed integer. Plus I think this is horribly inefficient.

    797530 wrote:
     public static byte[] intAdd(byte[] firstInt, byte[] secondInt) {
          if (firstInt.length != 4 || secondInt.length != 4) ISOException.throwIt(ISO7816.SW_WRONG_DATA);
          byte[] result = new byte[4];
          byte carry = (byte)0x00;
          for (int i=3; i>=0; i--) {
               int tmp = ((short)firstInt[i] & 0x00FF) + ((short)secondInt[i] & 0x00FF) + carry;
               byte[] tmpb = intToBytes(tmp);
               result[i] = tmpb[3];
               carry = tmpb[2];
          }
          return result;
     }
    Please use code tags when posting code.
    In the code you have posted there is a memory leak and your card will run out of EEPROM eventually. You should only ever use the new keyword in your applet constructor or code that is only ever called from the applet constructor as there is (essentially) no garbage collector.
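
    To make the allocation point concrete, here is a minimal Java Card sketch of the usual pattern (class and field names are made up for illustration): allocate the buffer once in the constructor at install time and reuse it from then on, instead of calling new inside a method that runs per APDU.

        import javacard.framework.*;

        public class AddApplet extends Applet {
            private final byte[] result;               // persistent scratch buffer

            private AddApplet() {
                result = new byte[4];                  // 'new' only here, at install time
            }

            public static void install(byte[] bArray, short bOffset, byte bLength) {
                new AddApplet().register();
            }

            public void process(APDU apdu) {
                // ... fill 'result' in place instead of allocating a fresh array ...
            }
        }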

  • Bytes and bits in Lookout

    This is for a Lookout 5.0 application. The OPC Client Object reads an integer value (between 0 and 32767, plus a sign bit) that represents a 16-bit word. What I want is, knowing the integer value, to unpack the word into its 16 bits. How can I do this in Lookout?

    I needed to do this and found a quick, mathematical way. Here it is:
    Let's say you have a 16-bit word with Bit #1 being the least significant bit (LSB, farthest right) and Bit #16 being the most significant bit (MSB, farthest left). As I am sure you know, each bit has a decimal equivalent in value: Bit #1 = 1, Bit #2 = 2, Bit #3 = 4, Bit #4 = 8 ... Bit #15 = 16384, Bit #16 = 32768. The formula for a bit's value in decimal is (assuming the LSB starts at 1 and not zero) 2^(Bit# - 1).
    So to parse a 16-bit word, here is the formula to use in an expression:
    mod(int(Word/2^(Bit# - 1)),2)   where of course the LSB is Bit# = 1
    Since the 16th bit is a sign bit you only need to look at the word's sign:
    If(Word<0,1,0)
    Regards,
    Tommy Scharmann
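
    For comparison, here is the same unpacking written in ordinary code rather than a Lookout expression: a small Java sketch (illustrative only) that assumes the word has already been read as a non-negative 16-bit value; the sign bit would still be handled separately with the Word < 0 test above.

        // Extract bit 'bitNumber' (1 = LSB .. 16 = MSB) from a 16-bit word.
        // Same arithmetic as mod(int(Word/2^(bitNumber - 1)), 2).
        public class BitUnpack {
            static int bit(int word, int bitNumber) {
                return (word >> (bitNumber - 1)) & 1;
            }
            public static void main(String[] args) {
                int word = 0xA5C3;                    // example 16-bit value
                for (int b = 16; b >= 1; b--) {       // print MSB first
                    System.out.print(bit(word, b));
                }
                System.out.println();                 // prints 1010010111000011
            }
        }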

  • Maximum audio sample rate and bit depth question

    Anyone worked out what the maximum sample rates and bit depths AppleTV can output are?
    I'm digitising some old LPs and, while I suspect I can get away with a 48kHz sample rate and 16-bit depth, I'm not sure about a 96kHz sample rate or 24-bit resolution.
    If I import recordings as AIFFs or WAVs to iTunes it shows the recording parameters in iTunes, but my old Yamaha processor which accepts PCM doesn't show the source data values, though I know it can handle 96kHz 24bit from DVD audio.
    It takes no more time recording at any available sample rates or bit depths, so I might as well maximise an album's recording quality for archiving to DVD/posterity as I only want to do each LP once!
    If AppleTV downsamples however there wouldn't be much point streaming higher rates.
    I wonder how many people out there stream uncompressed audio to AppleTV? With external drives which will hold several hundred uncompressed CD albums, is there any good reason not to these days when you are playing back via your hi-fi? (I confess most of my music is in MP3 format just because I haven't got round to ripping again uncompressed for AppleTV.)
    No doubt there'll be a deluge of comments saying that recording LPs at high quality settings is a waste of time, but some of us still prefer the sound of vinyl over CD...
    AC

    I guess the answer to this question relies on someone having an external digital amp/decoder/processor that can display the source sample rate and bit depth during playback, together with some suitable 'demo' files.
    AC

  • Common video dimensions and bit rates for dynamic streaming?

    I'm going to be converting my videos to flv and am trying to decide what to use for video dimensions and bit rates.  Some of my users have slow computers and connections so I'm thinking 150 on the low end. 
    Is there a common practice?  What has worked well for you in the past?

    Hello MrWizzer
    I am not sure what FMS version you are currently using.
    Well, if you look at the sample vod folder that ships with FMS 4.5, it has files encoded at 150 kbps, 500 kbps, 700 kbps, 1000 kbps, and 1500 kbps.
    I am sure that all of these files stream perfectly fine given the correct bandwidth environment.
    It really depends on the FMS hosting service provider and on the user base of that particular provider.
    You need to judge what kind of bandwidth is available to your viewers and provide files encoded at appropriate bitrates.
    You may also look at http://www.adobe.com/devnet/flashmediaserver/articles/beginning-fms45-pt06.html for reference.
    Regards,
    Shiraz Anwar

  • What are the best data and bit rate setting for uploading from final cut express to Youtube?

    Can anyone suggest the best data rate and bit rate presets for uploading footage from Final Cut Express 4 to YouTube? What settings will provide the best resolution and quality, and match the current YouTube requirements?
    Thank you in advance for your help,
    Susan Kayne

    It depends on whether you are using aspect ratios of 4:3 or 16:9.
    Below is some simple guidance that will provide good quality with reasonably small file sizes.
    The first part is for 4:3 video:-
    1. File>Export Using QT Conversion.
    2. The "Format" window should say, "QT Movie".
    3. In "Use" select "LAN/Intranet" from the dropdown menu.
    4. Click "Save" and when it has finished encoding, upload it to YouTube.
    If you are making 16:9 video (Standard or High Definition) do steps 1 to 3 above.
    Then, when you have selected "LAN/Intranet", press the "Options" button and, in the new
    window that opens, press the "Size" button and change the "640x480" to "853x480"
    (853 is simply 480 × 16 / 9, rounded, which keeps the 16:9 aspect ratio).
    To do this you will have to click on the 640x480 and a dropdown menu appears.
    Select "Custom" from the bottom of the menu and in the window that opens
    you will see 2 boxes.
    Put 853 in the first box and 480 in the second.
    Click OK.
    Then save it.

  • How to find the Oracle version and bit info?

    I just want to find out the
    Oracle version
    Bit info (32 or 64....)
    and other system configuration details about Oracle.
    Please help.

    Duplicate post:
    how to find the Oracle version and bit info?

  • Color Space and Bit Depth - What Makes Sense?

    I'm constantly confused about which color space and bit depth to choose for various things.
    Examples:
    - Does it make any sense to choose sRGB and 16 bits? (I thought sRGB was 8-bit by nature, no?)
    - Likewise for AdobeRGB - are the upper 8 bits empty if you use 16 bits?
    - What is the relationship between Nikon AdobeWide RGB and AdobeRGB? If a piece of software supports one, will it support the other?
    - ProPhoto/8-bits - is there ever a reason?...
    I could go on, but I think you get the idea...
    Any help?
    Rob

    So, it does not really make sense to use ProPhoto/8 for output (or for anything else, I guess(?)), even if it's supported, since it is optimized for an extended gamut, and if your output device does not encompass the gamut, then you've lost something since your bits will be spread thinner in the "most important" colors.
    Correct, you do not want to do prophotoRGB 8-bit anything. It is very easy to get posterization with it. Incidentally, if you print from Lightroom and let the driver manage color and do not check 16-bit output, Lightroom outputs prophotoRGB 8 bits to the driver. This is rather annoying, as it is very easy to get posterized prints this way.
    It seems that AdobeRGB has been optimized more for "important" colors and so if you have to scrunch down into an 8-bit jpeg, then it's the best choice if supported - the same would hold true for an 8-bit tif, I would think (?)
    Correct on both counts. If there is color management and you go 8 bits, adobeRGB is a good choice. This is only really true for print targets, though, as adobeRGB encompasses more of a typical CMYK gamut than sRGB. For display targets such as the web you will be better off always using sRGB, as 99% of displays are closer to that and so you don't gain anything. Also, 80% of web browsers are still not color managed.
    On a theoretical note: I still don't understand why if image data is 12 or 14 bits and the image format uses 16 bits, why there has to be a boundary drawn around the gamut representation. But for practical purposes, maybe it doesn't really matter.
    Do realize that the original image in 12 to 14 bits is in linear gamma, as that is how the sensor reacts to light. However, formats for display are always gamma corrected for efficiency, because the human eye reacts non-linearly to light and because typical displays have a power-law gamma response of brightness/darkness. Lightroom internally uses a 16-bit linear space. This is more bits than the 12 or 14 bits simply to avoid aliasing errors and other numeric errors. Similarly, the working space is chosen larger than the gamut cameras can capture in order to have some overhead that allows for flexibility and avoids blowing out in intermediary stages of the processing pipeline. You have to choose something, and so prophotoRGB, one of the widest RGB spaces out there, is used. This is explained quite well here.
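
    To see why gamma encoding uses limited bits more efficiently, here is a small illustrative Java sketch using a simple 2.2 power law (not the exact sRGB curve): the encoding spends a disproportionate share of the available 8-bit codes on the dark tones, where the eye is most sensitive.

        public class GammaDemo {
            // Map a linear intensity in 0..1 to an 8-bit gamma-encoded code value.
            static int encode8bit(double linear) {
                return (int) Math.round(255.0 * Math.pow(linear, 1.0 / 2.2));
            }
            public static void main(String[] args) {
                System.out.println(encode8bit(0.10));   // ~90: roughly a third of the codes cover the darkest 10%
                System.out.println(encode8bit(0.50));   // ~186
            }
        }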
    - Is there any reason not to standardize 8-bit tif or jpg files on AdobeRGB and leave sRGB for the rare cases when legacy support is more important than color integrity?
    Actually legacy issues are rampant. Even now, color management is very spotty, even in shops oriented towards professionals. Also, arguably the largest destination for digital file output, the web, is almost not color managed. sRGB remains king unfortunately. It could be so much better if everybody used Safari or Firefox, but that clearly is not the case yet.
    - And standardize 16 bit formats on the widest gamut supported by whatever you're doing with it? - ProPhoto for editing, and maybe whatever gamut is recommended by other software or hardware vendors for special purposes...
    Yes, if you go 16 bits, there is no point not doing prophotoRGB.
    Personally, all my web photos are presented through Flash, which supports AdobeRGB even if the browser proper does not. So I don't have legacy browsers to worry about myself.
    Flash only supports non-sRGB images if you have enabled it yourself. NONE of the included flash templates in Lightroom for example enable it.
    that IE was the last browser to be upgraded for colorspace support (ie9)
    AFAIK (I don't do Windows, so I have not tested IE9 myself), IE9 still is not color managed. The only thing it does, when it encounters a JPEG with an ICC profile different from sRGB, is translate it to sRGB and send that to the monitor without using the monitor profile. That is not color management at all. It is rather useless and completely contrary to what Microsoft themselves said many years ago well-behaved browsers should do. It is also contrary to all of Windows 7's included utilities for image display. Really weird! Wide-gamut displays are becoming more and more prevalent and this is backwards. Even if IE9 does this halfassed color transform, you can still not standardize on adobeRGB, as it will take years for IE versions to really switch over. Many people still use IE6 and only recently has my website's traffic switched over to mostly IE8. Don't hold your breath for this.
    Amazingly, in 2010, the only correctly color-managed browser on Windows is still Safari, as Firefox doesn't support v4 ICC monitor profiles and IE9 doesn't color manage at all except for translating other spaces to sRGB, which is not very useful. Chrome can apparently be made to color manage on Windows with a command-line switch. On Macs the situation is better, since Safari, Chrome (only correctly on 10.6) and Firefox (only with v2 ICC monitor profiles) all color manage. However, on mobile platforms, not a single browser color manages!

  • How to view resolution (ppi/dpi) and bit depth of an image

    Hello,
    How can I check the native resolution (ppi/dpi) and bit depth of my image files (JPEG, DNG and PEF)?
    If it is not possible in Lightroom, is there a free app for Mac that makes this possible?
    Thank you in advance!

    I have used several different cameras, which probably have different native bit depths. I assume that Lr converts all RAW files to 16 bits, but the original/native bit depth still affects the quality, right? Therefore, it would be nice to be able to check the native bit depth of an image and e.g. compare it to an image with a different native bit depth.....
    I know a little bit of detective work would solve the issue, but it would be more convenient to be able to view native bit depth in Lightroom, especially when dealing with multiple cameras, some of which might have the option to use different bit depths, which would make the matter significantly harder.
    This issue is certainly not critical and doesn't fit into my actual workflow. As I stated in a previous post, I am simply curious and want to learn, and I believe that being able to compare images with different bit depths conveniently would be beneficial to my learning process.
    Anyway, I was simply checking if somebody happened to know a way to view bit depth in Lr4, but I take it that it is not possible, and I can certainly live with that.
    Check the specifications of your camera to know at what bit depth it writes Raw files. If you have a camera in which the Raw bit depth can be changed the setting will probably be recorded in a section of the metadata called the Maker Notes (I don't believe the EXIF standard includes a field for this information). At any rate, LR displays only a small percentage of the EXIF data (only the most relevant fields) and none of the Maker Notes. To see a fuller elucidation of the metadata you will need a comprehensive EXIF reader like ExifTool.
    However, the choices nowadays are usually 12 bit or 14 bit. I can assure you that you cannot visually see any difference between them, because both depths provide a multiplicity of possible tonal levels that is far beyond the limits of human vision - 4,096 levels for 12 bit and 16,384 for 14 bit. Even an 8 bit image with its (seemingly) paltry 256 possible levels is beyond the roughly 200 levels the eye can perceive. And as has been said, LR's internal calculations are done to 16 bit precision no matter what the input depth (although your monitor is probably not displaying the previews at more than 8 bit depth) and at export the RGB image can be written to a tiff or psd in 16 bit notation. The greater depth of 14 bit Raws can possibly (although not necessarily) act as a vehicle for greater DR which might be discerned as less noise in the darkest shadows, but this is not guaranteed and applies to only a few cameras.

  • How to create a picture indicator from an array of 32-bit integers

    My problem is simple. I have an array of signed 32-bit integers that represents image information. I want to display that information in a picture indicator on the front panel of my VI, but none of the picture indicators accept the array. What conversions or picture control do I need to use?
    thanks!

    Try Draw Unflattened Pixmap.vi.
    Lynn
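
    In case it helps to picture what such an array usually contains: for a 24-bit color pixmap, each 32-bit element typically packs one pixel's red, green, and blue bytes (0x00RRGGBB), which is why a pixmap-drawing VI can take the integer array directly. A hedged Java sketch of that packing, purely to illustrate the layout (not LabVIEW code):

        // Pack and unpack one pixel stored as 0x00RRGGBB in a signed 32-bit integer.
        public class PixelPack {
            static int pack(int r, int g, int b) {
                return ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
            }
            public static void main(String[] args) {
                int pixel = pack(255, 128, 0);              // an orange pixel
                System.out.printf("%06X%n", pixel);         // prints FF8000
                int red   = (pixel >> 16) & 0xFF;           // 255
                int green = (pixel >> 8)  & 0xFF;           // 128
                int blue  =  pixel        & 0xFF;           // 0
                System.out.println(red + " " + green + " " + blue);
            }
        }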

  • How to resolve audio sampling and bit rate differences before sound mix

    I'm editing a project where an additional recorder (Zoom H4n) was used and synched to the camcorder track with Plural Eyes.  Some of the sampling and bit rates don't match within the Plural Eyes synchronized clips and I need to send this out for a sound mix.  Could I use Compressor to transcode the audio? Would I have to go back to my original separate tracks, transcode in Compressor and then re-synch using Plural Eyes?  Next time ProRes from the beginning -
    but if there's an easier fix for my current problem it would be much appreciated.

    Thanks Michael -- you've helped me with the project before (back in May of this year). 
    I started this project as a newbie back in 2010 and was a bit overwhelmed (i.e. just happy to get everything into FCP and be able to edit).
    I'll try the Media Manager solution, but if that doesn't work I think I'll live with the one-frame drift.  I'm assuming
    the only other alternative would be to go back to all the original video and audio files; transcode using Compressor, and then re-synch with Plural Eyes and go back in to match up the edits?
    Thanks to your continuing help, I should know how to set things up correctly from the get go for the next one!

  • iTunes plays purchased HD videos fine, but constantly stalls while trying to play home-created videos converted to the same file format and bit rate?

    iTunes plays purchased HD videos fine, but constantly stalls while trying to play home-created videos converted to the same file format and bit rate.
    System specs
    Win 7pro x64
    ATI 5830
    Phenom x6 1055t
    8gigs ram
    1tb 7200rpm sata hdd
    500 gig 5400 rpm sata hdd


  • Multi Track settings (Sample Rate and Bit Rate)

    I'm setting up my multitrack session in 5.5 as 44100 Hz, 16-bit, but when I create it, it says it's 44100 Hz, 32-bit (at the bottom of the multitrack).  Is there a way I can change this setting?  I've explored the preferences all day long and still can't find any answers.
    Thanks

