Storing 32-bit numbers in PLC registers

I am using LabVIEW 7.1 and DSC/OPC to communicate with an Allen-Bradley SLC 5/05 over Ethernet. In my application, I want to write and read part numbers ranging from 0 to 99999999 to/from the N registers of the PLC. Since the N (integer) registers of the 5/05 can't store a value of 99999999, I attempted to use the Split Number function and store the value in two separate N registers. Then I could take those two registers and use the Join Numbers function to re-establish the value when reading. What happens is, I will write one value and read back something totally different. If the lower word is greater than 32767 when split, I get bad results.

"If the lower word is greater than 32767 when split, I have bad results"
You don't say which OPC server you're using, but if it's one of the Industrial Automation/Lookout drivers for AB that came with LabVIEW DSC, here's a guess at the problem:
With a 32-bit input, the Split Number function produces two U16 values. A quick check of the IA/Lookout documentation for the AB drivers shows that the N-register data members are configured to accept 16-bit _signed_ integers (-32768 to 32767). When you try to write 0x8000 as 32768 to your low-word tag/register, DSC/OPC just coughs.
Suggested possible solution:
Typecast the low-word U16 from your split to I16 and write that value to the tag/register.
If you're using another OPC driver, the problem is probably something similar.
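If it helps to see the arithmetic outside LabVIEW, here is a minimal Python sketch of the split/typecast/join round trip (the register I/O itself is omitted and the function names are mine):

```python
# Round-tripping a 32-bit part number through two 16-bit signed PLC
# registers. The low word is reinterpreted as a signed I16 before
# writing, then masked back to unsigned when rejoining on the read side.

def split_u32(value):
    """Split a 32-bit value into (hi, lo) words, lo reinterpreted as I16."""
    hi = (value >> 16) & 0xFFFF
    lo = value & 0xFFFF
    # Typecast, not numeric conversion: 0x8000..0xFFFF become -32768..-1,
    # which is what a signed 16-bit register will accept.
    if lo >= 0x8000:
        lo -= 0x10000
    return hi, lo

def join_u32(hi, lo):
    """Rebuild the 32-bit value, masking the signed low word back to U16."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

part = 99999999               # low word is 0xE0FF = 57599, i.e. > 32767
hi, lo = split_u32(part)      # lo comes back as -7937, which fits an I16
assert join_u32(hi, lo) == part
```

The key point is that the mask in `join_u32` undoes the sign reinterpretation, so negative low words rejoin correctly.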

Similar Messages

  • Must load a matrix or array with 90 individual 16-bit numbers, then do matches with a Vision inspection result.

    What is the best way to load 90 16 bit numbers into an array or matrix, and then compare those 90 values to a Vision system result?
    The operator manually enters the value into a display, assigns the value to a memory location (Bin #) and clicks "Enter".
    Each of the 90 locations will have a stored value in it.
    A Vision system inspection will transmit a result that matches one of the 90 entries, so we then have to try to match the value and determine which bin to sort the result to.
    We are learning a lot about LabVIEW, but not fast enough.
    Thank you in advance for your help.
    Sincerely,
    Rich

    Rich
    It seems like you just want to create a histogram of the image. There is an IMAQ Histogram VI (Vision and Motion >> Image Processing >> Analysis). It allows you to define the max and min values along with the "number of classes" (or bins). I believe this may accomplish what you are looking to do. As a note, there is also an IMAQ Histograph VI in the same palette.
    Cameron T
    Applications Engineer
    National Instruments
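The matching step itself (deciding which of the 90 stored bin values a vision result corresponds to) is, at bottom, a nearest-value lookup. A minimal sketch, outside LabVIEW; the tolerance threshold is my assumption, added to reject results that match no bin:

```python
# Given the operator-entered bin values, find the bin whose stored value
# is closest to the vision system's result.

def match_bin(stored, result, tolerance):
    """Return the index (Bin #) of the closest stored value, or None."""
    best = min(range(len(stored)), key=lambda i: abs(stored[i] - result))
    return best if abs(stored[best] - result) <= tolerance else None

bins = [100, 200, 300]          # stand-in for the 90 operator entries
assert match_bin(bins, 201, tolerance=2) == 1
assert match_bin(bins, 150, tolerance=2) is None
```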

  • All my stored data in Numbers are gone. What do I do?

    All my stored data in "Numbers" are gone. What do I do?

    Is every workbook gone, or is the data in one workbook gone? Can't you open a workbook, any workbook? Please be a little more specific about the circumstances so we can try to help out.
    Thanks
    Jason

  • How to combine two 8-bit numbers to make a 16-bit number?

    Hi
    My electronics ADC outputs the result of each conversion over two bytes because it is a 10-bit number. This is then fed into LabVIEW via the serial port. I have the two bytes stored in separate elements via Unbundle on the block diagram.
    My current VI has the two bytes stored separately. How can I combine the two?
    So far I have
    1st byte sent : (MSB)bit7/bit6/bit5/bit4/bit3/bit2/bit1/bit0(LSB)
    2nd byte sent : (MSB)X/X/X/X/X/X/bit1/bit0(LSB)
    I can send the two bytes from my electronics in either order, whichever suits LabVIEW best.
    At present, byte 1's maximum value is 255 and byte 2's is 3.
    I would like to combine these two bytes to form the following:
    (MSB)X/X/X/X/X/X/bit9/bit8/bit7/bit6/bit5/bit4/bit3/bit2/bit1/bit0(LSB)
    This way, the combined two bytes have a range up to 1023, a true representation of the ADC value. This value can then be converted and displayed on a dial.
    If anybody has any ideas or help, it would be appreciated. I can attach a VI to explain what I am doing if that helps or if my details above are unclear.
    Thanks

    Sign extension of a 10-bit integer: input two bytes, output the sign-extended 10-bit value. The output will be in the range -512 to 511. Note: there is no check that the high byte is greater than 3!
    Code in LV 6.0.2
    Waldemar
    Attachments:
    SignExtend10Bit.vi ‏17 KB
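The bit arithmetic here can be sketched outside LabVIEW like this (in LabVIEW you would use Join Numbers plus a mask; the Python below just shows the math, including the optional sign extension Waldemar's VI performs):

```python
# The second byte carries bits 9..8 of the ADC reading in its two low
# bits; the first byte carries bits 7..0.

def join_10bit(low_byte, high_byte):
    """Combine the two bytes into an unsigned 10-bit reading (0..1023)."""
    return ((high_byte & 0x03) << 8) | (low_byte & 0xFF)

def sign_extend_10bit(value):
    """Optionally treat the 10-bit reading as signed (-512..511)."""
    return value - 1024 if value & 0x200 else value

assert join_10bit(0xFF, 0x03) == 1023   # full scale
assert join_10bit(0x00, 0x02) == 512
```

Masking the high byte with 0x03 also supplies the missing range check the reply mentions.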

  • Writing binary to file with 24 or 32-bit numbers

    I am using an NI 4472 DAQ to sample analog data at 24 bits and I want to write the data to disk. However, LabVIEW only has a VI to write 16-bit data to disk. Is there a way to write 24- or 32-bit binary numbers to a file?

    The VI you are looking at is probably one of the "Easy VIs" that is set up for a specific application. You can create more general programs to write a binary file with any data type you desire. I would recommend taking a look at the Write Binary File example that ships with LabVIEW. It shows a more general approach to writing data to a binary file. In that example they write double-precision numbers, but you could easily replace the data with I32s.

  • Storing strings and numbers in a spreadsheet

    In this file I understand that the data types are incorrect, but what I don't know is how to fix it so I can write arrays filled with strings and arrays filled with numbers to separate columns in an Excel spreadsheet.
    Solved!
    Go to Solution.

    Sorry, here it is. Thanks.
    Attachments:
    learning with kevin 17.vi ‏119 KB

  • Does Java have a problem storing floating-point numbers?

    Hi, just a quick question. I have been programming in Java for at least four or five years now and I feel rather comfortable with it. However, just the other day I hit a problem with Java which I was hoping someone on this forum could explain.
    Consider the following simple statement:
    double d = 3.0 / 5 / 3;
    Let's say I printed this number.
    Why does Java compute it as 0.19999999999999998 rather than 0.2,
    and 2.44 - 7.006 as -4.566000000000001 instead of -4.566?
    Frankly, until now I never knew Java behaved this way, and it seems so bizarre.
    I wrote a similar program in C and Perl just to see what happens, and they produced the expected result.
    Anyway, the program I was writing at the time was really trivial stuff and this problem doesn't need an urgent solution, but I would like to know why this is happening and what the solution might be.
    Oh, just in case: the version of the JDK I am using right now is (build 1.4.2-b28).
    Thanks

    Take a look at some of these links:
    http://java.sun.com/developer/JDCTechTips/2003/tt0204.html#2
    http://java.sun.com/developer/JDCTechTips/2001/tt0807.html#tip1
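This is IEEE 754 binary floating point, not a Java bug: neither 0.6 nor 0.2 has an exact binary representation, so the rounding error surfaces when the full double value is printed. The same arithmetic reproduces in Python, shown here along with the usual decimal-type fix (Java's equivalent is java.math.BigDecimal):

```python
from decimal import Decimal

# Binary doubles: 3.0/5 is the double nearest 0.6, and dividing that by
# 3 lands on the double below 0.2, which prints with the trailing ...998.
d = 3.0 / 5 / 3
print(d)                        # 0.19999999999999998

# Exact decimal arithmetic avoids the binary representation error.
exact = Decimal("3.0") / 5 / 3
print(exact)                    # 0.2
```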

  • Storing Numbers in a File

    I have a problem storing numbers in files. When I store them, they are stored as ANSI characters, which means I have a limit of 255; anything over this becomes a '?'.
    How would I go about getting around this limitation?
    Should I store them as strings?

    If you have a string which contains only a number, you can use the following method to get the number:
    String s = "123";
    int i = Integer.parseInt(s);
    So you could store one number per line in the file, which is the simplest way to do it.
    Alternatively, you can store several numbers on each line, so long as they are separated by the same character, e.g. a space or a comma. Then use StringTokenizer to break the string up into tokens according to the delimiter you specify:
    String s = "1,2,3,4,5,6";
    StringTokenizer st = new StringTokenizer(s, ",");
    while (st.hasMoreTokens()) {
      // Each time this loop goes round, i equals the next number in the
      // string. nextToken() returns a String, so it must be parsed.
      int i = Integer.parseInt(st.nextToken());
    }
    So you can store numbers however you like in the file, so long as the numbers are all stored in the same way (this keeps it simple).

  • Are there any example vi's for implementing a circular buffer between a plc, opc server, and labview dsc??

    I am storing a block of data inside PLC registers and reading this group into LabVIEW as a continuous set of datapoints. I am counting the number of scans in the PLC, and sometimes the number of points collected inside LabVIEW doesn't match.

    To explain a little about tag updating:
    The LabVIEW DSC tag engine is not updated on every change of the value within the PLC. There are, in fact, several "deadbands" that must be crossed before those tags are updated:
    1) The OPC server has a deadband: the PLC register value has to change by a certain % before it is recorded.
    2) In the LabVIEW DSC module, there is an I/O group deadband that determines when the tag engine is actually updated.
    Both of these deadbands must be satisfied before a new value is recorded in the LabVIEW DSC tag engine.
    Therefore, I would check your OPC server's deadband (configurable in the OPC server configuration utility) and also the I/O group deadband for those tags (configurable in the tag configuration editor).
    If this doesn't resolve the issue, please let me know. Thanks.

  • GE fanuc PLC & Lookout

    I want to build a SCADA system which will poll several machines with GE Fanuc PLCs. I intend to use National Instruments Lookout, which supports the mentioned PLCs, but the problem is that I can't access (force) I/O addresses from Lookout. In LM90 I must press F11 before forcing any variable, and then F12. How can this be done from Lookout or any other application?

    It can't be done unless NI finally updates the GE.CBX! They need to add manipulation of the control bit on the "forceable" registers. This feature is huge for Lookout's ability to have a "total" interface with the 90-30 and 90-70 PLCs.
    Come on guys at NI....how about a late Christmas present?
    Quote:
    The GE Fanuc 90-30 and 90-70 PLCs were well in place throughout the world BEFORE Lookout at Georgetown Systems was ever on more than one floppy drive.
    I'm a big believer in open systems, but unfortunately, that is more of a dream than a reality.
    Consider that GE is not your run-of-the-mill startup company, that this product had a huge footing before anything like Lookout existed, and that they make huge amounts of cash pushing and selling someone else's software package, "Cimplicity", by a major automation software competitor.
    I'm not saying that Lookout should write proprietary drivers for all the PLC's that come along in the future. It just seems that since they already paid GE Fanuc for the rights to interface their product, that they would just get it done right.
    Lastly, it really did take Steph less than an hour to modify and email a major CBX mod to me back in 1997. I didn't expect any guarantees. I was just so happy to get the new features.
    I'm frustrated to know that the reason Lookout has not taken off is mostly for the lack of people such as Steph along with a corporate commitment to make it happen.
    It is very sad to know that the LabVIEW name has to have all the attention and funding, even when Lookout will always have a separate and still unmatched role in the automation software scene. National Instruments can't ever expect Lookout to take off with this attitude, and I'm convinced that, in some weird way, NI does not want Lookout to succeed.
    Lastly, NI promised to release the GE Fanuc source code to me years ago but it never happened. Something is not right in Austin.
    Cheers,
    Ed

  • How do I process serial port strings as bits

    In response to my commands, my instrument is sending bytes to my serial
    port. In one instance, 2 bytes are received. I want to treat these 2 bytes
    as a group of 16 bits.
    The VISA and Compatibility Serial functions return these bytes from the
    serial port to LabVIEW clearly labelled as a "string".
    Everything I can find in the way of LabVIEW functions and VIs doesn't want
    to do bit twiddling, bit swapping, and bit dropping with "string" data.
    I thought "Hex String To Number" could be used here, but I can't find a
    way. The two bytes in question can be represented as hex, but the data are
    not the ASCII codes for the hex representation of a binary number; they
    are the binary number itself. "Hex String To Number" seems to want ASCII
    codes.
    You can feed a hex number typed into a control wired into "Hex String To
    Number" and you get a meaningful number. You can feed the 2 bytes from the
    serial port into an indicator set to read in hex and you get a hex number
    that is a correct representation. But that is LabVIEW handing them around
    to itself. I need to get my "hands" on them.
    I can't feed those same bytes that show up as a correct hex representation
    in an indicator into "Hex String To Number", or anything else so far, and
    get a number that is useful for further processing.
    I thought of "Variant To Data", but I can't find enough reference material
    to understand how to use it. A Boolean array seems like a bit of a weird
    approach, so I thought I'd ask before I looked into that.
    I'm used to dealing directly with binary numbers on the processor stack; I
    call them whatever I want and turn them into anything I feel like.
    I'm sure I'm staring the solution in the face, but I can't find any way to
    persuade LabVIEW to treat this "string" data as 16 bits.
    I've got the 16 bits, which is better than not having them, but I don't
    have much hair left.

    Duh, well I finally discovered the "Unflatten From String" function. A guy
    just feeds in the bytes he's collected from his serial port that LabVIEW
    thinks are a "string", and out come lovely little unsigned 16-bit numbers,
    or whatever other type of number he wants to turn the bytes into. And
    there are great little bit twiddlers available after that, like "Swap
    Bytes", and you can mask out bits with the logic operators. Why, this is
    fun. There's nothing like being a moron...... fly me to the moon..........
    "David Lewis" wrote in message
    news:[email protected]..
    > The two bytes would come from a serial port read VI in LabVIEW, classed
    > as a string. For instance, D3 and 02. The output would swap the two
    > bytes, i.e. to 02 and D3, consider the two swapped bytes as 16 bits,
    > drop the six most significant bits, and output the ten bits that are
    > left as an integer classed by LabVIEW as some kind of number, not a
    > string.
    >
    > Your example StringToBits_Converter.vi I found on the ni.com site
    > unfortunately gives an error message and refuses to open on my system,
    > saying it comes from a newer version of LabVIEW 6 than I am running.
    > Mine says 6.0.1b3. Thank you very much anyway.
    >
    > "FightOnSCTrojan" wrote in message
    > news:[email protected]..
    > > In other words, you want to create a VI in which the input is 2
    > > strings (i.e. AB) and the output is the converted array bits (e.g.
    > > 1010101010101010)?
    >
    >
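For anyone finding this later, the Unflatten From String + Swap Bytes + mask recipe is equivalent to the following sketch (Python here just to show the byte math; the "<H" format assumes the low byte arrives first, matching the swap step described above):

```python
import struct

# Reinterpret the raw 2-byte serial "string" as a 16-bit unsigned
# integer, then mask off the upper six bits to keep the 10-bit reading.
raw = b"\xd3\x02"                    # bytes as received, e.g. D3 then 02
(value,) = struct.unpack("<H", raw)  # "<H": little-endian unsigned 16-bit
reading = value & 0x03FF             # drop the six most significant bits
print(reading)                       # 723 (0x02D3)
```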

  • I want to enter ICE numbers that can be seen from the lock screen of my iPhone. Any solutions?

    ICE numbers (In Case of Emergency) help first responders make important phone calls on your behalf in the event of an emergency. Typically these are stored as ICE numbers in your contacts, however, your contacts can't be accessed from a lock screen if you have set a password to get into your phone. Any wonderful solutions? I have developed one solution that works: I entered my ICE information into "Notes", and then took a screen shot of the information. Then I mailed it to myself and saved it in my camera roll. Once I selected the "photo" of my ICE information, I saved this as my lock screen photo. Now when anyone turns on my phone, they see my ICE numbers and they do not have access to any other parts of my phone.

    Having access to contact numbers stored in a phone without knowing the passcode sort of nullifies part of the reason for the passcode protection in the first place. However, I do support giving users more freedom, so long as they understand the possible negative consequences.

  • How can I set specific bits in a 16-bit integer?

    Hello everyone,
    As the title says, I need to set a specific bit in a string which is then sent to a motor. I need to be sure that my command is correct, as I am experiencing trouble with that motor and need to identify its source.
    First of all, my strings have to be in little-endian order. The structure of the string should be the following:
    Change Velocity command 'V'xxCR: 056h + one unsigned short (16-bit) integer + 0Dh (Note: uppercase 'V')
    Note: The lower 15 bits (bit 14 through 0) contain the velocity value. The high-order bit (bit 15) indicates the microstep-to-step resolution: 0 = 10, 1 = 50 uSteps/step.
    Until now, I used Flatten To String to convert 32-bit integers into bytes of the correct order. I thought I could use the Join Numbers function, but that only works on at least 8-bit numbers, and there is no "1-bit number". I searched for a way to build the string and set the bits via a Boolean cluster, but I did not really understand how to transfer this to my problem.
    How can I build up the correct 16-bit integer (e.g. set the velocity to 10000 with a resolution of 50 µSteps/step)?
    I would like to add the "V" and the CR via Concatenate Strings to the 16-bit integer, but other possibilities are also welcome.
    I have seen the examples of bit manipulation in C code, but I wish to do this with LabVIEW, as I am not familiar with C, MATLAB, and so on.
    Thank you very much for your help!
    Solved!
    Go to Solution.

    You really need to learn Boolean logic and how to shift bits around.
    AND is really good for masking out bits (forcing them to 0) and OR is really good for setting bits. Then Logical Shift is used to get the bits into the right places before doing the AND and OR.
    NOTE: Rate is an enum with 10 being a value of 0 and 50 being 1.
    Attachments:
    Bit Packing.png ‏15 KB
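Outside LabVIEW, the packing described in the command structure above (set bit 15 for the resolution, OR in the velocity, flatten little-endian between the 'V' and the CR) can be sketched like this; the framing is taken from the post and the function name is mine:

```python
import struct

def velocity_command(velocity, fifty_usteps):
    """Build b'V' + little-endian U16 + CR for the motor command above.

    Bit 15 selects the resolution (0 = 10, 1 = 50 uSteps/step); bits
    14..0 carry the velocity.
    """
    if not 0 <= velocity <= 0x7FFF:
        raise ValueError("velocity must fit in 15 bits")
    word = velocity | (0x8000 if fifty_usteps else 0)
    return b"V" + struct.pack("<H", word) + b"\r"

# 10000 at 50 uSteps/step: 0x2710 | 0x8000 = 0xA710, sent low byte first.
cmd = velocity_command(10000, fifty_usteps=True)
print(cmd)      # b'V\x10\xa7\r'
```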

  • Single string tag expanded into 100 plc register values

    I found a way to read 100 registers of PLC data with a single string tag, provided you can guarantee that none of the PLC registers are zero. A register value of zero acts like a null ASCII terminator and truncates the string. Define the string tag as 200 bytes and uncheck the "text data only" box. Use the code in the attached picture to convert the string bytes back into decimal register values.
    Attachments:
    string tag to 100 U16 values.gif ‏2 KB

    Isn't that code exactly the same as typecasting to a U16 array?
    Attachments:
    CastingU16.gif ‏3 KB
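The typecast can be mirrored outside LabVIEW like this (LabVIEW's Type Cast interprets the flattened string big-endian, hence the ">" format; this assumes the tag really delivers all 200 bytes):

```python
import struct

def string_to_registers(data):
    """Typecast a 200-byte string tag to 100 U16 register values."""
    if len(data) != 200:
        raise ValueError("expected 200 bytes for 100 registers")
    # ">100H": big-endian, 100 unsigned 16-bit values, matching Type Cast.
    return list(struct.unpack(">100H", data))
```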

  • PID (FPGA) - Convert 16bit output to 24 bit

    Hello,
    I'm controlling a device in closed loop using the PID (FPGA) VI. The output of this is an I16, but it would be nice if I could convert it to an I24 or I32. The output of this VI is sent to two different analog outputs on my op-amp circuitry. The op-amps are set up as summing amps, so it would be ideal to send the lower 8 bits to the first AO (fine output), and the top 16 (15 + sign) to the second AO (coarse output).
    I haven't found an easy way to do this yet. So far, I've been thinking of just taking the I16 output value and splitting it into two 8-bit numbers. The lower 8 bits control my first AO. The upper 8 bits are combined with another 8-bit register that gets incremented every time the 8-bit number saturates, and control the 2nd AO.
    I've attached a pic to make it clearer (I hope?). If I should be doing this another way, please let me know.
    Or maybe I need to be controlling this system with two parallel PIDs, one for the coarse control and the other for the fine? The feedback would be the same for both PIDs.

    Hi bones,
    even on FPGA you should be able to use the Split Number and Join Numbers functions...
    And you can use Boolean operators on numbers, like sign = I16 AND 0x8000.
    That way you get away from all those number-to-Boolean-array-to-cluster-of-Booleans-to-unbundle-to-bundle-to-Boolean-array-to-number conversions (also called Rube Goldberg code; search for this in the forum!).
    Best regards,
    GerdW
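Before any extra accumulator logic, the coarse/fine split described above amounts to this bit-level sketch (not FPGA code; plain Python just to show the split — working on the unsigned bit pattern keeps it well defined for negative PID outputs):

```python
def split_coarse_fine(pid_out):
    """Split a signed 16-bit PID output into (coarse, fine) unsigned bytes."""
    u = pid_out & 0xFFFF       # reinterpret the I16 as its 16-bit pattern
    coarse = (u >> 8) & 0xFF   # upper 8 bits (incl. sign bit) -> coarse AO
    fine = u & 0xFF            # lower 8 bits -> fine AO
    return coarse, fine

assert split_coarse_fine(0x1234) == (0x12, 0x34)
assert split_coarse_fine(-1) == (0xFF, 0xFF)   # two's-complement pattern
```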
