LabVIEW color interpretation

I am creating a color *.bmp file in Windows using Paint, FTPing it over to Unix, dumping the picture into Xview using the xwd command, and then importing the picture into LabVIEW to create a LabVIEW picture. To create the LabVIEW picture I use Edit, Import Picture from File, then choose the file. The problem is that when I view the picture in LabVIEW, all of the colors are mixed up, i.e. a yellow box in the .bmp file is a cyan box in the LabVIEW window. And vice versa: a cyan box in the .bmp file is a yellow box in LabVIEW. This color mix-up applies to every color used, e.g. a red box in the .bmp becomes another color in LabVIEW.
fahlers suggested: It looks like the sequence of the 3 RGB bytes that make up a color value is reversed somewhere. Yellow (normally RGB = hex FF FF 00) is being read as hex 00 FF FF, which LabVIEW displays as cyan.
Is there a practical way of doing this in LabVIEW, i.e. changing the way LabVIEW interprets color hex values? In other words, LabVIEW is reading hex color values as "33 22 11", and I need it to read them as "11 22 33."

If you're importing the picture into a LabVIEW picture control, I would take the output color array and swap the color channels. The RGB VIs can be found in "Graphics & Sound >> Picture Functions"...
See the attached picture; a C sketch of the byte swap follows below:
Attachments:
Snap.jpg 10 KB
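
The channel swap itself is simple byte arithmetic. As a minimal sketch of the logic (plain C rather than LabVIEW, and assuming the packed 0x00RRGGBB layout a LabVIEW color box uses), exchanging the red and blue bytes is exactly the RGB-vs-BGR reversal described above:

#include <stdint.h>
#include <stdio.h>

/* Exchange the red and blue bytes of a packed 0x00RRGGBB color value.
   The swap is its own inverse, so it also turns 0x00BBGGRR back into RGB. */
uint32_t swap_red_blue(uint32_t color)
{
    uint32_t r = (color >> 16) & 0xFF;
    uint32_t g = (color >> 8) & 0xFF;
    uint32_t b = color & 0xFF;
    return (b << 16) | (g << 8) | r;
}

int main(void)
{
    /* yellow (FF FF 00) comes out as cyan (00 FF FF), and vice versa */
    printf("0x%06X\n", swap_red_blue(0xFFFF00));
    return 0;
}

In LabVIEW itself the equivalent is a loop over the picture's color array using the RGB VIs from that palette, with the red and blue values crossed.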

Similar Messages

  • How to get the color interpretation (transparency/opacity)

    Hi all,
    I am trying to translate colors that may be affected by opacity & transparency into a single solid color.
    Input colors may be RGB, CMYK or LAB.
    Right now, I am:
    1) duplicating the source color (because Paper/Black can't be tweaked, I need to work on a copy)
    2) setting the duplicated color's space to CMYK
    3) creating a dummy color in CMYK
    4) scaling the CMYK values of the duplicated color by transparency and opacity,
    i.e. source color is Black [0,0,0,100] at 50% opacity and 50% transparent (frame effects) => [0*0.5*0.5, 0*0.5*0.5, 0*0.5*0.5, 100*0.5*0.5] => [0, 0, 0, 25] - see the sketch at the end of this thread
    5) setting the dummy color's space to the source color space
    6) doing my computations based on dummyColor
    7) removing the duplicated color
    8) removing the dummy color
    The process is working, but the deltas on the colors are a serious issue. Not big deltas, but enough to undermine trust in the color interpretation.
    So the question is: is there any way to get a correct interpretation of a color affected by opacity/transparency settings?
    Do you have any link towards an understandable paper about how LAB/RGB and CMYK colors are affected by opacity/transparency?
    Thanks in advance,
    best,
    Loic

    @Loic – nothing much to offer here from my side. Just a source from Adobe which is very likely outdated, but you can mine it for hints while researching the theme:
    "Transparency in PDF, Adobe Developer Technologies, Technical Note #5407" from 30 May 2000 at:
    http://www.planetpdf.com/mainpage.asp?webpageid=837
    Especially chapters:
    3 Color Compositing Computations
    4 Shape and Opacity Computations
    5 Groups
    6 Soft Masks
    7 Color Space and Color Rendering Issues
    should get you going. At least a bit. See for yourself…
    Uwe
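
    As a concrete illustration of step 4 above, here is a minimal C sketch of the poster's scaling formula - each CMYK channel multiplied by opacity times (1 - transparency). It mirrors the approximation in the question (compositing against white paper), not the full compositing model from the Adobe note:

    #include <stdio.h>

    /* Step 4 from the question: scale each CMYK channel (in percent) by
       opacity * (1 - transparency). A rough approximation, not Adobe's
       full compositing model. */
    static void scale_cmyk(double cmyk[4], double opacity, double transparency)
    {
        double factor = opacity * (1.0 - transparency);
        for (int i = 0; i < 4; i++)
            cmyk[i] *= factor;
    }

    int main(void)
    {
        double black[4] = {0, 0, 0, 100};   /* Black at 50% opacity, 50% transparent */
        scale_cmyk(black, 0.5, 0.5);
        printf("[%g, %g, %g, %g]\n", black[0], black[1], black[2], black[3]);
        /* prints [0, 0, 0, 25], matching the worked example in the question */
        return 0;
    }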

  • Problem with LabVIEW color options?

    I've tried to set the front panel and menu colors to the predefined system color 'Panel & Object' in the Tools > Options > Colors dialog. When LabVIEW is restarted, the front panel and menu are black rather than the expected grey!
    I use LabVIEW 6.1 on Windows NT.
    I guess that this is a bug. Is there a fix or work-around?
    Selecting the color from the palette works fine, but will not make the VI adapt to the system colors.
    Kind regards / Med venlig hilsen
    Torben

    I think there is a workaround that will work on the PC and the Mac, but not on X Windows. Go into your ini file and find the place where the front panel (or whatever) color is set. In your case I think it will be set to 000016. Change it to 01000016. You can do this with any symbolic color that you like and it should work.
    Good Luck.

  • What is the standard labview color palette

    Hi, I am using a VI icon extractor to obtain the icon of a VI. I wish to save this icon as a *.bmp but I do not know what to use for the palette. If I send the image to the LabVIEW picture functions they draw it correctly, as they contain the default colour palette. Does anybody know the default colour palette, or how to get at it? Thanks. Tom.

    Tom;
    Once you draw the icon in the picture indicator, you can use the following VI to extract all the information you need from the picture indicator to save your icon as a .bmp file:
    Picture to Pixmap
    Just recall the size of the icon is 32x32 pixels.
    If you are using Moore's icon tools, you can use the vi attached. Moore's tools can be found at:
    http://www.mooregoodideas.com/Downloads/Downloads.htm
    Regards;
    Enrique
    www.vartortech.com
    Attachments:
    icon_test_1.vi 76 KB

  • How can I convert a bitmap image into an array in LabVIEW 6.0.2, then to a spreadsheet for further analysis without NI-IMAQ or Vision?

    I do not have NI-IMAQ or NI Vision.
    I acquired the image using a picolo board then just saved it as a bitmap file.

    You want to convert it to an array of what? Of numbers? Of LabVIEW colors?
    The "Read BMP File.vi" VI outputs a 1-D array of the pixels, but I suspect it is not exactly in the format that you need. I am NOT sure, but I think that each group of 3 elements of that 1-D array (which is inside the cluster "image data" that the VI outputs) represents the red, green and blue levels of a single pixel (but it may be hue, saturation and lum.). Also, since it is a 1-D array, you would have to divide it by the width (which is also included in the "image data" cluster) to get some sort of 2-D array of data.
    You can find the "Read BMP File.vi" VI in the functions palete> "Graphics & sound">"Graphics Formats".
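
    If that guess about the layout is right - three bytes per pixel, rows stored one after another - the index arithmetic for treating the 1-D array as 2-D looks like the C sketch below. The RGB packing order is an assumption to verify against the actual "image data" cluster:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint8_t r, g, b; } Pixel;

    /* Fetch pixel (x, y) from a flat array of 3-byte pixels stored row by row. */
    Pixel get_pixel(const uint8_t *data, int width, int x, int y)
    {
        const uint8_t *p = data + 3 * ((size_t)y * width + x);
        Pixel px = { p[0], p[1], p[2] };
        return px;
    }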

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, and 1-4 dimensional arrays of these. The programming problem I try to avoid is that I have multiple different functions in my DLLs that all take as input, or return, all of these datatypes. Now I can create a polymorphic interface for all these functions, but I end up having about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all the polymorphic VIs in my LVOOP project get read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to open the project, and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.
    I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals of my memory block as a specific data type.
    So what I thought was the following. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. Then all of my many functions would return this kind of handle - let's call it a data handle. I can later convert this handle into a real datatype, either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to about 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So I practically need the same functionality a variant has. I cannot use variants, since I need to avoid making memory copies, and when I convert to and from a variant my memory consumption triples. I handle arrays that consume almost all available memory, and I cannot accept that memory is used ineffectively.
    The questions are: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't return it from my C code immediately, but only later at the next call to C code? And can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I really have to consider transferring our project from LabVIEW to some other development environment, like C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself, since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referred to the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG 9 KB
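
    For reference, here is a hedged C sketch of the DSNewHandle pattern the question describes: allocating a LabVIEW 1-D U8 array handle (an int32 dimension size followed by the elements, per extcode.h) for return to the diagram. Whether LabVIEW takes over disposal of such a handle is exactly the ownership question raised above, so treat this as a sketch rather than a complete recipe:

    #include "extcode.h"   /* from <labview>\cintools */

    /* LabVIEW 1-D U8 array: dimension size followed by the elements. */
    typedef struct {
        int32 dimSize;
        uInt8 elt[1];
    } U8Array, **U8ArrayHdl;

    /* Allocate and size a 1-D U8 array handle to hand back to LabVIEW. */
    U8ArrayHdl alloc_u8_array(int32 n)
    {
        U8ArrayHdl h = (U8ArrayHdl)DSNewHandle(sizeof(int32) + n * sizeof(uInt8));
        if (h != NULL)
            (*h)->dimSize = n;   /* caller fills (*h)->elt before returning it */
        return h;
    }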

  • Programming a c application which is calling a *.so built by LabVIEW.

    Hello all,
    This question probably has been asked, but I can't find the answer.  So here's my question:
    I have built a LabVIEW *.so in Linux and I want to call it from a C application.  The LabVIEW *.so returns a cluster of strings, and I want to know how to call it from my C application (memory allocation?).
    Here's the *.so source code, the function is named "testvi":
    Here's my c application source code:
    #include <stdio.h>
    #include <string.h>
    #include "testclusterofstrings.h"
    int main()
    {
        Cluster_Of_Strings clusterofstrings;
        Testvi(&clusterofstrings);
        printf("-------------\n");
        printf("String_A: %s", (*clusterofstrings.String_A)->str);
        printf("-------------\n");
        return 0;
    }
    I'm getting the following output when calling my application:
    LabVIEW caught fatal signal
    13.0 - Received SIGSEGV
    Reason: address not mapped to object
    Attempt to reference address: 0x0x19f5c381
    Segmentation fault (core dumped)
    So, what is the proper way to do this?
    Thanks,
    Michel
    Solved!
    Go to Solution.

    smithd wrote:
    If it's a LabVIEW-built DLL and you're passing parameters by reference, I'm not too surprised you have to initialize it (although I would expect LabVIEW to be friendly enough to allocate the data structures for you). Maybe if you passed it a null pointer instead it would work? From your original post, maybe try this:
    int main()
    {
        Cluster_Of_Strings *clusterofstrings = NULL;
        Testvi(clusterofstrings);
    }
    For some reason I remember reading that LabVIEW will interpret the null as a sign that it needs to allocate the structure, but I could be totally insane on that point.
    If that doesn't work, then yes, you'll have to allocate all the handles as appropriate. From <labview>\cintools\extcode.h you can see that a string is defined as follows:
    typedef struct {
    int32 cnt; /* number of bytes that follow */
    uChar str[1]; /* cnt bytes */
    } LStr, *LStrPtr, **LStrHandle;
    Since you have size-0 arrays, I think you really just need to call DSNewHClr(sizeof(int32)), which will allocate a handle with all 0s, and 0 is what you want. The final result would be...
    int main()
    {
        Cluster_Of_Strings MeasInfo;
        MeasInfo.String_A = (LStrHandle)DSNewHClr(sizeof(int32));
        MeasInfo.String_B = (LStrHandle)DSNewHClr(sizeof(int32));
        Testvi(&MeasInfo);
    }
     Oh, and for the string functions, make sure you look at the built-in functions first before you make your own.
    Actually, the whole thing is both a little easier and more complicated at the same time. LabVIEW is fully managed with its data types but you have to follow that management contract when you interface to LabVIEW code from C.
    First, the earlier attempt at allocating a string handle of sizeof(int32) + sizeof(uChar) bytes without initializing the length element is wrong. That length element could contain any value, causing LabVIEW to wrongly assume the handle is already big enough to fill in its data, do nothing to resize it, and then write over the end of the allocated buffer.
    Also, initialisation of the structure itself with NULL is not going to work. The cluster has to be provided by the caller, as it is a fixed-size data area passed in as a pointer. However, initialisation of the string handles inside the cluster with NULL should work fine, since LabVIEW considers a NULL handle the canonical zero-length handle.
    However, after you have called the LabVIEW DLL function you are the owner of any memory that was allocated by that function and returned to you, just as you would be if you had allocated those handles yourself before the call. So proper etiquette is to deallocate it as well; this is not optional but a requirement, or you create memory leaks. It doesn't get noticed here, since your test program terminates right afterwards anyway, but it will bite you badly in a bigger application if you forget it.
    The code could then look something like this:
    char *LV_str_to_C_str(LStrHandle lv_str);   /* defined below */

    int main()
    {
        Cluster_Of_Strings MeasInfo;
        MeasInfo.String_A = NULL;
        MeasInfo.String_B = NULL;
        Testvi(&MeasInfo);
        printf("-------------\n");
        printf("String_A: %s\n", LV_str_to_C_str(MeasInfo.String_A));
        printf("String_B: %s\n", LV_str_to_C_str(MeasInfo.String_B));
        printf("size %d", (int)(sizeof(int32) + sizeof(uChar)));
        printf("-------------\n");
        /* we now own the handles the DLL allocated, so dispose of them */
        if (MeasInfo.String_A)
            DSDisposeHandle(MeasInfo.String_A);
        if (MeasInfo.String_B)
            DSDisposeHandle(MeasInfo.String_B);
        return 0;
    }

    /* Returns the pointer to the string buffer in a LabVIEW string handle that
       has been made sure to be zero terminated. */
    char *LV_str_to_C_str(LStrHandle lv_str)
    {
        if (lv_str && !NumericArrayResize(uB, 1, (UHandle*)&lv_str, LStrLen(*lv_str) + 1))
        {
            LStrBuf(*lv_str)[LStrLen(*lv_str)] = 0;
            return (char*)LStrBuf(*lv_str);
        }
        return NULL;
    }
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Can Not Import Apple ProRes 422 HQ Files Into Color

    I am able to navigate to the files I want to import and they then show up in the Setup window, but when I highlight a clip and then select the Import button, nothing happens. What am I doing wrong? These clips are from a Canon 7D and then converted to Apple ProRes 422 HQ.

    Check your "Shots" bin to see if they are importing, but are not in the timeline window. They could be importing, but simply aren't where you expect them to be. The timeline preview doesn't adapt, so it could simply be a matter of zooming or scrolling around to find them.
    If they really, really aren't importing, then it's likely some kind of hiccup in the conversion that COLOR doesn't like. When you select the clip in the COLOR browser, it is identified and may or may not qualify for import. What is COLOR interpreting the media to be?
    Is your timeline locked?
    jPo

  • Gray scale color table

    I want to make a picture out of an 8-bit unsigned integer 2D matrix of data. I can successfully do that using the "Draw Unflattened Pixmap" function. However, it requires a color map. If you don't wire the color map, it uses the default LabVIEW color map. I want to use a grayscale color map (numbers between 0 and 255 corresponding to colors ranging from white to black). I am not sure how to make this color map and implement it. I appreciate any help.
    Solved!
    Go to Solution.

    Jahan wrote:
    The problem seems to be solved this way. Although it seemed to work (I got a grayscale image), I am not sure if what I have done is technically correct (or if there is a better way of doing it). If someone wants to comment on it, I would appreciate it.
    Yes, this is technically correct (top). (Typically you can do it once, then right-click the indicator and "change to constant" and delete the loop code.)
    You might also flatten the RGB tool code to the diagram (bottom). This has the advantage that the entire loop will be calculated once at compile time and internally folded into a constant, so the loop never actually spins during execution of the program. (Of course this particular loop is trivial and nothing to worry about, performance-wise.)
    Message Edited by altenbach on 03-14-2009 09:57 AM
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    greyscale.png 6 KB
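
    The ramp itself is one line per entry. A minimal C sketch of the 256-entry white-to-black table the question asks for (in LabVIEW this would be the U32 array wired to the color map input):

    #include <stdint.h>

    /* Build a 256-entry grayscale table: index 0 is white (0xFFFFFF),
       index 255 is black (0x000000), with R = G = B throughout. */
    void make_gray_table(uint32_t table[256])
    {
        for (int i = 0; i < 256; i++) {
            uint32_t v = (uint32_t)(255 - i);
            table[i] = (v << 16) | (v << 8) | v;
        }
    }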

  • OLE Color Conversion

    I came across the function below.  What is the relationship between OLE and RGB color?  Can you give me some background on them?  For the function below, is the input supposed to be OLE or RGB?  Thanks!
    Convert OLE Color VI
    Owning Palette: 3D Graph Properties VIs
    Requires: Base Package (Windows)
    Converts LabVIEW colors to or from colors used by ActiveX.
    You can use the property page for the 3D graph to set all of the colors. Color conversion is necessary for use with ActiveX Property Nodes.
    Kudos and Accepted as Solution are welcome!

    If you read through the examples and source code you would have found your answer! Here is what MSDN has to say: http://msdn.microsoft.com/en-us/library/windows/desktop/ms694353%28v=vs.85%29.aspx. Basically, if the MSB is 0, the other 3 bytes are directly RGB colors; otherwise they are either a system color index or a palette index. The LabVIEW color is built in a similar way, but the meaning of the MSB is different.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
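
    A C sketch of the direct-color case, based on the explanation above. One byte-order detail to note: OLE/COLORREF values put red in the low byte (0x00BBGGRR) while a LabVIEW color box puts blue there (0x00RRGGBB), so a direct color converts with a red/blue swap. Treat that ordering as an assumption to verify against the Convert OLE Color VI itself:

    #include <stdint.h>

    /* Convert a direct (non-palette) OLE color to a LabVIEW color value.
       OLE/COLORREF packs 0x00BBGGRR; LabVIEW uses 0x00RRGGBB.
       Returns -1 when the MSB is nonzero (system color or palette index),
       which needs a lookup instead - see the MSDN page linked above. */
    int32_t ole_to_lv_color(uint32_t ole)
    {
        if (ole & 0xFF000000)
            return -1;
        uint32_t r = ole & 0xFF;
        uint32_t g = (ole >> 8) & 0xFF;
        uint32_t b = (ole >> 16) & 0xFF;
        return (int32_t)((r << 16) | (g << 8) | b);
    }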

  • How to interpret Line Returns in a text box

    Hello,
    I have a text file that contains several sentences that are tab delimited:
    For Example:
    The cat sat on the mat.   The dog sat on the cat.   [Where the two sentences are delimited]
    My first objective is to read these two statements into two separate text boxes… and I can do this.
    My second objective is to add a line return to each statement so that when in their respective text boxes they will appear as:
    The cat sat                    The dog sat
    on the mat.                    on the cat.
    I don’t know how to do this.  Any ideas…?  Ideally I want to insert something like “\n” into the sentence so that LabVIEW can interpret it as a newline/return command when it outputs it to the text box.
    Kind regards,

    Here's one possible solution (note the string format display settings, LV 2010).
    I use an array for the output for simplicity. You can index individual elements if really needed.
    First we create an array of sentences (tab is delimiter).
    Then, for each sentence, we create an array of words (space delimiter)
    We reshape the array of words to two rows (pad if the number of words is odd).
    We form the two line sentence using array to spreadsheet string with space as delimiter.
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    ProcessSentences.png 11 KB
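
    For readers without the attached diagram, the same word-splitting idea in a C sketch: break the sentence into words, then emit a newline after the first half (the first row takes the extra word when the count is odd, as in the reshape step above):

    #include <stdio.h>
    #include <string.h>

    /* Print one sentence as two roughly equal lines of words. */
    void two_line_wrap(const char *sentence)
    {
        char buf[256];
        char *words[64];
        int count = 0;

        strncpy(buf, sentence, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        for (char *w = strtok(buf, " "); w != NULL && count < 64; w = strtok(NULL, " "))
            words[count++] = w;

        int split = (count + 1) / 2;   /* first row takes the extra word if odd */
        for (int i = 0; i < count; i++)
            printf("%s%c", words[i], (i == split - 1 || i == count - 1) ? '\n' : ' ');
    }

    int main(void)
    {
        two_line_wrap("The cat sat on the mat.");   /* The cat sat / on the mat. */
        two_line_wrap("The dog sat on the cat.");   /* The dog sat / on the cat. */
        return 0;
    }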

  • What do the colored wires and boxes mean

    I remember years back finding a National Instruments webpage that showed all the LabVIEW colored "wires" and "boxes" and what they meant.  Does anyone know where that is now and/or if it still exists?  For example, a "magenta" box was a string... green was a boolean...
    Thanks.
    Solved!
    Go to Solution.

    It was the LabVIEW Quick Reference Card. At one point they stopped putting the datatypes on the card. Not sure which version stopped it. You can get the cards online: http://search.ni.com/nisearch/app/main/p/bot/no/ap/global/lang/en/pg/1/q/labview%20quick%20reference...

  • Missing help information?

    I find that some LabVIEW objects do not link to any description or help information. For instance, when I drop a control, say the "file path control" from the "string & path" palette, onto the front panel of a VI, there is "no description" available in the context help panel, there is no "help" option on the associated drop-down menu, and there appears to be no corresponding entry in the LabVIEW help documentation when searching for "file path control". This seems to be true for most, but not all, of the front panel controls.
    It is also fairly common to have difficulty finding results on search terms entered into the "Search the LabVIEW Help" dialogue. Very often I find quoted help entries on the LabVIEW web site that I can't locate on my machine. For instance, I spent a very long time searching the installed help documentation trying to figure out how to convert between the numeric value returned by a color box constant and the actual color contained within. A quick Google search returned a Knowledge Base entry on your website titled "How Does a LabVIEW Color Box Interpret a Numeric Value for Color?" This entry quotes help documentation from a LabVIEW product manual that explains how the color box constant is encoded. I realize that the entry on my machine may not be exactly the same as the entry quoted from your published manual, but darned if I can find anything that gives the slightest hint in my installed help documents. Is there something wrong with my LabVIEW installation?
    global variables make robots angry

    Sorry, I wasn't very clear in my first post. When I place a pre-written function on the block diagram, a handy description pops up in the help panel,  a detailed description is available in the labview help documentation and it has a nice help option on its drop down menu that I can click on for info. The controls that I select for the front panel are, of course, pre-written functions as well. That is to say; they were created by someone other than me and they do things in a specific fashion that might not be totally apparent (to a dummy like me). Because the drag and drop interface for building the block diagram is similar to the drag and drop interface for building the front panel, it instinctively feels like there should be some sort of description there of the control functions that come with labview, just like the descriptions of the block diagram functions that come with labview. I think what I was looking for might be the help button that I've managed to find by right clicking on the control object, then selecting properties from the drop down, then looking in the lower right hand corner of the properties section. Funny thing is, I read the path properties entry from the file path control that I dropped on the front panel, and I still can't figure out how the control selects the default search folder if it isn't specified, and I'm still not sure whether I can somehow programmatically set the default folder for all "file path controls" on the front panel at once. I'm still searching for that answer in the installed documentation because I feel like I should avoid posting lots of questions to the forum unless I'm sure that the answer isn't sitting right there in the documentation somewhere, which leads me to the second example, the color box constant conversion thing.
    I read that same help description that you found, which is why I was so confused to find that the number I was looking at was not hexadecimal and had a different number of bytes than the help file specified. It turns out that if you create a color box constant on the block diagram and pick a color, any color, nothing up my sleeve, then create an indicator to output that color, you'll find that the color is not a hexadecimal alphanumeric with six values and three bytes; it is instead represented by a thirty-two bit integer. That's one more byte than in the help description, and the integer didn't seem to have an obvious pattern in it that would give away how to turn a red-green-blue three-byte number into the 32-bit integer that the color box constant shoots out. So here's the official description from National Instruments that tells me what the heck is going on with the color box color-to-integer conversion:
    "A color is represented by a 32-bit integer, with the lower three bytes representing the red, green, and blue components of the color. For a range of blue colors, create an array of 32-bit integers where only the values of the low bytes change (the low byte contains the blue component). To create a range of gray colors, you need to create an array of 32-bit integers where the red, green, and blue values of each element are the same."
    So you see, there are four bytes, with one byte empty, that you have to assemble into an integer, or decode by doing the reverse - a sketch of the packing follows at the end of this post.
    and the quoted description is from the LabVIEW Picture Control Toolkit Reference Manual Version 1.0. But of, course, the description of how to convert from an integer into a color and vice versa must be in the regular documentation somewhere, because it's a fairly fundamental thing. So I looked in my installed help documentation again to see if I could construct a search phrase to find it, so I would be a little wiser about how to search for specific info in the help files, but I'm just not able to locate it. And, of course if I create a color box control and go into the properties and find the properties help button for the Framed Color Box Properties Dialog Box it doesn't actually say anything about what happens between you picking a color and it shooting a number over to the block diagram, which makes me think "Ok. there is a description of this pre-made control somewhere else, and I'm just not looking in the right spot to find it, because who would give you a big ol' palette of these pre-made controls to use, and leave you wondering what the heck some of the controls are doing when you use them?"
    So I was thinking that I'm not looking in the right spot for documentation, or I'm not searching in the best way, or that maybe I'm missing part of the help documentation or something.
    global variables make robots angry
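
    The assembly the quoted manual describes is plain bit packing. A minimal C sketch of both directions, with the top byte left empty for ordinary colors:

    #include <stdint.h>

    /* Pack red, green and blue into a LabVIEW color box integer: the lower
       three bytes are R, G, B and the top byte stays zero. */
    uint32_t pack_color(uint8_t r, uint8_t g, uint8_t b)
    {
        return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
    }

    /* ...and decode by doing the reverse. */
    void unpack_color(uint32_t color, uint8_t *r, uint8_t *g, uint8_t *b)
    {
        *r = (uint8_t)((color >> 16) & 0xFF);
        *g = (uint8_t)((color >> 8) & 0xFF);
        *b = (uint8_t)(color & 0xFF);
    }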

  • How can I convert OPENGL canvas graphics into bitmap image format.

    I used the following code, but no conversion is made. Please tell me the right way:
    public static void savePaintingJPG(GLCanvas canvas, String fileName) {
        // Requires java.io.File, java.io.IOException, java.awt.image.BufferedImage
        // and javax.imageio.ImageIO, plus the GL, GLCanvas and BufferUtils classes
        // from the JOGL build in use. (The fileName parameter is unused as posted;
        // the output names are hard-coded below.)
        int framewidth = canvas.getSize().width;   // get the canvas' dimensions
        int frameheight = canvas.getSize().height;
        System.out.println(framewidth);
        System.out.println(frameheight);

        // create a ByteBuffer to hold the image data
        java.nio.ByteBuffer pixelsRGB = BufferUtils.newByteBuffer(framewidth * frameheight * 3);
        GL gl = canvas.getGL(); // acquire our GL object

        // read the frame back into our ByteBuffer
        gl.glReadBuffer(GL.GL_BACK);
        gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
        gl.glReadPixels(0, 0, framewidth, frameheight, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, pixelsRGB);

        // Convert RGB bytes to ARGB ints with no transparency. Flip the image
        // vertically by reading the rows of pixels in the byte buffer in reverse -
        // (0,0) is at the bottom left in OpenGL. Note: one int per pixel, not three.
        int[] pixelInts = new int[framewidth * frameheight];
        int p = framewidth * frameheight * 3; // points to first byte (red) in each row
        int q;                                // index into ByteBuffer
        int i = 0;                            // index into target int[]
        int w3 = framewidth * 3;              // number of bytes in each row
        for (int row = 0; row < frameheight; row++) {
            p -= w3;
            q = p;
            for (int col = 0; col < framewidth; col++) {
                int iR = pixelsRGB.get(q++);
                int iG = pixelsRGB.get(q++);
                int iB = pixelsRGB.get(q++);
                pixelInts[i++] = 0xFF000000
                        | ((iR & 0x000000FF) << 16)
                        | ((iG & 0x000000FF) << 8)
                        | (iB & 0x000000FF);
            }
        }

        // Build the image once, after the pixel loop, and write it out
        // (in the original code this block sat inside the inner loop).
        BufferedImage bimg = new BufferedImage(framewidth, frameheight, BufferedImage.TYPE_INT_ARGB);
        bimg.setRGB(0, 0, framewidth, frameheight, pixelInts, 0, framewidth);
        try {
            ImageIO.write(bimg, "jpg", new File("final.jpg")); // saves files
            ImageIO.write(bimg, "png", new File("final.png"));
            ImageIO.write(bimg, "bmp", new File("final.bmp"));
            System.out.println("Success");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }


  • Program similar to 'watch' available in repositories?

    On other linux distros, a program I often use to monitor lsof in "real time" is 'watch'. Sadly, it appears that this app is not available in the Arch repos. Does anyone know of a similar app that performs the same function?
    Here's the man page from 'watch' from an ubuntu box:
    WATCH(1) User Commands WATCH(1)
    NAME
    watch - execute a program periodically, showing output fullscreen
    SYNOPSIS
    watch [options] command
    DESCRIPTION
    watch runs command repeatedly, displaying its output and errors (the first screenfull). This allows you to watch the program output
    change over time. By default, the program is run every 2 seconds. By default, watch will run until interrupted.
    OPTIONS
    -d, --differences [permanent]
    Highlight the differences between successive updates. The option will read an optional argument that changes the highlight to be permanent, allowing you to see what has changed at least once since the first iteration.
    -n, --interval seconds
    Specify update interval. The command will not allow quicker than a 0.1 second interval, to which smaller values are converted.
    -p, --precise
    Make watch attempt to run command every interval seconds. Try it with ntptime and notice how the fractional seconds stays
    (nearly) the same, as opposed to normal mode where they continuously increase.
    -t, --no-title
    Turn off the header showing the interval, command, and current time at the top of the display, as well as the following blank
    line.
    -b, --beep
    Beep if command has a non-zero exit.
    -e, --errexit
    Freeze updates on command error, and exit after a key press.
    -g, --chgexit
    Exit when the output of command changes.
    -c, --color
    Interpret ANSI color sequences.
    -x, --exec
    By default, command is given to sh -c, which means that you may need to use extra quoting to get the desired effect. This option passes the command to exec(2) instead.
    -h, --help
    Display help text and exit.
    -v, --version
    Display version information and exit.
    NOTE
    Note that POSIX option processing is used (i.e., option processing stops at the first non-option argument). This means that flags
    after command don't get interpreted by watch itself.
    EXAMPLES
    To watch for mail, you might do
    watch -n 60 from
    To watch the contents of a directory change, you could use
    watch -d ls -l
    If you're only interested in files owned by user joe, you might use
    watch -d 'ls -l | fgrep joe'
    To see the effects of quoting, try these out
    watch echo $$
    watch echo '$$'
    watch echo "'"'$$'"'"
    To see the effect of precision time keeping, try adding -p to
    watch -n 10 sleep 1
    You can watch for your administrator to install the latest kernel with
    watch uname -r
    (Note that -p isn't guaranteed to work across reboots, especially in the face of ntpdate or other bootup time-changing mechanisms)
    BUGS
    Upon terminal resize, the screen will not be correctly repainted until the next scheduled update. All --differences highlighting is
    lost on that update as well.
    Non-printing characters are stripped from program output. Use "cat -v" as part of the command pipeline if you want to see them.
    Combining Characters that are supposed to display on the character at the last column on the screen may display one column early, or
    they may not display at all.
    Combining Characters never count as different in --differences mode. Only the base character counts.
    Blank lines directly after a line which ends in the last column do not display.
    --precise mode doesn't yet have advanced temporal distortion technology to compensate for a command that takes more than interval seconds to execute. watch also can get into a state where it rapid-fires as many executions of command as it can to catch up from previous executions running longer than interval (for example, netstat taking ages on a DNS lookup).
    EXIT STATUS
    0 Success.
    1 Various failures.
    2 Forking the process to watch failed.
    3 Replacing child process stdout with write side pipe failed.
    4 Command execution failed.
    5 Closing child process write pipe failed.
    7 IPC pipe creation failed.
    8 Getting child process return value with waitpid(2) failed, or command exited up on error.
    other The watch will propagate command exit status as child exit status.
    AUTHORS
    The original watch was written by Tony Rems ⟨[email protected]⟩ in 1991, with mods and corrections by Francois Pinard. It was
    reworked and new features added by Mike Coleman ⟨[email protected]⟩ in 1999. The beep, exec, and error handling features were added by Morty
    Abzug ⟨[email protected]⟩ in 2008. On a not so dark and stormy morning in March of 2003, Anthony DeRobertis ⟨[email protected]⟩ got
    sick of his watches that should update every minute eventually updating many seconds after the minute started, and added microsecond
    precision. Unicode support was added in 2009 by Jarrod Lowe ⟨[email protected]⟩.
    procps-ng June 2011 WATCH(1)

    bzpnbx wrote: I was interested to know if it was just from know-how, or if some info presented in that output also indicated it. I just wanted to make sure that's how he knew to check /usr/bin, and not that it was just an assumption.
    All user-installed binaries on GNU/Linux systems are placed in the /usr/bin directory. Yes, this is common knowledge, hence Trilby basically asking "Where else would they be?"
    parchd wrote: Isn't procps-ng in base anyway, and therefore almost certainly installed already?
    It is in the base group, but it doesn't appear to be a dependency of any other base package. So if you installed your system using 'pacstrap -i' to manually select base packages and didn't explicitly select procps-ng, it wouldn't have been installed.
