Resolution of temperature measurement

Dear All,
I am using an SCXI-1102 module and an RTD sensor for temperature measurement. My required measurement resolution is 0.01 °C, but I am only getting 0.1 °C in my LabVIEW program. When I use the calibration wizard available in the new DAQmx, I see a resolution of 0.01 °C. How can I get the same resolution in my LabVIEW program?

Uuhhh, you don't need to shout
What type of sensor (Pt100, NTC ???) are you using?
How is your setup (gain, excitation, connector block, ...)?
Just because you have 10 mK resolution in the calibration tool, that doesn't mean you can get this resolution with your setup.
If you use a thermistor, 0.06 K sounds reasonable for certain excitation and gain settings.
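To illustrate the point above, here is a back-of-envelope sketch (Python) of the ideal, quantization-limited resolution of an RTD front end. Every number in it is an assumption to be replaced with your actual digitizer range, gain, excitation current and sensor sensitivity; in practice noise usually dominates well before the quantization limit.

# Quantization-limited temperature resolution estimate (all values are assumptions).
adc_bits = 16                     # e.g. a 16-bit digitizer behind the SCXI-1102
input_range_v = 20.0              # +/-10 V full scale at the digitizer
gain = 100                        # SCXI-1102 channel gain (1 or 100)
excitation_a = 1e-3               # assumed RTD excitation current, 1 mA
sensitivity_ohm_per_degc = 0.385  # Pt100: roughly 0.385 ohm/degC near 0 degC

volts_per_code = input_range_v / 2**adc_bits / gain
volts_per_degc = excitation_a * sensitivity_ohm_per_degc
print("ideal resolution: %.1f mK per code" % (volts_per_code / volts_per_degc * 1000))

With these assumed numbers the result is a few mK per code, but drop the gain to 10 and the same arithmetic already gives roughly 0.08 K, which is why the resolution you actually get depends on the setup rather than on what the calibration tool shows.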
Greetings from Germany
Henrik
LV since v3.1
“ground” is a convenient fantasy
'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'

Similar Messages

  • Resolution vs size of playback -- what's the difference?

    Hello,
      I was under the impression that resolution is a measure of how clear the playback is, and the playback size is the size of the actual playback on your screen.  But when you open a new project, you are asked to set the size, which turns out to be the "resolution" in the project dialogue box. Are they the same thing?
    Also, does it even matter what the size of the project is if you publish it as "scalable" ? When you do that, the user can make it as big as he wants anyway, right?
    Thanks for any insight!
    Ryan

    If you are on a Windows box, right click on your desktop and choose Resolution from the context menu.  What are you setting here?  When you set the resolution of your computer monitor, you are setting the number of pixels of the height and the width.
    In Captivate, the concept is the same. You have to define the exact height and width of your project in pixels when you create the project file, and that's meant to be the exact size at which the content is displayed in the browser after publishing.  If you change that display size and play the content at more or less than 100% you will lose some clarity.  That's the nature of the beast. 
    Scalable content can mean different things.  Some content can be scaled to whatever the height and width of the available space on the monitor is, which means that the content itself is stretched out to fit the size.  This almost always means lost clarity.  Other examples of "scalable" content leave the actual stage area the same size and just scale the browser window, filling the space around the actual stage with blank space.  So it depends what you mean by "scalable".
    In my experience, if the crispness and clarity of the content is of utmost importance, you DO NOT scale it.

  • Frequency resolution

    Hello everybody,
    I have an Agilent 53181A, which I use to count frequency.
    When I measure a frequency, the instrument displays all digits (for example 14628.67 Hz), but when I read it via GPIB the reading changes and only 14.62 kHz is displayed. I need to see the hertz range.
    I attached the drivers for the Agilent.
    Can anybody make changes so I get this frequency resolution?
    Please help me.
    Attachments:
    hp53181a.zip (331 KB)

    Hello,
    a few suggestions:
    The driver contains a VI named 'arm freq, period, ratio'. By setting the choice to 'Digits' you can alter the number of displayed digits.
    Personally I do not know that device, but I am using the HP 34401. There you can change the resolution for frequency measurements manually. If I change to remote control and read the value, the resolution is only altered if I reset the device.
    Do you configure / reset the device before taking measurements? If you are using the initialize.vi and do not wire the reset connector, the device will always be cleared and a default setup will be established.
    As far as I can see, the config measurement VI only changes the measurement type and not the resolution.
    Try changing the frequency counter resolution manually and then taking the measurement without initializing the device (VISA communication should work without the init).
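    For what it's worth, here is a minimal PyVISA sketch of querying the counter directly; the GPIB address is hypothetical and MEAS:FREQ? is the generic SCPI measurement query (check the 53181A manual for the exact arming/digits commands). The ASCII value the instrument returns already carries its full resolution, so the truncation to '14.62 kHz' is normally just display formatting on the PC side:
    import pyvisa
    rm = pyvisa.ResourceManager()
    counter = rm.open_resource("GPIB0::3::INSTR")   # hypothetical GPIB address
    raw = counter.query("MEAS:FREQ?")               # e.g. "+1.462867000000E+004"
    print("%.2f Hz" % float(raw))                   # format with as many digits as you need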
    lynxlynx

  • True Resolution for Image

    Hi, as I search the internet for a particular popular image, when it pops up in Google image search, it shows up in a bunch of different resolutions / sizes / whatever you call it. I assume that when the image was originally created, it had one resolution, the one where it's the best quality. Is there a way in Photoshop to figure this out, i.e. what is the optimal resolution that an image should be displayed at?
    For instance, I'm doing an image search on "success kid", which is a popular meme. Of all the images that pop up, should I take the one with the highest resolution? But couldn't that just mean it was resized to be very big?
    Just a little confused about resolution in general.

    charles badland wrote:
    Well, you have the right to define words in any way you wish.
    No, I don't. (see above Merriam Webster definition) 
    Resolution describes the detail an image holds. In my opinion, that concept is more accurately described by the number of pixels, not the size of pixels. The only thing image PPI indicates is how big the pixels will print IF OR WHEN an image is ever printed. Period.
    Image PPI describes printed pixel SIZE.
    When people talk about "high res" or "low res" images, 99% of the time they are talking about pixel dimensions (i.e., the total number of pixels).
    As you well point out, the Photoshop Image Size dialog contributes to this confusion.
    There is more to it than the number of pixels. You can have billions of pixels, but if the lens used was not in focus you have no detail, just a blur. PPI is pixel size: different-sized UXGA displays all show 1600x1200 pixels, but each display size has a single PPI resolution. My 20" UXGA display is 100 PPI and my 15" UXGA display is 133 PPI, so the same 1600x1200 image is sharper and smaller on the 15" display than on the 20" display: 12"x9" on the 15" and 16"x12" on the 20" (a short worked example of this arithmetic follows at the end of this post). Printers can print different-sized pixels; they are not confined to a single pixel size like displays, so my Epson 4800 could print both the 12"x9" 133 PPI image and the 16"x12" 100 PPI image. PPI is pixel density; it applies to displays, sensors, any device that displays or captures pixels. Resolution is also measured in resolving power, i.e. how well a lens can focus a point. Displaying an image at different sizes on a display is done by scaling the number of pixels used to show the image, given the display's single pixel size.
    Photoshop Image Size Dialog supports two types of resizing controlled by the RESAMPLE check box. 
    With RESAMPLE unchecked, you will see that the top area of the dialog is grayed out and cannot be changed: the number of pixels cannot be changed and the image's aspect ratio cannot be changed. You will see that Width, Height and Resolution are linked together in the middle print-size area; all you need to do is change one of them and Photoshop will calculate the other two. When RESAMPLE is unchecked, the other controls in the bottom section are not relevant.
    With RESAMPLE checked, the current document image will be replaced with a newly generated image; not a single pixel will remain intact. The number of pixels in the new image depends on the other settings used. The bottom-section settings control the resize operation: Interpolation controls how the new pixels are generated; Constrain, when checked, links the document's width and height so the aspect ratio will not change and the image will not be distorted. Scale Styles may or may not be grayed out; layer styles have absolute pixel settings, not relative ones, so if you change the number of pixels in an image, a layer style may look quite different if you do not have Photoshop scale the style settings during an Image Size resize. With RESAMPLE checked it is best to also check Constrain to prevent distortion. If you are resizing for the web, set the width or height you want in the top section. Resizing for print: first, with RESAMPLE unchecked, set the print width or height you want; if the resolution falls below what you need for printing, check RESAMPLE, set the resolution you want the image printed at, and choose the interpolation method you want used.
    Printing: your printer's resolution setting is a print quality setting. As long as your image's PPI resolution is lower than the printer's resolution setting, all is fine. Printers use their higher resolution capabilities to paint in your image's larger pixels; it's like paint-by-numbers, filling an image pixel with drops of different colored inks to produce the correctly colored pixel. There is no one-to-one correlation.
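    As a small worked example (Python) of the display arithmetic above, using the 20-inch and 15-inch diagonals assumed in the post:
    import math
    def ppi(h_px, v_px, diagonal_in):
        # pixels along the diagonal divided by the diagonal length in inches
        return math.hypot(h_px, v_px) / diagonal_in
    for diagonal in (20.0, 15.0):
        density = ppi(1600, 1200, diagonal)
        print('%.0f" UXGA: %.0f PPI, image covers %.0f x %.0f inches'
              % (diagonal, density, 1600 / density, 1200 / density))
    This prints 100 PPI (16 x 12 inches) for the 20" display and 133 PPI (12 x 9 inches) for the 15" display, matching the figures quoted above.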

  • Acquiring High resolution data from usb mouse movement

    Is there a way of acquiring high-resolution data from the movement of a USB mouse? I have an NI PCI-6221 DAQ card to use, and a Pentium 4 PC with 1 GB of RAM. I need to get the position, velocity and acceleration of the mouse.
    Is there a way to do it with the above hardware?
    Thanks in advance

    I don't see how you could use a PCI-6221 to get high resolution mouse movement measurements. The PCI-6221 can acquire voltages at up to 250kS/s, but what voltage would you measure? It could also read in digital data at 1MHz, but you would have to be able to understand the USB communication protocol your mouse uses, and I doubt that your mouse vendor will give out that information. You might be able to take your mouse apart and hook up leads to the sensors around the trackball and do a sort of quadrature encoder application, but there's no guarantee we're dealing with TTL digital signals here (you might even be using an optical mouse!).
    Your best option - and I have no idea how good this option is - is to use the driver already installed for your usb mouse. What software application are you going to use to program your application that will measure the mouse movements?
    If you would consider using LabVIEW, you could set up an event structure to capture mouse movements across the front panel. Each event registered includes a timestamp from the CPU's millisecond clock as well as the coordinates of the mouse. If you stored up a buffer of previous mouse positions and times, you could infer a velocity, perhaps with the help of interpolation functions.
    All of this would have somewhere on the level of millisecond timing accuracy. Were you envisioning something else?
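    As a rough sketch of the post-processing step (the event capture itself would still happen in whatever front end you choose), here is how a hypothetical buffer of (timestamp, x, y) mouse-move samples could be turned into velocity and acceleration estimates with finite differences; the sample values are made up:
    import numpy as np
    # Hypothetical buffer: one row per mouse-move event,
    # columns = timestamp in ms, x in pixels, y in pixels.
    events = np.array([[ 0.0, 100.0, 200.0],
                       [ 8.0, 103.0, 198.0],
                       [17.0, 110.0, 195.0],
                       [25.0, 121.0, 190.0]])
    t = events[:, 0] / 1000.0                        # seconds
    x, y = events[:, 1], events[:, 2]
    vx, vy = np.gradient(x, t), np.gradient(y, t)    # velocity, px/s
    ax, ay = np.gradient(vx, t), np.gradient(vy, t)  # acceleration, px/s^2
    print(np.hypot(vx, vy))                          # speed, px/s
    Millisecond timestamps and whole-pixel positions make the derivatives noisy, so you would normally smooth or fit over a longer window before trusting the acceleration estimate.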
    Jarrod S.
    National Instruments

  • Measure separation between two signals that may be coincident

    Hi
    I need to determine the time period between two pulse signals. I'm going to assume for now that I can get these pulses as TTL.
    I was initially drawn to the 'Two-Signal Edge-Separation' method using two counter inputs. However, it's quite possible that the signals would be coincident some of the time. Could anyone please answer the following questions:
    If my signals were coincident, would the 'two-signal edge-separation' measurement be armed on the edge of signal 1 (= same time as edge of signal 2) and continue counting until the next edge of signal 2 as shown below? Or is it re-armed on the next edge of signal 1?
    Is there any other method I could use to log the time between my two pulses? Either as simultaneous counter outputs or something completely different (non-counter method maybe)?
    (Probably not relevant at this stage, but I will be using Measurement Studio to implement this - hardware as yet unselected).
    Thanks in advance for any help.
    CAS

    Hi CAS,
    The count is not re-armed on the next edge of signal 1 using the two-edge separation mode.  Once the counter is armed the first time, it will continue counting until signal 2 is detected.  
    Alternate Approach 1:
    If you want the count to reset on every edge of signal 1, you could configure an edge count task (counting the internal timebase) using signal 1 as the count reset terminal and signal 2 as the sample clock.  You still would have uncertainty of which signal is detected first if they occur at precisely the same time, so your measured result might be close to 0, or it might be close to 1 full period of the signal in the case that signal 1 and signal 2 are identical.
    If you wanted to remove this uncertainty, you can actually delay signal 2 by enabling the PFI filter for the signal 2 terminal.  The original intent of this feature was to be able to add a debounce filter to avoid picking up multiple edges on transitions, but a result of the implementation is that the signal is delayed by some amount of time (between the pulse width guaranteed to pass and the pulse width guaranteed to not pass).  The best case scenario would be X Series using the 100 MHz timebase, you would add 10 ns of jitter but you could delay the signal by an arbitrary amount.  So, you can add the delay and account for it in your reported values, but you would run into problems if the delay caused signal 2 to occur after the 2nd edge of signal 1 (i.e. if the signals were already close to 1 full period apart).  You'd have to have an idea of the maximum frequency of the signals as well as the maximum delay between them to determine if this would work or not.
    Alternate Approach 2:
    You could use two counters configured as edge count tasks.  Count the fastest internal timebase.  Sample the first counter off of signal 1, and the second counter off of signal 2.  If you arm the counters together and ensure that signal 1 and signal 2 start at the same time, then you can simply subtract your buffered samples of counter 1 from your buffered samples of counter 2 to get an array of differences.
    There are a number of other ways you could get similar results, but I think the above 2 suggestions are probably the easiest to implement.  Alternate Approach 1 has the advantage of still only requiring 1 counter and you don't have to worry as much with arming the counters and starting the sampling together (which could be a problem with Alternate Approach 2 if signal 1 and signal 2 are free-running).
    I would recommend X Series DAQ for this task for the following reasons:
    The count reset feature mentioned in Alternate Approach 1 is only currently available on X Series and 2nd Generation cDAQ.  It will hopefully be added in the somewhat near future to M Series with a driver update but I can't make any guarantees.
    The 100 MHz timebase on X Series gives a 10 ns resolution to your measurement.  M Series and cDAQ use an 80 MHz timebase (12.5 ns resolution), and E Series uses a 20 MHz timebase (50 ns resolution).
    X Series have the most flexible digital filters on their PFI lines and the PFI filters introduce the lowest jitter (compared to M Series and 660x that is--E Series devices do not have digital filters at all).
    You didn't mention what frequency you would be using, but X Series have on-board FIFOs which will help you avoid errors from samples being overwritten if your external frequency is relatively fast.
    X Series are priced similarly to their M Series equivalents.  All X Series have the same counter features with the lowest cost X Series being the PCIe-6320.
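    In case a text-based sketch helps, the standard two-edge-separation measurement looks roughly like this with the nidaqmx Python API; the device name, counter, edge polarities and expected separation range are assumptions, and the default PFI input terminals are used:
    import nidaqmx
    from nidaqmx.constants import Edge, TimeUnits
    with nidaqmx.Task() as task:
        # The counter arms on an edge of signal 1 and latches the elapsed
        # timebase ticks on the next edge of signal 2 (it is not re-armed
        # by further signal-1 edges, as described above).
        task.ci_channels.add_ci_two_edge_sep_chan(
            "Dev1/ctr0",                    # hypothetical device/counter
            min_val=100e-9, max_val=1.0,    # expected separation range, seconds
            units=TimeUnits.SECONDS,
            first_edge=Edge.RISING,
            second_edge=Edge.RISING)
        separation = task.read()            # one on-demand measurement, seconds
        print("separation: %.1f ns" % (separation * 1e9))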
    Best Regards,
    John Passiak

  • Blurry Text When Using Video out With FX 5900U

    I just got an MSI GeForce FX5900 Ultra VTD256 and I was trying to play games on my TV.  I used the S-Video to composite video (the yellow cable) adapter, and when I'm on the desktop all the text is really blurry; I can't even read it.  I tried lowering the resolution to see if that would make it easier to read, but it didn't work.  Can someone please help?  Maybe there is something wrong with my TV?

    Hi,
    TV sets have lower resolution and bandwidth than PC monitors - the two are sort of related, but are not quite the same thing.
    Resolution is often measured in "lines", meaning the number of individual vertical lines that could be displayed across the width of the TV and still be distinguishable as individual lines. A figure of 800 lines was considered pretty good, and could usually only be achieved in a TV studio, from a camera direct to a monitor. So 800 pixels is about as much as a TV (especially a smaller one) can reasonably display. (The vertical resolution depends on the TV standard, and is about 600 lines or pixels for PAL, and about 500 for NTSC)
    The other factor is Bandwidth, which is measured in Megahertz (MHz). Basically, the finer the (horizontal) detail, the more bandwidth you need in the path between the source and the picture tube to display it faithfully. Broadcast TV is deliberately limited to about 5MHz, whereas a computer monitor will have several tens of MHz of bandwidth.
    Also, the chrominance (colour) component of a composite or S-Video signal is limited to less than 2 MHz, so fine colour detail will be worse off (Black on white or vice-versa will give best results).
    Anyway, the practical outcome of all this is that the highest resolution you can use for TV out is 800x600, and even then you need to use larger fonts (so the vertical lines are thicker, and there are a reasonable number of horizontal lines in a character).  The maximum number of characters per line should be less than 60-80.
    There are some scan-converters available which use fancy techniques to render a reasonable image at higher resolutions, but they are fairly expensive.
    S-Video should give better results than composite, but you will still be up against bandwidth and resolution limits
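    A back-of-envelope way to see the bandwidth limit (the 20% blanking overhead and the two-pixels-per-cycle rule below are simplifications, not TV-standard figures):
    def video_bandwidth_mhz(h_px, v_px, frames_per_s, blanking_overhead=1.2):
        # Roughly one cycle of analog bandwidth per two horizontal pixels.
        pixels_per_s = h_px * v_px * frames_per_s * blanking_overhead
        return pixels_per_s / 2 / 1e6
    print(video_bandwidth_mhz(800, 600, 25))    # ~7 MHz: already past broadcast TV
    print(video_bandwidth_mhz(1024, 768, 60))   # ~28 MHz: computer-monitor territory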
    Cheers

  • Improve the TV out ?

    Is there any way I can improve the text when using TV out?
    I understand that CRT TVs cannot produce excellent text, but I'm only asking for the basics.
    Currently with my TV out, the text is very blurred.

    Hi,
    The main limitation is to do with the bandwidth that the TV system allows - this is related to horizontal resolution.
    TV resolution is sometimes measured in lines, which is the number of vertical black-and-white lines you can fit across the width of the screen and still be able to resolve each line. A studio camera going direct to a monitor might achieve 800 lines; by the time you broadcast it and receive it on your home TV set it's down to about 500-600 lines, and if you record it on VHS tape it's somewhere around 200 (roughly, I can't remember the exact figures).
    Colour resolution is worse, so if you had vertical green stripes on a red background, you might only resolve about 100 lines or so (NTSC is worse than PAL in this regard).
    Anyway, as a rough guide, the maximum resolution you can get away with on TV out is 800x600, and even then you need larger fonts. Try using a font size of around 16-20 points, and BOLD (this increases the horizontal thickness so it stands a better chance of being seen - vertical thickness is also increased, but that's usually not a problem). Use a black or white font, and lightly saturated backgrounds (not too highly coloured) for best results.
    Hope this helps.
    Cheers

  • Execution duration

    I'd like to display the execution duration of some ABAP statements in an ABAP program...
    It looks like this:
    program started at .......
    program ended at ....
    But it is for a performance issue, so it MUST BE VERY PRECISE!
    Use sy-uzeit only if there is no better solution!
    Thanks for the reply

    GET RUN TIME FIELD
    Basic form
    GET RUN TIME FIELD f.
    Effect
    Relative runtime in microseconds. The first call sets (initializes) the field f to zero. For each subsequent call, f contains the runtime in microseconds since the first call. The field f should be of type I.
    The runtime measured is that of the ABAP statements that lie between two calls of GET RUN TIME. The time from the first to the second call is known as the measurement interval.
    Notes
    Use SET RUN TIME CLOCK RESOLUTION to specify the precision with which the time is to be measured, before you call GET RUN TIME FIELD for the first time. The default setting is high precision.
    Example
    Measuring Runtime for the MOVE Statement
    DATA: T1   TYPE I,
          T2   TYPE I,
          TMIN TYPE I.
    DATA: F1(4000), F2 LIKE F1.
    TMIN = 1000000.
    DO 10 TIMES.
      GET RUN TIME FIELD T1.
        MOVE F1 TO F2.
      GET RUN TIME FIELD T2.
      T2 = T2 - T1.
      IF T2 < TMIN.
        TMIN = T2.
      ENDIF.
    ENDDO.
    WRITE: 'MOVE of 4000 byte:', TMIN, 'microseconds'.
    Note
    SAP recommends that you measure the runtime several times and take the minimum result.
    Related
    To perform runtime measurements of complex processes, use Runtime analysis (Transaction SE30).
    Related
    SET RUN TIME CLOCK RESOLUTION
    Additional help
    Measuring Runtimes of Program Segments

  • Help: IMAQdx scale image size

    Hi all,
    I'm new to LabVIEW and have a question.
    I'm using the IMAQdx functions for a camera (XCD-U100, UXGA, black & white).
    Is it possible to take images with a smaller size? I want to increase the frame rate.
    I can scale the image after it has been taken at maximum size (1600 x 1200), but I want the camera to capture at a custom size, not at maximum size.
    I hope someone can help me.

    Hi NEOS,
    it is possible to change the resolution in NI Measurement & Automation Explorer.
    Go to Devices -> IMAQdx Devices; there you will find your camera.
    At the bottom of the MAX window you will find the Acquisition Attributes. Here you can change the resolution of the camera.
    Did this help?
    Regards
    Daniel

  • Impact of changing decimal of UOM from 2 to 3 decimal

    Hi Folks,
    I need to know whether there is a system-wide impact of changing the 'decimal places' of a unit of measurement from 2 to 3.
    For example, "EA" has 2 decimal places; if I change it to 3, what would the overall impact on the system be?
    Thanks
    Shine

    Hi Shine,
    If you increase the decimals, you're increasing the resolution of the measurement. So from a measurement point of view, you will be more accurate.
    But before proceeding, you need to analyze which unit you're modifying and why the change is needed. If the need is justified, check which materials use this unit of measure. Also check whether there are any open items, such as POs, SFOs, PRs etc., and analyze what impact the change would have on them.
    Next you need to assess it from a Finance point of view: if you increase the resolution, you're allowing the user to store the quantity at an even smaller value, so during cost roll-up the value might fall below the permissible levels and thereby end up as 0.
    Based on the above, you should be able to analyze and manage the issue.
    Hope the above answers your query.
    Regards,
    Vivek

  • Unpredictable loadjava execution duration

    We are using the following batch file to run loadjava on Windows clients, loading classes to DEC and AIX servers:
    if (%OS%) == (Windows_NT) setlocal
    SET CLASSPATH=.\serverExtras;.\libraries.jar;.\Alliance.jar;%CLASSPATH%
    CALL loadjava -u %1 -v -resolve serverExtras.jar
    CALL loadjava -u %1 -v -resolve libraries.jar
    CALL loadjava -u %1 -v -f -resolve Alliance.jar
    if (%OS%) == (Windows_NT) endlocal
    The "serverExtras" and "libraries" files contain all the dependencies of our application code, which is in the "Alliance" jar file.
    Loading data to the same instance of Oracle, this script will usually run for 1-2hrs. However, once in a while, it runs for 12-24hrs or more and we wind up killing it.
    I'm not sure what to look at to figure out what is going on here. Are there any parameters or circumstances that are likely causing this?
    Thanks for any suggestions.


  • Can someone tell me how an image can be 72 by 72 px/inch and another e.g. 139 by 54 px/inch?

    I guess 72 by 72 means that in 1 inch of length there are 72 pixels on both the x and y axes. So resolution is not measured in square inches? It would seem logical, though... Thanks for the help!

    If I get it right, if I create some artwork in a raster-based program, then its PPI value is constant, let's say 150 PPI.
    When you create a raster image, values which determine its intended size (in actual linear measure) may be written into the file, thereby determining its PPI (PPI is nothing but a scaling factor). Whether this occurs depends on the program creating the image and/or the format in which it is being saved.
    That value may or may not be respected by another program which imports it as an object to be assembled with other objects on a layout.
    If someone grabs my image and scales it disproportionately, you're no longer able to tell the exact PPI value of that image, right?
    Again, that depends on the specific program. If someone "grabs" (opens) your image in another raster imaging program, selects some or all of its pixels and then "scales" the selection, he is not actually scaling anything, he is re-rasterizing the image to the resolution of the current image document in which he is working.
    That is related to one of the common misconceptions among beginners who mistakenly think they don't need a vector drawing program or a page layout program, because they can "do everything in Photoshop using Layers." They don't understand that, yes, while working in Photoshop (or similar programs), you can have multiple raster images stacked in Layers, but all those "individual" raster images have the same resolution; that of the document.
    If someone "grabs" (imports) your image into an object-based illustration or page-layout program, selects its bounding box and then scales it, then he is scaling the existing pixels of the imported image, not re-rasterizing it. Such a program can do that because the document doesn't have a resolution. Each raster image in the document can have its own resolution.
    If that person performs that scaling disproportionally, its pixels will be rectangular, but not square. So its horizontal PPI will differ from its vertical PPI. But that doesn't necessarily (again, depending on the program) mean that he's "no more able to tell the exact ppi value of that image." Most object-based graphics programs provide at least dimensions, or "scale" values expressed as percentages, or even pixel counts somewhere in the interface as object attributes. You can see that information scattered about in Illustrator's interface in the Links palette's flyout menu>Link Information... modal dialog, or in the grab-bag Document Info palette (after tediously setting Selection Only and either Embedded Images or Linked Images in its flyout menu).
    Some programs make this easier and more intuitive than others. FreeHand, for example, respected and displayed the PPI instruction (if any) stored in the raster image file, and considered the image scaled to 100% if it was not scaled differently. Illustrator only considers a raster image to be "100%" when it is scaled to 72 PPI (a resolution usually inappropriate for print anyway). This is one of the many (many, many) stumbling blocks with which FreeHand users struggled in Illustrator's awkward interface.
    JET

  • How to use the CJC channel with 9211 in labview?

    Hello there,
    I have a cDAQ-9172 with the 9211 module for some thermocouple measurements.
    I want to use one channel for the CJC and the others for normal temperature measurements.
    I have attached my VI.
    Thanks,
    Sandro
    Attachments:
    Cont Acq Thermocouple Samples-HW Timed2.vi (46 KB)

    Hi Sandro,
    By "I want to use 1 channel for the CJC", are you talking about using the NI 9211's built-in CJC channel, _cjtemp?
    If so, go back to the original example code and set the CJC Source control to Built-In. This will cause DAQmx to use _cjtemp as the CJC source. You won't see the CJC temperature on the graph by default: Reading the CJC Values with the Thermocouple Measurements.
    If you have an external CJC sensor connected to one of the thermocouple channels (ai0:3), then create a global channel in MAX (thermistor, or voltage w/custom scale), specify the name of that channel in the CJC Channel control, and set CJC Source to Channel. It's better to use the built-in CJC sensor unless the external CJC sensor is definitely more accurate, and not less.
    Regarding the Start button you added to the example VI, I would recommend moving the call to DAQmx Create Channel out of the polling loop so that it doesn't leak memory by creating DAQmx tasks that never get used. Adding a call to Wait Until Next ms Multiple (from the Timing palette) or Wait For Front Panel Activity (from the Dialog & User Interface palette) would prevent the polling loop from wasting CPU cycles.
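    For reference, the equivalent setup in text-based code would look roughly like this with the nidaqmx Python API; the chassis/module name and the thermocouple type are assumptions, and CJCSource.BUILT_IN corresponds to the Built-In CJC Source setting mentioned above:
    import nidaqmx
    from nidaqmx.constants import CJCSource, TemperatureUnits, ThermocoupleType
    with nidaqmx.Task() as task:
        # Four thermocouple channels on the 9211, cold-junction compensated
        # with the module's built-in sensor (_cjtemp).
        task.ai_channels.add_ai_thrmcpl_chan(
            "cDAQ1Mod1/ai0:3",                     # hypothetical module name
            min_val=0.0, max_val=200.0,
            units=TemperatureUnits.DEG_C,
            thermocouple_type=ThermocoupleType.K,  # assumed thermocouple type
            cjc_source=CJCSource.BUILT_IN)
        print(task.read())                         # one on-demand sample per channel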
    Brad
    Brad Keryan
    NI R&D

  • Which Parallel processing is faster?

    There are various ways in which parallel processing can be implemented, e.g.:
    1. CALL FUNCTION ... STARTING NEW TASK, which uses a dialog work process.
    2. Using background RFC (tRFC): CALL FUNCTION ... IN BACKGROUND TASK AS SEPARATE UNIT.
    3. Using SUBMIT via jobs.
    I want to know which technique is fastest and why.

    soadyp wrote:
    The throughput of the various workprocess is NOT identical.
    > I was a little surprised to discover this.  It seems the DISP+WORK behaves differently under different work process types.
    > Clearly a background process does not need, and doesn't support, dialog processing and the classic PBO/PAI process.
    >
    > All I can say, is TEST it on your kernel.
    >
    > We have done testing on this since every millisecond is important to us.
    >
    > Dialog processes are sometimes twice as slow as background tasks, depending on what is happening.
    > I'm talking about the ABAP execution time here: not DB, RFC or external calls, but pure ABAP execution time.
    >
    > DIALOG was simply slower than Background processes.
    >
    > TRY it:  Build a report. SUBMIT report in background.
    > Run the same abap in dialog.
    > Include GET RUN TIME statements to measure the execution time in microseconds.
    > fill it with  PERFORM X using Y , CALL FUNCTION B or CALL CLASS->Method_M
    > set the clock resolution high, and measure the total execution time.
    >
    > When running HUGE interfaces, processing 10 of millions of steps every day, this is of genuine consideration.
    >
    > ALSO NOTE:
    > The cost of the job-open / SUBMIT via job / job-close sequence should also be measured.
    > If your packets are too small, then background speed is lost to the overhead of submission.
    >
    > The new Background RFC functions should also be tested with the SAME CODE.
    >
    > happy testing
    > Phil
    Dialog might be slower only due to GUI communication or differences in the ABAP source code. In some standard SAP applications there is different processing depending on the SY-BATCH flag.
    Technically, as already mentioned several times above, both work processes (dialog and batch) are pretty much identical (with slight differences in memory allocation).
    So please don't confuse the community.
