Calibration interval question

I have a spare PXI-4065 at work that is calibrated until April 2015.
On the production line I have another PXI-4065 that is calibrated until February 2015.
I will not use the spare PXI-4065 until February 2015. Question: if I put the spare PXI-4065 into the production line in February 2015, does that mean, according to the calibration certificate, that my spare DMM is calibrated for only two more months? Do I then have to send it back for calibration?
Please help! I looked at the calibration documents and the two labels with the calibration due dates, and I'm confused...

Hi,
NI recommends calibrating your NI PXI-4065 every year (NI 4065 Calibration Procedure), but the calibration interval is really determined by the specifications you require. Regardless of how the device is being used, the specifications of the NI PXI-4065 are dependent upon the time interval since last calibration. This means that the 1-year specifications for your spare PXI-4065 are only guaranteed until April 2015, even if it is not used until February 2015. We typically list specifications for 24 hours since last calibration, 90 days since last calibration, and 1 year since last calibration. Outside of one year, we do not guarantee any specifications. For more information regarding guaranteed specifications, please see the NI 4065 specifications.
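As a rough illustration of that point (plain date arithmetic, not an official NI tool, and the April 2014 calibration date below is only assumed for the example), you can check which published specification band a unit currently falls into based solely on its last external calibration date:

from datetime import date, timedelta

def spec_band(last_ext_cal, today):
    # Which published specification band still applies, based purely on
    # the time elapsed since the last external calibration.
    elapsed = today - last_ext_cal
    if elapsed <= timedelta(hours=24):
        return "24-hour specifications"
    if elapsed <= timedelta(days=90):
        return "90-day specifications"
    if elapsed <= timedelta(days=365):
        return "1-year specifications"
    return "outside the 1-year interval - no guaranteed specifications"

# Spare PXI-4065, assumed last calibrated mid-April 2014: in February 2015
# it is still inside its 1-year band, but only until April 2015.
print(spec_band(date(2014, 4, 15), date(2015, 2, 1)))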
Hopefully, you find this explanation helpful.
Regards,
Mike Watts
Product Marketing - Modular Instruments

Similar Messages

  • IDS signature tuning... interval questions.

    Just starting out trying to tune some signatures to fit our environment, and looking for clarification on some parameters of IDS signatures.
    For example: 2152 - ICMP flood
    It uses the "Flood Host" engine with the action parameters:
    Limit type: percentage (100)
    Rate: 25
    Event count: 1
    Event count key: victim address
    Specify interval: No
    Summary mode: Fire all
    Threshold: 10000
    Interval: 30
    Global threshold: 20000
    Summary key: victim address
    Can someone translate this into English?
    I'm guessing 25 packets/sec of ICMP traffic to the same destination would trigger the "event". And the 100% limit means...? 25 in a row?
    And the summaries?
    At least the "flood host" has a clear interval, but many of the scans do not. For example, 3002 or 3030 - TCP SYN port sweep. This specifies a number of "unique" packets with the same key (attacker address, or attacker and victim, or other combination) but does not specify the interval. Is this also per-second? The documentation simply says "The unique parameter triggers the alert when more than the unique number of ports or hosts is seen on the address set within the time period."
    What is the "time period" and where is it set? For these alerts (as well as the previous) the "Specify Alert Interval" is set to "No".

    I can't claim to understand some of the "scan" signatures either...most of ours are disabled.
    The limit type and percentage would only seem applicable if you're using the "request rate limit" action in inline mode. I don't think they have anything to do with alarming.
    For this particular signature I believe the most relevant variable is rate, which you already seem to understand.
    The alert frequency settings allow you to change the summary mode from "fire all" to "summarize" or "global summarize" based on the number of alerts being generated. This probably has other uses, but the one that immediately comes to mind is preventing the monitoring system from being overloaded with spurious alarms.
    As far as 3030 - TCP SYN port sweep...I don't understand it either. Do a search for it on the forums, there have been other questions.
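    Purely as a conceptual illustration of how a rate-based trigger like 2152 behaves (a toy sketch, not Cisco's implementation, and every name in it is made up): count matching packets per victim address in one-second windows and fire once the count exceeds the configured rate.

    from collections import defaultdict

    RATE = 25  # packets per second per victim address, as configured in 2152

    def flood_events(packets):
        # packets: iterable of (timestamp_seconds, victim_ip) tuples.
        # Yields (second, victim_ip) once per window when the count exceeds RATE.
        counts = defaultdict(int)
        for ts, victim in packets:
            key = (int(ts), victim)
            counts[key] += 1
            if counts[key] == RATE + 1:
                yield key

    # Toy usage: 30 ICMP packets to one host inside the same second trip the alert.
    print(list(flood_events((0.01 * i, "10.0.0.5") for i in range(30))))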

  • Calibration hardware question

    I have suggested to my boss that we get some sort of calibration device for our monitors at work. My questions are: does one brand stand out, and can one puck be used on multiple monitors in the shop? Any suggestions are appreciated.

    My Mitsubishi has its own gun controls. I didn't know about pre-calibration.
    OTOH, I also have a LaCie 19" with no gun adjustments and the results are quite close. Not perfect, however.
    I had an interesting experience with both. When I was running the 9600, the monitor which came with the system was a Dell. There is a definite difference between my Mitsu and the Dell, which is why I borrowed the Eye-One which came with the 9600 package. I ran both calibrators on both monitors, and in each case no difference was detectable between the ColorVision and the Eye-One, yet they didn't match, particularly on separation in shadows, which would not necessarily be down to the cal hardware/software. Yet, when I took the prints home and checked them against my monitor, they matched.
    I learned not to tweak the settings I saw on the Dell. But again, if I did tweak a customer's image, which we did continually, the prints matched the Dell.
    Unfortunately, before I could design some further testing routines, the long-time owners (50 years) decided to retire and sold off the equipment. I wound up with the Dell computer but not the monitor.
    If I took the time to set each gun so that the differential values cancelled, I actually did not have to proceed with the rest of the tuning.

  • Camera Raw 5.5 will not install?  And Camera calibration button question.

    I just built a new computer with Windows 7 Pro and installed all my software. When it comes time to install Camera Raw 5.5, it gets about halfway through and tells me to shut off Bridge. As far as I know, the Bridge window and program are closed and not running. I even unpinned it from the taskbar.
    In CS4, when I open a raw file and go to the Camera Calibration button, only two options come up: ACR 3.3 and 4.4. Where are all the other options? Do they come with the 5.5 update? Thanks, Keiger

    Keiger wrote:
    As far as I know, the Bridge window and program are closed and not running. I even unpinned it from the taskbar.
    Odds are you've got Bridge set to auto-launch at startup... make sure it's not running when you try to install the update. And yes, the update should install the additional DNG Profile options.

  • A Calibration.Gamma Question

    I've been calibrating a fairly new 21-inch Samsung LCD with the Eye One Display 2. Generally speaking, I am very happy with the results. The colors seem true and the monitor is not too bright. The problem is the gamma. The target chosen is 2.2; yet each time I calibrate, the result is 1.9. The color target of 6500 and brightness of 120 are achieved.
    So, what effect does this slight variance have, and any ideas why it can't be achieved?
    I adjust the brightness/contrast and RGB values using the software and the LCD OSD controls. Each step I get the values almost dead on in the center, so I can't figure out what's going on.
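    To put a rough number on the effect myself, here is my back-of-the-envelope estimate (it assumes a simple power-law transfer curve, output = input^gamma, and ignores the actual calibration LUT):

    # Relative luminance of a mid-gray input under the two display gammas.
    mid_gray = 0.5
    for gamma in (1.9, 2.2):
        print(f"gamma {gamma}: relative luminance {mid_gray ** gamma:.3f}")
    # gamma 1.9: relative luminance 0.268
    # gamma 2.2: relative luminance 0.218
    # i.e. midtones appear roughly 20-25% lighter than intended at gamma 1.9.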

    Now I'm confused, so let me get this straight.
    I said:
    >As long as what you're seeing between LR, PS and web syncs up, that's as accurate as you're going to get.
    you:
    >There's no reason why LR/PS and web colors should sync up, unless you're using a color-managed browser such as Safari.
    and then,
    >Which is why I always see a difference, sometimes ever so slight, between CM and nonCM apps.
    So,
    > As accurate as you're going to get.
    > Ever so slight.
    To me these mean the same thing.
    There are subtle differences between applications, but getting within 3% of expected is essentially accurate color.
    Why sweat those last few %? It's going to look different on everyone else's monitor, so why kill yourself?
    I apologize if my language caused a minor fire, but I just found your statement to be slightly offensive. Telling a user that they're not supposed to have consistent color seems like an attempt to scare them. I don't know about you, but I make my living producing color. I work on multiple systems spread throughout several agencies and photographers. In each of them, I've walked in, set up an unfamiliar system and had consistent and reasonably accurate color up and running within the first hour (Mac only). And the process is always the same. Unless the monitor you're working on is beyond repair, once you've calibrated, getting consistent color between apps is simply a workflow issue. Of course, the moment you bring an output device (other than the monitor) into the mix, everything goes to hell. Fortunately, the majority of my work these days is destined for the web. But if I had to tell my clients, "oh sorry, these files that you just spent a couple hundred thousand producing are going to look like inconsistent dog meat because some browsers aren't color managed," they would fire me. Seriously. This issue was resolved last century. I can bring up my work in any image editor, and ALL browsers, and not see any obvious differences. This is how it should be, and is.
    In case you haven't tried this, calibrate to 2.2/6500K. Now open LR and PS. Take an image from LR and export it as sRGB. Open it in PS. Notice any major differences between the two? You shouldn't. Next, Save for Web from PS. This will take your sRGB image and strip its profile (if the embed-profile box is unchecked). Save it. Drop the JPEG into all your browsers. What do you see? That's right, consistent color! And yes, it is possible to produce out-of-gamut color which shifts after being squeezed into sRGB. Fortunately, this is rarely the case.
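    If you ever want to reproduce that "convert to sRGB, strip the profile" step outside of Photoshop, a minimal Pillow-based sketch looks roughly like this (the file names are placeholders, and it assumes the source JPEG carries an embedded ICC profile):

    from io import BytesIO
    from PIL import Image, ImageCms

    img = Image.open("from_lightroom.jpg")        # placeholder file name
    embedded = img.info.get("icc_profile")        # embedded profile, if any
    if embedded:
        src = ImageCms.ImageCmsProfile(BytesIO(embedded))
        srgb = ImageCms.createProfile("sRGB")
        img = ImageCms.profileToProfile(img, src, srgb, outputMode="RGB")

    # Saving without passing icc_profile= leaves the profile out,
    # much like Save for Web with the profile box unchecked.
    img.save("for_web.jpg", quality=85)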

  • ACR Calibration Results Question

    I recently purchased a D300 and have been shooting in RAW format. I am working my way through Bruce and Jeff's book and decided to try to calibrate my camera.
    I shot a MacBeth (now X-Rite) color checker and followed the directions from the RWCR CS3 book.
    All went well with the calibration and the results I ended up with are:
    No Shadows correction
    R Hue -15 Sat +38
    G Hue -16 Sat +33
    B Hue +11 Sat +5
    The results of applying these corrections are kind of mixed. Overall the color looks pretty good, but the greens, especially in grass, are pretty saturated and look somewhat fake. Skin tones seem a little on the red side as well.
    Overall the image seems a bit too saturated. I guess I could adjust my corrections by eye and save those as a new preset, but I was wondering if my results were normal or perhaps I went astray somewhere.
    I intend to run some of the auto calibrators like the Fors script or Rags script to compare.
    Thanks,
    Les

    Les,
    I can't really comment on your current results. But I am currently working on a new update that you might want to help me beta test. If so, please contact me offline.
    I am also always looking for test images from new camera models. I jumped on the D3 bandwagon and I am delighted. I could use a D300 image for my test suite.
    Cheers, Rags :-)

  • Calibration Lightroom Question

    Hi folks,
    I've calibrated my MacBook Pro monitor using a Spyder2 Pro which has produced nice colours on the OSX desktop.
    When I open Lightroom I find that all my images are supersaturated and that the colours just seem wrong. Blues look quite purple and greens are really intense with the saturation and vibrance sliders at zero. When I output to JPEGs, the colours are all extremely weak and the vibrant colours of the image don't seem to be there.
    I am a colour management novice and am wondering what I am doing wrong.... Any ideas?
    Kind Regards
    Mark

    I am not an expert in this, but it seems like you have not actually applied the Spyder calibration to the monitor. LR is color-managed and would use it, whereas I am not at all sure the OS X desktop is.
    It seems more like whatever monitor profile you are currently using is corrupt.
    As to the JPEGs, what profile are you exporting them with? It should be sRGB. The flatness could be from Adobe RGB or ProPhoto RGB if you are viewing them in a browser.
    Don
    Don Ricklin, MacBook 2Ghz Duo 2 Core running 10.5.1 & Win XP, Pentax *ist D See LR Links list at http://donricklin.blogspot.com for related sites.

  • Reorganization interval question

    Experts,
    Our client deletes or reorganizes all the old PIRs weekly in the legacy system.
    In simple terms, every Monday they delete/reorganize the previous week's data through Sunday.
    So, mapping this scenario in SAP, should I set the reorganization interval to 1 day in customizing and run the background job every Monday?
    What should the reorganization interval period be: 1 day or 7 days?
    Also, what does "delete history" mean? Thanks for any help.

    In your case, the reorganization interval would be 1 day. PIR values are progressively reduced based on the logic of the planning strategy used. Once a PIR is reduced to zero, its usage is complete. In reorganization, whichever PIRs have not been reduced to zero are set to zero.
    Each PIR entry has two values: one is the reduced/balance PIR, and the other is the consumption value (which remains as history); the total of these two values is the original PIR value. So in deleting history, you decide whether you also want to delete the "consumed" PIR values.
    - Chetan

  • Calibration of PXI-5695 RF Attenuator

    NI PXI-5695 Specifications, RF Attenuator (NI document 375125C-01) indicates a calibration interval of 1 year.
    1. Is there a published National Instruments calibration procedure for the PXI-5695 RF Attenuator?
    2. The PXI-5695's that we have purchased did not come with calibration data or a calibration certificate.  Is there an option to purchase the PXI-5695 with calibration data?
    Thanks,
    Darrow Gervais

    Hello Darrow,
    There is not a published calibration procedure for the PXI-5695 RF Attenuator. You would need to send in your PXI-5695's to be calibrated by National Instruments. Here is NI's Calibration Services page: http://www.ni.com/services/calibration.htm. If you choose to have your device calibrated, you can choose from several types of calibration reports: http://www.ni.com/services/calibration_compare.htm.
    For new devices, NIST certificates are kept in an online searchable database here. National Instruments products are calibrated at manufacturing, so a newly purchased device will be NIST traceable. The certificates are searchable by the serial number of the device. The following link has more details on NIST Traceability Certificates: http://digital.ni.com/public.nsf/allkb/3459F092CEDE62C6862575A0006900F6.
    Best regards,
     

  • NI 9219 calibration

    I have an NI 9219 module that indicates the calibration expired 12/27/2008 (when looking at Measurement & Automation Explorer).  The manual for the 9219 also says it has a calibration interval of 1 year.  However, there is no information on the NI website about calibration for this module. 
    Is there a calibration for this module?  And how do we get it done?

    Hello hardwareguy,
    At this point, the best option for calibrating this module is our Factory Test Service. This service is comparable to our end-of-line production test and will adjust the unit back to "like new" condition. The service will also renew the calibration interval for the device and reset the date that is stored on the device (and displayed in MAX). However, this service does not include any detailed measurement data to demonstrate "As Found" or "As Left" measurement performance.
    To set up a Return Materials Authorization (RMA) for this service you can call 1-866-510-6285 in the US or visit the Contact NI page for other locations. I hope this is helpful, let us know if you have any other questions. 
    Matt Anderson
    Hardware Services Marketing Manager
    National Instruments

  • How does soft proofing differ between View - Proof Colors and Save for Web - Preview?

    Hi, I'm currently confused by one inconsistency. My working space is Adobe RGB and I use a calibrated monitor. After I finish my work on an image, I go to View -> Proof Colors -> Internet Standard RGB. The image looks terrible, with an overall violet/purple hue. Then I open the Save for Web dialogue, check Convert to sRGB, and from the Preview options I again select Internet Standard RGB. Now the previewed image looks as expected. I get the same results if I manually convert the image to sRGB before soft proofing and saving for web. So... what is the difference between the preview in Proof Colors and the one in Save for Web? Thank you for your opinions.

    Hi 21, thank you for your input. Everything you say makes perfect sense; it is exactly how it should work and how I expected it to work. My problem was that, while testing this theory in practice, I got different results. I expected that if I stuck to the theory (keeping in mind all the rules you described so well), I should get the same result in both the soft proof and the Save for Web preview. But... that was not the case. The Save for Web preview gave the expected results, while the soft proof was completely outside my assumptions and the colours were totally over-saturated with a violet/purple hue. Also, Edit -> Assign Profile -> sRGB gave a different result than Soft Proof -> Custom -> assign sRGB (preserve numbers), but the same result as the Save for Web preview. What troubled me was why this is so.
    Today I made tests on a hardware-calibrated monitor and... everything works exactly as you describe and as I expected.
    Then I went back to the other monitor, which is software-calibrated (both monitors are calibrated with an X-Rite i1 Display Pro). And again... I got the strange results described above. So I did the last thing I could think of and disabled colour calibration on that monitor. And suddenly... both the soft proof and the Save for Web preview gave the same result.
    Probable conclusion: the soft proof and the Save for Web preview (together with Edit -> Assign Profile) use different algorithms, which becomes evident on standard-gamut monitors with software calibration. The question can be closed.
    Gene and 21, thank you for your effort.

  • PXI module DLL Documentation

    Hi,
    I was able to extract the PXI module data that I asked about before through this link:
    But the VIs available on my function palette were somehow incomplete, so
    I wanted to create a few VIs that extract the data I want by using "Call Library Function".
    (The most important data for me right now are the Last External Calibration Date and the Recommended
    Calibration Interval from the PXI modules.)
    This will be my first time using "Call Library Function", so I need your help with some DLL documentation for the hardware below.
    I would really appreciate it if someone could share some documentation.
    Btw, I'm currently working with the following hardware:
    NI PXI-5922/5122  ------ niSCOPE
    NI PXI-6552          ------ niHSDIO.dll
    NI PXI-5406          ------ niFGEN
    NI PXI-4110/4130  ------ niDCPOWER
    Thanks in advance!
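    For what it's worth, here is the rough shape of what I am trying to do, sketched with Python/ctypes instead of a Call Library Function node. The DLL name, resource name, and entry-point prototypes are only guesses inferred from the VI names (these are exactly the details I need confirmed), so please verify them against niFgen.h before using any of this:

    import ctypes

    ni = ctypes.windll.LoadLibrary("niFgen_64.dll")   # assumed DLL name
    vi = ctypes.c_uint32(0)
    # assumed prototype: niFgen_init(resourceName, idQuery, resetDevice, &vi)
    ni.niFgen_init(b"PXI1Slot3", True, True, ctypes.byref(vi))   # placeholder resource

    y, mo, d, h, mi = (ctypes.c_int32() for _ in range(5))
    ni.niFgen_GetExtCalLastDateAndTime(vi, ctypes.byref(y), ctypes.byref(mo),
                                       ctypes.byref(d), ctypes.byref(h),
                                       ctypes.byref(mi))
    months = ctypes.c_int32()
    ni.niFgen_GetExtCalRecommendedInterval(vi, ctypes.byref(months))
    print(f"last external cal: {y.value}-{mo.value:02d}-{d.value:02d}, "
          f"recommended interval: {months.value} months")
    ni.niFgen_close(vi)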

    Hi Versil1,
    Thanks for posting.  The functions that you are looking for are already available in the function palette for some of the devices that you are using, so using Call Library Function to access the DLLs should not be necessary.  Here is where these functions are located:
    For NI-FGEN: Right-click, then go to Measurement I/O » NI-FGEN » Calibration » Utility » niFgen Get Ext Cal Last Date and Time.vi and niFgen Get Ext Cal Recommended Interval.  
    For NI-DCPower: Right-click, then go to Measurement I/O » NI-DCPOWER » Calibration » Utility » niDCPower Get Ext Cal Last Date and Time.vi and niDCPower Get Ext Cal Recommended Interval.
    For NI-Scope: Right-click, then go to Measurement I/O » NI-SCOPE » Calibration » External Calibration » niScope Cal Fetch Date.vi.  There currently is no function to retrieve the recommended interval.  This information can be found within Measurement & Automation Explorer.
    For NI-HSDIO: There are currently no functions to retrieve the last external calibration date or the recommended interval.  This information can also be found within Measurement & Automation Explorer.
    I will be filing some product suggestions for adding functions to NI-Scope and NI-HSDIO to return this information.  Our R&D department will take a look at the suggestion for future improvements.  For now, these are the functions that we have available to use.  Hope this helps!
    Regards,
    Joe S.

  • OT: Congratulations to Beth Marshall

    Yesterday, Beth Marshall (aka Zabeth69) was appointed an Adobe Community Professional (ACP) for Dreamweaver.
    ACP is the new name for what used to be known as Adobe Community Experts and, before that, Team Macromedia (and if your memory goes back a very long way, Team Allaire). Members of the scheme are volunteers, whom Adobe recognizes as experts in a particular product or technology, and who provide help to the community in a variety of ways, including help in the forums. It's a one-year (renewable) appointment, and competition to be selected is pretty tough. So, well done, Beth.

    Hi Everyone
    David wrote -
    There's no shame in being a beginner, as long as there is a willingness to learn.
    You are correct in that there is no shame in being a beginner, and I was not suggesting to anyone that these people should not be helped, just that I wish a small but significant number of them would at least recognize that the more advanced techniques are often not a case of cut-and-paste answers, and quite often what they want is something where they need to learn the basics first. That said, there are also many who are willing to learn, and there is a certain satisfaction in helping these people that makes up for the frustration caused by the others.
    Yes David, I also remember those days when browsers did not even have JavaScript. This probably made the learning process much simpler for those who started with the web early, as we did not need to learn everything in one go. Many who did start then have moved on to incorporating web standards, but some have not and still insist that tables, embedded styles, etc. are better. It is also unfortunate that there are still a large number of sites on the web whose tutorials and examples show pre-2004 techniques and completely ignore such things as accessibility, legal requirements, security and unobtrusive coding.
    I also think that this often leaves many beginners, some of whom post to the forum with, to quote David, "a widespread belief that web development is 'easy'".
    Osgood wrote -
    I do agree that it quite possibly is the calibre of questions that are asked which has led to a rather 'blinkered' board, in my view.
    Yes, this is part of what I was suggesting, but also that the forum does not really lend itself to helping with some of the 'more interesting' questions that really require a 'tutorial' rather than a simple 'do this, do that' approach. After all, who wants to write a series of short answers when a complete explanation, with example and code, is necessary (and how many have the time)? Time often becomes an issue for me when the answer really combines Dreamweaver/Flash with ActionScript/JavaScript (jQuery).
    Martin wrote-
    Now, back to my quest for WWW domination.  I will rule with fairness.
    You disappoint me, I thought you had already achieved this.
    Other thoughts -
    As for Dreamweaver itself, I think it supports the beginner in the best way possible, by enforcing standards-compliant code where possible, even if it does not have the more advanced features that have made me start looking for a better PHP IDE, having already been forced to move to Visual Studio for C#. Which also goes to show that it is not aimed exclusively at the professional web designer/developer.
    Just as a matter of interest, what do you all think are the skills necessary for someone to call themselves a professional web designer/developer?
    Paula

  • Mid 2010 15" i5 Battery Calibration Questions

    Hi, I have a mid 2010 15" MacBook Pro 2.4GHz i5.
    Question 1: I didn't calibrate my battery when I first got my MacBook Pro (it didn't say in the manual that I had to). I've had it for about a month and am doing a calibration today, is that okay? I hope I haven't damaged my battery? The calibration is only to help the battery meter provide an accurate reading of how much life it has remaining, right?
    Question 2: After reading Apple's calibration guide, I decided to set the MacBook Pro to never go to sleep (in the Energy Saver System Preference) and leave it on overnight so it would run out of power and go to sleep, then I'd leave it in that state for at least 5 hours before charging it. When I woke up, the light on the front wasn't illuminated. It usually pulsates when in sleep. As expected, it wouldn't wake when I pressed buttons on the keyboard. So, what's happened? Is this Safe Sleep? I didn't see any "Your Mac is on reserve battery and will shut down" dialogues or anything similar, as I was asleep! I've left it in this state while I'm at work and will charge it this afternoon. Was my described method okay for calibration, or should I have done something different?
    Question 3: Does it matter how quickly you drain your battery when doing a calibration? I.e., is it okay to drain it quickly (by running HD video, Photo Booth with effects, etc.) or slowly (by leaving it idle or running light apps)?
    Thanks.
    Message was edited by: Fresh J

    Fresh J:
    A1. You're fine calibrating the battery now. You might have gotten more accurate readings during the first month if you'd done it sooner, but no harm has been done.
    A2. Your machine has NOT shut down; it has done exactly what it was supposed to do. When the power became critically low, it first wrote the contents of RAM to the hard drive, then went to sleep. When the battery was completely drained some time later, the MBP went into hibernation and the sleep light stopped pulsing and turned off. In that state the machine was using no power at all, but the contents of your RAM were still saved. Once the AC adapter was connected, a press of the power button would cause those contents to be reloaded, and the machine would pick up again exactly where you left off. It is not necessary to wait for the battery to be fully charged before using the machine on AC power, but do leave the AC adapter connected for at least two hours after the battery is fully charged. Nothing that you say you've done was wrong, and nothing that you say has happened was wrong.
    A3. No, it does not matter.

  • PXI 2527 & PXI 4071 -Questions about EMF considerations for high accuracy measurements and EMF calibration schemes?

    Hi!
    I need to perform an in-depth analysis of the overall system accuracy for a proposed system. I'm well underway using the extensive documentation in the start-menu National Instruments\NI-DMM\ and ..\NI-Switch\ Documentation folders...
    While typing this question, I think I partially answered part of it myself by cross-referencing NI documents... However, a couple of questions remain:
    If I connect a DMM to a 2-by-X arranged switch/mux, each DMM probe will see twice the listed internal "Differential thermal EMF" at a typical value of 2.5uV and a max value of less than 12uV (per relay). So would the total effect on the DMM uncertainty caused by the switch EMF be 2*2.5uV = 5uV? Or should these be added as RSS: sqrt(2.5^2 + 2.5^2), since you cannot know whether the two relays have the same EMF?
    Is there anything that can be done to characterize or account for this EMF (software cal, etc?)?
    For example, assuming the following:
    * Instruments and standards are powered on for several hours to allow thermal stability inside of the rack and enclosures
    * temperature in room outside of rack is constant
    Is there a reliable way of measuring/zeroing out the effect of system EMF? Could this be done by applying a high-quality, low-EMF short at the point where the DUT would normally be located, followed by a series of long-aperture voltage average measurements at the lowest DMM range, where the end result (say (+)8.9....uV) could be taken as a system calibration constant accurate to the specs of the DMM?
    What would the accuracy of the 4071 DMM be? Can I calculate it as follows: 8.9uV +- 700.16nV using the 90-day specs, and 8.9uV +- 700.16nV + 150nV due to "Additional noise error", assuming an integration time of 1 (aperture) for ease of reading the chart and a multiplier of 15 for the 100mV range? (Is this equivalent to averaging a reading of 1 aperture 100 times?)
    So, given the above assumptions, would it be correct to say that I could characterize the system EMF to within 8.5uV +- [700.16nV (DMM cal data) + 0.025ppm*15 (RMS noise, assuming an aperture time of 100*100ms = 10s)] = +-[700.16nV + 37.5nV] = +-737.66nV? Or should the ppm accuracy uncertainties be combined as RSS: 8.5uV +- sqrt[700.16nV^2 + 37.5nV^2] = 8.5uV +- 701.16nV?
    As is evident from my line of thought above, I am not at all sure how to properly sum the uncertainties (I think you always do RSS for uncertainties from different sources?) and, more importantly, how to read and use the graph/table in the NI 4071 Specifications.pdf on page 3. What exactly does it entail to have an integration time larger than 1? Should I adjust the aperture time, or would it be more accurate to just leave the aperture at the default (100ms for the current range) and average multiple readings, say average 10 to get a 10x aperture equivalent?
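    To make the two candidate summations concrete, here is the arithmetic I am choosing between (a simple linear sum versus root-sum-square, using the 2.5uV typical per-relay EMF and the DMM terms quoted above):

    from math import sqrt

    # Two relay junctions in the signal path, 2.5 uV typical EMF each.
    relay_emf = [2.5e-6, 2.5e-6]
    linear = sum(relay_emf)                          # 5.00 uV, worst-case style
    rss = sqrt(sum(e**2 for e in relay_emf))         # ~3.54 uV, uncorrelated case
    print(f"relays - linear: {linear*1e6:.2f} uV, RSS: {rss*1e6:.2f} uV")

    # Same question for the DMM terms (numbers taken from the post above).
    dmm_cal = 700.16e-9    # 90-day calibration uncertainty
    noise = 37.5e-9        # additional RMS noise term
    print(f"DMM    - linear: {(dmm_cal + noise)*1e9:.2f} nV, "
          f"RSS: {sqrt(dmm_cal**2 + noise**2)*1e9:.2f} nV")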
    The text below is what was going to be the post until I think I answered myself. I left it in as it is relevant to the problem above and includes what I hope are correct statements. If you are tired of reading now, just stop; if you are bored, feel free to comment on the section below as well.
    The problem I have is one of fully understanding part of this documentation. In particular, since a relay consists of (at least) two dissimilar metal junctions (as mentioned in the NI Switch help\Fundamentals\General Switching Considerations\Thermal EMF and Offset Voltage section) and because of the thermocouple effect (Seebeck voltage), it seems that there would be an offset voltage generated inside each of the relays at the point of the junction. It refers to the "Thermocouple Measurements" section (in the same help document) for further details, but this is where my confusion starts to creep in.
    In equation (1) it gives the expression for determining E_EMF which for my application is what I care about, I think (see below for details on my application).
    What confuses me is this: If my goal is to, as accurately as possible, determine the overall uncertainty in a system consisting of a DMM and a Switch module, do I use the "Differential thermal EMF" as found in the switch data-sheet, or do I need to try and estimate temperatures in the switch and use the equation?
    *MY answer to my own question:
    By carefully re-reading the example in the thermocouple section of the switch help, I realized that they calculate two EMFs: one for the internal switch, calculated as 2.5uV (given in the spec sheet of the switch as the typical value), and one for the actual thermocouple. I say actual, because I think my initial confusion stems from the fact that the documentation talks about the relay/switch junctions as thermocouples in one section and then talks about an external "probe" thermocouple in the next, and I got them confused.
    As such, if I can ensure low temperatures inside the switch at the location of the junctions (by adequate ventilation and powering down latching relays), I should be able to use 2.5uV as my EMF from the switch module, or to be conservative, <12uV max (from data sheet of 2527 again).
    I guess now I have a hard time believing the 2.5uV typical value listed... They say the junctions in the relays are typically an iron-nickel alloy against a copper alloy. Well, those combinations are not explicitly listed in the documentation table of Seebeck coefficients, but even a very small value, like 0.3uV/C, adds up to 7.5uV at 25degC. I'm thinking maybe the table values in the NI documentation refer to the Seebeck values at 25C?
    Project Engineer
    LabVIEW 2009
    Run LabVIEW on WinXP and Vista system.
    Used LabVIEW since May 2005
    Certifications: CLD and CPI certified
    Currently employed.

    Seebeck EMF needs temperature gradients; in your relays you hopefully have low temperature gradients... However, in a switching contact you can have all kinds of diffusion and 'funny' effects, so keeping the junctions at the same temperature is the best you can do.
    Since you work with a multiplexer and with TCs, you need a good cold junction (at 0°C for serious calibrations), and that is the right place for your short to measure the zero EMF. Another good test is to loop the 'hot junction' back to the cold junction and observe the residual EMF. Touching (or heating/cooling) the TC loop gives another number for the uncertainty calculation: the inhomogeneous material of the TC itself.
    A good source for TC knowledge:
    Manual on the use of thermocouples in temperature measurement,
    ASTM PCN: 28-012093-40,
    ISBN 0-8031-1466-4 
    (Page 1): 'Regardless of how many facts are presented herein and regardless of the percentage retained, all will be for naught unless one simple important fact is kept firmly in mind. The thermocouple reports only what it "feels." This may or may not be the temperature of interest.'
    Message Edited by Henrik Volkers on 04-27-2009 09:36 AM
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'
