Strange temperature and voltage readings

I've had my PC for about two years; it's a KT3 Ultra-ARU with an Athlon XP 1800+ and it has worked fine until now.  I just bought an MP3 player (Creative Zen Touch), so I finally have a USB 2.0 device.
I opened up the case to make sure the front USB ports on my case (Lian Li PC60) were connected properly and connected to a USB2 header.  I pulled out the network card so I could see/reach the USB headers and made sure things were connected properly.
I booted into Windows XP and everything seemed fine except PC Alert 4 which now shows some freaky results.
CPU Temperature : -44C
Sys Temperature  : 0C
Vcore : 4.08v
3.3V   : 4.08v
+5V   : 6.85v
+12V : 15.30v
CPU and system fans are normal (6000 and 1500 rpm).
I rebooted into the BIOS, and it also shows similarly absurd values, except the temperature, which reads "Not Present".
The only thing I've tried so far is to unplug the power cord and leave it unplugged overnight; I didn't want to try anything too drastic until I got some opinions.
I'm still using the original BIOS that came with the board, I've had no issues and so no need to update the BIOS.
For what it's worth, I have tested copying files to the mp3 player using the USB2 port and it worked fine.
David.

"I opened up the case to make sure the front USB ports on my case (Lian Li PC60) were connected properly and connected to a USB2 header. I pulled out the network card so I could see/reach the USB headers and made sure things were connected properly."
99.5% of people would have just connected the USB device and, if it didn't work, then checked the USB connectors... but you did the checking first, even removing a card?
..."Perhaps I knocked something while fiddling around and fried a not so important component on the motherboard..." Did you? Why do I feel you are talking to yourself more than giving information?
..."the PC has been running for a few hours, even after playing a game (Deus Ex) for 45 minutes...". Then if everything is working fine, carry on as you are and don't pay attention to those weird numbers.

Similar Messages

  • Can not measure temperature and voltage simultaneously

I am a beginner with LabVIEW. I want to measure temperature and voltage simultaneously. When I run
the VI, I can get temperature or voltage, but not together. I have attached my VI; please give me suggestions on
how to make it work. Thanks!
    Attachments:
    heatflux.vi ‏1069 KB

    Since I don't know your exact configuration I will make some basic assumptions based on how it appears that you have configured the DAQ Assistant Express VIs.
    Assumptions:
    1. You have only one DAQ board in your system.
2. You want to scan continuously.
3. You want to acquire 3 temperature channels at a rate of 1000 S/s and take 100 readings at a time.
4. You want to acquire 2 voltage channels at a rate of 1000 S/s and take 1000 readings at a time.
Based on this configuration, your first problem is that you have configured the DAQ board to acquire continuously in the first call to the DAQ Assistant (the first frame of your sequence structure). This ties up all the analog acquisition resources without releasing them. When you make your second call to the DAQ Assistant (the second frame of the sequence structure), you create a conflict, because the DAQ board is already busy running your first request. At this point you are probably receiving an error, but you might not see it since you are not doing error checking in your code. This is also why you are only getting one set of data: on the next iteration of the while loop, the first call to the DAQ Assistant reconfigures the board and executes again, and so the cycle repeats itself.
I don't have a DAQ board installed, so I can't confirm this with certainty, but you can test it by simply changing the DAQ Assistant properties: in the 'Task Timing' tab, change from 'Acquire Continuously' to 'Acquire N Samples'.
Assuming this works, all you have done is confirm that my assumptions are correct, and technically your program should work. So now some programming advice.
It's OK to scan all channels at once even though they might not be of the same type, so go ahead and configure all your channels in one DAQ Assistant call and get rid of the sequence structure. Decide on one set of parameters for Scan Rate and Samples to Read; in your case I doubt this will be a problem. Since you are performing the same analysis on all channels, you don't need to parse your data: simply pass the 'data' from your DAQ Assistant into a single 'Amplitude and Level Measurements' Express VI. You will then have a single array with all your mean values, ordered the way the channels are configured. If you want to plot the data in different graphs, all you need to do is parse your channels using the 'Split Signals' or 'Select Signals' Express VI.
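The pairing step at the end can be sketched in plain Python; the channel names and the mean values below are made up for illustration, and in the real program they come from the DAQ Assistant and the 'Amplitude and Level Measurements' Express VI:

```python
# Sketch: one acquisition returns all channels together; the means come back
# in the order the channels were configured, and we split them afterwards.

def split_by_channel(means, channel_names):
    """Pair each mean value with the channel it belongs to."""
    return dict(zip(channel_names, means))

# Hypothetical result for 5 channels: 3 temperatures, then 2 voltages.
means = [25.1, 24.8, 26.0, 1.02, 0.98]
channels = ["temp0", "temp1", "temp2", "volt0", "volt1"]

by_channel = split_by_channel(means, channels)
temperatures = [by_channel[c] for c in channels[:3]]
voltages = [by_channel[c] for c in channels[3:]]
```

The point is only that channel order is preserved, so splitting afterwards is trivial; 'Split Signals' / 'Select Signals' do the same job inside LabVIEW.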
    Hope this makes some sense.
    -Christer

  • These i7 4770K Temperatures And Voltage Values Are Normal With Z87-G43?

    Hello everyone,
I recently upgraded my good old Core2Duo rig and bought a new CPU-motherboard-RAM trio. My new specs are as follows:
    - i7 4770K @ stock speed + Coolermaster 212Evo cpu cooler
    - MSI z87 g43 motherboard
    - 2x4 GB G-Skill Ripjaws X 1866Mhz
    - Case: Corsair Carbide 400R
The problem is I wasn't aware that the new Haswell CPUs run slightly hotter, and now I'm a little worried about my temperatures. Since I was also planning to overclock my CPU a bit and to find a point that doesn't need a big voltage increase, I'm losing sleep over this at the moment.
Anyway, I'm using software like HWMonitor, CoreTemp, RealTemp and Intel Extreme Tuning Utility. Turbo Boost is also active, so the CPU goes up to 3.9 GHz. Other than that I'm on stock speeds and the motherboard's default values. Here are my temperatures:
    Ambient room temp: Varies between 23-25 °C
    Idle: 28-30 °C
    While playing demanding games like Battlefield 4: Max 58-65°C
    With Intel Extreme Tuning's stress test for 15 mins: max 65-70 °C
    With Prime 95 Blend and OCCT burn tests for 15 mins: max 78-82 °C
I also ran RealTemp's sensor test, and the values are identical since it uses Prime95 too.
I also noticed that Prime95 and OCCT increase my CPU voltage from 1.156 to 1.21, while Intel Extreme Tuning's stress test and BurnInTest use 1.156 V. All these tests load 100% of the cores. I can't understand why there is a voltage increase on certain tests, which pushes my temps even higher. Will I encounter these kinds of random voltage increases during normal tasks, like playing games or rendering?
On the other hand, I tried the motherboard's OC Genie feature to see what happens. It automatically overclocked the CPU to 4.0 GHz @ 1.10 V. With this setting I've seen a max of 70 °C for a second, and mostly 65-68 °C under the OCCT stress test, and my voltage didn't increase at all; it sat at 1.10. I'm a bit confused by these values, since with default settings I get hotter temperatures and my voltage goes up to 1.21 with Turbo Boost under Prime95/OCCT burn tests. I also found out that my BIOS is v1.0. I don't really have a performance or stability issue for now, except this voltage thing. Would a BIOS update help with this situation? I don't really like to touch something that is already working OK and end up with a dead board.
    Also I'm wondering if my temperature values are normal with the cpu cooler i have (Coolermaster 212evo)?
    I also could buy some extra fans for my case (1 exhaust to top & 1 intake to side) and maybe a second fan for the cpu cooler if you guys think that these would help a bit.
    Sorry for my English by the way. I'm not a native speaker.
    Thanks for all your comments and suggestions already.

    Thanks for the reply Nichrome,
I will follow your suggestions for the fans. Currently I don't have any fans at the top, but I'm considering buying some for the top and side. So you get better results with the top fans as exhaust, right?
Also, which fan are you using as the second fan on the CPU heatsink? I would buy one of those as well, since we have the same heatsink.
I'm also using the default/auto voltages and settings at the moment. Just Turbo Boost is enabled, and when it kicks in the voltage goes up to 1.156, which seems normal and doesn't produce a dangerous level of heat. The thing is, if I start running Prime95 or OCCT, the voltage goes up to 1.21+ at the same Turbo Boost speed (3.9 GHz), and that produces a lot more heat than usual. But if I use BurnInTest or the Intel Extreme Tuning Utility stress test, the voltage sits at 1.156 under full load on all cores. I'm wondering what the reason for this difference is, and whether it is software or motherboard related. Even using OC Genie @ 4.1 GHz, temperatures and voltages seem lower than the stock/auto settings (idle 35-38, OCCT stress test 70 °C max, gaming 60-65 °C max). I'm not sure if a BIOS update would fix this, since the whole BIOS flashing process creeps me out. I don't like to mess with something that is already working OK; I don't want to end up with a dead board in the end :P Maybe I'm becoming a bit paranoid, though, since this is a really hard-earned upgrade after 6 years. :P

  • Recording Temperature and Voltage measurements using Keithley 2182 Nanovoltmeter

    Hello all,
I am relatively new to LabVIEW and looking to extend a VI I am currently using.
I am trying to record voltage and temperature measurements from a Keithley 2182 nanovoltmeter over a GPIB cable. I have a VI that can do this for either voltage or temperature, not both. At the moment I only record what is shown on the display of the nanovoltmeter.
Could somebody explain how I could get LabVIEW either to switch between voltage and temperature on the nanovoltmeter, or whether it is possible to take two simultaneous measurements of temperature and voltage, and how I would achieve this.
    Thanks
    Mike

    Hi,
For each read, whether temperature or voltage, a specific command is sent to the voltmeter.
I don't think (actually I'm pretty sure) you cannot read them in parallel, but you can do it successively: one voltage read, one temperature read, and so on.
    There should be something like:
    while not STOP do
      1. send GPIB command for changing Keithley to Voltage Measurement
      2. send GPIB command for Voltage Read
      3. read GPIB -> Voltage
      4. send GPIB command for changing Keithley to Temperature Measurement
      5. send GPIB command for Temperature Read
      6. read GPIB -> Temperature
    end
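Outside LabVIEW, the same alternating pattern looks like this in Python. The `FakeInstrument` class is a stand-in for the real GPIB session, and the `CONF:VOLT` / `CONF:TEMP` strings are placeholders, not verified Keithley 2182 commands; check the instrument manual for the actual ones:

```python
class FakeInstrument:
    """Stand-in for a GPIB session; a real one would write to / query the meter."""
    def __init__(self):
        self.mode = None

    def write(self, cmd):
        self.mode = cmd  # remember the last configuration command

    def read(self):
        # Return a canned value depending on the configured mode.
        return 1.234 if "VOLT" in self.mode else 23.5

def acquire(inst, n_pairs):
    """Alternate voltage and temperature reads, as in the loop above."""
    pairs = []
    for _ in range(n_pairs):
        inst.write("CONF:VOLT")   # placeholder for the voltage-mode command
        v = inst.read()
        inst.write("CONF:TEMP")   # placeholder for the temperature-mode command
        t = inst.read()
        pairs.append((v, t))
    return pairs

readings = acquire(FakeInstrument(), 3)
```

The successive reads mean each (voltage, temperature) pair is slightly staggered in time, which is usually acceptable for slowly varying temperatures.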
You can take a look in the VI to see which commands are sent for the voltage and temperature reads, and mix them as I described above.
If you don't manage it, share your VIs (for temperature and voltage); maybe it will be easier for me (or someone else) to give you some additional advice.
    Paul

  • Temperature and Voltage read by Keihtley 2182A

    Dear All,
    Thanks in advance for helping.
I am using Ch1 and Ch2 of a Keithley 2182A for voltage and temperature measurement, respectively. I wish to measure temperature as soon as the voltage measurement is taken. Here is the VI I wrote.
Any advice on updates and corrections is appreciated.
    Best.
    Attachments:
    K2182-ch1-dV-ch2-dT.vi ‏42 KB

    Hi,
Here is a sample of a two-case state machine using a shift register.  Keep the opening of files and other one-time-only setup out of the while loop.
With it you can run case one, then case two, in your code.  I assume the time to initialize channel 2 is acceptable; that will be the delay between the two measurements.
I don't know that instrument. My first thought for taking both measurements as close together as possible would be to do all the initializations first, then take one measurement and then the other.  I assumed from your code that you had to initialize and measure each channel in series, hence the two-case state machine.
Can this instrument take both measurements and then send you the results? That would most likely be faster.
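The shift-register pattern maps onto any language as a small state machine; here is a minimal Python sketch of the same two-case idea (the state names and log messages are made up):

```python
def state_machine(n_cycles):
    """Two-case state machine: measure channel 1, then channel 2, then repeat.

    The `state` variable plays the role of the LabVIEW shift register:
    whatever one case leaves in it decides which case runs next iteration.
    """
    state = "CH1"
    log = []
    for _ in range(2 * n_cycles):
        if state == "CH1":
            log.append("init+measure channel 1")
            state = "CH2"   # hand off to the other case
        else:
            log.append("init+measure channel 2")
            state = "CH1"
    return log

trace = state_machine(2)
```

The delay between the two measurements is whatever the channel-2 initialization costs, exactly as noted above.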
    Mark Ramsdale
    Attachments:
    two case state machine.vi ‏7 KB

  • [Solved] lm-sensors fan speeds and voltage.

    After installing lm-sensors and running sensors-detect I have the following readings from sensors;
    ~$ sensors
    acpitz-virtual-0
    Adapter: Virtual device
    temp1: +42.0°C (crit = +80.0°C)
    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0: +47.0°C (high = +78.0°C, crit = +100.0°C)
    coretemp-isa-0001
    Adapter: ISA adapter
    Core 1: +42.0°C (high = +78.0°C, crit = +100.0°C)
    I am sure in the recent past I was able to retrieve fan speed and voltage readings from this machine;
    |Abit FP-IN9 SLI | Intel(R) Core(TM)2 Duo CPU [email protected] | GeForce 9800 GT |
    I suspect this is something that I am doing wrong (or is it a kernel issue??) as I have the same problem on a similar machine running Ubuntu 10.04.
    Any ideas appreciated.
    Last edited by ancleessen4 (2010-03-09 08:31:15)

    Hi graysky,
    You were spot on!
    I followed the instructions in the wiki article to add
    acpi_enforce_resources=lax
    to /boot/grub/menu.lst
    Reboot...
    Voila!
    [neil@archbox ~]$ sensors
    acpitz-virtual-0
    Adapter: Virtual device
    temp1: +30.0°C (crit = +80.0°C)
    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0: +36.0°C (high = +78.0°C, crit = +100.0°C)
    coretemp-isa-0001
    Adapter: ISA adapter
    Core 1: +34.0°C (high = +78.0°C, crit = +100.0°C)
    w83627dhg-isa-0290
    Adapter: ISA adapter
    Vcore: +1.12 V (min = +0.00 V, max = +1.74 V)
    in1: +0.91 V (min = +0.51 V, max = +0.21 V) ALARM
    AVCC: +3.30 V (min = +2.98 V, max = +3.63 V)
    VCC: +3.30 V (min = +2.98 V, max = +3.63 V)
    in4: +1.10 V (min = +0.00 V, max = +0.14 V) ALARM
    in5: +1.20 V (min = +0.13 V, max = +0.00 V) ALARM
    in6: +1.58 V (min = +0.02 V, max = +0.00 V) ALARM
    3VSB: +3.23 V (min = +2.98 V, max = +3.63 V)
    Vbat: +3.01 V (min = +2.70 V, max = +3.30 V)
    fan1: 0 RPM (min = 340 RPM, div = 128) ALARM
    fan2: 811 RPM (min = 332 RPM, div = 16)
    fan3: 0 RPM (min = 340 RPM, div = 128) ALARM
    fan4: 0 RPM (min = 340 RPM, div = 128) ALARM
    fan5: 0 RPM (min = 527 RPM, div = 128) ALARM
    temp1: +28.0°C (high = +72.0°C, hyst = +0.0°C) sensor = thermistor
    temp2: +29.0°C (high = +70.0°C, hyst = +65.0°C) sensor = diode
    temp3: +38.5°C (high = +70.0°C, hyst = +65.0°C) sensor = thermistor
    cpu0_vid: +0.000 V
    [neil@archbox ~]$
    :D

• Simultaneous temperature and voltage measurement

    We have a NI-4351 to measure both the temperature and a voltage across a resistor. We have utilized two successive flat sequence structure frames, each containing an AI-Single Scan SubVI, to first scan the voltage off the thermocouple, and the second to scan the voltage off the resistor. Our voltage readings across the resistor are roughly 3V.
    When we run our VI without scanning the resistor voltage, our temperature readings are correct. However, once we connect the voltage supply to a channel on the terminal board, we notice two things:
    1) A constant error of 2500
2) The voltage readings across the thermocouples shoot up by roughly three orders of magnitude, to approximately 0.08 volts
    Would we need to separate the two scans into two separate buffers? If so, how would we do this?
    Much Appreciated!!

    Coolest,
    At first glance, this particular issue may be related to the fact that the 4351 can only be configured for one gain setting across all of its channels. I recommend that you connect both signals and run a program which only performs one measurement at a time, reconfiguring the board between each measurement. If you still have the problem, disconnect both signals and then measure each one at a time. Make sure that you are configuring the channels to be ground referenced. If you've tried all of this and are still experiencing the same problem, there might be some cross-talk occurring between the channels -- potentially due to a grounding issue.
    I hope that this is helpful to you.
    Thanks,
    Alan L
    Applications Engineer
    National Instruments

• Why can't I see the temperature and GPU voltage?

Hi,
I recently purchased an MSI GeForce FX5200-T128. When I run the 3D!Turbo Experience utility, it only shows the clock speeds. Why doesn't it show the GPU and I/O voltage and the temperature?
I am using the drivers provided on the MSI site, ver. 56.72.
My system is also overheating (especially when I play games like Far Cry or Splinter Cell), and it also heats up when I am not playing. My video card came with a copper heatsink; do I need to use a fan?
Thanks in advance.

Unfortunately, not all graphics cards support hardware monitoring, and it looks like your card does not support it, so you won't get information on its temperature or voltages.
If the heatsink gets really hot, then you most probably need a fan for the card. You can avoid the overheating problem (if you have not already done so) by installing adequate cooling in your system: case fans and good airflow, with intake fans at the front/side and exhaust fans at the top/rear.
This way you can ensure that cool air flows over your motherboard components and graphics card.
Also try to tidy the cables in your PC to help with airflow.

  • Temperature, strain and voltage

I need assistance adding Temperature, Strain and Voltage separately to my input code at FBG Lambda 1; this will let the waveform of any of the inputs vary, just so I can observe the variation in the input waveform.

"Dear Young,
I want to see how to add Strain and Voltage to the input waveform of Lambda1; my simulation is just to study or observe a change in any of the waveforms, preferably the FBG waveform lambda1.
    here are the details of what i want to achieve thereafter: 
    Development of  simulation program to assess the capability of the proposed all-optical protection system to detect faults
    Test the model during investigation of different scenarios of fault within the zone and outside the zone of protection
    Investigate the influence and the effect of Temperature changes on the ability of the scheme to provide correct fault identification.
    Investigate the influence of Leakage current capacitance on Power Transmission Cable
    Development of Algorithm to provide fault status information having a binary information to indicate fault duration or no fault occurrence within the protection zone.
Received from olusegunalfred"
    Thanks for your prompt reply. If it's okay, I have put your response here to keep it public and help people in the future.
    Joshua Young
    Applications Engineer 
    National Instruments UK

  • P45 Platinum Voltage Readings in Speedfan & HWMonitor

    I am having some problems reading the voltages (+12v, +5v, etc..) of my P45 Platinum motherboard in Speedfan and HWMonitor.
    Attached are some screenshots.
    I also did a search and found this thread where jack the newbie posted screenshots about his speedfan readings: https://forum-en.msi.com/index.php?topic=121987.msg922325#msg922325
    His voltage readings are also around the same range.
    Is this due to a problem with our motherboard or the reading of voltages by speedfan/hwmonitor?
    Could those who have the P45 Platinum do a test and see whether they get the same strange readings from the voltages too?
    Thanks in advance

    Quote from: Jack the Newbie on 17-December-08, 01:03:00
    Those readings are impossible readings.  No system would be able to function with the +12V level dropping to 1.23V.  It is simply impossible.  Both of the tools show wrong readings, which is not the fault of the board but a problem with those tools.  Check the voltage levels in BIOS, or, even better:  use a multimeter.
Thanks Jack the Newbie. And thank you youeffsee for testing on your board. That should be proof that I'm not the only one facing this issue and my board is not bad. Phew...
    I knew those were ridiculous readings. The readings in BIOS are perfect. If those tools are bad, are there any that currently read the voltages correctly on the P45 Platinum? Besides MSI's own DualCore Center of course. Being able to monitor voltages in Windows without a multimeter sticking out of the casing would be nice.
    And does anyone know what the other voltages/temperature sensors are reading? Thanks again.

  • Where can I go to learn about scaling voltage readings from different AI devices?

    Ex. I have a Vaisala HMP 60 RHT sensor. How do I know what to scale the voltage readings to get the correct RH? I have seen values (such as 20 in the linear range for the Vaisala HMT 100), but I don't know where it comes from or why it is chosen. In case it matters, I am using several AI devices (thermocouples, MFC's, RHT's) on an NI-6343 USB board. I'd really like to learn more, but am finding information very tough/convoluted. Thank you for any/all help and direction.
    Solved!
    Go to Solution.

Thank you for the input/help. I think what I was looking for was this: if the sensor has a 5 V full-scale output (as for the Vaisala HMP60), then the slope would be 20, so that a 0 V reading corresponds to 0% RH and a 5 V output corresponds to 100% RH. Hopefully this is right and there is no drift.
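That linear mapping is a one-liner. A sketch assuming a 0-5 V output spanning 0-100 %RH (the ranges here are assumptions; check the sensor's datasheet for the actual output range):

```python
def volts_to_rh(v, v_full_scale=5.0, rh_full_scale=100.0):
    """Linear scaling: slope = 100 %RH / 5 V = 20 %RH per volt."""
    return v * (rh_full_scale / v_full_scale)
```

So a reading of 2.5 V maps to 50 %RH under these assumptions; a sensor with a different output span just changes `v_full_scale`.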

  • RAM voltage readings with DMM vs. Software on X58 Pro-E

    I almost fell over with shock when HWmonitor and Speedfan both reported my RAM voltage spiking over 3volts when running Prime95. When idling the voltage readings fall to about 0.6v and jump around during use.
    I set my RAM voltage in BIOS to 1.5v but software doesn't report that voltage at all.
    So I grabbed my DMM and measured the voltage on the Mobo directly and found that the voltage is in fact exactly 1.5v
    How is it the software can show variations in voltage from 0.5 to 3.7 volts when it's ALWAYS 1.5v when measured directly from the board with a meter?
    I measure from the silver tab (center leg) of what I'm sure is a kind of transistor or FET, which I know is v output.
    How can I trust any of the other voltage readings then? Especially the CPU v.

    Quote from: Jack on 14-July-11, 21:56:29
    Neither one of these utilities is actually able to correctly monitor the actual memory voltage.  Just ignore those readings.  They are absolutely inaccurate no matter how you look at it (@3.7V you memory modules would probably simply vaporize and @0.6V your system would not be running anymore).  These readings are IMPOSSIBLE readings.
    Don't trust any memory related voltage readings that you get from third party software applications.
    There you go.
That's what I figured. I wasn't suspicious until my RAM voltage hit over 3.5 V, nearing 4 V, when stressing with Prime95; I couldn't understand how it was so high and still working.
I mean, all the voltage readings in SpeedFan look like normal values when I know full well they're not the ones I manually set in the BIOS. It's frustrating.
And yes, they're all the latest releases and my BIOS is up to date with new firmware, but no, my OS is still Vista 64 SP1 with absolutely no updates installed. Though I seriously doubt that would change any voltage readings I get in SpeedFan.
I'm almost tempted to solder connections to all the voltage points on the mobo for easy DMM connection. But I probably won't, because it's not worth the trouble.

  • Measure current and voltage in Elvis and labview

Hi!  We're setting up a control-system lab for class where we need to monitor the current and voltage that we are supplying to / drawing from the ELVIS hardware within a LabVIEW setup.  So far, we can accurately display voltage inputs and outputs from the AI and AO terminals respectively.  However, we can't seem to find a direct way to measure current draw.  I believe we are supposed to be able to get DMM readings from the AI5 channel, but either we're not doing it correctly, or we still need to place a known resistance across it (which we cannot do, due to variable loads).  Any advice, please?
    Cheers,
    Gerald

    Are you using Elvis I or II?  From your post, I will assume it is Elvis I but please correct me if this is not the case.  Page 38 of the manual shows that ACH5 is used for capacitor or diode measurements for the DMM.  To measure current, use the connectors on the front of the Elvis unit and then route the current HI/LO on the protoboard to either the oscilloscope or an analog input channel.  You can use the DMM Soft Front Panel to monitor current as well.
    Message Edited by h_baker on 07-09-2009 04:54 PM
    Regards,
    h_baker
    National Instruments
    Applications Engineer
    Digital Multimeter Resources

  • Correlated analog and encoder readings with M-series

    Hardware: PCI-6220
    Driver: NI-DAQmx
    Software: VC++ 6.0
    My goal is to collect synchronized (correlated) analog and encoder readings to provide position-based voltage and current information.
    First question is regarding the potential alternatives and which is the best approach between these options:
    1) A single task that reads required analog and counter information into the same buffer, seems easy to align the data using this approach.
2) A task to read analog data and a second task to read counter information, complicated by the need to reliably align the information from separate buffers.
Next question is simple: can the PCI-6220 read 2 separate encoders?
    Hopefully, last question: where can I find documentation about where to connect A, B, and Z? So far all my searches have resulted in E series or 6602 counter examples and nothing on M series.
    Thanks, Ed

    Hi Ed,
    To answer your questions:
    >>First question is regarding the potential alternatives and which is the best approach between these options:
    >>1) A single task that reads required analog and counter information into the same buffer, seems easy to >>align the data using this approach.
    >>2) A task to read analog data and a second task to read counter information, complicated by the need to >>reliably align the informatio from seperate buffers
First of all, option one is not actually an option, because you cannot have one task acquire two different types of data (e.g. analog and counter data). Therefore, option two is the way to go. It is not as hard as it seems: you can use channel Z to trigger the analog input channels, and once the analog and counter channels are triggered at the same time, their data will automatically be aligned.
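Once both tasks start on the same trigger and share a sample clock, combining the two buffers is just pairing equal sample indices. A Python sketch with made-up buffers (the real ones would come from the two DAQmx reads):

```python
def align(analog_samples, counter_samples):
    """Pair sample i of the analog buffer with sample i of the counter buffer.

    Valid only because both tasks were triggered together and share a
    sample clock, so index i means the same instant in both buffers.
    """
    if len(analog_samples) != len(counter_samples):
        raise ValueError("buffers out of step; check trigger/clock setup")
    return list(zip(counter_samples, analog_samples))

# Hypothetical buffers: encoder counts and voltages on the same sample clock.
positions = [0, 4, 8, 12]
volts = [0.00, 0.05, 0.11, 0.16]
position_voltage = align(volts, positions)  # [(position, voltage), ...]
```

The result is the position-based voltage record the original question asked for, with no timestamp bookkeeping needed.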
    >>Next question is simple: can the PCI-6220 read 2 seperate encoders?
The PCI-6220 has two 32-bit counter channels, so it is possible to read two separate encoders. Here is a link to the product data sheet for this card: http://sine.ni.com/apps/we/nioc.vp?cid=14130&lang=US
    >>Hopefully, last question: where can I find documentation about where to connect A, B, and Z? So far all my searches have resulted in E series or 6602 counter examples and nothing on M series.
You are right, it is difficult to find out where to connect channels A, B, and Z! The easiest way to find this information is to create an NI-DAQmx counter task in MAX (Measurement and Automation Explorer). Once you have followed all of the steps to create the task, you will see the task information in the middle of the screen. There will be a settings tab, and inside this tab it will tell you which pins to connect A, B and Z to. I went ahead and created tasks for ctr0 and ctr1 in MAX to get the pin assignments:
       ctr0    ctr1
A:     PFI8    PFI3
B:     PFI10   PFI11
Z:     PFI9    PFI4
    Please let me know if you have any further questions. Have a great day!
    Jennifer

  • Measure and Record Temperature and Pressure

    Evening guys!
    I really need some help for my program.
I am asked to write a program that can measure and record local temperature and air pressure. I have already set up the hardware, so I can get voltages for temperature from channel 0 of myDAQ, and for pressure from channel 1. However, I don't know how I am supposed to build the program. Can anyone help me with that?
    I need the value and graph showing at the front panel, and record them as a txt file as well.
    Thanks a lot!
    Best,
    Rookie R

    I would recommend looking at the online LabVIEW tutorials
    LabVIEW Introduction Course - Three Hours
    LabVIEW Introduction Course - Six Hours
