Bad RTD Readings with cFP-RTD-124

Presently I'm trying to determine whether I have bad RTDs or whether my cFP-RTD-124 boards are out of calibration.  I have a total of 20 SRTD-1 RTDs of various lengths from Omega Engineering.  They are connected to 3-prong uncompensated plugs, which plug into RTD extension wire that runs to the cFP-CB-1.  I should mention that I have had these boards for over a year, so technically they are out of calibration, but this is the first time they've even been taken out of the box.  When I measure the resistance at the plugs and at the screw terminals of the NI connector board I typically get 111-113 Ohms, which according to my RTD conversion chart is around 85-100 degrees F for a room right around 73 degrees.  Reading the values in MAX I get temperatures from 76-79 degrees F.  Can this type of variance be expected when connecting a 3-wire RTD to a 4-wire board, or is something else at play?
Any suggestions?
Thanks.
LabVIEW 2012 - Windows 7
CLAD
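
A quick sanity check on those numbers, assuming a standard 100-ohm platinum element with alpha = 0.00385 (which the ~111-113 ohm readings suggest). This is only a linear approximation, not the module's exact Callendar-Van Dusen conversion:

    # Rough Pt100 sanity check (linear approximation; assumes a 100-ohm
    # platinum element with alpha = 0.00385 -- fine near room temperature).
    def pt100_ohms_to_degf(r_ohms):
        deg_c = (r_ohms - 100.0) / 0.385    # R(T) ~= 100 * (1 + 0.00385 * T)
        return deg_c * 9.0 / 5.0 + 32.0

    for r in (111.0, 113.0):
        print(f"{r:.1f} ohm -> {pt100_ohms_to_degf(r):.1f} degF")
    # 111.0 ohm -> 83.4 degF
    # 113.0 ohm -> 92.8 degF

By the same approximation, the 76-79 degF that MAX reports corresponds to roughly 109-110 ohms, so the DMM measurement and the module really do disagree by a couple of ohms.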

Hi MeCoOp,
Have you confirmed that your RTDs are wired properly according to the manual? It sounds like the values aren't correct going into the module.
Regards,
Hillary E
National Instruments

Similar Messages

  • Bad readings with CFP-RTD-124

    Hi,
    I have a Compact FieldPoint system to record temperature measurements on a machine (to replace a YOKOGAWA paper recorder system).
    The machine is equipped with thermocouple and RTD sensors.
    The TC acquisition boards are cFP-TC-120. Readings are OK.
    The RTD acquisition boards are cFP-RTD-124. Readings are not OK: see picture attached.
    With the previous YOKO system readings were stable and did not show any fast variations.
    If I take resistance measurements, the value is also stable.
    The cFP-RTD-124 used is revision H (187208H-02).
    Thanks in advance for your support.
    Regards
    Stéphane
    Attachments:
    CFP RTD 124.JPG ‏107 KB

    To me, it looks like just a little bit of electrical noise.  You've collected tens to hundreds of data points in a 2-second period and it looks like the results are +/- 1 degree.  (The VI isn't clear as to whether this is °C or °F.)
    I might be wrong, but I think the thermocouple modules have some internal filtering to eliminate noise.  I've never used the RTD module, so I don't know whether it has one, or whether there is a setting in MAX to turn it on.
    Does your temperature change so rapidly that you'd need to read numerous data points per second?  The screenshot doesn't look like it does.  One possibility would be to do some averaging across multiple data points and plot that result, as in the sketch below.
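
    A minimal sketch of that averaging (hypothetical Python, just to show the arithmetic; in LabVIEW this would be a Mean of the last N samples inside the acquisition loop):

        # Boxcar (moving) average: random +/-1 degree noise shrinks by
        # roughly sqrt(N) when you average N samples.
        def moving_average(samples, window=10):
            out = []
            for i in range(len(samples)):
                chunk = samples[max(0, i - window + 1):i + 1]
                out.append(sum(chunk) / len(chunk))
            return out

        noisy = [25.3, 24.1, 25.9, 24.6, 25.2, 24.8, 25.5, 24.4]
        print(moving_average(noisy, window=4))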

  • Bad temp readings when SC2345 connector fully inserted into chassis

    I get bad temp readings when the DAQ cable is fully inserted into the "New" SC2345 chassis in LabVIEW, but in NI MAX the numbers appear to be good.  When the cable is partially inserted the numbers seem to be OK all around. I have tried an identical cable and DAQ card (DAQCard-6062E) on the same laptop and got the same error. If I hook up to an older SC2345 chassis the problem is not there. I have also hooked up a PC with a PCI-MIO-16E-4 DAQ card and the appropriate cable for this card and have no problems on the "New" SC2345.

                 OLD EQ PC    NEW EQ PC    OLD EQ LT    NEW EQ LT
    TEMP           81.6         87.7         81.2         86.8
    TEMP V        -0.000049    -0.000320    -0.000049    -0.000322
    CJC TEMP       83.5        101.7         83.7        101.9
    CJC V          1.093750     0.847500     1.093750     0.846100
    Temp LV        81.6         87.6         64.7         36.5
    REF TEMP       82.6         82.6         82.3         82.0
    Temp (temperature in MAX), Temp V (temperature voltage in MAX), CJC Temp (CJC temperature in MAX), CJC V (CJC voltage in MAX), Temp LV (temperature in the LabVIEW application).
    The reference temp was taken with an Omega handheld unit.
    Old EQ PC (old SC2345 chassis with PCI-MIO-16E-4 DAQ card and P/N 184749C-10 10-meter cable, on the PC)
    New EQ PC (new SC2345 chassis with PCI-MIO-16E-4 DAQ card and P/N 184749C-10 10-meter cable, on the PC)
    Old EQ LT (old SC2345 chassis with DAQCard-6062E DAQ card and P/N 192061B-02 2-meter cable, on the laptop)
    New EQ LT (new SC2345 chassis with DAQCard-6062E DAQ card and P/N 192061B-02 2-meter cable, on the laptop)
    There are no bent pins on any of the components.
    The LabVIEW application and DAQ settings are identical.
    The same CJC map was used for all tests.
    Thanks in advance for any help you can give.
    Travis
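
    One way to see why the shifted CJC reading matters: the reported thermocouple temperature is essentially the CJC temperature plus the delta implied by the measured thermocouple voltage, so an ~18 degF CJC error propagates almost one-for-one into the result. A rough sketch, assuming a linearized thermocouple at about 40 uV/degC (the post doesn't state the thermocouple type, so the coefficient is illustrative only):

        # Illustrative CJC arithmetic (assumed ~40 uV/degC linearization;
        # real conversions use the NIST polynomials for the actual TC type).
        SEEBECK_UV_PER_DEGC = 40.0

        def hot_junction_degf(tc_volts, cjc_degf):
            cjc_degc = (cjc_degf - 32.0) * 5.0 / 9.0
            delta_degc = tc_volts * 1e6 / SEEBECK_UV_PER_DEGC
            return (cjc_degc + delta_degc) * 9.0 / 5.0 + 32.0

        # The same -0.000049 V reading against the two CJC values in the table:
        print(hot_junction_degf(-0.000049, 83.5))    # ~81.3 degF
        print(hot_junction_degf(-0.000049, 101.7))   # ~99.5 degF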

  • I would like to know how I can create a bell graph without using subVIs; the data I created consists of 500 readings with values from 0 to 100, and I calculated the mean value and standard deviation. I hope someone can help me

    I would like to know how I can create a bell graph without using subVIs; the data I created consists of 500 readings with values from 0 to 100, and I calculated the mean value and standard deviation. I hope someone can help me.

    Here's a quick example I threw together that generates a sort-of-bell-curve-shaped data distribution, then performs the binning and plotting.
    -Kevin P.
    Message Edited by Kevin Price on 12-01-2006 02:42 PM
    Attachments:
    Binning example.vi ‏51 KB
    Binning example.png ‏12 KB
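
    For reference, the same binning idea as a hypothetical Python sketch (the actual attachment is a LabVIEW VI): generate roughly bell-shaped data in [0, 100], count the readings per bin, and the counts are the curve you plot.

        import random

        # 500 readings, clamped to the 0..100 range described in the question.
        readings = [min(max(random.gauss(50, 15), 0), 100) for _ in range(500)]

        n_bins, lo, hi = 20, 0.0, 100.0
        width = (hi - lo) / n_bins
        counts = [0] * n_bins
        for r in readings:
            counts[min(int((r - lo) / width), n_bins - 1)] += 1

        # Crude text histogram; in LabVIEW the counts would feed an XY graph.
        for i, c in enumerate(counts):
            print(f"{lo + i * width:5.1f}: {'#' * c}")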

  • Bad service HP with license code windows 8

    I bought an All-in-One 23 F210ED in August last year.
    I never changed anything in it.
    Now, when I install the recovery from HP, I get a message that Windows is not activated?
    I called HP about it, and every time they say they will call me back, which they do not do.
    So I had to call HP myself several times; it costs me 10 euro cents per minute and the amount is now already over 30 euros!
    And every time they have to ask another colleague about it: please wait, etc.
    And finally, because HP kept promising to call back with a solution and never did,
    I called Microsoft myself and they said: HP is using an OEM version of Windows 8, so HP has to make it work.
    But after pushing from my side, Microsoft did the activation by phone.
    And I now have a contact person at MS who will do that for me when I call him.
    But this is not the normal way, he said.
    HP has to provide the solution to this problem.
    Like a new key.
    But as you already understand, I do not get a new key from HP.
    Because they blame it on MS.
    So now my money is gone, and I still have to call MS to activate it.
    Customer service finally called me on the phone after a letter I wrote to HP Netherlands.
    And pointed me to the warranty rules.
    Yeah, right.
    Bad, bad service from HP.
    For me, never again HP.

    I have received an email, but I see nothing else than this:
    Hello Awil25, You have 1 update for your HP Support Forum subscriptions.
    Subscription to topic: Bad service HP with license code windows 8 (1 update)
    There was a new reply.
    Subject: Re: Bad service HP with license code windows 8 - Author: george-p (Moderator) - Date: 03-06-2014 09:51 AM
    Did this reply resolve your post or question?
    If yes, then share the good news! Let others know this reply was helpful by accepting this solution.
    You can also show your appreciation by giving kudos.
    We appreciate your feedback. It's what keeps our community such a helpful, vibrant place for our members.
    Thanks for being a member!
    The HP Support Forum Team
    What must I do with that?

  • I need to count intermittent high speed pulses from an outside source with cFP-CTR-502 and LabVIEW.

    I need to count intermittent high speed pulses from an outside source with the cFP-CTR-502 and LabVIEW 8.2. I've found example code for generating pulses and creating intricate count setups, but no straightforward examples of a simple counter. Any suggestions?

    Hello tinfish,
    I could not find a simple example that implements simple counting either, but it should be straightforward enough for us to try. Do you have the CTR module configured properly in MAX? If so, can you monitor the channels on your CTR 502 for input? Try connecting a square wave or some other digital pulse to the terminal to test the functionality of the counter module first (before programming). If you monitor the input channels with something connected, you should see the count increment each time it sees a rising edge (assuming the default configuration).
    Once you've verified that everything works in MAX, you can set up your CTR module in a LV 8.2 project. If you need help with this, refer to the help document (look in the "Configuring FieldPoint in LabVIEW" section):
    C:\Program Files\National Instruments\FieldPoint\documentation\Online Help\fplv.chm
    You should be able to just read a channel tag from your CTR 502 using an FP Read VI (simply drag the channel from your project onto the block diagram). Since counting is the default behavior of the 502, there is no special programming involved to make it work.
    I hope this helps -- if it's too high-level we can talk details about specific questions you have.  Have a good one!
    Charlie S.
    Visit ni.com/gettingstarted for step-by-step help in setting up your system
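
    A language-neutral sketch of that read-in-a-loop idea (hypothetical Python; fp_read below simulates the count channel so the sketch runs as-is, and in LabVIEW it is simply the FP Read VI inside a timed while loop):

        import itertools, time

        _sim = itertools.count(step=37)        # simulated free-running counter

        def fp_read(channel):
            """Hypothetical stand-in for the LabVIEW FP Read of the count channel."""
            return next(_sim) % 65536

        last = fp_read("CTR502/Count0")
        for _ in range(10):
            time.sleep(0.1)                    # poll faster than the counter can wrap
            now = fp_read("CTR502/Count0")
            pulses = (now - last) % 65536      # modulo handles 16-bit rollover
            last = now
            print(f"{pulses} pulses in the last 100 ms")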

  • Can I use LabVIEW 6.1 with the cFP-1804?

    I have LabVIEW 6.1. Can I use it with the cFP-1804?

    Hello,
    Yes, you can use LabVIEW 6.1 with the cFP-1804. According to "What Version of the FieldPoint Driver Added Support for the cFP-180x?", support for the cFP-180x was added in FieldPoint 4.1.2. You must have version 4.1.2 or newer in order to see the network module in MAX and use it in LabVIEW. In your case it cannot be a newer version, because the newer versions don't support LabVIEW 6.1. Check which version of the FieldPoint software is included on the CD that came with your cFP-1804. If it is 4.1.2, then you are all set; if it is 5.0 or newer, you will have to download the version you need. You can find drivers at ni.com --> Product Support --> Drivers and Updates. For Product Line, select Distributed I/O - FieldPoint. For Software, select NI-FieldPoint. For version, you will see v4.1 and then it skips to v5.0. Try version 4.1.
    If that doesn't work, you will have to contact an Application Engineer in order to get a service number, and then we will provide version 4.1.2. Dial the free telephone number for your country or region; you can find the number at ni.com. The number for Mexico is 01 800 010 0793.
    The following link is a Quick Start Guide for Compact FieldPoint cFP-180x: http://www.ni.com/pdf/manuals/374176b.pdf
    Have a nice day!,
    Pablo Bernal
    AE Mexico
    Message Edited by Pablo Bernal on 07-04-2007 02:13 PM

  • Speeding up Keithley 6485 readings with LabVIEW

    Hi All,
    I'm new to LabVIEW and to current measurements. I'm using a Keithley 6485 picoammeter to record the current changes in microchannels. I downloaded the instrument drivers from the NI website and tried to control the picoammeter's current readings with LabVIEW 7.1, but the rate of readings was slow (a reading every 0.5 s). I tried changing a lot of parameters and found that the main cause of the slow measurements is that the read VI for the picoammeter was very slow in the while loop.
    Is there any way to record fast continuous measurements from the picoammeter using the read VI in a while loop with LabVIEW 7.1? Or is there another way to do fast recordings from the picoammeter with LabVIEW?
    I appreciate your help and suggestions.
    thanks

    Thanks a lot guys for your suggestions and comments.
    At the present time I'm using the analog output of the picoammeter to record the readings via a DAQ, since I couldn't control it from the drivers. So I only control the picoammeter from its front panel.
    Dennis, thanks for your suggestions; it is true that fetching multi-point is faster, but it is still not fast enough, since I want readings at a rate of 40-50 Hz. If you know of a way to increase the reading rate to the frequency I need, that would be helpful.
    F. Schubert, thanks for the comments. I don't know how to change the NPLC: when I change the value in the VI, an error message appears and LabVIEW terminates. The NPLC setting is 1 and the PLC is 60 Hz.
    I attached a sample of the VI I'm using to take readings from the picoammeter, and I would appreciate any suggestions to improve the reading rate. Also, how can I change the NPLC value of the device?
    Thanks
    Message Edited by Zeyad on 09-13-2007 11:51 AM
    Attachments:
    Pico_AMP_tests_2.vi ‏125 KB
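
    On the NPLC question: if raw SCPI access works on your setup, the usual approach is to shorten the integration time and read in a loop. A hedged PyVISA sketch; the command names (SYST:ZCH, SENS:CURR:NPLC, READ?) are from the 6485's standard SCPI set, but check them against your manual, and the GPIB address is hypothetical:

        import pyvisa

        rm = pyvisa.ResourceManager()
        ka = rm.open_resource("GPIB0::14::INSTR")   # hypothetical GPIB address

        ka.write("*RST")
        ka.write("SYST:ZCH OFF")          # zero check off so READ? returns live data
        ka.write("SENS:CURR:NPLC 0.1")    # ~1.7 ms integration at 60 Hz line power
        for _ in range(100):
            print(ka.query("READ?"))      # one triggered reading per query

    At NPLC 0.1 on 60 Hz mains each conversion takes about 1.7 ms, so hitting 40-50 readings per second becomes a question of bus and driver overhead rather than integration time.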

  • Spikes in readings from cFP-2000

    Hello,
    I'm hoping someone will be able to help with a problem we are experiencing with our temperature and voltage measurements. We are noticing large spikes in the logged temperatures that are not physically possible, and hence must be some form of interference effect. The spikes seem to be entirely random, and I can't seem to replicate the problem. They may happen once a week on average, and it's not at the same time every week, so it's not as if another event is causing it.
    Our setup is:
    - Type-R thermocouples connected to TC-125 modules via compensating cable (some via long runs)
    - 3 cFP-2000 units in an enclosure, with power supplied by individual Quint 24VDC power supplies. These are connected to the mains via a UPS.
    - Also there are AI-110 modules connected to the same field point banks. These also seem to be affected too. These monitor -/+10V voltages from LVDT signal conditioning, and the cabling come in through the same bunch of cables as the thermocouple compensating cable.
    - Each thermocouple is connected in parallel to a Eurotherm process controller (2416 and 3216 models) and the cFP.
    - The LabVIEW software is set to take a reading every 10 seconds, so I presume the spikes we are seeing are when we've been unlucky enough to catch the moment the interference is happening.
    I'm hoping someone will be able to help! I haven't a clue where to start making changes to cure the problem, especially when I can't replicate it. Any suggestions would be appreciated.
    Andy.

    Thanks for your quick reply,
    Yes, the spiking has always been there. I hadn't even considered that it could be a software problem; I didn't think that was very likely. All the program does is check the readings every 10 seconds: if a reading has changed from the previous value by an amount greater than a set deadband, it logs a new point on the graph. There's no control or processing element to it. My initial thought was that it was EMI in the cabling.
    Just for further information the cable lengths vary because the field points are centrally located in one room. The cabling runs to another 4 rooms with the test rigs in.
    Signal conditioning is being used for the LVDT measurements to produce a -/+10V output. The TC-125 module's own cold junction compensation is being used for temperature measurements. I've checked the field points are correctly wired and everything appears to be correct.
    As an experiment we've fitted out three of the test rigs with ferrite blocks on several of the cables in an attempt to suppress any EMI.
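
    Since the logger already applies a deadband, one low-effort software guard is a median-of-3 pre-filter in front of it: a single-sample spike can never become the logged value, because the median of (good, spike, good) is a good sample. A hypothetical Python sketch of the logic described above (read_temperature simulates the cFP channel, with occasional artificial spikes):

        import random

        def read_temperature():
            """Simulated cFP channel read: steady value plus a rare huge spike."""
            t = 400.0 + random.uniform(-0.2, 0.2)
            if random.random() < 0.02:
                t += 800.0                         # physically impossible spike
            return t

        DEADBAND = 0.5      # degrees; log only when the change exceeds this
        history, logged = [], None

        for _ in range(1000):                      # one iteration per 10 s scan
            history = (history + [read_temperature()])[-3:]
            value = sorted(history)[len(history) // 2]   # median of last 3
            if logged is None or abs(value - logged) > DEADBAND:
                logged = value
                print(f"log point: {logged:.2f}")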

  • "Logon failure: unknown user name or bad password" even with correct Credentials

    I have networked PCs many times before successfully, so this is not my first time networking PCs in a home environment, though I'm wondering if Windows 8.1 is part of the problem.
    I would have thought that for sure, until one of the new laptops running W8.1 would not connect to any of the other three PCs/laptops running W8.1, yet those other three W8.1 PCs/laptops CAN connect to this laptop. Then it gets a little more interesting: this same laptop that couldn't connect to those three W8.1 PCs/laptops CAN connect to a Windows 7 desktop and an XP laptop, and those two can also connect back to it without issue. It's almost like my network is divided in half, and only half can talk to each other. But then, when I thought it couldn't get any more interesting, I realized the first three W8.1 PCs/laptops can talk to the others; it's just that the others (W8.1 laptop, W7 desktop, XP laptop) can't talk back to them without getting the error "Logon failure: unknown user name or bad password", even though the username and password are 100% correct.
    I don't fully understand this error, because on the surface it's just WRONG! My username and password are correct, but it appears something somewhere is interfering with or hijacking the authentication process. Three of the computers (laptops) are brand new, just purchased last week and set up this week. The HostPC is also fairly new, just purchased last month.
    I am not using a HomeGroup, and have removed all computers that were part of a HomeGroup. I have enabled file sharing and network discovery and enabled "Use user accounts and passwords to connect to other computers" on all PCs.
    I have DSL and am using the wireless modem provided by my ISP, which has router functionality built into it. It is a Sagemcom Model F@ST 1704N.
    All computers are connected wirelessly. The time is correct on all PCs. I cannot use Group Policy, since they're all Standard or Home edition. DHCP is enabled and all computers are on the same subnet, using the 192.168.254.x range of IP addresses.
    The six computers are as follows (I figured this may make it easier to visualize the layout):
    HostPC: HP Desktop W8.1           
    PC Name: DrsBlend
    U/N: DrsBlend  p/w: 123456 (not showing my real password)
    PC1: HP Laptop W8.1
    PC Name: DrsBlend-1
    U/N: DrsBlend    P/W: 123456
    PC2: HP Laptop W8.1
    PC Name: DrsBlend-2
    U/N: DrsBlend    P/W: 123456
    PC3: HP Laptop W8.1
    PC Name: DrsBlend-3
    U/N: DrsBlend    P/W: 123456
    PC4: HP Desktop W7 SP1
    PC Name: DrsBlend-4
    U/N: DrsBlend    P/W: 123456
    PC5: Dell Laptop XP SP3
    PC Name: DrsBlend-5
    U/N: DrsBlend    P/W: 123456
    Every PC stated above has the same user name and password and is logged in with the username DrsBlend and the password 123456. The "Logon failure: unknown user name or bad password" error happens when trying to access HostPC, PC1, or PC2 from PC3, PC4, or PC5.
    The HostPC can see and connect to all the PCs, but only PC1 and PC2 can talk back to or access the HostPC.
    It's like the HostPC, PC1, and PC2 are in their own little clique and can talk back and forth to each other. Those three PCs can also talk to PC3, PC4, and PC5 as well, but PC3, PC4, and PC5 cannot talk back to them (HostPC, PC1, PC2).
    Profile corruption? I would have entertained that thought, but the fact that the first three PCs can access and talk to one another kind of defeats that idea, as does the fact that the PCs were just recently set up.
    Firewall? Disabled, and disabled TrendMicro, with no change. With them on or off, the first three PCs can still talk to each other and to the rest of the PCs.
    Anyone have any additional suggestions?

    Hi,
    How did you connect to the other PCs? Do you use RDP to connect to them? If so, check the version of RDP; as far as I know, some older RDP versions can't connect to newer Windows versions like 8.1.
    And could you please tell us in detail how the six PCs connect to the home network?
    Can PC3, PC4, and PC5 ping back to the HostPC, PC1, and PC2?
    You can also run the command "rundll32.exe keymgr.dll,KRShowKeyMgr" to view the credentials stored on your PC, and check whether this issue is related to some old credentials stored in your system.
    Yolanda Zhu
    TechNet Community Support
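
    To collect the reachability matrix asked about above, a hypothetical Python wrapper around the system ping (Windows syntax, -n 1) could be run on each PC in turn:

        import subprocess

        HOSTS = ["DrsBlend", "DrsBlend-1", "DrsBlend-2",
                 "DrsBlend-3", "DrsBlend-4", "DrsBlend-5"]

        for host in HOSTS:
            # ping exits 0 on success; discard its output, keep the verdict
            ok = subprocess.call(["ping", "-n", "1", host],
                                 stdout=subprocess.DEVNULL) == 0
            print(f"{host:12s} {'reachable' if ok else 'NO RESPONSE'}")

    If ping succeeds in every direction but the share access still fails with the logon error, that points at authentication (credentials, LmCompatibilityLevel, stale cached entries) rather than basic connectivity.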

  • MSI X58 boards' bad chipset cooling - with proof and a proposed fix for high IOH temps

    Hello,
    First of all, I like MSI boards; this is not an anti-marketing thread. I own one, and with a fix I don't regret buying it.
    I will go straight to the point: the cooling of the chipsets on the MSI Eclipse/Pro is totally disastrous. Here is the proof:
    Since we see this problem all around, and never a "good" response from tech support, here it is, in color. You can't deny it.
    I will stick to constructive remarks: the violet TIM (thermal paste) is dry and hard as rock! The bolts are not strong enough either, and they don't center the heatsink firmly on the northbridge. For everyone with high IOH temps, this is why. Please stop telling us that per the Intel spec it is OK because the chip is rated up to 100°C. Mine was above 85°C, and after a replacement it is under 50°C. Heat causes instabilities and long-term wear damage to any chip.
    It is an engineering problem: a bad TIM was selected. MSI will not admit it, since it would cost too much to take back all these boards.
    Heat dissipation is as bad as if there were no heatsink on the chip at all.
    Nothing is perfect; if it were, MSI would send me a free tube of MX-2.
    MSI Eclipse SLI FIX - will work for other models certainly
    Needed :
    Thermal Paste - Arctic Cooling MX2 (Non conductive, long run, one of the best)
    Optional :
    Heatsink - Thermalright HR-05 / IFX
    Tighter bolts, plastic screws.
    How to:
    - You need to take off the heatsink that covers both the northbridge (aka IOH) and the southbridge.
    - The "DrMOS" heatsink over the VRMs doesn't get that hot. The TIM on it is strap-like and doesn't really require replacement.
    If you don't have tighter bolts or screws, you can reuse the existing ones and add some rings (washers) to increase the pressure on the heatsink. Warning: if you use metal screws, be sure to use a spring, or you may do serious damage to the chip.
    - Apply a small but sufficient amount of thermal paste to cover the violet rectangle of both heatsinks. Replace them on the board.
    How to (with heatsink replacement):
    Your IOH will go down 40°C. Yes, you've read that right.
    - With this method you either need A) to lose one part of the heatsink, or B) to lose two SATA ports and the ability to use SLI with long cards; for the majority of people that will be fine, and it provides tremendous cooling performance for the southbridge as well. You can also buy a second heatsink if you wish to keep the original untouched, but I couldn't find any that suited the board's dimensions or provided improvements for the southbridge.
    A) If you don't care about the original heatsink, you will need to cut the two heatpipes. Cut them near the southbridge; it doesn't need as much cooling as the IOH. Apply thermal paste and install the new Thermalright heatsink on the northbridge chipset.
    B) You will need to bend the heatsink! It seems extreme but it works well in practice. Bend it towards you, gently, so it ends up at exactly a 90° angle:
    - Before installing it, clean the old violet "thermal paste" off the small heatsink. Apply the new thermal paste on the chipset. Now we can install the heatsink, but upside down. It seems extreme again, but it fits exactly. It won't put its weight on the board, since it rests against the plastic IDE connector. Warning: be sure, absolutely sure, that the heatsink doesn't touch any electrical components.
    - Mount the Thermalright IFX and you're done.
    Here are pictures of the final board, and with a mounted system:
    (Click on the pics if you need a high-resolution picture)
    The case is an Antec 1200. With modified +5V voltage on the fans and a low-speed Enermax fan on the processor it is whisper quiet.
    Temperatures with overclocked BCLK at 175 MHz :
    Idle/Load : IOH : 45°C / 50°C
    Idle/Load : System : 35°C / 39°C
    Idle/Load : CPU avg : 38°C / 67°C
    Now this board is a jewel: impressively low power consumption, very high overclocking capability, stability.
    Please MSI, fix your TIM and retention system. Maybe it will be good on the new Eclipse?
    I hope this thread has cleared up some questions and helped.

    Hi guys, just an add-on to evr999's thread: I've also now changed both N/S bridge heatsinks, with excellent results. Some pictures for you all to look at.
    Temps before:

  • Css3 border radius bad on iPad with retina display

    Hey,
    If you visit http://www.teet.be you will see a 'coming soon' image with green borders.
    Those borders seem not to work on my iPad (64 GB with retina display, iOS 6.1.2).
    I tested Safari and Chrome on this iPad, but with bad results.
    All the rest of my browser testing on a Windows PC shows it correctly (IE, Opera, Chrome).
    Is this a bug, or is my coding wrong?
    Thanks in advance.

    One solution could be to integrate the border into my image, so I don't need to use CSS3, but that way the CSS3 problem in Safari isn't fixed...

  • 10.6.2: Bad image quality with SIPS

    I used to convert my PDF documents with SIPS like this:
    /usr/bin/sips --setProperty format jpeg --setProperty formatOptions high -z 400 200 sourcefilename.pdf --out targetfilename.jpg
    This results in very bad image quality after updating to 10.6.2 (the text contained in the source PDF file is nearly unreadable in the resulting image, especially when using the -z option for downsizing).
    I could reproduce this behaviour on several machines today: with 10.6.1 the quality is fine, but after the update to 10.6.2 the resulting image quality is unacceptable.
    Any ideas? Or could this be a bug? The man page for sips does not mention any new parameters, and the image quality for other target formats (tif, png, ...) seems to be OK.
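
    For anyone comparing 10.6.1 and 10.6.2 output side by side, a hypothetical Python batch wrapper around the exact sips invocation from the post:

        import glob, subprocess

        # Convert every PDF in the current directory with the flags above;
        # sips itself is Apple's CLI, this just loops over the files.
        for pdf in glob.glob("*.pdf"):
            jpg = pdf[:-4] + ".jpg"
            subprocess.check_call([
                "/usr/bin/sips",
                "--setProperty", "format", "jpeg",
                "--setProperty", "formatOptions", "high",
                "-z", "400", "200",
                pdf, "--out", jpg,
            ])
            print(f"{pdf} -> {jpg}")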

    That picture looks like it was taken in a dimly lit room.
    You could try using night mode but you will need a very steady hand.
    Most basic phone cameras just cannot produce good pictures indoors when not in brightly lit areas.  LED flashes just cannot do a good enough job when compared to real cameras.
    Megapixels don't equal quality, it's the lens and flash that make the biggest difference.

  • Low and bad screen resolution with radeonhd [SOLVED]

    Hi Archers,
    I just set up a brand-new PC with an ATI HD4550 GPU and a 22'' 1680x1050 monitor.
    I can run Xorg and even GNOME using the radeonhd driver, but the maximum resolution I can get is 1280x1024 (which isn't even the correct aspect ratio).
    The ratios of the lower suggested resolutions are bad as well.
    I didn't write an xorg.conf file.
    When I try to run X with a configuration file generated by X -configure, the monitor displays an error message like "Unsupported video signal".
    What can I do?
    Last edited by monsieur moche (2009-10-15 13:44:01)

    monsieur moche wrote:Here is the xorg.conf of my last test: (...)
    As you can see, I uncommented some lines and I added Option "PreferredMode" "1680x1050" to the monitor section.
    After a lot of gasping, moaning and countless tries, this xorg.conf works here, with an analogue monitor. Look at the bottom for how I set the mode choices. Good luck!
    Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "Screen0" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
    EndSection
    Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/misc"
    FontPath "/usr/share/fonts/100dpi:unscaled"
    FontPath "/usr/share/fonts/75dpi:unscaled"
    FontPath "/usr/share/fonts/TTF"
    FontPath "/usr/share/fonts/Type1"
    EndSection
    Section "Module"
    Load "evdev"
    Load "glx"
    Load "extmod"
    Load "record"
    Load "dri2"
    Load "dbe"
    Load "dri"
    Load "drm"
    EndSection
    Section "DRI"
    Group "video"
    Mode 0666
    EndSection
    Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
    Option "XkbOptions" "terminate:cntrl_alt_bksp"
    EndSection
    Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "IMPS/2"
    Option "Device" "/dev/input/mice"
    # Option "ZAxisMapping" "4 5 6 7" # Xorg's oppsett
    Option "Buttons" "5"
    Option "ZAxisMapping" "4 5"
    Option "ButtonMapping" "1 2 3 6 7"
    EndSection
    Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
    HorizSync 30.0 - 100.0
    VertRefresh 50.0 - 100.0
    EndSection
    Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "NoAccel" # [<bool>]
    Option "AccelMethod" "exa" # [<str>]
    #Option "offscreensize" # [<str>]
    #Option "SWcursor" # [<bool>]
    #Option "ignoreconnector" # [<str>]
    #Option "forcereduced" # [<bool>]
    #Option "forcedpi" # <i>
    #Option "useconfiguredmonitor" # [<bool>]
    #Option "HPD" # <str>
    #Option "NoRandr" # [<bool>]
    #Option "RROutputOrder" # [<str>]
    Option "DRI" "on" # [<bool>]
    #Option "TVMode" # [<str>]
    #Option "ScaleType" # [<str>]
    #Option "UseAtomBIOS" # [<bool>]
    #Option "AtomBIOS" # [<str>]
    #Option "UnverifiedFeatures" # [<bool>]
    #Option "Audio" # [<bool>]
    #Option "HDMI" # [<str>]
    #Option "COHERENT" # [<str>]
    Identifier "Card0"
    Driver "radeonhd"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:2:0:0"
    EndSection
    Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
    Viewport 0 0
    Depth 1
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 4
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 8
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 15
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 16
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 24
    Modes "1024x768" "800x600" "640x480"
    EndSubSection
    EndSection
    Section "Extensions"
    Option "Composite" "Enable"
    EndSection

  • RAM voltage readings with DMM vs. Software on X58 Pro-E

    I almost fell over with shock when HWMonitor and SpeedFan both reported my RAM voltage spiking over 3 volts when running Prime95. When idling, the voltage readings fall to about 0.6 V and jump around during use.
    I set my RAM voltage in the BIOS to 1.5 V, but the software doesn't report that voltage at all.
    So I grabbed my DMM and measured the voltage on the mobo directly, and found that the voltage is in fact exactly 1.5 V.
    How is it that the software can show variations in voltage from 0.5 to 3.7 volts when it's ALWAYS 1.5 V when measured directly from the board with a meter?
    I measure from the silver tab (center leg) of what I'm sure is a kind of transistor or FET, which I know is the voltage output.
    How can I trust any of the other voltage readings then? Especially the CPU voltage.

    Quote from: Jack on 14-July-11, 21:56:29
    Neither one of these utilities is actually able to correctly monitor the actual memory voltage.  Just ignore those readings.  They are absolutely inaccurate no matter how you look at it (at 3.7 V your memory modules would probably simply vaporize, and at 0.6 V your system would not be running anymore).  These readings are IMPOSSIBLE readings.
    Don't trust any memory related voltage readings that you get from third party software applications.
    There you go.
    That's what I figured. I wasn't suspicious until my RAM voltage hit over 3.5 V, nearing 4 V, when stressing with Prime95; I couldn't understand how it was so high and still working.
    I mean, all the voltage readings in SpeedFan look like normal values when I know full well they're not the ones I manually set in the BIOS. It's frustrating.
    And yes, they're all the latest releases, and my BIOS is up to date with new firmware, but NO, my OS is still Vista 64 SP1 with absolutely NO updates installed. Though I seriously doubt that would change any voltage readings I get in SpeedFan.
    I'm almost tempted to solder connections to all the voltage points on the mobo for easy DMM connection. But I probably won't, because it's not worth the trouble.
