Calibrating the NI 9219 and NI 6210

Hello everyone,
Can anyone help me with calibrating the NI 9219 and NI 6210 modules?
I haven't managed to figure out the calibration procedure; I am using these modules to acquire strain measurements from gauges.
Many thanks,

Hello,
The NI 9219 and NI 9401 are acquisition cards (one dedicated to isolated voltage measurements, the other a digital I/O module). You can start with the LabVIEW examples; only a few parameters should need to change. Go to Help / Find Examples / Hardware Input and Output / DAQmx.
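If it helps to see the task in text form, here is a minimal sketch of a strain acquisition using the nidaqmx Python API; the module name "cDAQ1Mod1", the full-bridge configuration, and the gage parameters are assumptions to adapt to your setup:

    import nidaqmx
    from nidaqmx.constants import StrainGageBridgeType, ExcitationSource

    # Minimal strain read from an NI 9219 (assumed to sit in slot "cDAQ1Mod1").
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_strain_gage_chan(
            "cDAQ1Mod1/ai0",
            strain_config=StrainGageBridgeType.FULL_BRIDGE_I,
            voltage_excit_source=ExcitationSource.INTERNAL,  # 9219 supplies excitation
            voltage_excit_val=2.5,
            gage_factor=2.0,                  # from the gauge datasheet
            nominal_gage_resistance=350.0,
        )
        task.timing.cfg_samp_clk_timing(rate=100.0, samps_per_chan=100)
        strain = task.read(number_of_samples_per_channel=100)
        print(strain[:5])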
Regards
L.MICOU

Similar Messages

  • Urgent help: acquiring analog data in LabVIEW

    Hello,
    I would like to know whether I can use several NI 9219 modules on the same NI cDAQ-9174 chassis.
    The data acquisition (strain: analog inputs) is done through LabVIEW SignalExpress 2013.
    I am asking because I tried to use two different cards, an NI 9219 and an NI 6210, and got an error.
    Thanks in advance for your reply,
    Solved!
    Go to Solution.

    Hello Eloise,
    Thank you for your reply. The error is 201426. I have attached a screenshot of the error.
    I have another question for you, if you don't mind. When I acquire with the USB-6210 I get enormous noise on the curves. While looking for a solution to this problem, I found that this acquisition card does not work on Windows XP, so I changed the operating system to Windows 7, but the problem persists.
    Even when I acquire with no test setup connected, I see fluctuations.
    Thank you for your availability,
    M.A.H
    Attachments:
    Nouveau Image bitmap.jpg 366 KB

  • How necessary is an official NI Calibration or any DAQ hardware calibration?

    How much can a USB DAQ like a 6218 or a 6210 really drift in terms of its analog voltage readings over time? Is this even a thing? As long as I feed it 5 V and read 5 V with little drift over time, can I assume it is "calibrated"? What will NI do differently?

    labview12110 wrote:
    I'm confused about that; I mean, it just reads voltage. Do DAQs ever fall out of calibration? They aren't exactly a sensor in the traditional sense... What component in a DAQ would degrade in performance over time?
    A measurement is for naught unless you know its uncertainty. Almost all DAQ devices use an internal voltage reference in conjunction with an ADC and signal conditioning. If any of these blocks changes due to temperature, overvoltage, shock, time, dirt on the PCB, radiation, chemical and physical processes, ... you can't trust the number you read.
    The manufacturer of the metering device (hopefully) has the experience to choose, select, place, and handle the components, and after some testing declares the specifications and the interval and procedure for checking them. However, statistics can beat and bite you, so the only way to be sure is regular calibration.
    After some time and multiple calibrations you can start to trust your device.
    Since I do my own calibrations (the standards I can access have less uncertainty than most cal labs), I can tell you that I have found NI devices out of cal spec that needed adjustment to meet their spec for the next period.
    In modern DAQs there are usually only two 'screws' you can access and 'adjust': the internal reference timebase and the value of the internal voltage reference.
    While the crystal of the timebase can be trimmed (varicap and DAC) or not (in cheap devices), the internal voltage reference value is internally measured, calculated, and stored by applying a known voltage. Details are found in the calibration documentation.
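    As a concrete example of the second 'screw': on DAQmx hardware you can trigger that internal self-adjustment from software and then sanity-check against an external reference. A sketch with the nidaqmx Python API; the device name "Dev1" and the reference source are assumptions:

        import nidaqmx
        import nidaqmx.system

        # Re-measure against the internal reference (self-calibration).
        nidaqmx.system.Device("Dev1").self_cal()

        # Then read a trusted external source and compare with its known value.
        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-10.0, max_val=10.0)
            readings = task.read(number_of_samples_per_channel=1000)
            print(sum(readings) / len(readings))  # should sit within the device spec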
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'

  • Issue with fixed point number output from 9219 module for full bridge measurement (cRIO/FPGA)

    Hi,
    I have a question regarding the fixed-point output acquired from a 9219 module (in FPGA on a cRIO) when set up to acquire a strain measurement (full bridge).
    Software: LabVIEW 2009
    Hardware: cRIO-9012 (NI-RIO 3.2.1), NI-9219 module
    The 9219 module is configured in the project as follows:
    Ch0: Full-Bridge +/- 7.8mV/V
    Ch1: Voltage +/- 4V
    Ch2: Voltage +/- 15V
    Ch3: Voltage +/- 60V
    The calibration mode is 'Calibrated', so the FPGA outputs fixed-point numbers.
    My issue is that the precision of the fixed-point number for Ch0 (strain) is "(fixed point <+/-32,-1>[-2.5e-1,2.5e-1]:1.16e-10)", which indicates that the fixed-point number is a value within +/-0.25 and not the +/-7.8 mV/V I expected.
    The fixed-point number does not change in precision whether the range for the strain measurement is set to +/-7.8 mV/V or +/-64 mV/V (the two available options).
    Since the fixed-point number doesn't change precision, I'm assuming that changing the range of the strain measurement changes the resolution of the acquired number, and that I will therefore need to perform additional scaling on the fixed-point number to convert it to the expected range?
    Scaling of the voltage or strain measurements is not mentioned in any documentation or examples; the only scaling example provided is for the thermocouple measurements.
    Any help/clarification is much appreciated.
    Regards,
    Mike

    Hello Mike,
    Hopefully I can help clarify some of the behavior you are seeing.
    My issue is that the precision of the fixed-point number for Ch0 (strain) is "(fixed point <+/-32,-1>[-2.5e-1,2.5e-1]:1.16e-10)", which indicates that the fixed-point number is a value within +/-0.25 and not the +/-7.8 mV/V I expected.
    For calibrated values on the FPGA VI, the returned data is a voltage measurement, not a directly calculated strain value. Based on the specified ranges for the 9219 in a full-bridge configuration, 250 mV will encompass all possible input values at the module-provided excitation (2-2.7 V, depending on the sensor gage resistance).
    The fixed-point number does not change in precision whether the range for the strain measurement is set to +/-7.8 mV/V or +/-64 mV/V (the two available options).
    As you may notice from the fixed-point definition, the fixed-point data carries 32 bits of precision, which is more than the 24 bits acquired by the 9219. The fixed-point data type is defined to encompass both the range and the precision of the instrument, so that no additional coercion of the input data values is required by user-defined software settings, i.e. the bridge sensitivities +/-7.8 mV/V or +/-64 mV/V.
    Since the fixed-point number doesn't change precision, I'm assuming that changing the range of the strain measurement changes the resolution of the acquired number, and that I will therefore need to perform additional scaling on the fixed-point number to convert it to the expected range?
    The documentation does not clearly state that varying the discrete levels of strain input (+/-7.8 mV/V or +/-64 mV/V) also adjusts the range of the ADC on the module. I am working to follow up on this, to get the module documentation clarified. As for scaling, the acquired voltage values, regardless of the ADC resolution, will still be related to strain via the bridge sensitivity. The resolution of the ADC simply defines the smallest measurable change in the strain value.
    Scaling of the voltage or strain measurements is not mentioned in any documentation or examples; the only scaling example provided is for the thermocouple measurements.
    For converting the acquired voltage values to a strain measurement, I would recommend the documentation linked here for a detailed explanation of the strain calculation. Often, users forward the acquired voltage data as fixed-point values through a DMA FIFO to the RT controller on the CompactRIO, so that they can handle the conversion from voltage to strain using floating-point math in real time.
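    As an illustration of that voltage-to-strain conversion on the RT side, here is a hedged sketch in Python using NI's published full-bridge type I equation, strain = -Vr / GF; the gage factor, excitation, and tare values below are placeholders:

        # Scale a bridge output voltage (read as fixed point on the FPGA and
        # converted to float on the RT side) to strain, full-bridge type I.
        GAGE_FACTOR = 2.0        # from the gauge datasheet (placeholder)
        V_EXCITATION = 2.5       # actual 9219 excitation; measure or look it up
        V_UNSTRAINED = 0.000125  # bridge output at rest (tare), volts

        def volts_to_strain(v_measured: float) -> float:
            vr = (v_measured - V_UNSTRAINED) / V_EXCITATION  # voltage ratio, V/V
            return -vr / GAGE_FACTOR

        print(volts_to_strain(0.000625))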
    I hope these responses provide a bit of clarity. I will continue to work on additional information regarding the 9219 specifications. Please post back any further questions.
    Cheers!
     Edit: Forgot to add the link.
    Message Edited by Pcorcs on 04-14-2010 04:55 PM
    Patrick Corcoran
    Application Engineering Specialist | Control
    National Instruments

  • NI 9219 with load cell

    Hello,
    I am using the NI 9219 with a Futek Load Cell whose specifications are as follows:
    Name                      Min     Typ     Max     Unit
    Channel: 1
    Compensated Temperature   60              160     °F
    Excitation                1               20      Vdc
    Hysteresis                -0.25           0.25    % of R.O.
    Input Resistance                  744             Ohms nom.
    Insulation Resistance     500                     MOhms @ 50 Vdc
    Nonlinearity              -0.25           0.25    % of R.O.
    Nonrepeatability          -0.05           0.05    % of R.O.
    Operating Temperature     -60             200     °F
    Output Resistance                 700             Ohms nom.
    Safe Overload                             150     % of R.O.
    Temperature Shift Span    -0.01           0.01    % of Load/°F
    Temperature Shift Zero    -0.01           0.01    % of R.O./°F
    Zero Balance              -1              1       % of R.O.
    Capacity                          500             lbs
    Rated Output                      2               mV/V nom.
    Calibration Excitation            10              Vdc
    I am getting a signal from my load cell, which is great, but I was wondering how to adjust the signal output to minimize interference from noise, vibration, etc., and how to zero (tare) the load cell in software before any data is acquired?
    Any help is greatly appreciated! 
    Thanks,
    Yatsco

    Hello Yatsco,
    Thank you for using the NI forums. One thing you will probably want to look into is the filtering VIs in LabVIEW: a low-pass filter can help eliminate high-frequency noise from your measurement. Also, take a look at the Field Wiring and Noise Considerations article for more information on eliminating noise. Lastly, this community example shows how you can calculate the DC offset and compensate for it before measuring your signal. Please let me know if you have any additional questions concerning this application.
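    A minimal sketch of both suggestions (software tare plus a crude moving-average low-pass) in plain Python; the sample values are placeholders standing in for NI 9219 reads, and LabVIEW's filtering VIs are the proper tool for the filtering step:

        def tare_offset(baseline):
            # Average an unloaded reading to get the DC offset to subtract.
            return sum(baseline) / len(baseline)

        def moving_average(samples, window=10):
            # Crude low-pass filter: average over a sliding window.
            out = []
            for i in range(len(samples)):
                lo = max(0, i - window + 1)
                out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
            return out

        offset = tare_offset([0.0102, 0.0099, 0.0101, 0.0100])  # unloaded reads (V)
        data = [v - offset for v in [0.0301, 0.0305, 0.0298, 0.0304]]
        print(moving_average(data, window=2))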
    Regards,

  • NI 9219 calibration

    I have an NI 9219 module that indicates the calibration expired 12/27/2008 (when looking at Measurement & Automation Explorer).  The manual for the 9219 also says it has a calibration interval of 1 year.  However, there is no information on the NI website about calibration for this module. 
    Is there a calibration for this module?  And how do we get it done?

    Hello hardwareguy,
    At this point, the best option for calibrating this module is our Factory Test Service. This service is comparable to our end-of-line production test and will adjust the unit back to "like new" condition. The service will also renew the calibration interval for the device and reset the date that is stored on the device (and displayed in MAX). However, this service does not include any detailed measurement data to demonstrate "As Found" or "As Left" measurement performance.
    To set up a Return Materials Authorization (RMA) for this service you can call 1-866-510-6285 in the US or visit the Contact NI page for other locations. I hope this is helpful, let us know if you have any other questions. 
    Matt Anderson
    Hardware Services Marketing Manager
    National Instruments

  • Pinouts for Connecting Thermistor to USB-6210

    I want to connect a thermistor to a USB-6210 for cold junction calibration.  I cannot find which pinouts to use.  Can you please help?

    Hello mhschmid. To connect your thermistor to the USB-6210, you can simply take a differential measurement across two analog input channels (for example, pins 15 and 16). Then you can configure a DAQmx task to automatically convert that voltage into a temperature by creating a DAQmx temperature task.
    Regarding CJC with a thermistor: it isn't necessary. A thermistor is a resistor whose value varies with temperature, so it does not require CJC. A thermocouple, however, requires CJC to be completely accurate, since it is composed of two different metals that introduce a voltage difference at their junction. So, if you want to connect a thermistor to the 6210, CJC is not required. If you want to connect a thermocouple to the 6210, you can configure the CJC only in software (Measurement & Automation Explorer or LabVIEW), since the 6210 doesn't have a CJC sensor on it. I hope this helps! If anything needs clarification, I will be monitoring this forum.
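    For the conversion step, a hedged Python sketch of the usual approach: read the differential voltage across the thermistor in an external divider (the 6210 provides no excitation), recover the resistance, and apply the Steinhart-Hart equation. The divider values and coefficients below are placeholders for your thermistor's datasheet values:

        import math

        V_SUPPLY = 5.0    # external divider supply, volts
        R_REF = 10_000.0  # series reference resistor, ohms
        # Steinhart-Hart coefficients (placeholders for a 10k NTC):
        A, B, C = 1.129e-3, 2.341e-4, 8.775e-8

        def thermistor_temp_c(v_thermistor: float) -> float:
            # Divider equation: thermistor resistance from the measured voltage.
            r = R_REF * v_thermistor / (V_SUPPLY - v_thermistor)
            # Steinhart-Hart gives 1/T in kelvin.
            inv_t = A + B * math.log(r) + C * math.log(r) ** 3
            return 1.0 / inv_t - 273.15

        print(thermistor_temp_c(2.4))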
    Brian Fleissner
    National Instruments

  • Mid 2010 15" i5 Battery Calibration Questions

    Hi, I have a mid 2010 15" MacBook Pro 2.4GHz i5.
    Question 1: I didn't calibrate my battery when I first got my MacBook Pro (it didn't say in the manual that I had to). I've had it for about a month and am doing a calibration today, is that okay? I hope I haven't damaged my battery? The calibration is only to help the battery meter provide an accurate reading of how much life it has remaining, right?
    Question 2: After reading Apple's calibration guide, I decided to set the MacBook Pro to never go to sleep (in Energy Saver System Preference) and leave it on overnight so it would run out of power and go to sleep, then I'd leave it in that state for at least 5 hours before charging it. When I woke up, the light on the front wasn't illuminated. It usually pulsates when in Sleep. Expectedly, it wouldn't wake when pressing buttons on the keyboard. So, what's happened? Is this Safe Sleep? I didn't see any "Your Mac is on reserve battery and will shut down" dialogues or anything similar, as I was asleep! I've left it in this state while I'm at work and will charge it this afternoon. Was my described method okay for calibration or should I have done something different?
    Question 3: Does it matter how quickly you drain your battery when doing a calibration? i.e is it okay to drain it quickly (by running HD video, Photo Booth with effects etc) or slowly (by leaving it idle or running light apps)?
    Thanks.
    Message was edited by: Fresh J

    Fresh J:
    A1. You're fine calibrating the battery now. You might have gotten more accurate readings during the first month if you'd done it sooner, but no harm has been done.
    A2. Your machine has NOT shut down; it has done exactly what it was supposed to do. When the power became critically low, it first wrote the contents of RAM to the hard drive, then went to sleep. When the battery was completely drained some time later, the MBP went into hibernation and the sleep light stopped pulsing and turned off. In that state the machine was using no power at all, but the contents of your RAM were still saved. Once the AC adapter was connected, a press of the power button would cause those contents to be reloaded, and the machine would pick up again exactly where you left off. It is not necessary to wait for the battery to be fully charged before using the machine on AC power, but do leave the AC adapter connected for at least two hours after the battery is fully charged. Nothing that you say you've done was wrong, and nothing that you say has happened was wrong.
    A3. No, it does not matter.

  • USB 6009 - Calibration of Analog Input (Mac)

    Hi,
    I have recently acquired an NI USB 6009. I'm using it under LabVIEW 7.1 on Mac OS X 10.4 (Tiger).
    I can run the software examples and the datalogger without problems. However, the device seems not to be properly calibrated, and I can't figure out how to do it from the manual. When I connect pins 1 and 2 (analog in 0) to pins 32 and 31 (GND and 5 V reference voltage), the software claims that the measured voltage is ~3.7 V (a multimeter confirms that it is in fact 5.0 V). Can anyone help troubleshoot this?
    Thank you in advance,
    Niels

    A little update: after thinking some more I realized that it's because the USB 6009 is in differential mode and I hadn't connected one pin. Now, however, I'm having trouble figuring out where to change/set this. None of the programs (e.g. Acquire One Voltage.vi) seems to define it, so where is it defined?
    Thanks,
    Niels
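    For later readers: in the modern DAQmx API the input mode is a property of the virtual channel, set when the channel is created. A sketch with the nidaqmx Python API ("Dev1" is an assumption; the original post used LabVIEW 7.1, where the same setting is an input of the channel-creation VI or the DAQ Assistant):

        import nidaqmx
        from nidaqmx.constants import TerminalConfiguration

        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan(
                "Dev1/ai0",
                terminal_config=TerminalConfiguration.RSE,  # single-ended, not differential
                min_val=-10.0,
                max_val=10.0,
            )
            print(task.read())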

  • I have LabVIEW but I do not have the Calibration and Configuration palette and could not download it. How can I download it, or, if I cannot, can I work with the NI-DAQ Calibration_1200?

    I have read in a tutorial for the 1200 board that I can calibrate it with the Calibration and Configuration palette in LabVIEW, but I do not have it and could not download it to access its libraries; I can only download the NI-DAQ software. What is my best choice, and if it is to download the palette with its libraries, how can I do that? I'd appreciate your answers.

    If you wish to use your 1200 device in LabVIEW, you must download and install NI-DAQ. When you install NI-DAQ, it will ask whether you would like to install support for LabVIEW. By installing this support, you will have access to the DAQ palette in LabVIEW. The DAQ palette requires that NI-DAQ is installed.
    For more information on installing and using your device, you can refer to the DAQ Quick Start Guide. You can download it from:
    http://digital.ni.com/manuals.nsf/14807683e3b2dd8f8625677b006643f0/0eca53fe80911b1f862568560068295d
    Regards,
    Erin

  • PXI 2527 & PXI 4071 -Questions about EMF considerations for high accuracy measurements and EMF calibration schemes?

    Hi!
    I need to perform an in-depth analysis of the overall system accuracy for a proposed system. I'm well underway using the extensive documentation in the start-menu National Instruments\NI-DMM\ and ..\NI-Switch\ Documentation folders...
    While typing the question I think I partially answered it myself by cross-referencing NI documents... However, a couple of questions remain:
    If I connect a DMM to a 2-by-X arranged switch/mux, each DMM probe will see twice the listed internal "Differential thermal EMF", at a typical value of 2.5 uV and a max value of less than 12 uV (per relay). So would the total effect on the DMM uncertainty caused by the switch EMF be 2 * 2.5 uV = 5 uV? Or should these be added as RSS, sqrt(2.5^2 + 2.5^2), since you cannot know whether the two relays have the same EMF?
    Is there anything that can be done to characterize or account for this EMF (software cal, etc.)?
    For example, assuming the following:
    * Instruments and standards are powered on for several hours to allow thermal stability inside of the rack and enclosures
    * temperature in room outside of rack is constant
    Is there a reliable way of measuring/zeroing out the system EMF? Could this be done by applying a high-quality, low-EMF short at the point where the DUT would normally be located, followed by a series of long-aperture voltage average measurements at the lowest DMM range, where the end result (say (+)8.9....uV) could be taken as a system calibration constant accurate to the specs of the DMM?
    What would the accuracy of the 4071 DMM be? Can I calculate it as 8.9 uV +/- 700.16 nV using the 90-day specs, and as 8.9 uV +/- 700.16 nV + 150 nV due to "Additional noise error", assuming an integration time of 1 (aperture) for ease of reading the chart and a multiplier of 15 for the 100 mV range? (Is this equivalent to averaging a reading of 1 aperture 100 times?)
    So, given the above assumptions, would it be correct to say that I could characterize the system EMF to within 8.5 uV +/- [700.16 nV (DMM cal data) + 0.025 ppm * 15 (RMS noise, assuming an aperture time of 100 * 100 ms = 10 s)] = +/- [700.16 nV + 37.5 nV] = +/- 737.66 nV? Or should the ppm accuracy uncertainties be combined as RSS: 8.5 uV +/- sqrt[700.16 nV^2 + 37.5 nV^2] = 8.5 uV +/- 701.16 nV?
    As is evident from my line of thought above, I am not at all sure how to properly sum the uncertainties (I think you always use RSS for uncertainties from different sources?) and, more importantly, how to read and use the graph/table in the NI 4071 Specifications.pdf on page 3. What exactly does it entail to have an integration time larger than 1? Should I adjust the aperture time, or would it be more accurate to leave the aperture at the default (100 ms for the current range) and just average multiple readings, say 10, to get a 10x-aperture equivalent?
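    For the arithmetic itself, a short Python check of the two combination rules using the numbers above (RSS is the usual choice for independent uncertainty sources):

        import math

        # Two relay EMFs at the typical 2.5 uV each:
        print(2.5 + 2.5)                       # 5.0 uV, straight worst-case sum
        print(math.sqrt(2.5**2 + 2.5**2))      # ~3.54 uV, RSS of independent sources

        # DMM cal uncertainty and noise term from the post, in nV:
        print(math.sqrt(700.16**2 + 37.5**2))  # ~701.16 nV, matching the RSS figure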
    The below text includes what was going to be the post until I think I answered myself. I left it in as it is relevant to the problem above and includes what I hope to be correct statements. If you are tired of reading now, just stop, if you are bored, feel free to comment on the below section as well.
    The problem I have is one of fully understanding part of this documentation. In particular, since a relay consists of (at least) two dissimilar metal junctions (as mentioned in the NI Switch help\Fundamentals\General Switching Considerations\Thermal EMF and Offset Voltage section), and because of the thermocouple effect (Seebeck voltage), it seems that an offset voltage would be generated inside each relay at the junction. The help refers to the "Thermocouple Measurements" section (in the same document) for further details, but this is where my confusion starts to creep in.
    In equation (1) it gives the expression for determining E_EMF which for my application is what I care about, I think (see below for details on my application).
    What confuses me is this: If my goal is to, as accurately as possible, determine the overall uncertainty in a system consisting of a DMM and a Switch module, do I use the "Differential thermal EMF" as found in the switch data-sheet, or do I need to try and estimate temperatures in the switch and use the equation?
    *MY answer to my own question:
    By carefully re-reading the example in the thermocouple section of the switch help, I realized that they calculate two EMFs: one for the internal switch, calculated as 2.5 uV (given in the switch spec sheet as the typical value), and one for the actual thermocouple. I say actual because I think my initial confusion stems from the fact that the documentation talks about the relay/switch junctions as thermocouples in one section and then about an external "probe" thermocouple in the next, and I got them confused.
    As such, if I can ensure low temperatures inside the switch at the location of the junctions (by adequate ventilation and powering down latching relays), I should be able to use 2.5uV as my EMF from the switch module, or to be conservative, <12uV max (from data sheet of 2527 again).
    I guess now I have a hard time believing the 2.5 uV typical value listed... They say the junctions in the relays are typically an iron-nickel alloy against a copper alloy. Those combinations are not explicitly listed in the documentation's table of Seebeck coefficients, but even a very small value, like 0.3 uV/°C, adds up to 7.5 uV at 25 °C. I'm thinking maybe the table values in the NI documentation refer to the Seebeck values at 25 °C?
    Project Engineer
    LabVIEW 2009
    Run LabVIEW on WinXP and Vista system.
    Used LabVIEW since May 2005
    Certifications: CLD and CPI certified
    Currently employed.

    Seebeck EMF needs temperature gradients; in your relays you hopefully have low temperature gradients. However, in a switching contact you can have all kinds of diffusion and 'funny' effects, so keeping the junctions at the same temperature is the best you can do.
    Since you work with a multiplexer and with TCs, you need a good cold junction (for serious calibrations, at 0 °C), and that is the right place for your short when measuring the zero EMF. Another good test is to loop the 'hot junction' back to the cold junction and observe the residual EMF. Touching (or heating/cooling) the TC loop gives another number for the uncertainty calculation: the inhomogeneity of the TC material itself.
    A good source for TC knowledge:
    Manual on the use of thermocouples in temperature measurement,
    ASTM PCN: 28-012093-40,
    ISBN 0-8031-1466-4 
    (Page 1): 'Regardless of how many facts are presented herein and regardless of the percentage retained, all will be for naught unless one simple important fact is kept firmly in mind. The thermocouple reports only what it "feels." This may or may not be the temperature of interest.'
    Message Edited by Henrik Volkers on 04-27-2009 09:36 AM
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'

  • Need to develop a calibration report

    Hi experts,
    I need to develop a report for calibration.
    Input screen parameters:
    equipment no.
    date
    Output screen:
    equipment no.
    equipment description
    functional location
    order no.
    inspection lot no.
    inspection reading before calibration
    inspection reading after calibration
    usage decision
    I am new to the PM module; kindly help me so that I can give the proper table details to the ABAPer.
    Also, kindly tell me from which table I can pass the equipment no. to get the inspection lot no. and the other details for the output screen.

    Hi all,
    I am facing an issue while developing the report.
    I am using table QAMV (logic stated below):
    pass the inspection lot no., pick the MIC and MIC description.
    The issue is that multiple MICs are assigned to a single inspection lot no. How can I pick all MICs and their descriptions by entering one inspection lot no.?
    The ABAPer is asking for another primary key to fetch all the MICs; according to him, he is unable to fetch all MICs on the basis of the single inspection lot no. alone.
    The format of the report and a table (QAMR) screenshot are attached below.

  • Null offset from calibration not being applied to output

    I am measuring some load cells using an NI 9237. I am setting the null offset using Device > Bridge Calibration in the DAQ Assistant. I measure the offset and hit Calibrate. It says "Calibration successful" and I hit Finish, but the offset value is not applied to the data.
    How do I get this to work?

    Hello IamRubber,
    How are you determining that the calibration is not being applied? Are you running the same DAQ Assistant to see the offset? If possible, please attach screenshots from before and after running the bridge calibration.
    Patrick W.
    Applications Engineer
    National Instruments

  • Mid-2010 MacBook Pro hi-res glossy screen calibration

    Hello,
    I have the new hi-res glossy screen on the latest i5/i7 MacBook Pro range. Has anyone calibrated this screen using a Spyder or similar? If so, can you post a link to download the calibrated color profile?
    Thanks a lot. I had a calibrated profile for my '08 Core 2 Duo MacBook and it made a huge difference to how natural the color felt; I want to get that feeling on my new one now.
    Cheers,
    Richard.

    bump

  • Macbook Battery Calibration Questions

    Greetings!
    I just bought a MacBook today (the mid-range white model), but I am a complete tyro in the Apple world. I have a few questions about the calibration process. The manual is lucid about everything except three issues:
    1. The instructions make no mention of whether or not I can turn on my MacBook and use it during the first charge cycle (that is, the charging cycle before the two hour normal use stage).
    2. The instructions do not indicate whether or not I have to wait until the battery is completely dead during the "die-down" stage (the supposed five hour sleep stage) before I plug in the charger and fully charge the battery again. Is it all right to plug in the power adapter after the computer has slept for five hours while the computer is still in sleep mode?
    3. If for some reason I screwed up my first calibration (perhaps due to mishandling the process in (1) or (2) above), is it detrimental to immediately run another calibration process (i.e. treat the second "power up" cycle as step one of the calibration process)?
    The reason why I raise these issues is because I am currently calibrating my MacBook as I speak. I plugged my adapter into the computer, turned it on, and ran programs (iTunes, CD ripping, Wifi) while the initial charge cycle was running. I have let the computer fully charge and have left it fully charged for a little over two hours. I am about to disconnect it and move to the "die-down" stage (that is, I will let it die down into sleep mode). I am concerned about whether or not I have faulted at any point in the calibration process. Any help regarding my questions would be much appreciated. Thank you for your time and consideration.
    -- BibleJordan

    And yes, it is OK to plug in the adapter while the computer is in sleep mode. The battery will charge as normal.
    No, I think this is wrong. You need to let it sleep without the power adapter. This causes the battery to deplete most of the remaining reserve charge. If you look in the manual on pages 23-24, steps 4-6 are critical. Basically it says to drain your battery till it goes to sleep (step 4). Then let it sleep for 5 hours (step 5). Then plug in the ac adapter and leave it in till it is fully charged (step 6).
    So do not plug in the AC adapter until it has either been off for 5 hours or in sleep mode for 5 hours.
    Personally, I don't see how you could shut it off instead of using sleep mode for the 5 hours: once it goes to sleep from low battery, you cannot wake it to turn it off unless you plug in the AC, and that defeats the whole purpose...
    Regards,
    RacerX
    MacBook 2.0Ghz, 2GB RAM   Mac OS X (10.4.7)  
