Calibration is the process of comparing the measurements produced by an instrument to the known values of a reference standard, and documenting the relationship between them. The formal definition from the International Vocabulary of Metrology (VIM) describes calibration as an operation that, under specified conditions, establishes a relation between quantity values provided by measurement standards and corresponding indications of a measuring instrument. In practical terms, calibration answers a simple question: how much does this instrument's reading differ from the true value?

When you calibrate a micrometer, you measure gage blocks of known size and record how closely the micrometer's readings match those known values. When you calibrate a pressure gauge, you apply known pressures from a deadweight tester and record the gauge's indicated values at each point. The documented differences, called errors, corrections, or deviations, tell the user how much to trust the instrument's readings.

It is important to distinguish calibration from adjustment. Calibration is a documentation activity: it determines and records the instrument's performance. Adjustment is a physical activity: it changes the instrument to bring its readings closer to the true value. Some calibrations include adjustment (calibrate, adjust, then re-calibrate to verify), while others only document the instrument's condition without changing it. Understanding this distinction is critical for interpreting calibration certificates and making informed decisions about instrument suitability.
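To make the error and correction arithmetic concrete, here is a minimal Python sketch. The instrument, test points, and readings are invented for illustration:

```python
# Minimal sketch: computing calibration errors and corrections.
# Values are invented; error = indicated - reference, and the
# correction is what you add to a reading to compensate: -error.

test_points = [
    # (reference value from gage block, micrometer's indicated value), in mm
    (5.000, 5.002),
    (10.000, 10.001),
    (25.000, 24.998),
]

for reference, indicated in test_points:
    error = indicated - reference   # how far the reading deviates
    correction = -error             # what to add to future readings
    print(f"ref {reference:7.3f} mm  indicated {indicated:7.3f} mm  "
          f"error {error:+.3f} mm  correction {correction:+.3f} mm")
```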
Calibration exists because all measuring instruments drift over time. Mechanical wear, electronic component aging, environmental exposure, and normal use all cause an instrument's readings to change from when it was last known to be accurate. Without periodic calibration, you have no evidence that your measurements are reliable, and unreliable measurements lead to real consequences. In manufacturing, an out-of-calibration gage can pass defective parts or reject good ones. Passed defects reach customers, causing warranty claims, returns, or safety incidents. Rejected good parts waste material, labor, and machine time. Both outcomes cost money, but passed defects can also cost lives when the parts are safety-critical: fasteners in aircraft, components in medical devices, or structural elements in bridges.

In laboratory testing, calibration provides the measurement traceability that gives test results their scientific credibility. A test result is only as reliable as the instruments used to produce it. Without calibration, a lab's results are opinions rather than measurements.

From a regulatory and quality system perspective, calibration is not optional. ISO 9001 requires organizations to determine the monitoring and measuring resources needed, ensure they are suitable, and maintain documented evidence of fitness for purpose. ISO/IEC 17025 goes further, requiring measurement traceability, uncertainty estimation, and detailed equipment records. Industry-specific standards like AS9100, IATF 16949, and FDA regulations all contain calibration requirements. Non-compliance results in audit findings, lost certifications, and, in regulated industries, enforcement actions.
A well-executed calibration follows a structured process regardless of the instrument type. The process begins with receiving and identifying the instrument: recording its manufacturer, model, serial number, condition, and the reason for calibration (scheduled recall, repair return, or new acquisition). Before calibration begins, the instrument undergoes a visual and functional inspection. The technician checks for physical damage, cleanliness, readable markings, and proper mechanical or electrical function. An instrument with a cracked display or a stuck mechanism needs repair before calibration, not just measurement comparison.

The as-found calibration captures the instrument's current performance before any adjustment. The technician measures reference standards at specified test points across the instrument's range and records the instrument's indicated values. These as-found readings document the instrument's condition as it was being used, which is critical for determining whether previous measurements made with the instrument are suspect. If as-found results are outside tolerance, the technician may adjust the instrument and then perform an as-left calibration to verify the adjustment brought readings within specification. Both as-found and as-left data are recorded on the certificate.

Measurement uncertainty is calculated or referenced from an existing uncertainty budget. The uncertainty tells the certificate user the range within which the true value lies, at a stated confidence level. Finally, the calibration certificate is generated documenting all results, the reference standards used, environmental conditions, the procedure followed, and the personnel who performed the work. The instrument receives a calibration label and is returned to service or to the customer.
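As a rough illustration of the as-found / as-left decision, here is a minimal Python sketch; the tolerance value and readings are invented for illustration:

```python
# Illustrative sketch of an as-found / as-left tolerance check.
# The tolerance and all readings are hypothetical.

TOLERANCE = 0.004  # mm, allowed deviation at each test point (assumed)

def in_tolerance(readings, tolerance=TOLERANCE):
    """Return True if every (reference, indicated) pair is within tolerance."""
    return all(abs(indicated - reference) <= tolerance
               for reference, indicated in readings)

as_found = [(5.000, 5.006), (10.000, 10.001), (25.000, 24.998)]
as_left  = [(5.000, 5.001), (10.000, 10.000), (25.000, 24.999)]

print("as-found in tolerance:", in_tolerance(as_found))  # False -> adjust
print("as-left  in tolerance:", in_tolerance(as_left))   # True  -> return to service
```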
Calibration activities are governed by a hierarchy of standards that define requirements at different levels. At the top are international metrology standards maintained by the International Bureau of Weights and Measures (BIPM) and realized through national metrology institutes like NIST in the United States, NPL in the United Kingdom, and PTB in Germany. These organizations maintain primary standards and provide the ultimate traceability anchor for all measurements in their respective countries.

ISO/IEC 17025 is the international standard that specifies requirements for the competence of testing and calibration laboratories. It covers management system requirements (impartiality, confidentiality, document control) and technical requirements (personnel, facilities, equipment, traceability, procedures, measurement uncertainty, and reporting). Accreditation to ISO/IEC 17025 by bodies like A2LA, NVLAP, or UKAS provides independent verification that a laboratory is technically competent.

ANSI/NCSL Z540.3 is the U.S. national standard for the calibration of measuring and test equipment. It focuses on the technical aspects of calibration, particularly measurement decision risk: the probability that a calibration result will incorrectly declare an instrument in or out of tolerance. This standard introduced the requirement that false accept risk not exceed two percent, a limit that has influenced calibration practices worldwide.

Industry-specific standards add requirements tailored to particular sectors. IATF 16949 adds automotive-specific calibration requirements. AS9100 and AS9110 address aerospace. AMS 2750 governs pyrometry, the calibration and use of temperature measurement equipment in heat treating. MIL-STD-45662A, though formally canceled, still flows down in many defense contracts as a calibration system requirement.
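Measurement decision risk lends itself to simulation. The following toy Monte Carlo sketch is not the Z540.3 calculation itself; the normal distributions, tolerance, and uncertainty values are all assumptions, chosen only to show what "false accept" means:

```python
# Toy Monte Carlo estimate of false accept risk: the probability that a
# unit whose true error is outside tolerance is nevertheless declared
# in tolerance because of measurement uncertainty. All parameters are
# assumed for illustration; real Z540.3 analyses are more involved.
import random

TOL = 1.0          # tolerance limit on the unit's true error (arbitrary units)
SIGMA_UUT = 0.5    # assumed spread of true errors across the population
SIGMA_CAL = 0.25   # assumed std. uncertainty of the calibration process
N = 200_000

accepted = false_accepts = 0
for _ in range(N):
    true_error = random.gauss(0.0, SIGMA_UUT)
    measured = true_error + random.gauss(0.0, SIGMA_CAL)
    if abs(measured) <= TOL:              # calibration declares "in tolerance"
        accepted += 1
        if abs(true_error) > TOL:         # ...but the unit is actually out
            false_accepts += 1

print(f"accepted: {accepted / N:.2%} of units")
print(f"false accept risk: {false_accepts / N:.2%} of all calibrations")
```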
A calibration interval is the period between successive calibrations of an instrument. Setting appropriate intervals is one of the most important decisions in calibration management because it directly affects both measurement reliability and operational cost. Too-short intervals waste money on unnecessary calibrations and remove instruments from service when they could be productive. Too-long intervals risk using instruments that have drifted out of tolerance, potentially contaminating months of measurements.

The simplest approach is OEM-recommended intervals: the instrument manufacturer specifies a calibration period, typically one year. This is a reasonable starting point but is often conservative because manufacturers must account for worst-case use conditions. As you accumulate calibration history, you gain the data needed to adjust intervals based on your instruments' actual performance.

Statistical methods analyze historical calibration data to optimize intervals. If an instrument consistently passes calibration with wide margins, its interval can likely be extended. If it frequently fails or shows significant drift, the interval should be shortened. NCSL International Recommended Practice RP-1 describes several statistical methods, including the test reliability method, which adjusts intervals to maintain a target reliability level, typically 85 to 95 percent of instruments found in tolerance at recall.

Risk-based interval adjustment considers the consequences of a measurement failure alongside the probability of instrument drift. Instruments used in safety-critical measurements or high-value processes may warrant shorter intervals even if their drift history is favorable, because the cost of a wrong measurement far exceeds the cost of an extra calibration.
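Here is a minimal sketch of a reliability-target adjustment, loosely in the spirit of the test reliability method; the target, band, and scaling factors are assumptions for illustration, not values taken from RP-1:

```python
# Sketch of a simple test-reliability interval adjustment. Thresholds
# and adjustment factors are invented for illustration.

def adjust_interval(interval_days, found_in_tol, total_cals,
                    target=0.90, band=0.05):
    """Lengthen or shorten a calibration interval based on the observed
    in-tolerance rate at recall versus a target reliability."""
    if total_cals == 0:
        return interval_days                # no history: keep interval
    observed = found_in_tol / total_cals
    if observed > target + band:
        return round(interval_days * 1.25)  # comfortably reliable: extend
    if observed < target - band:
        return round(interval_days * 0.75)  # drifting: shorten
    return interval_days                    # near target: leave alone

print(adjust_interval(365, found_in_tol=20, total_cals=20))  # 1.00 -> 456 days
print(adjust_interval(365, found_in_tol=15, total_cals=20))  # 0.75 -> 274 days
```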
Metrological traceability is the property of a measurement result whereby it can be related to a reference through a documented, unbroken chain of calibrations, each contributing to the measurement uncertainty. In plain language, traceability means you can trace your measurement back to a recognized standard through an unbroken chain of comparisons.

Traceability works through a hierarchy. At the top sits the SI definition of the unit; for length, the meter is defined in terms of the speed of light. National metrology institutes like NIST realize this definition in physical standards, which are used to calibrate primary reference standards. Those primary standards calibrate working standards, which calibrate your reference standards, which calibrate your working instruments. At each step, the calibration is documented and the measurement uncertainty is quantified.

Each link in the chain adds uncertainty, so the total uncertainty at the bottom of the chain (your working instrument) is always larger than at the top (the national standard). This is why reference standards must be significantly more accurate than the instruments they calibrate, typically by a ratio of four to one or better. The ratio is expressed as a test accuracy ratio (TAR) when accuracy specifications are compared, or as a test uncertainty ratio (TUR) when measurement uncertainties are compared.

ISO/IEC 17025 requires that measurement results be traceable to the International System of Units (SI) through national metrology institutes or other recognized bodies. This is not just an administrative requirement; it is the foundation of measurement credibility. Without traceability, a measurement result is an assertion without evidence. Calibration certificates must document the traceability chain by identifying the reference standards used, their calibration certificates, and the laboratory that calibrated them. CalibrationOS builds traceability into its data model, linking every calibration to the reference standards used and making the complete chain visible and auditable.
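To illustrate how uncertainty accumulates down the chain, here is a small sketch that combines each link's contribution in quadrature (root sum of squares) and checks a simplified TUR; all values are invented, and real uncertainty budgets contain many more components:

```python
# Sketch of uncertainty growing down a traceability chain.
# Standard uncertainties per link are invented for illustration.
import math

chain = {
    "national standard":  0.01,
    "reference standard": 0.02,
    "working standard":   0.05,
    "working instrument": 0.12,
}

combined = 0.0
for link, u in chain.items():
    combined = math.hypot(combined, u)  # RSS accumulation
    print(f"{link:20s} combined u = {combined:.3f}")

tolerance = 1.0           # +/- tolerance limit of the instrument (assumed)
U95 = 2 * combined        # expanded uncertainty at k = 2
tur = tolerance / U95     # one simplified TUR convention
print(f"TUR = {tur:.1f}:1 (aim for 4:1 or better)")
```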
Calibration is the process of comparing an instrument's readings to known reference values and documenting the differences. It is a measurement and documentation activity. Adjustment is the process of physically changing an instrument to bring its readings closer to the true value. Calibration tells you how the instrument is performing; adjustment changes how it performs. A calibration may or may not include adjustment.
There is no universal calibration frequency. Intervals depend on the instrument type, usage conditions, stability history, and the criticality of the measurements it supports. A common starting point is the manufacturer's recommendation, typically one year. Over time, organizations should adjust intervals based on calibration history data to balance measurement reliability with cost efficiency.
Measurement traceability means that a measurement result can be related to a national or international standard through an unbroken chain of documented calibrations, each with stated measurement uncertainty. It ensures that measurements made in different locations, by different instruments, are comparable and scientifically credible.
Calibration is performed by trained technicians or metrologists working in calibration laboratories, quality departments, or metrology groups. Accredited calibration laboratories have demonstrated their technical competence through ISO/IEC 17025 assessment. Some organizations calibrate instruments in-house while others outsource to commercial calibration service providers.
A calibration certificate typically includes the instrument identification (manufacturer, model, serial number), the calibration date, calibration procedure, environmental conditions, test points with reference values and indicated values, measurement errors or corrections, measurement uncertainty, reference standards used with their traceability, and the identity of the person who performed the calibration.
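Those fields map naturally onto a record structure. Here is a hypothetical sketch of one way to model them in Python; the field names are illustrative, not an actual CalibrationOS schema:

```python
# Hypothetical data model for calibration certificate contents.
from dataclasses import dataclass, field

@dataclass
class TestPoint:
    reference: float   # value of the reference standard
    indicated: float   # value the instrument displayed

    @property
    def error(self) -> float:
        return self.indicated - self.reference

@dataclass
class CalibrationCertificate:
    instrument_id: str                  # manufacturer / model / serial
    calibration_date: str
    procedure: str
    environment: str                    # e.g. temperature and humidity
    test_points: list[TestPoint] = field(default_factory=list)
    expanded_uncertainty: float = 0.0   # U95, k = 2
    reference_standards: list[str] = field(default_factory=list)  # traceability
    technician: str = ""
```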