Calibration establishes the relationship between the physical measurement variable (the input) and the signal variable (the output) for a specific sensor. Typically, a sensor or an entire instrument system is calibrated by applying a series of known physical inputs and recording the corresponding outputs. The data are then plotted as a calibration curve, as illustrated in the figure below:
In the example above, the sensor responds linearly for values of the physical input below X0, and the sensitivity of the device is given by the slope of the calibration curve in this region. For inputs greater than X0, the curve becomes progressively less sensitive until the output reaches a limiting value. This behaviour is termed saturation, and the sensor cannot be used for measurements beyond its saturation point. In some cases, a sensor will also fail to respond to very small values of the physical input. The span between the smallest and largest physical inputs that the instrument can measure reliably defines its dynamic range.
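To make these quantities concrete, here is a minimal sketch using hypothetical calibration data and an assumed linear-region cutoff of X0 = 8.0 input units. It estimates sensitivity as the slope of a least-squares fit over the linear region and reads off the saturation level and dynamic range (Python with NumPy):

```python
import numpy as np

# Hypothetical calibration data: known physical inputs and the
# recorded sensor outputs. The response is roughly linear up to
# an input of about 8.0, then saturates near an output of 4.0.
inputs  = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0,
                    7.0, 8.0, 9.0, 10.0, 11.0])
outputs = np.array([0.26, 0.51, 1.00, 1.49, 2.02, 2.51, 2.98,
                    3.47, 3.80, 3.95, 3.99, 4.00])

# Sensitivity: slope of a least-squares line fitted to the
# linear region of the calibration curve (inputs below X0).
X0 = 8.0
linear = inputs < X0
slope, intercept = np.polyfit(inputs[linear], outputs[linear], 1)
print(f"Sensitivity (slope): {slope:.3f} output units per input unit")

# Saturation: beyond X0 the output approaches a limiting value,
# so measurements there are unusable.
print(f"Saturation output: {outputs.max():.2f}")

# Dynamic range: smallest to largest physical input that the
# device measures reliably (here, the span of the linear region).
lo, hi = inputs[linear].min(), inputs[linear].max()
print(f"Dynamic range: {lo:.1f} to {hi:.1f} input units")
```

In practice the linear-region cutoff is not known in advance; it is identified from the calibration curve itself, for example by checking where the residuals of the linear fit begin to grow.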