Key converter term: The count of binary bits used at a DAC’s input (or produced at an ADC’s output) is referred to as which specification?

Difficulty: Easy

Correct Answer: resolution

Explanation:


Introduction / Context:
Converter datasheets list several performance metrics. Among them, the number of bits is foundational because it sets the quantization granularity, code range, and ideal step size. Correct terminology avoids confusion when sizing a converter for a given application.


Given Data / Assumptions:

  • DAC input width or ADC output width is N bits.
  • Ideal code range is 0..(2^N − 1) for straight binary.
  • We seek the term that names this bit width.


Concept / Approach:
Resolution is the smallest change discernible at the converter's output (DAC) or input (ADC), ideally one least significant bit (1 LSB) relative to full scale. The number of bits directly defines resolution as a fraction of full scale: 1 / 2^N. Accuracy, linearity, and monotonicity are separate (and usually worst-case) error and behavior specifications layered on top of the ideal resolution.
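
To make the relationship concrete, here is a minimal Python sketch; the function name ideal_lsb and the example values are illustrative, not from the question.

    def ideal_lsb(n_bits: int, full_scale: float = 1.0) -> float:
        """Ideal LSB size for an N-bit straight-binary converter."""
        # Resolution as a fraction of full scale is 1 / 2**n_bits;
        # multiplying by full_scale gives the LSB in the same units.
        return full_scale / (2 ** n_bits)

    # Hypothetical example values, not taken from the question:
    print(ideal_lsb(12))        # 1/4096 of full scale ≈ 0.000244
    print(ideal_lsb(12, 5.0))   # ≈ 1.22 mV step for a 5 V full-scale range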


Step-by-Step Solution:

  • Let N = number of bits.
  • Ideal LSB fraction = 1 / 2^N of full scale.
  • Thus, N is the converter's resolution in bits.
  • Other specs (accuracy, linearity, monotonicity) describe deviations or properties beyond resolution.
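
As a numeric sanity check of these steps, here is a short sketch with an arbitrarily chosen N = 8; any bit count works the same way.

    n = 8                          # example bit count; the question leaves N general
    max_code = 2 ** n - 1          # straight-binary code range is 0 .. 2^N - 1
    lsb_fraction = 1 / 2 ** n      # ideal LSB as a fraction of full scale

    print(f"Code range: 0 .. {max_code}")                   # 0 .. 255
    print(f"Resolution: {lsb_fraction:.4%} of full scale")  # 0.3906%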


Verification / Alternative check:
Typical datasheets list "Resolution: 12 bits," "INL: ±1 LSB," "DNL: ±0.5 LSB," and "Monotonic over temperature" as separate entries, clearly separating resolution from the other metrics.
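
Because error specs such as INL are quoted in LSB units, they implicitly rely on the stated resolution. A small sketch with made-up 12-bit, 2.5 V numbers shows how a "±1 LSB" figure translates into volts.

    n_bits = 12                    # "Resolution: 12 bits" (datasheet-style example)
    v_full_scale = 2.5             # hypothetical full-scale range in volts
    inl_lsb = 1.0                  # hypothetical "INL: ±1 LSB" spec

    lsb_volts = v_full_scale / 2 ** n_bits              # ≈ 610 µV per LSB
    print(f"INL: ±{inl_lsb * lsb_volts * 1e6:.0f} µV")  # ±610 µV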


Why Other Options Are Wrong:

  • Accuracy: Closeness to true value, includes gain/offset/nonlinearity errors.
  • Linearity: INL/DNL behavior compared to a straight line.
  • Monotonicity: Output never decreases for increasing code; not the bit count.


Common Pitfalls:
Assuming that more bits always mean more accuracy; in reality, accuracy depends on error and noise specifications, not just resolution.
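
As a rough illustration of this pitfall (all values hypothetical), comparing a 16-bit LSB against 1 mV of input-referred noise shows how noise can swamp the extra resolution.

    n_bits = 16                    # hypothetical high-resolution converter
    v_full_scale = 5.0             # hypothetical full-scale range in volts
    noise_rms = 0.001              # hypothetical 1 mV RMS noise at the input

    lsb = v_full_scale / 2 ** n_bits             # ≈ 76 µV per LSB
    print(f"1 LSB = {lsb * 1e6:.1f} µV")         # 76.3 µV
    print(f"Noise = {noise_rms * 1e6:.1f} µV")   # 1000.0 µV
    # Noise spans roughly 13 LSBs here, so the extra bits do not translate into accuracy.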


Final Answer:
resolution
