DAC accuracy error bound at full scale

A DAC has a full-scale (maximum) output of 12 V and a specified accuracy of 0.1%. What is the maximum absolute error allowed for any output voltage?

Difficulty: Easy

Correct Answer: 12 mV

Explanation:


Introduction / Context:
Converter accuracy is commonly expressed as a percentage of full-scale range. This specification bounds the worst-case deviation from the ideal transfer function across the entire output span.



Given Data / Assumptions:

  • Full-scale output = 12 V.
  • Accuracy = 0.1% of full-scale.
  • Error bound applies to any code’s analog output.


Concept / Approach:
Maximum absolute error = accuracy (as a fraction) × full-scale output. Here that is 0.1% of 12 V.



Step-by-Step Solution:
  • Convert the percentage to a fraction: 0.1% = 0.001.
  • Multiply by full scale: 12 V × 0.001 = 0.012 V.
  • Express in millivolts: 0.012 V = 12 mV.
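
As a quick numeric check, here is a minimal Python sketch of the same computation (the function name max_abs_error is illustrative, not taken from any datasheet or library):

```python
def max_abs_error(full_scale_v: float, accuracy_percent: float) -> float:
    """Worst-case absolute error for an accuracy spec given as a % of full scale."""
    return full_scale_v * (accuracy_percent / 100.0)

# 12 V full scale, 0.1% accuracy -> 0.012 V
error_v = max_abs_error(12.0, 0.1)
print(f"{error_v * 1e3:.1f} mV")  # prints 12.0 mV
```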



Verification / Alternative check:
Many DAC datasheets specify total unadjusted error (TUE) or accuracy as a percentage of full scale; converting directly, 0.001 × 12 V = 0.012 V = 12 mV.



Why Other Options Are Wrong:

  • 12 V: Implies 100% error; nonsensical.
  • 120 mV: Corresponds to 1% of 12 V, not 0.1%.
  • 0 V: Real devices have nonzero tolerance.


Common Pitfalls:
Confusing accuracy with resolution; a fine LSB does not guarantee small absolute error without calibration.
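
To make that distinction concrete, here is a small sketch comparing resolution to the accuracy bound for a hypothetical 12-bit version of this DAC (the bit width is an assumption for illustration; the problem does not state one):

```python
# Hypothetical: a 12-bit DAC with the same 12 V full scale (bit width assumed).
bits = 12
full_scale_v = 12.0

lsb_v = full_scale_v / 2**bits            # resolution: one LSB ~= 2.93 mV
accuracy_bound_v = full_scale_v * 0.001   # 0.1% accuracy spec -> 12 mV

print(f"LSB size       = {lsb_v * 1e3:.2f} mV")
print(f"Accuracy bound = {accuracy_bound_v * 1e3:.1f} mV (~{accuracy_bound_v / lsb_v:.1f} LSB)")
```

Here the permitted error spans roughly four LSBs, so a fine LSB alone does not guarantee a small absolute error.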


Final Answer:
12 mV
