Bit depth vs. accuracy: If we increase bit width from 4 bits to 8 bits, does the conversion accuracy merely double?

Difficulty: Easy

Correct Answer: Incorrect

Explanation:


Introduction / Context:
Bit depth strongly affects resolution and potential accuracy in digital-to-analog conversion. This item tests intuition: moving from 4 to 8 bits increases code levels from 16 to 256, which is far more than a mere doubling of precision.


Given Data / Assumptions:

  • Resolution (normalized) ≈ 1 / 2^n.
  • 4-bit → 1/16; 8-bit → 1/256.
  • “Relative accuracy” here refers to ideal quantization resolution absent nonidealities.


Concept / Approach:
Resolution scales exponentially with bit count: each added bit halves the LSB size. Going from 4 to 8 bits adds 4 bits, yielding a 2^4 = 16× improvement, not 2×. The claim that accuracy merely "doubles" therefore understates the ideal quantization-limited improvement by a factor of 8.


Step-by-Step Solution:

  1. Compute levels: 4-bit → 16 codes; 8-bit → 256 codes.
  2. Compute normalized resolution: 1/16 vs. 1/256.
  3. Compare: improvement factor = (1/16) / (1/256) = 16.
  4. Conclude: the claim of only doubling is false.
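
As a quick numeric check, the same arithmetic in a few lines of Python (a minimal sketch; the function and variable names are illustrative, not from any particular library):

    def lsb_size(bits: int) -> float:
        """Normalized LSB (ideal resolution) for an n-bit converter."""
        return 1 / (2 ** bits)

    levels_4, levels_8 = 2 ** 4, 2 ** 8      # 16 and 256 code levels
    improvement = lsb_size(4) / lsb_size(8)  # (1/16) / (1/256) = 16.0

    print(f"4-bit: {levels_4} codes, LSB = {lsb_size(4):.6f}")
    print(f"8-bit: {levels_8} codes, LSB = {lsb_size(8):.6f}")
    print(f"Ideal improvement factor: {improvement:.0f}x")  # 16x, not 2x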


Verification / Alternative check:
Quantization noise power scales with LSB^2; adding 4 bits lowers quantization noise by about 24 dB (≈6.02 dB per bit), far more than the ≈6 dB that a mere doubling of accuracy would represent.
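
The same check in decibel terms, again as a sketch under the ideal quantization model (no INL/DNL or reference error):

    import math

    added_bits = 8 - 4
    db_per_bit = 20 * math.log10(2)           # ≈ 6.02 dB per added bit
    improvement_db = added_bits * db_per_bit  # ≈ 24.08 dB
    doubling_db = 20 * math.log10(2)          # a mere 2x is only ≈ 6.02 dB

    print(f"Adding {added_bits} bits: {improvement_db:.2f} dB improvement")
    print(f"A simple doubling would give only {doubling_db:.2f} dB")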


Why Other Options Are Wrong:

The alternative answer ("Correct") would require the improvement to be only 2×; ideally it is 16×. Nonidealities such as monotonicity errors or reference tolerance do not change the exponential relation between bit count and resolution.


Common Pitfalls:
Confusing absolute accuracy (affected by INL/DNL, reference tolerance) with bit-limited resolution; assuming linear rather than exponential scaling with additional bits.


Final Answer:
Incorrect
