Memory technology choices: Main computer memory is usually implemented with DRAM due to high density and low cost, while cache memory is typically SRAM due to higher speed. Assess this statement.

Difficulty: Easy

Correct Answer: Correct

Explanation:


Introduction / Context:
Computer memory hierarchies balance cost, capacity, latency, and bandwidth. Two dominant volatile memory technologies are DRAM and SRAM, each optimized for different points on this trade-off curve. Understanding which is used where is fundamental to systems design and performance analysis.


Given Data / Assumptions:

  • DRAM cells store charge on capacitors and require refresh.
  • SRAM cells use cross-coupled inverters (typically six transistors per bit); no refresh required.
  • Main memory refers to the large external memory pool; cache memory refers to smaller, fast buffers close to the CPU.


Concept / Approach:
DRAM achieves very high density and low cost per bit by using one-transistor/one-capacitor cells, trading off access latency and the overhead of refresh. SRAM, using more transistors per bit (commonly six), is faster with lower access latency, but at higher area and cost per bit, making it ideal for smaller, on-chip caches.
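The payoff of pairing a small SRAM cache with large DRAM can be seen through the standard average memory access time (AMAT) formula. The latency and miss-rate figures below are illustrative assumptions, not measurements from any specific system:

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """AMAT = hit time + miss rate x miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical figures: ~1 ns SRAM cache hit, 5% miss rate, ~100 ns DRAM access.
with_cache = amat(1.0, 0.05, 100.0)
dram_only = 100.0

print(f"With SRAM cache: {with_cache:.1f} ns")  # 6.0 ns
print(f"DRAM only:       {dram_only:.1f} ns")
```

Even a modest hit rate in fast SRAM cuts effective access time by an order of magnitude compared with going to DRAM on every access, which is why the hierarchy uses both technologies.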


Step-by-Step Solution:

  1. Identify performance needs: caches need the lowest latency → SRAM.
  2. Identify capacity/cost needs: main memory needs large capacity at low cost → DRAM.
  3. Map the technologies accordingly in standard computer architectures.
  4. Confirm that this mapping aligns with practical CPU and system designs.


Verification / Alternative check:
Block diagrams of modern CPUs show multiple levels of SRAM cache (L1/L2/L3) and external DRAM (DDR variants) as main memory. Embedded MCUs with integrated SRAM still commonly use external DRAM for larger memory footprints.


Why Other Options Are Wrong:

  • Incorrect: Contradicts a ubiquitous architectural choice.
  • Ambiguous / MCU-only: The statement holds broadly across desktops, servers, and embedded systems wherever both memory types are present.


Common Pitfalls:
Confusing “faster” with “higher bandwidth”; caches improve effective bandwidth and latency via locality even though DRAM channels provide high raw bandwidth with higher latency.
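This distinction can be made concrete with back-of-the-envelope numbers. The figures below (64-byte cache-line transfers, ~100 ns random-access latency, ~25.6 GB/s peak channel bandwidth, roughly one DDR4-3200 channel) are illustrative assumptions:

```python
line_size_bytes = 64     # typical cache-line transfer size (assumption)
dram_latency_ns = 100.0  # illustrative random-access latency
peak_bw_gbs = 25.6       # illustrative peak streaming bandwidth

# If every access is a dependent random miss, throughput is latency-bound:
# bytes per nanosecond is numerically equal to GB/s.
latency_bound_gbs = line_size_bytes / dram_latency_ns

print(f"Latency-bound throughput: {latency_bound_gbs:.2f} GB/s")  # 0.64 GB/s
print(f"Peak streaming bandwidth: {peak_bw_gbs} GB/s")
```

High raw DRAM bandwidth only materializes for streaming or well-pipelined access patterns; dependent random accesses collapse to latency-bound throughput, which is exactly the gap caches and locality exist to close.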


Final Answer:
Correct
