Difficulty: Easy
Correct Answer: Correct
Explanation:
Introduction / Context:
Computer memory hierarchies balance cost, capacity, latency, and bandwidth. Two dominant volatile memory technologies are DRAM and SRAM, each optimized for different points on this trade-off curve. Understanding which is used where is fundamental to systems design and performance analysis.
Given Data / Assumptions:
The statement under evaluation: DRAM is used for main memory, while SRAM is used for CPU caches.
Concept / Approach:
DRAM achieves very high density and low cost per bit by using one-transistor/one-capacitor cells, trading off access latency and the overhead of refresh. SRAM, using more transistors per bit (commonly six), is faster with lower access latency, but at higher area and cost per bit, making it ideal for smaller, on-chip caches.
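The payoff of pairing fast SRAM caches with large DRAM can be seen in the standard average-memory-access-time (AMAT) formula. A minimal sketch, using assumed illustrative latencies (real values vary widely by part and process node):

```python
# AMAT = hit_time + miss_rate * miss_penalty
# Latencies below are assumed, illustrative numbers, not datasheet values.
SRAM_HIT_NS = 1.0      # assumed L1 SRAM cache hit time
DRAM_MISS_NS = 60.0    # assumed DRAM access time on a cache miss

def amat(hit_rate: float) -> float:
    """Average memory access time in nanoseconds for a given cache hit rate."""
    return SRAM_HIT_NS + (1.0 - hit_rate) * DRAM_MISS_NS

for rate in (0.90, 0.95, 0.99):
    print(f"hit rate {rate:.0%}: AMAT = {amat(rate):.1f} ns")
```

Even a modest hit rate keeps the average access time close to SRAM speed while DRAM supplies the capacity, which is exactly the trade-off the hierarchy exploits.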
Step-by-Step Solution:
1. Main memory must be large and cheap per bit; DRAM's one-transistor/one-capacitor cell delivers that density, so DDR DRAM is the standard choice for main memory.
2. Caches must be small, fast, and close to the core; SRAM's six-transistor cell offers low latency and needs no refresh, so on-chip caches are built from SRAM.
3. The statement matches standard practice in virtually all modern systems, so it is correct.
Verification / Alternative check:
Block diagrams of modern CPUs show multiple levels of SRAM cache (L1/L2/L3) and external DRAM (DDR variants) as main memory. Even embedded MCUs with integrated on-chip SRAM commonly add external DRAM when a larger memory footprint is needed.
Why Other Options Are Wrong:
Marking the statement incorrect would imply the roles could be reversed. SRAM is too expensive and too low-density for gigabyte-scale main memory, and DRAM's higher latency and refresh overhead make it a poor fit for L1/L2 caches.
Common Pitfalls:
Confusing “faster” with “higher bandwidth”: caches improve effective latency and bandwidth by exploiting locality, even though DRAM channels provide high raw bandwidth at higher latency.
Final Answer:
Correct