Difficulty: Easy
Correct Answer: Cache memory
Explanation:
Introduction / Context:
Programs tend to access a relatively small subset of instructions and data repeatedly over short intervals. This statistical behavior—temporal and spatial locality—drives architectural optimizations that reduce effective memory latency and increase throughput.
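As a minimal illustration (the array and variable names are purely hypothetical), the following C loop shows both kinds of locality: the loop counter and accumulator are reused on every iteration (temporal), and the array is walked in consecutive addresses (spatial), so one fetched cache line serves several subsequent accesses.

```c
#include <stdio.h>

int main(void) {
    int a[1024];

    /* Sequential writes already show spatial locality:
     * a[0], a[1], ... occupy consecutive addresses. */
    for (int i = 0; i < 1024; i++)
        a[i] = i;

    /* `sum` and `i` are reused on every iteration (temporal
     * locality), and each cache line fetched for a[i] also
     * covers the next several elements (spatial locality). */
    long sum = 0;
    for (int i = 0; i < 1024; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```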
Given Data / Assumptions:
Programs exhibit temporal locality (a recently used item is likely to be used again soon) and spatial locality (addresses near a recent access are likely to be used soon). The question asks which memory structure directly exploits this behavior to reduce effective access time.
Concept / Approach:
Cache memory is a small, fast memory that holds recently used lines from main memory. Because locality concentrates accesses, a cache achieves a high hit rate and lowers average access time. Virtual memory also benefits from locality but is primarily a capacity and protection mechanism; cache is the direct performance manifestation of locality at the hardware level.
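This effect is usually quantified by the standard textbook average memory access time (AMAT) formula; it is not given in the question but summarizes why a high hit rate lowers average access time:

```latex
\mathrm{AMAT} = t_{\mathrm{hit}} + m \cdot t_{\mathrm{penalty}}
```

Here t_hit is the cache hit time, m the miss rate, and t_penalty the extra time incurred on a miss. Strong locality keeps m small, so AMAT stays close to the fast hit time rather than the slow DRAM latency.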
Step-by-Step Solution:
First, recall that locality means accesses cluster on a small working set over short intervals. Next, note that a small, fast memory holding that working set close to the CPU converts most accesses into fast hits. Cache memory is exactly that structure, so it is the component that exploits locality of reference.
Verification / Alternative check:
Profiling typical workloads shows that the vast majority of accesses hit in the L1/L2/L3 caches, so the average cycles per access are far lower than the raw DRAM latency.
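As a rough worked example with assumed, purely illustrative numbers (a 2-cycle hit time, a 95% hit rate, and a 100-cycle miss penalty):

```latex
\mathrm{AMAT} = 2 + 0.05 \times 100 = 7 \text{ cycles}
```

That is roughly 7 cycles on average, versus about 100 cycles if every access went all the way to DRAM.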
Why Other Options Are Wrong:
Virtual memory: although paging policies also rely on locality, virtual memory exists primarily to extend capacity and enforce protection, not to speed up accesses; a page fault actually makes an access much slower.
Common Pitfalls:
Assuming that virtual memory alone speeds up access: in reality it can add overhead on page faults. It is the cache that directly reduces access time by exploiting locality.
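A related pitfall is assuming that access order does not matter. The sketch below (illustrative only; actual timings depend on the machine) sums the same matrix twice: the row-major loop walks consecutive addresses and hits in cache, while the column-major loop strides across cache lines and typically runs noticeably slower.

```c
#include <stdio.h>

#define N 512

static int m[N][N];   /* zero-initialized; values are irrelevant here */

int main(void) {
    long sum = 0;

    /* Row-major traversal: consecutive addresses, strong spatial
     * locality; after the first access in a line, the rest hit. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += m[i][j];

    /* Column-major traversal: a stride of N ints between accesses,
     * so successive accesses land in different cache lines and the
     * hit rate is usually far worse. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += m[i][j];

    printf("sum = %ld\n", sum);
    return 0;
}
```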
Final Answer:
Cache memory