Difficulty: Easy
Correct Answer: Correct
Explanation:
Introduction / Context:
Understanding how a CPU interacts with memory is foundational for computer architecture, embedded design, and performance tuning. Instruction fetch and data access are the two primary memory transactions that occur during normal execution of a program, regardless of instruction set specifics.
Given Data / Assumptions:
The statement under evaluation: during normal program execution, the CPU fetches instructions from memory and also reads (and possibly writes) data operands. A general-purpose processor is assumed; no particular instruction set is required.
Concept / Approach:
Every instruction cycle includes an instruction fetch phase. Many instructions also require reading data operands (and possibly writing results) from/to memory or registers. Whether the architecture is von Neumann (unified instruction/data memory) or Harvard (separate instruction/data buses), the CPU conceptually “fetches” instructions and operands from designated memories.
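To make the distinction concrete, the sketch below contrasts the two memory organizations with toy arrays (sizes and names are invented for illustration, not taken from any real ISA). In both cases the CPU performs an instruction fetch and a data access; only the address spaces differ.

```c
/* Contrast of the two memory organizations -- toy arrays, invented sizes. */
#include <stdint.h>
#include <stdio.h>

/* von Neumann: one address space holds both code and data.         */
static uint8_t unified_mem[256];

/* Harvard: separate instruction and data memories (separate buses). */
static uint16_t instr_mem[128];
static uint8_t  data_mem[128];

int main(void) {
    /* Either way, execution involves an instruction fetch and a data access. */
    uint8_t  opcode_vn  = unified_mem[0];   /* instruction fetch, unified space  */
    uint8_t  operand_vn = unified_mem[16];  /* data access, same address space   */

    uint16_t opcode_h   = instr_mem[0];     /* instruction fetch, program memory */
    uint8_t  operand_h  = data_mem[16];     /* data access, separate data memory */

    printf("%u %u %u %u\n", (unsigned)opcode_vn, (unsigned)operand_vn,
                            (unsigned)opcode_h,  (unsigned)operand_h);
    return 0;
}
```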
Step-by-Step Solution:
1) Instruction fetch: the program counter (PC) holds the address of the next instruction, which is read from memory.
2) Decode: the fetched opcode determines the operation and addressing mode.
3) Operand access: the CPU reads data from registers or from memory addresses computed by the addressing mode.
4) Execute and write-back: results may be stored to registers or memory; the PC advances to the next instruction (or to a branch target).
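The same cycle can be written as a toy interpreter. The sketch below assumes an invented accumulator machine with two-byte instructions and made-up opcodes (it matches no real ISA): every loop iteration performs an instruction fetch through the PC, and LOAD/ADD/STORE additionally access data memory.

```c
/* Minimal fetch-decode-execute loop for a hypothetical accumulator machine. */
#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD = 0, OP_ADD = 1, OP_STORE = 2, OP_HALT = 3 };

int main(void) {
    uint8_t mem[256] = {
        /* program: acc = mem[16]; acc += mem[17]; mem[18] = acc; halt */
        OP_LOAD, 16, OP_ADD, 17, OP_STORE, 18, OP_HALT, 0,
    };
    mem[16] = 5;                        /* data operands */
    mem[17] = 7;

    uint8_t pc = 0, acc = 0;
    for (;;) {
        uint8_t opcode  = mem[pc];      /* 1) instruction fetch via the PC        */
        uint8_t operand = mem[pc + 1];  /*    second byte of the encoding         */
        pc += 2;                        /* 4) PC advances to the next instruction */

        switch (opcode) {               /* 2) decode                              */
        case OP_LOAD:  acc = mem[operand];  break;         /* 3) operand read     */
        case OP_ADD:   acc += mem[operand]; break;         /* 3) operand read     */
        case OP_STORE: mem[operand] = acc;  break;         /* 4) write-back       */
        case OP_HALT:  printf("mem[18] = %u\n", (unsigned)mem[18]); return 0;
        }
    }
}
```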
Verification / Alternative check:
Pipeline diagrams across architectures (e.g., fetch–decode–execute) explicitly show instruction fetch and data memory stages. Caches hide latency but do not eliminate the conceptual fetches.
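As a small illustration of the cache point, the sketch below (a deliberately simplified direct-mapped cache with made-up sizes, not any real design) shows that every access still goes through fetch(); a hit merely answers the fetch without a trip to main memory.

```c
/* A cache does not remove fetches -- it only answers them faster. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define LINES 16

static uint8_t  memory[4096];
static uint8_t  cache_data[LINES];
static uint16_t cache_tag[LINES];
static bool     cache_valid[LINES];

/* Every instruction or data fetch still calls this function. */
static uint8_t fetch(uint16_t addr) {
    unsigned line = addr % LINES;
    if (cache_valid[line] && cache_tag[line] == addr) {
        return cache_data[line];          /* hit: fetch satisfied by the cache */
    }
    cache_data[line]  = memory[addr];     /* miss: fetch goes to main memory   */
    cache_tag[line]   = addr;
    cache_valid[line] = true;
    return cache_data[line];
}

int main(void) {
    memory[100] = 42;
    printf("%u %u\n", (unsigned)fetch(100), (unsigned)fetch(100)); /* miss, then hit */
    return 0;
}
```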
Why Other Options Are Wrong:
“Incorrect” denies standard CPU behavior. Options that restrict the claim to interrupt handling or to Harvard architectures miss the point: ordinary instruction execution (not just interrupt servicing) fetches code and accesses data in all mainstream designs, whether the instruction and data memories are unified or separate.
Common Pitfalls:
Confusing register-to-register instructions (which still require an instruction fetch) with memory operations; assuming that caches remove the need to fetch, when in fact they only satisfy fetches faster.
Final Answer:
Correct