Difficulty: Easy
Correct Answer: Repeatable read
Explanation:
Introduction / Context:
Understanding which anomalies are blocked at each isolation level is fundamental for designing correct transactional behavior under concurrency.
Given Data / Assumptions:
The question asks for the ANSI SQL isolation level that prevents dirty and nonrepeatable reads but still permits phantom reads.
Concept / Approach:
Repeatable read ensures that if a transaction reads a row twice, the values do not change between reads (no dirty or nonrepeatable reads). It does not, however, lock ranges in a way that always prevents new qualifying rows from appearing, so phantom reads can still occur (engine specifics aside).
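The distinction can be illustrated with a toy model (an illustrative sketch only, not a real DBMS engine): here the hypothetical engine snapshots each row on first read, but re-executes range predicates against live data, so re-reads are stable while phantoms remain possible.

```python
# Toy model of repeatable-read semantics. Illustration only:
# row values are cached on first read (no nonrepeatable reads),
# but range queries hit the live table (phantoms possible).

class Table:
    def __init__(self):
        self.rows = {}                    # row_id -> value

    def insert(self, row_id, value):
        self.rows[row_id] = value         # also models updates

class RepeatableReadTxn:
    """Caches each row on first read; range queries see live data."""
    def __init__(self, table):
        self.table = table
        self.row_cache = {}               # per-row snapshot

    def read_row(self, row_id):
        if row_id not in self.row_cache:
            self.row_cache[row_id] = self.table.rows[row_id]
        return self.row_cache[row_id]     # same value on every re-read

    def range_query(self, predicate):
        # No range locking: newly committed qualifying rows are visible.
        return sorted(r for r, v in self.table.rows.items() if predicate(v))

t = Table()
t.insert(1, 100)

txn = RepeatableReadTxn(t)
first_read = txn.read_row(1)                          # 100
first_range = txn.range_query(lambda v: v >= 100)     # [1]

# A concurrent transaction commits changes:
t.insert(1, 999)   # updates row 1
t.insert(2, 200)   # inserts a new qualifying row

second_read = txn.read_row(1)                         # still 100
second_range = txn.range_query(lambda v: v >= 100)    # [1, 2]: phantom!

print(first_read, second_read)    # -> 100 100 (no nonrepeatable read)
print(first_range, second_range)  # -> [1] [1, 2] (phantom appeared)
```

The re-read of row 1 returns the cached value even though a concurrent update committed, while the repeated range query picks up the newly inserted row 2, which is exactly the phantom anomaly.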
Step-by-Step Solution:
1. Read uncommitted permits dirty reads, so it is too weak.
2. Read committed blocks dirty reads but still allows a re-read of the same row to return a different value (nonrepeatable read).
3. Repeatable read additionally guarantees stable re-reads of rows already read, yet does not guarantee range stability, so phantoms remain possible.
4. Serializable would also block phantoms, which is more than the question requires. Repeatable read is therefore the answer.
Verification / Alternative check:
Refer to the ANSI SQL-92 isolation level definitions: REPEATABLE READ is the level that permits only the phantom anomaly. Individual DBMSs add nuances; for example, InnoDB's next-key locks prevent many phantoms even at repeatable read.
Why Other Options Are Wrong:
Read uncommitted allows dirty reads and read committed allows nonrepeatable reads, so both are too permissive; serializable also blocks phantoms, which is stricter than the question requires.
Common Pitfalls:
Assuming repeatable read blocks phantoms universally; guaranteed phantom prevention typically requires serializable isolation or explicit range/predicate locking.
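To see why blocking phantoms requires more machinery, the sketch below (again a toy model, not a real engine) adds predicate locks: a range query registers its predicate, and any concurrent insert matching a held predicate is rejected, mimicking serializable or next-key-lock behavior.

```python
# Toy model of predicate/range locking (as under serializable or
# InnoDB-style next-key locks). Illustration only.

class Table:
    def __init__(self):
        self.rows = {}
        self.predicate_locks = []   # predicates held by open transactions

    def insert(self, row_id, value):
        # A new row matching any held predicate lock is rejected,
        # so no phantom can appear for the locking transaction.
        if any(pred(value) for pred in self.predicate_locks):
            raise RuntimeError("blocked by predicate lock")
        self.rows[row_id] = value

class SerializableTxn:
    def __init__(self, table):
        self.table = table

    def range_query(self, predicate):
        self.table.predicate_locks.append(predicate)   # lock the range
        return sorted(r for r, v in self.table.rows.items() if predicate(v))

t = Table()
t.insert(1, 100)
txn = SerializableTxn(t)
print(txn.range_query(lambda v: v >= 100))   # -> [1]
try:
    t.insert(2, 200)   # concurrent insert into the locked range
except RuntimeError as e:
    print(e)           # -> blocked by predicate lock
```

Real engines implement this with index-range (next-key) locks or serialization-conflict detection rather than literal predicate lists, but the effect is the same: the range a transaction has read cannot gain new qualifying rows until it finishes.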
Final Answer:
Repeatable read