Concurrency control objective: If proper concurrency control is in place, simultaneous updates by multiple users will compromise the database due to user interference. Correct or incorrect?

Difficulty: Easy

Correct Answer: Incorrect

Explanation:


Introduction / Context:
Concurrency control allows multiple users to access and update shared data without violating correctness. This question checks whether a system with proper concurrency control in place is still “compromised” by user interference.



Given Data / Assumptions:

  • DBMSs employ locking, multiversion concurrency control (MVCC), or timestamp ordering.
  • Transactions follow ACID properties.
  • Isolation levels define visibility of uncommitted changes.



Concept / Approach:
With proper concurrency control, anomalies such as lost updates, dirty reads, non-repeatable reads, and phantom reads are prevented to the degree the chosen isolation level requires. Avoiding compromise due to interference is precisely the goal of these mechanisms, so asserting that simultaneous updates will still compromise the database under proper control is incorrect.
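
To make the lost-update anomaly concrete, here is a minimal single-process simulation using Python's standard-library sqlite3 module. The accounts table, IDs, and amounts are illustrative assumptions, and the two stale reads stand in for two concurrent users who each read before either writes:

    import sqlite3

    # Illustrative setup: one account with a starting balance of 100.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    db.execute("INSERT INTO accounts VALUES (1, 100)")
    db.commit()

    # Unsafe read-modify-write: two "users" read the same balance...
    bal_a = db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    bal_b = db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

    # ...each adds a deposit of 10 and writes back a stale value.
    db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (bal_a + 10,))
    db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (bal_b + 10,))
    db.commit()

    # Prints 110, not 120: the second write silently erased the first deposit.
    print(db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])

A DBMS with proper concurrency control forces the two read-modify-write sequences to serialize, so the second user sees the first user's committed balance instead of a stale one.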



Step-by-Step Solution:
  1. Define isolation needs (e.g., read committed, repeatable read, serializable).
  2. Configure the DBMS mechanisms (locks or MVCC) accordingly.
  3. Ensure transactions are properly bounded with BEGIN/COMMIT/ROLLBACK (a sketch follows this list).
  4. Validate via tests simulating concurrent updates; anomalies beyond the permitted level should not occur.
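
As a sketch of the transaction-bounding step, the function below wraps a deposit in explicit BEGIN/COMMIT/ROLLBACK statements, again using Python's sqlite3 as a stand-in engine; the accounts(id, balance) schema and the function name are assumptions for illustration:

    import sqlite3

    def deposit(db_path: str, account_id: int, amount: int) -> None:
        # isolation_level=None disables the module's implicit transactions,
        # so the BEGIN/COMMIT/ROLLBACK statements below are fully explicit.
        con = sqlite3.connect(db_path, timeout=5.0, isolation_level=None)
        try:
            con.execute("BEGIN IMMEDIATE")  # claim the write lock up front
            con.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, account_id),
            )
            con.execute("COMMIT")           # publish the change atomically
        except sqlite3.Error:
            if con.in_transaction:
                con.execute("ROLLBACK")     # undo everything on failure
            raise
        finally:
            con.close()

Note that the single atomic UPDATE here also sidesteps the read-modify-write race on its own; the explicit transaction boundaries matter once several statements must succeed or fail together.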



Verification / Alternative check:
Run a lost-update test under serializable isolation; the DBMS prevents both updates from silently overwriting each other.
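
A hedged version of that test, again on sqlite3 (SQLite serializes writers rather than offering selectable isolation levels, so BEGIN IMMEDIATE plays the role of serializable isolation here; the schema and amounts are illustrative). Each thread deliberately performs the risky read-modify-write pattern:

    import os
    import sqlite3
    import tempfile
    import threading

    path = os.path.join(tempfile.mkdtemp(), "bank.db")
    setup = sqlite3.connect(path)
    setup.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    setup.execute("INSERT INTO accounts VALUES (1, 100)")
    setup.commit()
    setup.close()

    def deposit_10():
        con = sqlite3.connect(path, timeout=5.0, isolation_level=None)
        con.execute("BEGIN IMMEDIATE")  # second writer blocks here until the first commits
        (bal,) = con.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        con.execute("UPDATE accounts SET balance = ? WHERE id = 1", (bal + 10,))
        con.execute("COMMIT")
        con.close()

    threads = [threading.Thread(target=deposit_10) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    check = sqlite3.connect(path)
    (final,) = check.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    assert final == 120, f"lost update: balance is {final}"
    print("no lost update: balance =", final)

Because the second BEGIN IMMEDIATE waits for the first transaction to commit, both deposits land and the assertion passes; without that serialization the balance could end at 110.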



Why Other Options Are Wrong:
Answering “Correct” contradicts the very purpose of concurrency control. Likewise, “safe only with full table locks” overstates the requirement: row, page, and key-range locks, as well as MVCC, provide the same protection without serializing all access to a table.



Common Pitfalls:
Relying on autocommit for multi-statement work; long-running transactions that hold locks; and misunderstanding isolation trade-offs, which leads to blocked sessions or deadlocks.
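
For instance, the autocommit pitfall: when each statement commits on its own, a failure between the two halves of a transfer leaves the database half-updated. A minimal sketch, assuming the same illustrative accounts table as above:

    import sqlite3

    con = sqlite3.connect("bank.db", isolation_level=None)  # autocommit: each statement commits alone
    con.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")  # commits immediately
    # If the process dies here, the debit is durable but the matching
    # credit below never happens; an explicit transaction would roll both back.
    con.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")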



Final Answer:
Incorrect
