Storage organization terms
In file/page storage, the “blocking factor” refers to:

Difficulty: Easy

Correct Answer: The number of physical records that fit on one page or block.

Explanation:


Introduction / Context:
On-disk layout influences performance. Many DBMSs store data in fixed-size pages or blocks. The blocking factor captures how densely records pack into those pages, directly affecting I/O costs for scans and lookups. This question checks your recall of that definition.


Given Data / Assumptions:

  • Data is stored in fixed-size pages/blocks (e.g., 8 KB).
  • Each row has a physical length including overhead.
  • Free space and fragmentation can reduce effective packing.


Concept / Approach:

Blocking factor = floor(page_size / record_size_with_overhead), assuming an unspanned organization in which records do not cross page boundaries (hence the floor). Higher values mean more rows per I/O and typically better sequential-scan efficiency; very wide rows reduce the factor and increase the I/O cost of a query.
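As a concrete illustration, here is a minimal Python sketch of that computation; the function name and the sample sizes are hypothetical choices for this example, not taken from any particular DBMS.

    def blocking_factor(page_size_bytes: int, record_size_bytes: int) -> int:
        """Records per page under an unspanned layout; partial records do not count."""
        if record_size_bytes <= 0 or record_size_bytes > page_size_bytes:
            raise ValueError("record size must be positive and fit within one page")
        return page_size_bytes // record_size_bytes  # floor division

    # Illustrative values: an 8 KB page and 120-byte records.
    print(blocking_factor(8192, 120))  # -> 68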


Step-by-Step Solution:

1) Identify the page size and the average record size.
2) Compute how many complete records fit → the blocking factor.
3) Recognize its impact on buffer hits and disk reads.
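For instance, with an 8 KB (8192-byte) page and an average record length of 120 bytes including per-row overhead (illustrative numbers), floor(8192 / 120) = 68, so each block holds 68 complete records and a sequential scan retrieves 68 rows per disk read.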


Verification / Alternative check:

Database storage references and file-organization texts define the term exactly this way; it is not about keys or logical grouping but about physical packing density.


Why Other Options Are Wrong:

  • Primary/secondary key groupings describe logical schema design, not physical page packing.
  • “Column family” applies to certain NoSQL (wide-column) models and is unrelated to the classic blocking factor.
  • The number of indexes on a table is a separate design choice with no connection to this definition.


Common Pitfalls:

  • Ignoring row overhead (headers, alignment) when estimating packing; the sketch below shows how overhead shrinks the factor.
  • Assuming larger pages always help; they can hurt random access by reading more unneeded data per I/O.
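To make the first pitfall concrete, here is a hedged variant of the earlier sketch that reserves a fixed page header and charges each row a per-row overhead before dividing; the specific byte counts are assumptions chosen for illustration, not the layout of any real engine.

    def blocking_factor_with_overhead(page_size: int, row_payload: int,
                                      page_header: int = 24, row_overhead: int = 23) -> int:
        """Usable space = page minus its header; each row pays header/alignment costs.
        The 24- and 23-byte defaults are illustrative assumptions only."""
        usable = page_size - page_header
        return usable // (row_payload + row_overhead)

    # Same 8 KB page and 120-byte payload as before: overhead cuts packing from 68 to 57.
    print(blocking_factor_with_overhead(8192, 120))  # -> 57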


Final Answer:

The number of physical records that fit on one page or block.
