Difficulty: Easy
Correct Answer: Correct
Explanation:
Introduction / Context:
Understanding how bridges and switches affect latency is a foundational networking topic. A bridge examines incoming frames, consults its MAC address table, and decides whether to forward, filter, or flood. This processing, along with store-and-forward buffering or cut-through behaviors, adds a small but measurable delay compared with a simple shared medium. Recognizing this helps with performance tuning and realistic expectations when adding devices to a path.
Given Data / Assumptions:
The statement under evaluation is that inserting a bridge (or switch) into a network path adds delay. Assume standard transparent bridging: the device learns MAC addresses and makes a per-frame forwarding decision. Only the bridge's own processing is considered; congestion-related queueing is treated separately.
Concept / Approach:
A bridge must at minimum read enough of the frame header to determine the output port. In store-and-forward mode, it receives the entire frame and checks the FCS before forwarding, inherently adding latency proportional to frame length and inversely proportional to line rate. Even cut-through switching, which forwards after reading the destination MAC, incurs serialization and lookup delay. Therefore, bridging adds a small processing delay compared with a passive medium, though the delay is typically sub-millisecond on modern hardware.
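The serialization relationship above can be sketched numerically. A minimal example; the 1518-byte frame size, link rates, and 6-byte destination-address read are illustrative assumptions, not values from the question:

```python
# Store-and-forward: the whole frame must arrive (and its FCS be checked)
# before the first bit can be forwarded, so the added delay is at least
# frame_bits / line_rate.
def store_and_forward_delay(frame_bytes: int, rate_bps: float) -> float:
    """Seconds to receive the entire frame before forwarding can begin."""
    return frame_bytes * 8 / rate_bps

# Cut-through: forwarding can begin once the destination MAC
# (first 6 bytes of the header) has been read.
def cut_through_delay(rate_bps: float, header_bytes: int = 6) -> float:
    """Seconds to receive just enough of the header to pick an output port."""
    return header_bytes * 8 / rate_bps

# A maximum-size 1518-byte Ethernet frame (illustrative):
print(store_and_forward_delay(1518, 100e6))  # ~121 microseconds at 100 Mb/s
print(store_and_forward_delay(1518, 1e9))    # ~12 microseconds at 1 Gb/s
print(cut_through_delay(1e9))                # ~48 nanoseconds at 1 Gb/s
```

Note how the store-and-forward figure shrinks as line rate grows, matching the inverse-proportionality described above, while cut-through delay depends only on the header bytes that must be read.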
Step-by-Step Solution:
1. A frame arrives on an ingress port and is received at line rate.
2. The bridge reads the header and looks up the destination MAC in its address table.
3. In store-and-forward mode, the entire frame is buffered and the FCS is verified before any bits are transmitted; cut-through begins forwarding after reading the destination address.
4. The frame is then serialized out the egress port.
Each step consumes time, so a bridged path necessarily carries more delay than a passive medium; the statement is therefore correct.
Verification / Alternative check:
Empirical tests with back-to-back latency measurements show a few microseconds to tens of microseconds of added delay per switch hop, depending on architecture, speed (Fast/Gigabit/10G), and features (QoS, ACLs). Passive components (cables, unmanaged splitters on older media) do not make forwarding decisions and thus do not add comparable processing delay.
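The per-hop figures accumulate over a multi-switch path. A rough estimate; the 10 µs per-hop value is an assumed illustrative figure within the microsecond range cited above, not a measurement:

```python
def path_added_delay(hops: int, per_hop_seconds: float = 10e-6) -> float:
    """Total switch-added processing delay over a path.

    per_hop_seconds is an assumed illustrative per-hop figure; real values
    vary with architecture, speed, and enabled features (QoS, ACLs).
    """
    return hops * per_hop_seconds

# Three switch hops at an assumed 10 us each:
print(path_added_delay(3))  # 3e-05 s, i.e. 30 microseconds
```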
Why Other Options Are Wrong:
Answering "Incorrect" would require a bridge to forward frames with zero processing time. That contradicts its basic operation: it must at least read the destination address and consult its MAC table, and in store-and-forward mode it must buffer and verify the entire frame before transmitting.
Common Pitfalls:
Assuming “negligible” means “zero”; ignoring queueing delay under congestion; confusing propagation delay (speed of signal in cable) with processing delay (device time to make decisions).
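The propagation-versus-processing distinction can be made concrete. A sketch assuming a typical copper velocity factor of about 0.66c; the 100 m cable length and velocity factor are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_delay(length_m: float, velocity_factor: float = 0.66) -> float:
    """Time for the signal to traverse the cable itself (no device involved)."""
    return length_m / (velocity_factor * C)

# 100 m of copper: roughly half a microsecond of propagation delay, present
# with or without a bridge. Processing delay is what the bridge adds on top.
print(propagation_delay(100.0))
```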
Final Answer:
Correct