Bit lengths and representation — comparing IPv4 and IPv6 address formats

Which statement pair about address length and common representation is correct for IPv4 and IPv6?

Difficulty: Easy

Correct Answer: 3 and 4

Explanation:


Introduction / Context:
Address size and human-readable representation differ between IPv4 and IPv6. Knowing the bit length and typical textual form is foundational for subnetting, ACLs, and troubleshooting. This question checks recognition of the correct pairing: IPv4 uses 32 bits and decimal dotted-quad notation, while IPv6 uses 128 bits and hexadecimal colon-separated notation.


Given Data / Assumptions:

  • We consider the canonical and most common text representations.
  • Compressed forms for IPv6 (for example, ::) are allowed but still hexadecimal.
  • No unusual binary or base-85 display formats are considered.


Concept / Approach:
IPv4 addresses are 32 bits and are usually printed as four decimal octets separated by dots (for example, 192.0.2.1). IPv6 addresses are 128 bits and are usually shown as eight 16-bit hexadecimal hextets separated by colons (for example, 2001:db8::1). Therefore, the correct statements are: “An IPv4 address is 32 bits long, represented in decimal” and “An IPv6 address is 128 bits long, represented in hexadecimal.”
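The pairing above can be checked directly with Python's standard-library `ipaddress` module, which reports each family's bit length via `max_prefixlen` and prints the conventional text form (dotted decimal for IPv4, compressed colon-hex for IPv6). This is a minimal illustrative sketch, not part of the original question:

```python
import ipaddress

# IPv4: 32 bits, conventionally printed as four decimal octets
v4 = ipaddress.ip_address("192.0.2.1")
print(v4.max_prefixlen)  # 32
print(str(v4))           # 192.0.2.1

# IPv6: 128 bits, conventionally printed as colon-separated hexadecimal
# (Python emits the compressed "::" form by default)
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.max_prefixlen)  # 128
print(str(v6))           # 2001:db8::1
```

The same module also validates input, so it will reject a 32-bit value written in IPv6 notation or vice versa.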


Step-by-Step Solution:
  1. Eliminate options claiming IPv6 is 32 bits or that IPv6 uses decimal by default.
  2. Select the combination that matches IPv4 32-bit decimal and IPv6 128-bit hexadecimal.
  3. Confirm consistency with standard RFCs and vendor documentation.


Verification / Alternative check:
Refer to any subnetting calculator or routing protocol documentation; they uniformly treat IPv4 as 32-bit dotted-decimal and IPv6 as 128-bit hexadecimal with optional compression rules.


Why Other Options Are Wrong:

  • A/B/D: Each includes at least one false claim about address size or representation.
  • E: Not all statements are correct; several are incorrect by definition.


Common Pitfalls:
Misreading “decimal” vs “hexadecimal” and assuming textual decimal implies decimal storage. Internally, both are binary; the question is about conventional notation.
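The "binary internally, decimal or hexadecimal only in print" distinction can be made concrete: both address families convert to plain integers, and the notation is just a formatting choice. A small sketch using Python's `ipaddress` module (illustrative, not from the original question):

```python
import ipaddress

# Both families are stored as binary integers; decimal vs. hexadecimal
# is only the conventional *text* notation.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(int(v4))                  # the same address as one integer: 3221225985
print(format(int(v4), "032b"))  # its underlying 32-bit binary pattern
print(int(v6).bit_length() <= 128)  # an IPv6 address fits in 128 bits: True
```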


Final Answer:
3 and 4
