Difficulty: Easy
Correct Answer: Incorrect — use repeated division by 2 for decimal→binary
Explanation:
Introduction / Context:
Converting numbers between bases is a foundational digital-logic and computer-architecture skill. For decimal (base 10) to binary (base 2) conversion, there is a well-known manual “short division” procedure. This question checks whether repeated division by 10 is the correct approach for decimal-to-binary conversion.
Given Data / Assumptions:
The statement under test claims that repeatedly dividing a decimal integer by 10 converts it to binary. We assume non-negative integers; fractional parts use a separate procedure.
Concept / Approach:
The standard method for converting an integer N from decimal to binary uses repeated division by the target base (2). At each step, you note the remainder (0 or 1), then divide the quotient by 2 again. After the quotient becomes zero, the binary digits are the remainders read in reverse order. Division by 10 is unrelated to producing base-2 digits and would not yield correct binary bits.
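The procedure above can be sketched as a short Python function; the name `to_binary` is illustrative, not from the original text:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string via repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)  # new quotient, and a remainder of 0 or 1
        bits.append(str(r))
    return "".join(reversed(bits))  # remainders are read in reverse order
```

Note that the divisor is the target base, 2; dividing by 10 would only peel off decimal digits.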
Step-by-Step Solution:
1. Divide N by 2 and record the remainder (0 or 1).
2. Replace N with the quotient and repeat until the quotient becomes 0.
3. Read the recorded remainders in reverse order to obtain the binary representation.
Verification / Alternative check:
Try N = 13: 13/2 → q=6 r=1; 6/2 → q=3 r=0; 3/2 → q=1 r=1; 1/2 → q=0 r=1. Reading remainders backward yields 1101, which is correct for 13. Any attempt to divide by 10 produces decimal digits, not binary bits.
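The N = 13 trace above can be reproduced programmatically; this is a minimal sketch that records each (dividend, quotient, remainder) step:

```python
n = 13
steps = []
while n > 0:
    q, r = divmod(n, 2)  # quotient and remainder for this step
    steps.append((n, q, r))
    n = q

# Matches the manual trace: 13/2 -> q=6 r=1; 6/2 -> q=3 r=0; 3/2 -> q=1 r=1; 1/2 -> q=0 r=1
binary = "".join(str(r) for _, _, r in reversed(steps))
```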
Why Other Options Are Wrong:
"Correct" is wrong because dividing by 10 merely strips off decimal digits one at a time; it cannot produce base-2 digits. To generate digits in a target base, the divisor must equal that base.
Common Pitfalls:
Mixing up methods for integer vs. fractional parts; forgetting to reverse the remainder sequence; dropping leading zeros in fixed-width formats without considering word size.
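The first two pitfalls can be illustrated with a short sketch: fractional parts use repeated multiplication by 2 (not division), and fixed-width output needs explicit zero padding. The function name `frac_to_binary` and the 8-bit width are illustrative assumptions:

```python
def frac_to_binary(x: float, places: int = 8) -> str:
    """Convert a fraction 0 <= x < 1 to binary via repeated multiplication by 2."""
    bits = []
    for _ in range(places):
        x *= 2
        bit = int(x)       # the integer part becomes the next bit
        bits.append(str(bit))
        x -= bit           # keep only the fractional part for the next step
    return "".join(bits)

# Fixed-width formats: pad with leading zeros to the word size (here, 8 bits)
padded = format(13, "08b")
```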
Final Answer:
Incorrect — use repeated division by 2 for decimal→binary