C programming and IEEE-754 float on little-endian (Intel)

The binary equivalent of 5.375 in normalized IEEE-754 single-precision form is:

0100 0000 1010 1100 0000 0000 0000 0000

Given the following C program, what output bytes (one per line, hex, least significant address first on Intel) will be printed?

#include <stdio.h>

int main() {
    float a = 5.375;
    char *p;
    int i;
    p = (char *)&a;
    for (i = 0; i <= 3; i++)
        printf("%02x\n", (unsigned char)p[i]);
    return 0;
}

Difficulty: Medium

Correct Answer: 00 00 AC 40

Explanation:


Introduction / Context:
In C programming, the in-memory representation of floating-point numbers follows the IEEE-754 standard (for typical desktop compilers), and the byte order depends on the machine's endianness. Intel architectures are little-endian, meaning the least significant byte is stored at the lowest memory address. This question tests your understanding of IEEE-754 single-precision encoding and how pointer-based byte inspection reveals the byte order on Intel systems.


Given Data / Assumptions:

  • Value: a = 5.375 (single precision float).
  • IEEE-754 single precision: 1 sign bit, 8 exponent bits, 23 fraction bits.
  • Normalized binary given: 0100 0000 1010 1100 0000 0000 0000 0000.
  • Platform: Intel (little-endian).
  • Code prints p[0], p[1], p[2], p[3] with %02x and a newline after each.


Concept / Approach:

IEEE-754 single precision encodes 5.375 as sign = 0, exponent = 129, fraction chosen so that 1.01011 * 2^2 = 5.375. In hex, this bit pattern is 0x40AC0000. On a big-endian system, bytes would appear in memory as 40 AC 00 00. On a little-endian system (Intel), the order in memory is reversed at the byte level, so the sequence read through a char* from lowest address becomes 00 00 AC 40.


Step-by-Step Solution:

1) Normalize 5.375: 5.375 = 101.011 (binary) = 1.01011 * 2^2.
2) Exponent field = 127 + 2 = 129 = 0x81; sign = 0.
3) Fraction (mantissa) bits after the leading 1 are 01011 followed by zeros.
4) Full 32-bit pattern = 0x40AC0000 (matches the given binary: 0100 0000 1010 1100 0000 0000 0000 0000).
5) Big-endian byte layout would be: 40 AC 00 00.
6) Intel is little-endian, so in memory the byte order is reversed: 00 00 AC 40.
7) The loop prints p[0], p[1], p[2], p[3] as two-digit lowercase hex, each on its own line, giving the sequence 00, 00, ac, 40 (conventionally written 00 00 AC 40).


Verification / Alternative check:

Cross-check with a float-to-hex table or by constructing the float: sign 0, exponent 129 (binary 1000 0001), fraction 010 1100 0000 0000 0000 0000 (23 bits). Group into nibbles to confirm 0x40AC0000. Reversing byte order for little-endian confirms the printed order 00 00 AC 40.


Why Other Options Are Wrong:

40 AC 00 00: This is the big-endian order, not what p[0]..p[3] prints on Intel.

04 CA 00 00: Digits are permuted; exponent and fraction bits no longer match 5.375.

00 00 CA 04: Byte values and order do not correspond to 0x40AC0000 reversed.


Common Pitfalls:

  • Confusing value endianness (byte order) with bit order; IEEE-754 bit layout is fixed, but memory byte order can differ by architecture.
  • Assuming printf on a float prints bytes directly; here we explicitly cast the float's address to char* and index bytes.
  • Forgetting that Intel is little-endian, so p[0] is the least significant byte.


Final Answer:

00 00 AC 40
