**2.3.2 NORMALIZATION AND THE HIDDEN BIT**

A potential problem with representing floating point numbers is that the same number can be represented in different ways, which makes comparisons and arithmetic operations difficult. For example, consider the numerically equivalent forms shown below:

3584.1 × 10⁰ = 3.5841 × 10³ = .35841 × 10⁴.

In order to avoid multiple representations of the same number, floating point numbers are maintained in normalized form. That is, the radix point is shifted to the left or to the right, and the exponent is adjusted accordingly, until the radix point is to the left of the leftmost nonzero digit. So the rightmost number above is the normalized one. Unfortunately, the number zero cannot be represented in this scheme, so an exception is made for it: zero is represented as all 0's in the mantissa.
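The normalization step described above can be sketched in a few lines of code. The function below is our own illustration (not from the text): it repeatedly shifts the radix point and adjusts the exponent until the mantissa m satisfies 1/base ≤ m < 1, i.e. the radix point sits just to the left of the leading nonzero digit, with zero handled as the special case noted above.

```python
def normalize(value, base=2):
    """Return (mantissa, exponent) with value == mantissa * base**exponent
    and 1/base <= mantissa < 1, except for the special case of zero."""
    if value == 0:
        # Zero has no leading nonzero digit, so it cannot be normalized;
        # it is represented as an all-zero mantissa.
        return 0.0, 0
    mantissa, exponent = value, 0
    while mantissa >= 1:            # radix point moves left
        mantissa /= base
        exponent += 1
    while mantissa < 1 / base:      # radix point moves right
        mantissa *= base
        exponent -= 1
    return mantissa, exponent

# The decimal example from the text: 3584.1 normalizes to .35841 x 10^4
m, e = normalize(3584.1, base=10)
print(m, e)   # approximately 0.35841, and 4
```

Note that the repeated divisions accumulate a small rounding error in binary floating point, so the printed mantissa is only approximately 0.35841.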

If the mantissa is represented as a binary, that is, base 2, number, and if the normalization condition is that there is a leading "1" in the normalized mantissa, then there is no need to store that "1" and in fact, most floating point formats do not store it. Rather, it is "chopped off" before packing up the number for storage, and it is restored when unpacking the number into exponent and mantissa. This results in an additional bit of precision on the right of the number, due to removing the bit on the left. This missing bit is referred to as the hidden bit, also known as a hidden 1. For example, if the mantissa in a given format is .11010 after normalization, then the bit pattern that is stored is 1010—the leftmost bit is truncated, or hidden. We will see that the IEEE 754 floating point format uses a hidden bit.
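The unpacking step, restoring the hidden 1, can be seen directly in IEEE 754 single-precision format. The sketch below (our own illustration, using Python's standard `struct` module) splits a 32-bit float into its 1-bit sign, 8-bit exponent, and 23-bit stored fraction, then restores the hidden bit as a 24th significand bit for normalized numbers:

```python
import struct

def unpack_float(x):
    """Split an IEEE 754 single-precision value into its fields and
    restore the hidden bit of the significand."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF          # the 23 stored bits; hidden 1 removed
    # For normalized numbers (exponent field neither all 0's nor all 1's),
    # restore the hidden leading 1 as the 24th significand bit.
    if 0 < exponent < 255:
        significand = fraction | (1 << 23)
    else:
        significand = fraction          # zero/subnormal/infinity/NaN cases
    return sign, exponent, significand

s, e, f = unpack_float(1.0)
print(s, e, bin(f))   # 0 127 0b100000000000000000000000
```

The stored fraction of 1.0 is all zeros, yet the restored significand begins with a 1: that leading bit is the hidden bit, carried implicitly by the format rather than stored.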
