In the early days of computing, there were two common misconceptions about computers. The first was that the computer was only a giant adding machine performing arithmetic operations. Computers could do much more than that, even in the early days. The second misconception, in contradiction to the first, was that the computer could do "anything." We now know that there are indeed classes of problems that even the most powerful imaginable computer finds intractable with the von Neumann model. The correct perception, of course, is somewhere between the two.
We are familiar with computer operations that are non-arithmetic: computer graphics, digital audio, even the manipulation of the computer mouse. Regardless of what kind of information is being manipulated by the computer, the information must be represented by patterns of 1's and 0's (also known as "on-off" codes).
This immediately raises the question of how that information should be described or represented in the machine—this is the data representation, or data encoding. Graphical images, digital audio, or mouse clicks must all be encoded in a systematic, agreed-upon manner.
We might think of decimal as the most natural representation of information because it is the one we know best, but the use of on-off codes to represent information predated the computer by many years, in the form of Morse code.
This chapter introduces several of the simplest and most important encodings: the encoding of signed and unsigned fixed point numbers, real numbers (referred to as floating point numbers in computer jargon), and the printing characters. We shall see that in all cases there are multiple ways of encoding a given kind of data, some useful in one context, some in another. We will also take an early look at computer arithmetic for the purpose of understanding some of the encoding schemes, though we will defer details of computer arithmetic until Chapter 3.
In the process of developing a data representation for computing, a crucial issue is deciding how much storage should be devoted to each data value. For example, a computer architect may decide to treat integers as being 32 bits in size, and to implement an ALU that supports arithmetic operations on those 32-bit values
that return 32-bit results. Some numbers can be too large to represent using 32 bits, however, and in other cases, the operands may fit into 32 bits, but the result of a computation will not, creating an overflow condition, which is described in Chapter 3. Thus we need to understand the limits imposed on the accuracy and range of numeric calculations by the finite nature of the data representations. We will investigate these limits in the next few sections.