## Thursday, December 22, 2011

### Computer Architecture # 03 : Arithmetic: HIGH PERFORMANCE ADDITION (13)

The ripple-carry adder that we reviewed in Section 3.2.2 may introduce too much delay into a system. The longest path through the adder is from the inputs of the least significant full adder to the outputs of the most significant full adder.
The process of summing the inputs at each bit position is relatively fast (a small two-level circuit suffices), but the carry propagation takes a long time to work its way through the circuit. In fact, the propagation time is proportional to the number of bits in the operands. This is unfortunate, since more significant figures in an addition translate to more time to perform the addition.
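The ripple effect is easy to see in software. Below is a minimal Python sketch (the function names are my own, not from the text) of a four-bit ripple-carry adder: each full adder's carry-out feeds the next position's carry-in, so the leftmost sum bit cannot settle until the carry has worked its way across all of the lower positions.

```python
def full_adder(a, b, cin):
    """One full adder stage: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first).

    The carry ripples left, one stage at a time, which is exactly
    why the delay grows with the number of bits in the operands.
    """
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 0110 (6) + 0011 (3) = 1001 (9), written LSB first below
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))  # ([1, 0, 0, 1], 0)
```

Each loop iteration stands in for one full-adder stage, so an n-bit addition takes n dependent carry steps, mirroring the hardware's linear delay.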

### Computer Architecture # 03 : Arithmetic: HIGH PERFORMANCE ARITHMETIC (12)

3.5 HIGH PERFORMANCE ARITHMETIC
For many applications, the speed of arithmetic operations is the bottleneck to performance. Most supercomputers, such as the Cray, the Tera, and the Intel Hypercube, are considered “super” because they excel at performing fixed and floating point arithmetic. In this section we discuss a number of ways to improve the speed of addition, subtraction, multiplication, and division.

### Computer Architecture # 03 : Arithmetic: FLOATING POINT MULTIPLICATION AND DIVISION (12)

3.4.2 FLOATING POINT MULTIPLICATION AND DIVISION
Floating point multiplication and division are performed in a manner similar to floating point addition and subtraction, except that the sign, exponent, and fraction of the result can be computed separately. If the operands have the same sign, then the sign of the result is positive. Unlike signs produce a negative result. The exponent of the result before normalization is obtained by adding the exponents of the source operands for multiplication, or by subtracting the divisor exponent from the dividend exponent for division. The fractions are multiplied or divided according to the operation, followed by normalization.
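As a sketch of the idea, here is the multiplication case in Python, with the three fields handled independently as the text describes. The representation (a plain `(sign, exponent, fraction)` triple with the fraction normalized to the range [.5, 1)) and the function name are my own simplifications, not the book's hardware.

```python
def fp_multiply(sign_a, exp_a, frac_a, sign_b, exp_b, frac_b):
    """Multiply two floating point values given as (sign, exponent, fraction).

    Assumes nonzero fractions already normalized into [0.5, 1).
    """
    sign = sign_a ^ sign_b      # like signs give 0 (positive), unlike give 1
    exp = exp_a + exp_b         # exponents add for multiplication
    frac = frac_a * frac_b      # fractions multiply; result lands in [0.25, 1)
    while frac < 0.5:           # normalize back into [0.5, 1)
        frac *= 2
        exp -= 1
    return sign, exp, frac

# (+.1 x 2^3) * (-.1 x 2^2): signs differ, exponents add, fractions multiply
print(fp_multiply(0, 3, 0.5, 1, 2, 0.5))  # (1, 4, 0.5), i.e. -.1 x 2^4
```

For division, the same skeleton applies with the exponents subtracted (dividend minus divisor) and the fractions divided before the normalization step.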

## Monday, October 10, 2011

### Computer Architecture # 03 : Arithmetic: FLOATING POINT ADDITION AND SUBTRACTION (10)

3.4.1 FLOATING POINT ADDITION AND SUBTRACTION
Floating point arithmetic differs from integer arithmetic in that exponents must be handled as well as the magnitudes of the operands. As in ordinary base 10 arithmetic using scientific notation, the exponents of the operands must be made equal for addition and subtraction. The fractions are then added or subtracted as appropriate, and the result is normalized.
This process of adjusting the fractional part, and also rounding the result, can lead to a loss of precision in the result. Consider the unsigned floating point addition (.101 × 2³ + .111 × 2⁴) in which the fractions have three significant digits. We start by adjusting the smaller exponent to be equal to the larger exponent, and adjusting the fraction accordingly.
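The alignment step and the precision loss it causes can be sketched in a few lines of Python. Here fractions are held as integers of three fraction bits (so `0b101` stands for .101), and I assume simple truncation of the bits shifted out; the function name and rounding policy are my own illustrative choices.

```python
def align_and_add(frac_a, exp_a, frac_b, exp_b, bits=3):
    """Add two unsigned floating point values whose fractions hold
    `bits` significant bits (e.g. 0b101 stands for .101)."""
    if exp_a < exp_b:                   # shift the smaller operand right...
        frac_a >>= (exp_b - exp_a)      # ...bits shifted out are lost
        exp_a = exp_b
    else:
        frac_b >>= (exp_a - exp_b)
        exp_b = exp_a
    total = frac_a + frac_b
    while total >= (1 << bits):         # fraction overflowed: renormalize,
        total >>= 1                     # again truncating the low bit
        exp_a += 1
    return total, exp_a

# .101 x 2^3 + .111 x 2^4 with three significant bits
print(align_and_add(0b101, 3, 0b111, 4))  # (0b100, 5), i.e. .100 x 2^5
```

The exact sum is 5 + 14 = 19 = .10011 × 2⁵, but the three-digit result .100 × 2⁵ represents 16: the bits truncated during alignment and renormalization are the loss of precision the text warns about.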

### Computer Architecture # 03 : Arithmetic: FLOATING POINT ARITHMETIC (10)

3.4 Floating Point Arithmetic
Arithmetic operations on floating point numbers can be carried out using the fixed point arithmetic operations described in the previous sections, with attention given to maintaining aspects of the floating point representation. In the sections that follow, we explore floating point arithmetic in base 2 and base 10, keeping the requirements of the floating point representation in mind.

## Friday, September 16, 2011

### Computer Architecture # 03 : Arithmetic: SIGNED MULTIPLICATION AND DIVISION (9)

3.3.3 SIGNED MULTIPLICATION AND DIVISION
If we apply the multiplication and division methods described in the previous sections to signed integers, then we will run into some trouble. Consider multiplying −1 by +1 using four-bit words, as shown in the left side of Figure 3-16.
The eight-bit equivalent of +15 is produced instead of −1. What went wrong is that the sign bit did not get extended to the left in the result. This is not a problem for a positive result, because the high-order bits default to 0, producing the correct sign bit 0.
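The problem is easy to reproduce in Python, since Python integers let us model fixed-width two's complement patterns directly (the helper name below is my own):

```python
def to_unsigned(x, bits):
    """The two's complement bit pattern of x in the given width,
    viewed as an unsigned integer."""
    return x & ((1 << bits) - 1)

# -1 in four bits is 1111; multiplying the raw patterns treats it as +15
a = to_unsigned(-1, 4)
b = to_unsigned(+1, 4)
print(bin(a * b))            # 0b1111, read as +15 in eight bits -- wrong

# Sign-extending the multiplicand to the full eight-bit width first
# fills the high-order bits with the sign bit and gives the right answer
a_ext = to_unsigned(-1, 8)   # 11111111
print(bin((a_ext * b) & 0xFF))  # 0b11111111, which is -1 in eight bits
```

This is the same fix the text describes: the sign bit of a negative operand must be extended left across the full width of the result before the partial products are accumulated.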

## Saturday, September 10, 2011

### Computer Architecture # 03 : Arithmetic: UNSIGNED DIVISION (8)

3.3.2 UNSIGNED DIVISION
In longhand binary division, we must successively attempt to subtract the divisor from the dividend, using as few bits of the dividend as we can.
Figure 3-13 illustrates this point by showing that (11)₂ does not “fit” in 0 or 01, but does fit in 011, as indicated by the pattern 001 that starts the quotient.
Computer-based division of binary integers can be handled similarly to the way that binary integer multiplication is carried out, but with the complication that the only way to tell whether the dividend does not “fit” is to actually do the subtraction and test whether the remainder is negative. If the remainder is negative, then the subtraction must be “backed out” by adding the divisor back in, as described below.
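The subtract-test-restore loop can be sketched in Python as a restoring division routine (the function name and the fixed word width are my own choices for illustration):

```python
def restoring_divide(dividend, divisor, bits=8):
    """Unsigned restoring division; returns (quotient, remainder)."""
    remainder, quotient = 0, 0
    for i in range(bits - 1, -1, -1):
        # Bring down the next dividend bit, most significant first
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor            # trial subtraction
        if remainder < 0:
            remainder += divisor        # divisor did not "fit": back it out
            quotient = quotient << 1    # and record a 0 quotient bit
        else:
            quotient = (quotient << 1) | 1  # it fit: record a 1
    return quotient, remainder

print(restoring_divide(7, 3))  # (2, 1)
```

The `remainder += divisor` line is exactly the "backing out" step: the only way the routine discovers that the divisor does not fit is by doing the subtraction and finding the result negative.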

### Computer Architecture # 03 : Arithmetic: UNSIGNED MULTIPLICATION (7)

3.3.1 UNSIGNED MULTIPLICATION
Multiplication of unsigned binary integers is handled similarly to the way it is carried out by hand for decimal numbers. Figure 3-10 illustrates the multiplication process for two unsigned binary integers. Each bit of the multiplier determines whether or not the multiplicand, shifted left according to the position of the multiplier bit, is added into the product. When two unsigned n-bit numbers are multiplied, the result can be as large as 2n bits. For the example shown in Figure 3-10, the multiplication of two four-bit operands results in an eight-bit product.
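The shift-and-add process described above can be sketched directly in Python (the function name is my own; this mirrors the process in Figure 3-10 rather than reproducing its exact layout):

```python
def shift_add_multiply(multiplicand, multiplier, bits=4):
    """Unsigned shift-and-add multiplication of two `bits`-wide integers."""
    product = 0
    for i in range(bits):
        if (multiplier >> i) & 1:            # is bit i of the multiplier set?
            product += multiplicand << i     # add multiplicand shifted left by i
    return product

# Two four-bit operands, 1101 (13) x 1011 (11), give an eight-bit product
print(shift_add_multiply(13, 11))  # 143
```

Note that the product accumulates up to `2 * bits` bits, matching the observation that two n-bit operands can produce a 2n-bit result.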

## Wednesday, September 7, 2011

### Computer Architecture # 03 : Arithmetic: FIXED POINT MULTIPLICATION AND DIVISION (6)

3.3 Fixed Point Multiplication and Division
Multiplication and division of fixed point numbers can be accomplished with addition, subtraction, and shift operations. The sections that follow describe methods for performing multiplication and division of fixed point numbers in both unsigned and signed forms using these basic operations. We will first cover unsigned multiplication and division, and then we will cover signed multiplication and division.

### Computer Architecture # 03 : Arithmetic: ONE’S COMPLEMENT ADDITION AND SUBTRACTION (5)

3.2.3 ONE’S COMPLEMENT ADDITION AND SUBTRACTION
Although it is not heavily used in mainstream computing anymore, the one’s complement representation was used in early computers.
One’s complement addition is handled somewhat differently from two’s complement addition: the carry out of the leftmost position is not discarded, but is added back into the least significant position of the integer portion, as shown in Figure 3-7.
This is known as an end-around carry.
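The end-around carry is a one-line fix in software. Here is a small Python sketch for a four-bit word (the function name and width are my own choices):

```python
def ones_complement_add(a, b, bits=4):
    """Add two one's complement bit patterns of the given width,
    folding any carry out of the leftmost position back into the LSB."""
    mask = (1 << bits) - 1
    total = a + b
    if total > mask:                  # a carry came out of the leftmost bit...
        total = (total & mask) + 1    # ...so add it back in (end-around carry)
    return total & mask

# In four-bit one's complement, -2 is 1101 and +5 is 0101; -2 + 5 = +3
print(bin(ones_complement_add(0b1101, 0b0101)))  # 0b11, i.e. 0011 = +3
```

Without the end-around step the raw sum would be 0010 (+2), one short of the correct answer.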

### Computer Architecture # 03 : Arithmetic: HARDWARE IMPLEMENTATION OF ADDERS AND SUBTRACTORS

3.2.2 HARDWARE IMPLEMENTATION OF ADDERS AND SUBTRACTORS
Up until now we have focused on algorithms for addition and subtraction. Now we will take a look at implementations of simple adders and subtractors.
In Appendix A, a design of a four-bit ripple-carry adder is explored. The adder is modeled after the way that we normally perform decimal addition by hand, summing digits one column at a time while moving from right to left. In this section, we review the ripple-carry adder, and then take a look at the ripple-borrow subtractor. We then combine the two into a single addition/subtraction unit.

## Tuesday, July 5, 2011

### Computer Architecture # 03 : Arithmetic: TWO’S COMPLEMENT ADDITION AND SUBTRACTION (3)

3.2.1 TWO’S COMPLEMENT ADDITION AND SUBTRACTION

In this section, we look at the addition of signed two’s complement numbers. As we explore the addition of signed numbers, we implicitly cover subtraction as well, as a result of the arithmetic principle:
a − b = a + (−b).
We can negate a number by complementing it (and adding 1, for two’s complement), and so we can perform subtraction by complementing and adding. This saves hardware because it avoids the need for a separate subtractor circuit. We will cover this topic in more detail later. We will need to modify the interpretation that we place on the results of addition when we add two’s complement numbers.
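The complement-and-add trick is short enough to demonstrate in Python. The sketch below works on four-bit two's complement patterns (the function name and word width are my own illustrative choices):

```python
def twos_complement_sub(a, b, bits=4):
    """Compute a - b as a + (-b): negate b by complementing and adding 1,
    then add, discarding any carry out of the top bit."""
    mask = (1 << bits) - 1
    neg_b = (~b + 1) & mask       # two's complement negation of b
    return (a + neg_b) & mask     # plain addition; carry out is discarded

print(twos_complement_sub(6, 3))       # 3
print(bin(twos_complement_sub(2, 5)))  # 0b1101, i.e. -3 in four bits
```

The second result illustrates the remark about reinterpreting addition results: the bit pattern 1101 must be read as −3 in two's complement, not as the unsigned value 13.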