V22.0436 - Prof. Grishman

Lecture 16: Performance (cont'd)

How Architecture Affects Performance

Our goal is to minimize the product of the three factors (number of instructions executed, average CPI, clock cycle time). Whenever we consider a change to the architecture, we must evaluate its effect on each of these factors.
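
To make the relationship concrete, here is a small sketch (in Python, with made-up numbers rather than figures from the text) that computes execution time directly from the three factors:

    # Execution time = instruction count x average CPI x clock cycle time
    # (the numbers below are purely illustrative)
    def exec_time_ps(instr_count, avg_cpi, cycle_time_ps):
        return instr_count * avg_cpi * cycle_time_ps

    # e.g., a program of one million instructions on a single-cycle machine (CPI = 1, 800 ps clock)
    print(exec_time_ps(1_000_000, 1.0, 800))   # 800,000,000 ps = 0.8 ms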

In particular, when we add an instruction to the instruction set, we must consider whether it can significantly reduce the number of instructions to execute (the first factor) without affecting the time per instruction (the last two factors). A specialized instruction may be used only rarely by a compiler (most of the execution time is spent on a small number of instructions;  see Figure 3.26 for the distribution of instructions for the SPEC benchmarks).  On the other hand, if the new instruction requires a longer data path, it may force a longer clock cycle for every instruction. The net effect could then be a slower machine. [We ignore the issue of code size, which is much less important than it used to be because memory is so much cheaper.]
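
For example (with hypothetical numbers), suppose a new instruction eliminates 5% of the executed instructions but stretches the clock cycle by 10%; multiplying out the factors shows the machine actually gets slower:

    # Hypothetical trade-off: fewer instructions executed, but a longer clock cycle
    old = 1_000_000 * 1.0 * 800    # instruction count x CPI x cycle time (ps)
    new =   950_000 * 1.0 * 880    # 5% fewer instructions, 10% longer cycle
    print(new / old)               # ~1.045: the "enhanced" machine is about 4.5% slower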

Good candidates for new instructions are those which would be used frequently and would take much longer if performed by a sequence of other instructions ... for example, floating-point operations for scientific applications.

The increased use of RISC machines, starting in the mid-80's, reflected a more careful assessment of the benefits and costs of adding instructions to the instruction set.

However, the issue of binary code compatibility remains very important, if not overwhelming.  Few entirely new machine architectures have been developed since the 90's; the Intel PC architecture and its variants have become increasingly dominant.  As we shall discuss later, current microprocessors achieve both speed and code compatibility by translating Intel instructions into a RISC-like instruction set internally.

Single-cycle vs. multi-cycle

We could change from a single-cycle design (with an 800 ps clock period) to a multi-cycle design (with a 100 ps clock) in which we allowed a different number of cycles for different instruction types.  The machine would be slightly faster (clock cycle time would go down by a factor of 8, while average CPI would go up somewhat less), but the control unit would be considerably more complex.
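
As a rough calculation (the average CPI of 4.5 below is an assumed figure, not from the text):

    # Single-cycle: every instruction takes one 800 ps cycle
    single = 1.0 * 800
    # Multi-cycle: assume an average of 4.5 cycles per instruction at 100 ps per cycle
    multi = 4.5 * 100
    print(single / multi)   # ~1.8x speedup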

The benefit of multi-cycle execution would be much greater if the instruction set included some instructions which took much longer (multiply, divide, floating point) but were executed less frequently than load, store, add/sub, and branch.
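
A rough sketch of why (all numbers hypothetical): in a single-cycle design the clock must be long enough for the slowest instruction, while a multi-cycle design only pays the long latency when that instruction actually executes:

    # Suppose a multiply needs 3200 ps but is only 2% of executed instructions
    single = 1.0 * 3200                         # every instruction pays for the slow multiply
    multi  = (0.98 * 4.5 + 0.02 * 32) * 100     # common instructions ~4.5 cycles, multiply 32 cycles
    print(single / multi)                       # roughly 6x faster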

For our small MIPS instruction subset, much greater speed-ups are possible by overlapping the execution of different instructions.

An Overview of Pipelining (Sec'n 4.5)

The simplest such overlap is instruction fetch overlap: fetch the next instruction while executing the current instruction. Even relatively simple processors employed such overlap.

Greater gain can be achieved by overlapping the execution (register fetch, ALU operation, ...) of successive instructions. A full pipelining scheme overlaps such operations completely, resulting ideally in a CPI (cycles per instruction) of 1. However, machines which employ such overlap must deal with data and branch hazards: cases in which an instruction depends on the result of an earlier instruction still in the pipeline, or in which the next instruction to fetch is not known until a branch is resolved. This makes the design of pipelined machines much more complex.

The benefits of pipelining increase when we have instructions which are relatively uniform in execution time and which can be finely divided into pipeline stages.  We can then have a relatively short clock cycle and issue one instruction each clock cycle (Fig. 4.27).  Under ideal conditions, instruction throughput is multiplied by the number of pipeline stages.
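
A back-of-the-envelope check (assuming five 200 ps stages; the numbers are not from the text):

    stages, stage_time_ps, n = 5, 200, 1_000_000
    unpipelined = n * stages * stage_time_ps              # each instruction runs start to finish alone
    pipelined   = (stages + (n - 1)) * stage_time_ps      # fill the pipeline, then finish one per cycle
    print(unpipelined / pipelined)                        # ~5.0: throughput approaches the stage count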

Instruction sets for pipelining (text, p. 335)

RISC machines like MIPS are well suited for pipelining.  The instruction formats are simple and execution is relatively uniform.  Pipelining is more complex for CISC machines, because their instructions may take very different lengths of time to execute.  However, RISC-style pipelining is now incorporated into high-performance CISC processors (such as the Pentium and Core 2) by translating most instructions internally into a series of RISC-like operations.

Pipelined Data Path (text, section 4.6)

The basic idea is to introduce a set of pipeline registers which hold all the information required to complete execution of the instruction.  This includes portions of the instruction, control signals which have been decoded from the instruction, computed effective addresses, and data.  Starting with the single-cycle machine, we can build a system with a 5-stage pipeline;  the basic design is shown in Figure 4.35.
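
A minimal sketch (in Python, purely to visualize the overlap, not the hardware of Figure 4.35): each instruction advances one stage per clock cycle, so up to five instructions are in flight at once:

    STAGES = ["IF", "ID", "EX", "MEM", "WB"]
    instrs = ["lw", "add", "sub", "sw"]            # a short, made-up instruction sequence
    for cycle in range(len(instrs) + len(STAGES) - 1):
        active = []
        for i, instr in enumerate(instrs):
            stage = cycle - i                      # instruction i enters the pipeline on cycle i
            if 0 <= stage < len(STAGES):
                active.append(f"{instr}:{STAGES[stage]}")
        print(f"cycle {cycle + 1}:", "  ".join(active))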

Pipeline hazards:  overlapping instruction execution can give rise to problems.
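
For instance, here is a small sketch (the tuple encoding of instructions is made up for illustration) of the simplest data hazard: a later instruction reads a register that an earlier instruction, still in the pipeline, has not yet written back:

    # (name, destination register, source registers) -- a made-up representation
    program = [
        ("add", "$t0", "$t1", "$t2"),   # writes $t0
        ("sub", "$t3", "$t0", "$t4"),   # reads $t0 on the very next cycle -> data hazard
    ]
    for i, (_op, dest, *_srcs) in enumerate(program):
        for j in range(i + 1, min(i + 3, len(program))):   # later instructions still overlapping in the pipe
            _, _, *srcs = program[j]
            if dest in srcs:
                print(f"data hazard: instruction {j} needs {dest} before instruction {i} writes it back")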