
Reading Computer Architecture: A Quantitative Approach (English Edition, 6th Edition): Fundamentals of Quantitative Design and Analysis


Computer Architecture: A Quantitative Approach, Sixth Edition


By John L. Hennessy and David A. Patterson

1 Fundamentals of Quantitative Design and Analysis

1.1 Introduction

Computer technology has made incredible progress in the roughly 70 years since the first general-purpose electronic computer was created. Today, less than $500 will purchase a cell phone that has as much performance as the world's fastest computer bought in 1993 for $50 million. This rapid improvement has come both from advances in the technology used to build computers and from innovations in computer design.
Although technological improvements historically have been fairly steady, progress arising from better computer architectures has been much less consistent. During the first 25 years of electronic computers, both forces made a major contribution, delivering performance improvement of about 25% per year. The late 1970s saw the emergence of the microprocessor. The ability of the microprocessor to ride the improvements in integrated circuit technology led to a higher rate of performance improvement—roughly 35% growth per year.
This growth rate, combined with the cost advantages of a mass-produced microprocessor, led to an increasing fraction of the computer business being based on microprocessors. In addition, two significant changes in the computer marketplace made it easier than ever before to succeed commercially with a new architecture. First, the virtual elimination of assembly language programming reduced the need for object-code compatibility. Second, the creation of standardized, vendor-independent operating systems, such as UNIX and its clone, Linux, lowered the cost and risk of bringing out a new architecture.
These changes made it possible to develop successfully a new set of architectures with simpler instructions, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s. The RISC-based machines focused the attention of designers on two critical performance techniques, the exploitation of instruction-level parallelism (initially through pipelining and later through multiple instruction issue) and the use of caches (initially in simple forms and later using more sophisticated organizations and optimizations).
The RISC-based computers raised the performance bar, forcing prior architectures to keep up or disappear. The Digital Equipment Vax could not, and so it was replaced by a RISC architecture. Intel rose to the challenge, primarily by translating 80x86 instructions into RISC-like instructions internally, allowing it to adopt many of the innovations first pioneered in the RISC designs. As transistor counts soared in the late 1990s, the hardware overhead of translating the more complex x86 architecture became negligible. In low-end applications, such as cell phones, the cost in power and silicon area of the x86-translation overhead helped lead to a RISC architecture, ARM, becoming dominant.
Figure 1.1 shows that the combination of architectural and organizational enhancements led to 17 years of sustained growth in performance at an annual rate of over 50%, a rate that is unprecedented in the computer industry.
The effect of this dramatic growth rate during the 20th century was fourfold. First, it has significantly enhanced the capability available to computer users. For many applications, the highest-performance microprocessors outperformed the supercomputer of less than 20 years earlier.

[Figure 1.1: growth in processor performance]

Second, this dramatic improvement in cost-performance led to new classes of computers. Personal computers and workstations emerged in the 1980s with the availability of the microprocessor. The past decade saw the rise of smart cell phones and tablet computers, which many people are using as their primary computing platforms instead of PCs. These mobile client devices are increasingly using the Internet to access warehouses containing 100,000 servers, which are being designed as if they were a single gigantic computer.
Third, improvement of semiconductor manufacturing as predicted by Moore's Law has led to the dominance of microprocessor-based computers across the entire range of computer design. Minicomputers, which were traditionally made from off-the-shelf logic or from gate arrays, were replaced by servers made by using microprocessors. Even mainframe computers and high-performance supercomputers are all collections of microprocessors.
The preceding hardware innovations led to a renaissance in computer design, which emphasized both architectural innovation and efficient use of technology improvements. This rate of growth compounded so that by 2003, high-performance microprocessors were 7.5 times as fast as what would have been obtained by relying solely on technology, including improved circuit design, that is, 52% per year versus 35% per year.
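As a rough check of that compounding claim, here is a short Python sketch; the 52% and 35% annual rates and the 1986–2003 window are the figures from the text, and the exact multiple depends only on rounding.

    # Compound the two annual growth rates quoted in the text over 1986-2003.
    years = 2003 - 1986                     # 17 years
    with_architecture = 1.52 ** years       # ~52% per year: architecture + technology
    technology_only = 1.35 ** years         # ~35% per year: technology alone
    print(round(with_architecture / technology_only, 1))   # ~7.5
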
This hardware renaissance led to the fourth impact, which was on software development. This 50,000-fold performance improvement since 1973 (see Figure 1.1) allowed modern programmers to trade performance for productivity. In place of performance-oriented languages like C and C++, much more programming today is done in managed programming languages like Java and Scala. Moreover, scripting languages like JavaScript and Python, which are even more productive, are gaining in popularity along with programming frameworks like AngularJS and Django. To maintain productivity and try to close the performance gap, interpreters with just-in-time compilers and trace-based compiling are replacing the traditional compiler and linker of the past. Software deployment is changing as well, with Software as a Service (SaaS) used over the Internet replacing shrink-wrapped software that must be installed and run on a local computer.
The nature of applications is also changing. Speech, sound, images, and video are becoming increasingly important, along with predictable response time that is so critical to the user experience. An inspiring example is Google Translate. This application lets you hold up your cell phone to point its camera at an object, and the image is sent wirelessly over the Internet to a warehouse-scale computer (WSC) that recognizes the text in the photo and translates it into your native language. You can also speak into it, and it will translate what you said into audio output in another language. It translates text in 90 languages and voice in 15 languages.
Alas, Figure 1.1 also shows that this 17-year hardware renaissance is over. The fundamental reason is that two characteristics of semiconductor processes that were true for decades no longer hold.
In 1974 Robert Dennard observed that power density was constant for a given area of silicon even as you increased the number of transistors because of smaller dimensions of each transistor. Remarkably, transistors could go faster but use less power. Dennard scaling ended around 2004 because current and voltage couldn't keep dropping and still maintain the dependability of integrated circuits.
This change forced the microprocessor industry to use multiple efficient processors or cores instead of a single inefficient processor. Indeed, in 2004 Intel canceled its high-performance uniprocessor projects and joined others in declaring that the road to higher performance would be via multiple processors per chip rather than via faster uniprocessors. This milestone signaled a historic switch from relying solely on instruction-level parallelism (ILP), the primary focus of the first three editions of this book, to data-level parallelism (DLP) and thread-level parallelism (TLP), which were featured in the fourth edition and expanded in the fifth edition. The fifth edition also added WSCs and request-level parallelism (RLP), which is expanded in this edition. Whereas the compiler and hardware conspire to exploit ILP implicitly without the programmer's attention, DLP, TLP, and RLP are explicitly parallel, requiring the restructuring of the application so that it can exploit explicit parallelism. In some instances, this is easy; in many, it is a major new burden for programmers.
Amdahl's Law (Section 1.9) prescribes practical limits to the number of useful cores per chip. If 10% of the task is serial, then the maximum performance benefit from parallelism is 10 no matter how many cores you put on the chip.
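A minimal Python sketch of that limit, using the usual statement of Amdahl's Law; the 10% serial fraction is the figure from the text, and Section 1.9 gives the full treatment.

    # Amdahl's Law: speedup from N cores when a fraction of the task stays serial.
    def speedup(serial_fraction, cores):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (2, 16, 256, 4096):
        print(cores, round(speedup(0.10, cores), 2))
    # The speedup approaches, but never exceeds, 1 / 0.10 = 10 as the core count grows.
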
The second observation that ended recently is Moore's Law. In 1965 Gordon Moore famously predicted that the number of transistors per chip would double every year, which was amended in 1975 to every two years. That prediction lasted for about 50 years, but no longer holds. For example, in the 2010 edition of this book, the most recent Intel microprocessor had 1,170,000,000 transistors. If Moore's Law had continued, we could have expected microprocessors in 2016 to have 18,720,000,000 transistors. Instead, the equivalent Intel microprocessor has just 1,750,000,000 transistors, or off by a factor of 10 from what Moore's Law would have predicted.
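Using only the two transistor counts quoted above, the "factor of 10" shortfall can be checked directly; a small sketch:

    # Shortfall relative to the Moore's Law projection, using the counts in the text.
    projected_2016 = 18_720_000_000   # what continued doubling would have predicted
    actual_2016 = 1_750_000_000       # the equivalent 2016 Intel microprocessor
    print(round(projected_2016 / actual_2016, 1))   # ~10.7, i.e., off by about 10x
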
The combination of

  • transistors no longer getting much better because of the slowing of Moore's Law and the end of Dennard scaling,
  • the unchanging power budgets for microprocessors,
  • the replacement of the single power-hungry processor with several energy-efficient processors, and
  • the limits to multiprocessing imposed by Amdahl's Law

caused improvements in processor performance to slow down, that is, to double every 20 years, rather than every 1.5 years as it did between 1986 and 2003 (see Figure 1.1).
The only path left to improve energy-performance-cost is specialization. Future microprocessors will include several domain-specific cores that perform only one class of computations well, but they do so remarkably better than general-purpose cores. The new Chapter 7 in this edition introduces domain-specific architectures.
This text is about the architectural ideas and accompanying compiler improvements that made the incredible growth rate possible over the past century, the reasons for the dramatic change, and the challenges and initial promising approaches to architectural ideas, compilers, and interpreters for the 21st century. At the core is a quantitative approach to computer design and analysis that uses empirical observations of programs, experimentation, and simulation as its tools. It is this style and approach to computer design that is reflected in this text. The purpose of this chapter is to lay the quantitative foundation on which the following chapters and appendices are based.
This book was written not only to explain this design style but also to stimulate you to contribute to this progress. We believe this approach will serve the computers of the future just as it worked for the implicitly parallel computers of the past.

1.2 Classes of Computers

These changes have set the stage for a dramatic change in how we view computing, computing applications, and the computer markets in this new century. Not since the creation of the personal computer have we seen such striking changes in the way computers appear and in how they are used. These changes in computer use have led to five diverse computing markets, each characterized by different applications, requirements, and computing technologies. Figure 1.2 summarizes these mainstream classes of computing environments and their important characteristics.
Internet of Things/Embedded Computers
Embedded computers are found in everyday machines: microwaves, washing machines, most printers, networking switches, and all automobiles. The phrase Internet of Things (IoT) refers to embedded computers that are connected to the Internet, typically wirelessly. When augmented with sensors and actuators, IoT devices collect useful data and interact with the physical world, leading to a wide variety of "smart" applications, such as smart watches, smart thermostats, smart speakers, smart cars, smart homes, smart grids, and smart cities.

[Figure 1.2: summary of the mainstream classes of computing environments and their characteristics]

Embedded computers have the widest spread of processing power and cost. They include 8-bit to 32-bit processors that may cost one penny, and high-end 64-bit processors for cars and network switches that cost $100. Although the range of computing power in the embedded computing market is very large, price is a key factor in the design of computers for this space. Performance requirements do exist, of course, but the primary goal is often meeting the performance need at a minimum price, rather than achieving more performance at a higher price. The projections for the number of IoT devices in 2020 range from 20 to 50 billion.
Most of this book applies to the design, use, and performance of embedded processors, whether they are off-the-shelf microprocessors or microprocessor cores that will be assembled with other special-purpose hardware.
Unfortunately, the data that drive the quantitative design and evaluation of other classes of computers have not yet been extended successfully to embedded computing (see the challenges with EEMBC, for example, in Section 1.8). Hence we are left for now with qualitative descriptions, which do not fit well with the rest of the book. As a result, the embedded material is concentrated in Appendix E. We believe a separate appendix improves the flow of ideas in the text while allowing readers to see how the differing requirements affect embedded computing.

Personal Mobile Device
Personal mobile device (PMD) is the term we apply to a collection of wireless devices with multimedia user interfaces such as cell phones, tablet computers, and so on. Cost is a prime concern given the consumer price for the whole product is a few hundred dollars. Although the emphasis on energy efficiency is frequently driven by the use of batteries, the need to use less expensive packaging—plastic versus ceramic—and the absence of a fan for cooling also limit total power consumption. We examine the issue of energy and power in more detail in Section 1.5. Applications on PMDs are often web-based and media-oriented, like the previously mentioned Google Translate example. Energy and size requirements lead to use of Flash memory for storage (Chapter 2) instead of magnetic disks.
The processors in a PMD are often considered embedded computers, but we are keeping them as a separate category because PMDs are platforms that can run externally developed software, and they share many of the characteristics of desktop computers. Other embedded devices are more limited in hardware and software sophistication. We use the ability to run third-party software as the dividing line between nonembedded and embedded computers.
Responsiveness and predictability are key characteristics for media applications. A real-time performance requirement means a segment of the application has an absolute maximum execution time. For example, in playing a video on a PMD, the time to process each video frame is limited, since the processor must accept and process the next frame shortly. In some applications, a more nuanced requirement exists: the average time for a particular task is constrained as well as the number of instances when some maximum time is exceeded. Such approaches—sometimes called soft real-time—arise when it is possible to miss the time constraint on an event occasionally, as long as not too many are missed. Real-time performance tends to be highly application-dependent.
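To make the distinction concrete, here is a small, hypothetical Python sketch: the 33 ms budget (about 30 frames per second), the 1% miss tolerance, and the measured frame times are all made-up values for illustration, not figures from the text.

    # Hypothetical soft real-time check for video frame processing on a PMD.
    DEADLINE_MS = 33.0        # assumed per-frame budget (~30 frames/second)
    MAX_MISS_RATE = 0.01      # assumed tolerance: at most 1% of frames may be late

    frame_times_ms = [21.0, 25.5, 30.1, 36.2, 24.8, 29.9]   # made-up measurements

    misses = sum(t > DEADLINE_MS for t in frame_times_ms)
    average = sum(frame_times_ms) / len(frame_times_ms)

    hard_real_time_ok = misses == 0                     # every frame meets the deadline
    soft_real_time_ok = (average <= DEADLINE_MS and
                         misses / len(frame_times_ms) <= MAX_MISS_RATE)
    print(average, misses, hard_real_time_ok, soft_real_time_ok)
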
Other key characteristics in many PMD applications are the need to minimize memory and the need to use energy efficiently. Energy efficiency is driven by both battery power and heat dissipation. The memory can be a substantial portion of the system cost, and it is important to optimize memory size in such cases. The importance of memory size translates to an emphasis on code size, since data size is dictated by the application.

Desktop Computing
The first, and possibly still the largest market in dollar terms, is desktop computing. Desktop computing spans from low-end netbooks that sell for under $300 to high-end, heavily configured workstations that may sell for $2500. Since 2008, more than half of the desktop computers made each year have been battery operated laptop computers. Desktop computing sales are declining.
Throughout this range in price and capability, the desktop market tends to be driven to optimize price-performance. This combination of performance (measured primarily in terms of compute performance and graphics performance) and price of a system is what matters most to customers in this market, and hence to computer designers. As a result, the newest, highest-performance microprocessors and cost-reduced microprocessors often appear first in desktop systems (see Section 1.6 for a discussion of the issues affecting the cost of computers).
Desktop computing also tends to be reasonably well characterized in terms of applications and benchmarking, though the increasing use of web-centric, interactive applications poses new challenges in performance evaluation.

Servers
As the shift to desktop computing occurred in the 1980s, the role of servers grew to provide larger-scale and more reliable file and computing services. Such servers have become the backbone of large-scale enterprise computing, replacing the traditional mainframe.
For servers, different characteristics are important. First, availability is critical. (We discuss availability in Section 1.7.) Consider the servers running ATM machines for banks or airline reservation systems. Failure of such server systems is far more catastrophic than failure of a single desktop, since these servers must operate seven days a week, 24 hours a day. Figure 1.3 estimates revenue costs of downtime for server applications.

[Figure 1.3: costs of downtime for server applications]

A second key feature of server systems is scalability. Server systems often grow in response to an increasing demand for the services they support or an expansion in functional requirements. Thus the ability to scale up the computing capacity, the memory, the storage, and the I/O bandwidth of a server is crucial.
Finally, servers are designed for efficient throughput. That is, the overall performance of the server—in terms of transactions per minute or web pages served per second—is what is crucial. Responsiveness to an individual request remains important, but overall efficiency and cost-effectiveness, as determined by how many requests can be handled in a unit time, are the key metrics for most servers. We return to the issue of assessing performance for different types of computing environments in Section 1.8.

Clusters/Warehouse-Scale Computers
The growth of Software as a Service (SaaS) for applications like search, social networking, video viewing and sharing, multiplayer games, online shopping, and so on has led to the growth of a class of computers called clusters. Clusters are collections of desktop computers or servers connected by local area networks to act as a single larger computer. Each node runs its own operating system, and nodes communicate using a networking protocol. WSCs are the largest of the clusters, in that they are designed so that tens of thousands of servers can act as one. Chapter 6 describes this class of extremely large computers.
Price-performance and power are critical to WSCs since they are so large. As Chapter 6 explains, the majority of the cost of a warehouse is associated with power and cooling of the computers inside the warehouse. The annual amortized cost of the computers themselves and the networking gear for a WSC is $40 million, because they are usually replaced every few years. When you are buying that much computing, you need to buy wisely, because a 10% improvement in price-performance means a savings of $4 million (10% of $40 million) per WSC; a company like Amazon might have 100 WSCs! WSCs are related to servers in that availability is critical. For example, Amazon.com had $136 billion in sales in 2016. As there are about 8800 hours in a year, the average revenue per hour was about $15 million. During a peak hour for Christmas shopping, the potential loss would be many times higher. As Chapter 6 explains, the difference between WSCs and servers is that WSCs use redundant, inexpensive components as the building blocks, relying on a software layer to catch and isolate the many failures that will happen with computing at this scale to deliver the availability needed for such applications. Note that scalability for a WSC is handled by the local area network connecting the computers and not by integrated computer hardware, as in the case of servers.
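The per-hour revenue figure above follows from simple division of the numbers in the text; a one-line check:

    # Average revenue per hour for Amazon.com in 2016, from the figures in the text.
    print(136e9 / 8800)    # ~15.5 million dollars per hour
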
Supercomputers are related to WSCs in that they are equally expensive, costing hundreds of millions of dollars, but supercomputers differ by emphasizing floating-point performance and by running large, communication-intensive batch programs that can run for weeks at a time. In contrast, WSCs emphasize interactive applications, large-scale storage, dependability, and high Internet bandwidth.

Classes of Parallelism and Parallel Architectures
Parallelism at multiple levels is now the driving force of computer design across all four classes of computers, with energy and cost being the primary constraints. There are basically two kinds of parallelism in applications:

1.Data-level parallelism (DLP) arises because there are many data items that can be operated on at the same time.
2.Task-level parallelism (TLP) arises because tasks of work are created that can operate independently and largely in parallel.

Computer hardware in turn can exploit these two kinds of application parallelism in four major ways:
1.Instruction-level parallelism exploits data-level parallelism at modest levels with compiler help using ideas like pipelining and at medium levels using ideas like speculative execution.
2.Vector architectures, graphic processor units (GPUs), and multimedia instruction sets exploit data-level parallelism by applying a single instruction to a collection of data in parallel.
3.Thread-level parallelism exploits either data-level parallelism or task-level parallelism in a tightly coupled hardware model that allows for interaction between parallel threads.
4.Request-level parallelism exploits parallelism among largely decoupled tasks specified by the programmer or the operating system.

When Flynn (1966) studied the parallel computing efforts in the 1960s, he found a simple classification whose abbreviations we still use today. They target data-level parallelism and task-level parallelism. He looked at the parallelism in the instruction and data streams called for by the instructions at the most constrained component of the multiprocessor and placed all computers in one of four categories:

1.Single instruction stream, single data stream (SISD)—This category is the uniprocessor. The programmer thinks of it as the standard sequential computer, but it can exploit ILP. Chapter 3 covers SISD architectures that use ILP techniques such as superscalar and speculative execution.
2.Single instruction stream, multiple data streams (SIMD)—The same instruction is executed by multiple processors using different data streams. SIMD computers exploit data-level parallelism by applying the same operations to multiple items of data in parallel. Each processor has its own data memory (hence, the MD of SIMD), but there is a single instruction memory and control processor, which fetches and dispatches instructions. Chapter 4 covers DLP and three different architectures that exploit it: vector architectures, multimedia extensions to standard instruction sets, and GPUs.
3.Multiple instruction streams, single data stream (MISD)—No commercial multiprocessor of this type has been built to date, but it rounds out this simple classification.
4.Multiple instruction streams, multiple data streams (MIMD)—Each processor fetches its own instructions and operates on its own data, and it targets task-level parallelism. In general, MIMD is more flexible than SIMD and thus more generally applicable, but it is inherently more expensive than SIMD. For example, MIMD computers can also exploit data-level parallelism, although the overhead is likely to be higher than would be seen in an SIMD computer. This overhead means that grain size must be sufficiently large to exploit the parallelism efficiently. Chapter 5 covers tightly coupled MIMD architectures, which exploit thread-level parallelism because multiple cooperating threads operate in parallel. Chapter 6 covers loosely coupled MIMD architectures—specifically, clusters and warehouse-scale computers—that exploit request-level parallelism, where many independent tasks can proceed in parallel naturally with little need for communication or synchronization.

This taxonomy is a coarse model, as many parallel processors are hybrids of the SISD, SIMD, and MIMD classes. Nonetheless, it is useful to put a framework on the design space for the computers we will see in this book.
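As a software-level illustration of the two kinds of parallelism the taxonomy targets, the sketch below uses NumPy's vectorized arithmetic as a stand-in for SIMD-style data-level parallelism and a thread pool as a stand-in for MIMD-style task-level parallelism; the example is ours rather than the book's, and it assumes NumPy is installed.

    # Illustrative only: DLP and TLP expressed in ordinary Python code.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Data-level parallelism: one operation applied to many data items at once,
    # the software analogue of a SIMD/vector instruction.
    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)
    c = a + b                                   # same operation, a million elements

    # Task-level parallelism: independent tasks running concurrently,
    # the analogue of MIMD threads or request-level parallelism on a server.
    def handle_request(request_id):
        return f"response to request {request_id}"

    with ThreadPoolExecutor(max_workers=4) as pool:
        responses = list(pool.map(handle_request, range(8)))
    print(len(c), len(responses))
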

1.3 Defining Computer Architecture

The task the computer designer faces is a complex one: determine what attributes are important for a new computer, then design a computer to maximize performance and energy efficiency while staying within cost, power, and availability constraints. This task has many aspects, including instruction set design, functional organization, logic design, and implementation. The implementation may encompass integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with a very wide range of technologies, from compilers and operating systems to logic design and packaging.
A few decades ago, the term computer architecture generally referred to only instruction set design. Other aspects of computer design were called implementation, often insinuating that implementation is uninteresting or less challenging.
We believe this view is incorrect. The architect’s or designer’s job is much more than instruction set design, and the technical hurdles in the other aspects of the project are likely more challenging than those encountered in instruction set design. We’ll quickly review instruction set architecture before describing the larger challenges for the computer architect.

Instruction Set Architecture: The Myopic View of Computer Architecture
We use the term instruction set architecture (ISA) to refer to the actual programmer-visible instruction set in this book. The ISA serves as the boundary between the software and hardware. This quick review of ISA will use examples from 80x86, ARMv8, and RISC-V to illustrate the seven dimensions of an ISA. The most popular RISC processors come from ARM (Advanced RISC Machine), which were in 14.8 billion chips shipped in 2015, roughly 50 times as many chips as shipped with 80x86 processors. Appendices A and K give more details on the three ISAs.
RISC-V ("RISC Five") is a modern RISC instruction set developed at the University of California, Berkeley, which was made free and openly adoptable in response to requests from industry. In addition to a full software stack (compilers, operating systems, and simulators), there are several RISC-V implementations freely available for use in custom chips or in field-programmable gate arrays. Developed 30 years after the first RISC instruction sets, RISC-V inherits its ancestors' good ideas—a large set of registers, easy-to-pipeline instructions, and a lean set of operations—while avoiding their omissions or mistakes. It is a free and open, elegant example of the RISC architectures mentioned earlier, which is why more than 60 companies have joined the RISC-V foundation, including AMD, Google, HP Enterprise, IBM, Microsoft, Nvidia, Qualcomm, Samsung, and Western Digital. We use the integer core ISA of RISC-V as the example ISA in this book.
1.Class of ISA—Nearly all ISAs today are classified as general-purpose register architectures, where the operands are either registers or memory locations. The 80x86 has 16 general-purpose registers and 16 that can hold floating-point data, while RISC-V has 32 general-purpose and 32 floating-point registers (see Figure 1.4). The two popular versions of this class are register-memory ISAs, such as the 80x86, which can access memory as part of many instructions, and load-store ISAs, such as ARMv8 and RISC-V, which can access memory only with load or store instructions. All ISAs announced since 1985 are load-store.

[Figure 1.4: RISC-V registers]

2.Memory addressing—Virtually all desktop and server computers, including the 80x86, ARMv8, and RISC-V, use byte addressing to access memory operands. Some architectures, like ARMv8, require that objects must be aligned. An access to an object of size s bytes at byte address A is aligned if A mod s = 0; a short sketch of this check appears after this list. (See Figure A.5 on page A-8.) The 80x86 and RISC-V do not require alignment, but accesses are generally faster if operands are aligned.
3.Addressing modes—In addition to specifying registers and constant operands, addressing modes specify the address of a memory object. RISC-V addressing modes are Register, Immediate (for constants), and Displacement, where a constant offset is added to a register to form the memory address. The 80x86 supports those three modes, plus three variations of displacement: no register (absolute), two registers (based indexed with displacement), and two registers where one register is multiplied by the size of the operand in bytes (based with scaled index and displacement). It has more like the last three modes, minus the displacement field, plus register indirect, indexed, and based with scaled index. ARMv8 has the three RISC-V addressing modes plus PC-relative addressing, the sum of two registers, and the sum of two registers where one register is multiplied by the size of the operand in bytes. It also has autoincrement and autodecrement addressing, where the calculated address replaces the contents of one of the registers used in forming the address.
4.Types and sizes of operands—Like most ISAs, 80x86, ARMv8, and RISC-V support operand sizes of 8-bit (ASCII character), 16-bit (Unicode character or half word), 32-bit (integer or word), 64-bit (double word or long integer), and IEEE 754 floating point in 32-bit (single precision) and 64-bit (double precision). The 80x86 also supports 80-bit floating point (extended double precision).
5.Operations—The general categories of operations are data transfer, arithmetic logical, control (discussed next), and floating point. RISC-V is a simple and easy-to-pipeline instruction set architecture, and it is representative of the RISC architectures being used in 2017. Figure 1.5 summarizes the integer RISC-V ISA, and Figure 1.6 lists the floating-point ISA. The 80x86 has a much richer and larger set of operations (see Appendix K).
6.Control flow instructions—Virtually all ISAs, including these three, support conditional branches, unconditional jumps, procedure calls, and returns. All three use PC-relative addressing, where the branch address is specified by an address field that is added to the PC. There are some small differences. RISC-V conditional branches (BEQ, BNE, etc.) test the contents of registers, and the 80x86 and ARMv8 branches test condition code bits set as side effects of arithmetic/logic operations. The ARMv8 and RISC-V procedure call places the return address in a register, whereas the 80x86 call (CALLF) places the return address on a stack in memory.
7.Encoding an ISA—There are two basic choices on encoding: fixed length and variable length. All ARMv8 and RISC-V instructions are 32 bits long, which simplifies instruction decoding. Figure 1.7 shows the RISC-V instruction formats. The 80x86 encoding is variable length, ranging from 1 to 18 bytes. Variable-length instructions can take less space than fixed-length instructions, so a program compiled for the 80x86 is usually smaller than the same program compiled for RISC-V. Note that choices mentioned previously will affect how the instructions are encoded into a binary representation. For example, the number of registers and the number of addressing modes both have a significant impact on the size of instructions, because the register field and addressing mode field can appear many times in a single instruction. (Note that ARMv8 and RISC-V later offered extensions, called Thumb-2 and RV64IC, that provide a mix of 16-bit and 32-bit length instructions, respectively, to reduce program size. Code size for these compact versions of RISC architectures is smaller than that of the 80x86. See Appendix K.)
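The alignment rule from dimension 2 above is simple enough to state as code; a minimal sketch, with arbitrary example addresses:

    # An access of size s bytes at byte address A is aligned if A mod s == 0.
    def aligned(address, size):
        return address % size == 0

    print(aligned(0x1000, 8))   # True:  0x1000 is a multiple of 8
    print(aligned(0x1003, 4))   # False: a 4-byte access at 0x1003 is misaligned
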

[Figures 1.5–1.7: the integer and floating-point RISC-V ISA and the RISC-V instruction formats]

The other challenges facing the computer architect beyond ISA design are particularly acute at the present, when the differences among instruction sets are small and when there are distinct application areas. Therefore, starting with the fourth edition of this book, beyond this quick review, the bulk of the instruction set material is found in the appendices (see Appendices A and K).

Genuine Computer Architecture: Designing the Organization and Hardware to Meet Goals and Functional Requirements
The implementation of a computer has two components: organization and hardware. The term organization includes the high-level aspects of a computer's design, such as the memory system, the memory interconnect, and the design of the internal processor or CPU (central processing unit—where arithmetic, logic, branching, and data transfer are implemented). The term microarchitecture is also used instead of organization. For example, two processors with the same instruction set architectures but different organizations are the AMD Opteron and the Intel Core i7. Both processors implement the 80x86 instruction set, but they have very different pipeline and cache organizations.
The switch to multiple processors per microprocessor led to the term core also being used for processors. Instead of saying multiprocessor microprocessor, the term multicore caught on. Given that virtually all chips have multiple processors, the term central processing unit, or CPU, is fading in popularity.
Hardware refers to the specifics of a computer, including the detailed logic design and the packaging technology of the computer. Often a line of computers contains computers with identical instruction set architectures and very similar organizations, but they differ in the detailed hardware implementation. For example, the Intel Core i7 (see Chapter 3) and the Intel Xeon E7 (see Chapter 5) are nearly identical but offer different clock rates and different memory systems, making the Xeon E7 more effective for server computers.
In this book, the word architecture covers all three aspects of computer design—instruction set architecture, organization or microarchitecture, and hardware.
Computer architects must design a computer to meet functional requirements as well as price, power, performance, and availability goals. Figure 1.8 summarizes requirements to consider in designing a new computer. Often, architects also must determine what the functional requirements are, which can be a major task. The requirements may be specific features inspired by the market. Application software typically drives the choice of certain functional requirements by determining how the computer will be used. If a large body of software exists for a particular instruction set architecture, the architect may decide that a new computer should implement an existing instruction set. The presence of a large market for a particular class of applications might encourage the designers to incorporate requirements that would make the computer competitive in that market. Later chapters examine many of these requirements and features in depth.

[Figure 1.8: functional requirements to consider in designing a new computer]

Architects must also be aware of important trends in both the technology and the use of computers because such trends affect not only the future cost but also the longevity of an architecture.

1.4 Trends in Technology

If an instruction set architecture is to prevail, it must be designed to survive rapid changes in computer technology. After all, a successful new instruction set architecture may last decades—for example, the core of the IBM mainframe has been in use for more than 50 years. An architect must plan for technology changes that can increase the lifetime of a successful computer.
To plan for the evolution of a computer, the designer must be aware of rapid changes in implementation technology. Five implementation technologies, which change at a dramatic pace, are critical to modern implementations:

  • Integrated circuit logic technology—Historically, transistor density increased by about 35% per year, quadrupling somewhat over four years. Increases in die size are less predictable and slower, ranging from 10% to 20% per year. The combined effect was a traditional growth rate in transistor count on a chip of about 40%–55% per year, or doubling every 18–24 months. This trend is popularly known as Moore's Law. Device speed scales more slowly, as we discuss below. Shockingly, Moore's Law is no more. The number of devices per chip is still increasing, but at a decelerating rate. Unlike in the Moore's Law era, we expect the doubling time to be stretched with each new technology generation.
  • Semiconductor DRAM (dynamic random-access memory)—This technology is the foundation of main memory, and we discuss it in Chapter 2. The growth of DRAM has slowed dramatically, from quadrupling every three years as in the past. The 8-gigabit DRAM was shipping in 2014, but the 16-gigabit DRAM won’t reach that state until 2019, and it looks like there will be no 32-gigabit DRAM (Kim, 2005). Chapter 2 mentions several other technologies that may replace DRAM when it hits its capacity wall.
  • Semiconductor Flash (electrically erasable programmable read-only memory)—This nonvolatile semiconductor memory is the standard storage device in PMDs, and its rapidly increasing popularity has fueled its rapid growth rate in capacity. In recent years, the capacity per Flash chip increased by about 50%–60% per year, doubling roughly every 2 years. Currently, Flash memory is 8–10 times cheaper per bit than DRAM. Chapter 2 describes Flash memory.
  • Magnetic disk technology—Prior to 1990, density increased by about 30% per year, doubling in three years. It rose to 60% per year thereafter, and increased to 100% per year in 1996. Between 2004 and 2011, it dropped back to about 40% per year, or doubled every two years. Recently, disk improvement has slowed to less than 5% per year. One way to increase disk capacity is to add more platters at the same areal density, but there are already seven platters within the one-inch depth of the 3.5-inch form factor disks. There is room for at most one or two more platters. The last hope for real density increase is to use a small laser on each disk read-write head to heat a 30 nm spot to 400°C so that it can be written magnetically before it cools. It is unclear whether Heat Assisted Magnetic Recording can be manufactured economically and reliably, although Seagate announced plans to ship HAMR in limited production in 2018. HAMR is the last chance for continued improvement in areal density of hard disk drives, which are now 8–10 times cheaper per bit than Flash and 200–300 times cheaper per bit than DRAM. This technology is central to server- and warehouse-scale storage, and we discuss the trends in detail in Appendix D.
  • Network technology—Network performance depends both on the performance of switches and on the performance of the transmission system. We discuss the trends in networking in Appendix F.

These rapidly changing technologies shape the design of a computer that, with speed and technology enhancements, may have a lifetime of 3–5 years. Key technologies such as Flash change sufficiently that the designer must plan for these changes. Indeed, designers often design for the next technology, knowing that, when a product begins shipping in volume, the following technology may be the most cost-effective or may have performance advantages. Traditionally, cost has decreased at about the rate at which density increases.
Although technology improves continuously, the impact of these increases can be in discrete leaps, as a threshold that allows a new capability is reached. For example, when MOS technology reached a point in the early 1980s where between 25,000 and 50,000 transistors could fit on a single chip, it became possible to build a single-chip, 32-bit microprocessor. By the late 1980s, first-level caches could go on a chip. By eliminating chip crossings within the processor and between the processor and the cache, a dramatic improvement in cost-performance and energy-performance was possible. This design was simply unfeasible until the technology reached a certain point. With multicore microprocessors and increasing numbers of cores each generation, even server computers are increasingly headed toward a single chip for all processors. Such technology thresholds are not rare and have a significant impact on a wide variety of design decisions.

Performance Trends: Bandwidth Over Latency
As we shall see in Section 1.8, bandwidth or throughput is the total amount of work done in a given time, such as megabytes per second for a disk transfer. In contrast, latency or response time is the time between the start and the completion of an event, such as milliseconds for a disk access. Figure 1.9 plots the relative improvement in bandwidth and latency for technology milestones for microprocessors, memory, networks, and disks. Figure 1.10 describes the examples and milestones in more detail.
Performance is the primary differentiator for microprocessors and networks, so they have seen the greatest gains: 32,000–40,000 in bandwidth and 50–90 in latency. Capacity is generally more important than performance for memory and disks, so capacity has improved more, yet bandwidth advances of 400–2400 are still much greater than gains in latency of 8–9.
Clearly, bandwidth has outpaced latency across these technologies and will likely continue to do so. A simple rule of thumb is that bandwidth grows by at least the square of the improvement in latency. Computer designers should plan accordingly.
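A quick check of that rule of thumb against the milestone gains quoted above, taking the low end of each range:

    # Rule of thumb: bandwidth gain >= (latency gain) squared.
    milestones = {
        "microprocessors/networks": (50, 32_000),   # (latency gain, bandwidth gain)
        "memory/disks": (8, 400),
    }
    for name, (latency_gain, bandwidth_gain) in milestones.items():
        print(name, bandwidth_gain >= latency_gain ** 2)   # True in both cases
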

[Figure 1.9 and Figure 1.10: relative improvement in bandwidth and latency for milestones in microprocessors, memory, networks, and disks]

Scaling of Transistor Performance and Wires
Integrated circuit processes are characterized by the feature size, which is the minimum size of a transistor or a wire in either the x or y dimension. Feature sizes decreased from 10 μm in 1971 to 0.016 μm in 2017; in fact, we have switched units, so production in 2017 is referred to as "16 nm," and 7 nm chips are underway. Since the transistor count per square millimeter of silicon is determined by the surface area of a transistor, the density of transistors increases quadratically with a linear decrease in feature size.
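A small sketch of that quadratic relationship, using the 16 nm and 7 nm feature sizes mentioned above; this is the idealized scaling, and real processes deviate from it.

    # Density scales roughly with the square of the linear feature-size shrink.
    old_feature_nm = 16.0
    new_feature_nm = 7.0
    linear_shrink = old_feature_nm / new_feature_nm     # ~2.3x in each dimension
    density_gain = linear_shrink ** 2                   # ~5.2x more transistors per area
    print(round(linear_shrink, 1), round(density_gain, 1))
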


The increase in transistor performance, however, is more complex. As feature sizes shrink, devices shrink quadratically in the horizontal dimension and also shrink in the vertical dimension. The shrink in the vertical dimension requires a reduction in operating voltage to maintain correct operation and reliability of the transistors. This combination of scaling factors leads to a complex interrelationship between transistor performance and process feature size. To a first approximation, in the past the transistor performance improved linearly with decreasing feature size. The fact that transistor count improves quadratically with a linear increase in transistor performance is both the challenge and the opportunity for which computer architects were created! In the early days of microprocessors, the higher rate of improvement in density was used to move quickly