Structure and Interpretation of Computer Programs

second edition

Harold Abelson and Gerald Jay Sussman
with Julie Sussman

foreword by Alan J. Perlis

The MIT Press
Cambridge, Massachusetts
London, England

McGraw-Hill Book Company
New York  St. Louis  San Francisco  Montreal  Toronto

This book is one of a series of texts written by faculty of the Electrical Engineering and Computer
Science Department at the Massachusetts Institute of Technology. It was edited and produced by
The MIT Press under a joint production-distribution arrangement with the McGraw-Hill Book
Company.

Ordering Information:

North America
Text orders should be addressed to the McGraw-Hill Book Company.
All other orders should be addressed to The MIT Press.

Outside North America
All orders should be addressed to The MIT Press or its local distributor.

© 1996 by The Massachusetts Institute of Technology

Second edition

All rights reserved. No part of this book may be reproduced in any form or by any electronic or
mechanical means (including photocopying, recording, or information storage and retrieval)
without permission in writing from the publisher.

This book was set by the authors using the LaTeX typesetting system and was printed and bound
in the United States of America.

Library of Congress Cataloging-in-Publication Data

Abelson, Harold
  Structure and interpretation of computer programs / Harold Abelson
  and Gerald Jay Sussman, with Julie Sussman. -- 2nd ed.
  p. cm. -- (Electrical engineering and computer science series)
  Includes bibliographical references and index.
  ISBN 0-262-01153-0 (MIT Press hardcover)
  ISBN 0-262-51087-1 (MIT Press paperback)
  ISBN 0-07-000484-6 (McGraw-Hill hardcover)
  1. Electronic digital computers -- Programming. 2. LISP (Computer
  program language) I. Sussman, Gerald Jay. II. Sussman, Julie.
  III. Title. IV. Series: MIT electrical engineering and computer
  science series.
  QA76.6.A255    1996
  005.13’3 -- dc20    96-17756

Fourth printing, 1999

This book is dedicated, in respect and admiration, to the spirit that lives in the computer.

"I think that it’s extraordinarily important that we in computer science keep fun in computing. When
it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and
then, and after a while we began to take their complaints seriously. We began to feel as if we really
were responsible for the successful, error-free perfect use of these machines. I don’t think we are. I
think we’re responsible for stretching them, setting them off in new directions, and keeping fun in the
house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don’t
become missionaries. Don’t feel as if you’re Bible salesmen. The world has too many of those already.
What you know about computing other people will learn. Don’t feel as if the key to successful
computing is only in your hands. What’s in your hands, I think and hope, is intelligence: the ability to
see the machine as more than when you were first led up to it, that you can make it more."

Alan J. Perlis (April 1, 1922 - February 7, 1990)

Contents

Foreword
Preface to the Second Edition
Preface to the First Edition
Acknowledgments

1 Building Abstractions with Procedures
1.1 The Elements of Programming
1.1.1 Expressions
1.1.2 Naming and the Environment
1.1.3 Evaluating Combinations
1.1.4 Compound Procedures
1.1.5 The Substitution Model for Procedure Application
1.1.6 Conditional Expressions and Predicates
1.1.7 Example: Square Roots by Newton’s Method
1.1.8 Procedures as Black-Box Abstractions
1.2 Procedures and the Processes They Generate
1.2.1 Linear Recursion and Iteration
1.2.2 Tree Recursion
1.2.3 Orders of Growth
1.2.4 Exponentiation
1.2.5 Greatest Common Divisors
1.2.6 Example: Testing for Primality
1.3 Formulating Abstractions with Higher-Order Procedures
1.3.1 Procedures as Arguments
1.3.2 Constructing Procedures Using Lambda
1.3.3 Procedures as General Methods
1.3.4 Procedures as Returned Values

2 Building Abstractions with Data
2.1 Introduction to Data Abstraction
2.1.1 Example: Arithmetic Operations for Rational Numbers
2.1.2 Abstraction Barriers
2.1.3 What Is Meant by Data?
2.1.4 Extended Exercise: Interval Arithmetic
2.2 Hierarchical Data and the Closure Property
2.2.1 Representing Sequences
2.2.2 Hierarchical Structures
2.2.3 Sequences as Conventional Interfaces
2.2.4 Example: A Picture Language
2.3 Symbolic Data
2.3.1 Quotation
2.3.2 Example: Symbolic Differentiation
2.3.3 Example: Representing Sets
2.3.4 Example: Huffman Encoding Trees
2.4 Multiple Representations for Abstract Data
2.4.1 Representations for Complex Numbers
2.4.2 Tagged data
2.4.3 Data-Directed Programming and Additivity
2.5 Systems with Generic Operations
2.5.1 Generic Arithmetic Operations
2.5.2 Combining Data of Different Types
2.5.3 Example: Symbolic Algebra

3 Modularity, Objects, and State
3.1 Assignment and Local State
3.1.1 Local State Variables
3.1.2 The Benefits of Introducing Assignment
3.1.3 The Costs of Introducing Assignment
3.2 The Environment Model of Evaluation
3.2.1 The Rules for Evaluation
3.2.2 Applying Simple Procedures
3.2.3 Frames as the Repository of Local State
3.2.4 Internal Definitions
3.3 Modeling with Mutable Data
3.3.1 Mutable List Structure
3.3.2 Representing Queues
3.3.3 Representing Tables
3.3.4 A Simulator for Digital Circuits
3.3.5 Propagation of Constraints
3.4 Concurrency: Time Is of the Essence
3.4.1 The Nature of Time in Concurrent Systems
3.4.2 Mechanisms for Controlling Concurrency
3.5 Streams
3.5.1 Streams Are Delayed Lists
3.5.2 Infinite Streams
3.5.3 Exploiting the Stream Paradigm
3.5.4 Streams and Delayed Evaluation
3.5.5 Modularity of Functional Programs and Modularity of Objects

4 Metalinguistic Abstraction
4.1 The Metacircular Evaluator
4.1.1 The Core of the Evaluator
4.1.2 Representing Expressions
4.1.3 Evaluator Data Structures
4.1.4 Running the Evaluator as a Program
4.1.5 Data as Programs
4.1.6 Internal Definitions
4.1.7 Separating Syntactic Analysis from Execution
4.2 Variations on a Scheme -- Lazy Evaluation
4.2.1 Normal Order and Applicative Order
4.2.2 An Interpreter with Lazy Evaluation
4.2.3 Streams as Lazy Lists
4.3 Variations on a Scheme -- Nondeterministic Computing
4.3.1 Amb and Search
4.3.2 Examples of Nondeterministic Programs
4.3.3 Implementing the Amb Evaluator
4.4 Logic Programming
4.4.1 Deductive Information Retrieval
4.4.2 How the Query System Works
4.4.3 Is Logic Programming Mathematical Logic?
4.4.4 Implementing the Query System

5 Computing with Register Machines
5.1 Designing Register Machines
5.1.1 A Language for Describing Register Machines
5.1.2 Abstraction in Machine Design
5.1.3 Subroutines
5.1.4 Using a Stack to Implement Recursion
5.1.5 Instruction Summary
5.2 A Register-Machine Simulator
5.2.1 The Machine Model
5.2.2 The Assembler
5.2.3 Generating Execution Procedures for Instructions
5.2.4 Monitoring Machine Performance
5.3 Storage Allocation and Garbage Collection
5.3.1 Memory as Vectors
5.3.2 Maintaining the Illusion of Infinite Memory
5.4 The Explicit-Control Evaluator
5.4.1 The Core of the Explicit-Control Evaluator
5.4.2 Sequence Evaluation and Tail Recursion
5.4.3 Conditionals, Assignments, and Definitions
5.4.4 Running the Evaluator
5.5 Compilation
5.5.1 Structure of the Compiler
5.5.2 Compiling Expressions
5.5.3 Compiling Combinations
5.5.4 Combining Instruction Sequences
5.5.5 An Example of Compiled Code
5.5.6 Lexical Addressing
5.5.7 Interfacing Compiled Code to the Evaluator

References
List of Exercises
Index

Foreword

Educators, generals, dieticians, psychologists, and parents program. Armies, students, and some
societies are programmed. An assault on large problems employs a succession of programs, most of
which spring into existence en route. These programs are rife with issues that appear to be particular to
the problem at hand. To appreciate programming as an intellectual activity in its own right you must
turn to computer programming; you must read and write computer programs -- many of them. It
doesn’t matter much what the programs are about or what applications they serve. What does matter is
how well they perform and how smoothly they fit with other programs in the creation of still greater
programs. The programmer must seek both perfection of part and adequacy of collection. In this book
the use of "program" is focused on the creation, execution, and study of programs written in a dialect
of Lisp for execution on a digital computer. Using Lisp we restrict or limit not what we may program,
but only the notation for our program descriptions.

Our traffic with the subject matter of this book involves us with three foci of phenomena: the human
mind, collections of computer programs, and the computer. Every computer program is a model,
hatched in the mind, of a real or mental process. These processes, arising from human experience and
thought, are huge in number, intricate in detail, and at any time only partially understood. They are
modeled to our permanent satisfaction rarely by our computer programs. Thus even though our
programs are carefully handcrafted discrete collections of symbols, mosaics of interlocking functions,
they continually evolve: we change them as our perception of the model deepens, enlarges, generalizes
until the model ultimately attains a metastable place within still another model with which we struggle.
The source of the exhilaration associated with computer programming is the continual unfolding
within the mind and on the computer of mechanisms expressed as programs and the explosion of
perception they generate. If art interprets our dreams, the computer executes them in the guise of
programs!

For all its power, the computer is a harsh taskmaster. Its programs must be correct, and what we wish
to say must be said accurately in every detail. As in every other symbolic activity, we become
convinced of program truth through argument. Lisp itself can be assigned a semantics (another model,
by the way), and if a program’s function can be specified, say, in the predicate calculus, the proof
methods of logic can be used to make an acceptable correctness argument. Unfortunately, as programs
get large and complicated, as they almost always do, the adequacy, consistency, and correctness of the
specifications themselves become open to doubt, so that complete formal arguments of correctness
seldom accompany large programs. Since large programs grow from small ones, it is crucial that we
develop an arsenal of standard program structures of whose correctness we have become sure -- we
call them idioms -- and learn to combine them into larger structures using organizational techniques of
proven value. These techniques are treated at length in this book, and understanding them is essential
to participation in the Promethean enterprise called programming. More than anything else, the
uncovering and mastery of powerful organizational techniques accelerates our ability to create large,
significant programs. Conversely, since writing large programs is very taxing, we are stimulated to
invent new methods of reducing the mass of function and detail to be fitted into large programs.

Unlike programs, computers must obey the laws of physics. If they wish to perform rapidly -- a few
nanoseconds per state change -- they must transmit electrons only small distances (at most 1 1/2 feet).
The heat generated by the huge number of devices so concentrated in space has to be removed. An
exquisite engineering art has been developed balancing between multiplicity of function and density of
devices. In any event, hardware always operates at a level more primitive than that at which we care to
program. The processes that transform our Lisp programs to "machine" programs are themselves
abstract models which we program. Their study and creation give a great deal of insight into the
organizational programs associated with programming arbitrary models. Of course the computer itself
can be so modeled. Think of it: the behavior of the smallest physical switching element is modeled by
quantum mechanics described by differential equations whose detailed behavior is captured by
numerical approximations represented in computer programs executing on computers composed of
...!

It is not merely a matter of tactical convenience to separately identify the three foci. Even though, as
they say, it’s all in the head, this logical separation induces an acceleration of symbolic traffic between
these foci whose richness, vitality, and potential is exceeded in human experience only by the
evolution of life itself. At best, relationships between the foci are metastable. The computers are never
large enough or fast enough. Each breakthrough in hardware technology leads to more massive
programming enterprises, new organizational principles, and an enrichment of abstract models. Every
reader should ask himself periodically "Toward what end, toward what end?" -- but do not ask it too
often lest you pass up the fun of programming for the constipation of bittersweet philosophy.

Among the programs we write, some (but never enough) perform a precise mathematical function such
as sorting or finding the maximum of a sequence of numbers, determining primality, or finding the
square root. We call such programs algorithms, and a great deal is known of their optimal behavior,
particularly with respect to the two important parameters of execution time and data storage
requirements. A programmer should acquire good algorithms and idioms. Even though some programs
resist precise specifications, it is the responsibility of the programmer to estimate, and always to
attempt to improve, their performance.

Lisp is a survivor, having been in use for about a quarter of a century. Among the active programming
languages only Fortran has had a longer life. Both languages have supported the programming needs
of important areas of application, Fortran for scientific and engineering computation and Lisp for
artificial intelligence. These two areas continue to be important, and their programmers are so devoted
to these two languages that Lisp and Fortran may well continue in active use for at least another
quarter-century.

Lisp changes. The Scheme dialect used in this text has evolved from the original Lisp and differs from
the latter in several important ways, including static scoping for variable binding and permitting
functions to yield functions as values. In its semantic structure Scheme is as closely akin to Algol 60
as to early Lisps. Algol 60, never to be an active language again, lives on in the genes of Scheme and
Pascal. It would be difficult to find two languages that are the communicating coin of two more
different cultures than those gathered around these two languages. Pascal is for building pyramids --
imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for
building organisms -- imposing, breathtaking, dynamic structures built by squads fitting fluctuating
myriads of simpler organisms into place. The organizing principles used are the same in both cases,
except for one extraordinarily important difference: The discretionary exportable functionality
entrusted to the individual Lisp programmer is more than an order of magnitude greater than that to be
found within Pascal enterprises. Lisp programs inflate libraries with functions whose utility transcends
the application that produced them. The list, Lisp’s native data structure, is largely responsible for such
growth of utility. The simple structure and natural applicability of lists are reflected in functions that
are amazingly nonidiosyncratic. In Pascal the plethora of declarable data structures induces a
specialization within functions that inhibits and penalizes casual cooperation. It is better to have 100
functions operate on one data structure than to have 10 functions operate on 10 data structures. As a
result the pyramid must stand unchanged for a millennium; the organism must evolve or perish.

To illustrate this difference, compare the treatment of material and exercises within this book with that
in any first-course text using Pascal. Do not labor under the illusion that this is a text digestible at MIT
only, peculiar to the breed found there. It is precisely what a serious book on programming Lisp must
be, no matter who the student is or where it is used.

Note that this is a text about programming, unlike most Lisp books, which are used as a preparation for
work in artificial intelligence. After all, the critical programming concerns of software engineering and
artificial intelligence tend to coalesce as the systems under investigation become larger. This explains
why there is such growing interest in Lisp outside of artificial intelligence.

As one would expect from its goals, artificial intelligence research generates many significant
programming problems. In other programming cultures this spate of problems spawns new languages.
Indeed, in any very large programming task a useful organizing principle is to control and isolate
traffic within the task modules via the invention of language. These languages tend to become less
primitive as one approaches the boundaries of the system where we humans interact most often. As a
result, such systems contain complex language-processing functions replicated many times. Lisp has
such a simple syntax and semantics that parsing can be treated as an elementary task. Thus parsing
technology plays almost no role in Lisp programs, and the construction of language processors is
rarely an impediment to the rate of growth and change of large Lisp systems. Finally, it is this very
simplicity of syntax and semantics that is responsible for the burden and freedom borne by all Lisp
programmers. No Lisp program of any size beyond a few lines can be written without being saturated
with discretionary functions. Invent and fit; have fits and reinvent! We toast the Lisp programmer who
pens his thoughts within nests of parentheses.

Alan J. Perlis
New Haven, Connecticut

Preface to the Second Edition

Is it possible that software is not like anything else, that it
is meant to be discarded: that the whole point is to always
see it as a soap bubble?

Alan J. Perlis

The material in this book has been the basis of MIT’s entry-level computer science subject since 1980.
We had been teaching this material for four years when the first edition was published, and twelve
more years have elapsed until the appearance of this second edition. We are pleased that our work has
been widely adopted and incorporated into other texts. We have seen our students take the ideas and
programs in this book and build them in as the core of new computer systems and languages. In literal
realization of an ancient Talmudic pun, our students have become our builders. We are lucky to have
such capable students and such accomplished builders.

In preparing this edition, we have incorporated hundreds of clarifications suggested by our own
teaching experience and the comments of colleagues at MIT and elsewhere. We have redesigned most
of the major programming systems in the book, including the generic-arithmetic system, the
interpreters, the register-machine simulator, and the compiler; and we have rewritten all the program
examples to ensure that any Scheme implementation conforming to the IEEE Scheme standard (IEEE
1990) will be able to run the code.

This edition emphasizes several new themes. The most important of these is the central role played by
different approaches to dealing with time in computational models: objects with state, concurrent
programming, functional programming, lazy evaluation, and nondeterministic programming. We have
included new sections on concurrency and nondeterminism, and we have tried to integrate this theme
throughout the book.

The first edition of the book closely followed the syllabus of our MIT one-semester subject. With all
the new material in the second edition, it will not be possible to cover everything in a single semester,
so the instructor will have to pick and choose. In our own teaching, we sometimes skip the section on
logic programming (section 4.4), we have students use the register-machine simulator but we do not
cover its implementation (section 5.2), and we give only a cursory overview of the compiler
(section 5.5). Even so, this is still an intense course. Some instructors may wish to cover only the first
three or four chapters, leaving the other material for subsequent courses.

The World-Wide-Web site www-mitpress.mit.edu/sicp provides support for users of this
book. This includes programs from the book, sample programming assignments, supplementary
materials, and downloadable implementations of the Scheme dialect of Lisp.

Preface to the First Edition

A computer is like a violin. You can imagine a novice
trying first a phonograph and then a violin. The latter, he
says, sounds terrible. That is the argument we have heard
from our humanists and most of our computer scientists.
Computer programs are good, they say, for particular
purposes, but they aren’t flexible. Neither is a violin, or a
typewriter, until you learn how to use it.

Marvin Minsky, "Why Programming Is a Good
Medium for Expressing Poorly-Understood and
Sloppily-Formulated Ideas"

"The Structure and Interpretation of Computer Programs" is the entry-level subject in computer
science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in
electrical engineering or in computer science, as one-fourth of the "common core curriculum," which
also includes two subjects on circuits and linear systems and a subject on the design of digital systems.
We have been involved in the development of this subject since 1978, and we have taught this material
in its present form since the fall of 1980 to between 600 and 700 students each year. Most of these
students have had little or no prior formal training in computation, although many have played with
computers a bit and a few have had extensive programming or hardware-design experience.

Our design of this introductory computer-science subject reflects two major concerns. First, we want
to establish the idea that a computer language is not just a way of getting a computer to perform
operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus,
programs must be written for people to read, and only incidentally for machines to execute. Second,
we believe that the essential material to be addressed by a subject at this level is not the syntax of
particular programming-language constructs, nor clever algorithms for computing particular functions
efficiently, nor even the mathematical analysis of algorithms and the foundations of computing, but
rather the techniques used to control the intellectual complexity of large software systems.

Our goal is that students who complete this subject should have a good feel for the elements of style
and the aesthetics of programming. They should have command of the major techniques for
controlling complexity in a large system. They should be capable of reading a 50-page-long program,
if it is written in an exemplary style. They should know what not to read, and what they need not
understand at any moment. They should feel secure about modifying a program, retaining the spirit
and style of the original author.

These skills are by no means unique to computer programming. The techniques we teach and draw
upon are common to all of engineering design. We control complexity by building abstractions that
hide details when appropriate. We control complexity by establishing conventional interfaces that
enable us to construct systems by combining standard, well-understood pieces in a "mix and match"
way. We control complexity by establishing new languages for describing a design, each of which
emphasizes particular aspects of the design and deemphasizes others.

Underlying our approach to this subject is our conviction that "computer science" is not a science and
that its significance has little to do with computers. The computer revolution is a revolution in the way
we think and in the way we express what we think. The essence of this change is the emergence of
what might best be called procedural epistemology -- the study of the structure of knowledge from an
imperative point of view, as opposed to the more declarative point of view taken by classical
mathematical subjects. Mathematics provides a framework for dealing precisely with notions of "what
is." Computation provides a framework for dealing precisely with notions of "how to."

In teaching our material we use a dialect of the programming language Lisp. We never formally teach
the language, because we don’t have to. We just use it, and students pick it up in a few days. This is
one great advantage of Lisp-like languages: They have very few ways of forming compound
expressions, and almost no syntactic structure. All of the formal properties can be covered in an hour,
like the rules of chess. After a short time we forget about syntactic details of the language (because
there are none) and get on with the real issues -- figuring out what we want to compute, how we will
decompose problems into manageable parts, and how we will work on the parts. Another advantage of
Lisp is that it supports (but does not enforce) more of the large-scale strategies for modular
decomposition of programs than any other language we know. We can make procedural and data
abstractions, we can use higher-order functions to capture common patterns of usage, we can model
local state using assignment and data mutation, we can link parts of a program with streams and
delayed evaluation, and we can easily implement embedded languages. All of this is embedded in an
interactive environment with excellent support for incremental program design, construction, testing,
and debugging. We thank all the generations of Lisp wizards, starting with John McCarthy, who have
fashioned a fine tool of unprecedented power and elegance.

Scheme, the dialect of Lisp that we use, is an attempt to bring together the power and elegance of Lisp
and Algol. From Lisp we take the metalinguistic power that derives from the simple syntax, the
uniform representation of programs as data objects, and the garbage-collected heap-allocated data.
From Algol we take lexical scoping and block structure, which are gifts from the pioneers of
programming-language design who were on the Algol committee. We wish to cite John Reynolds and
Peter Landin for their insights into the relationship of Church’s lambda calculus to the structure of
programming languages. We also recognize our debt to the mathematicians who scouted out this
territory decades before computers appeared on the scene. These pioneers include Alonzo Church,
Barkley Rosser, Stephen Kleene, and Haskell Curry.

Acknowledgments

We would like to thank the many people who have helped us develop this book and this curriculum.

Our subject is a clear intellectual descendant of "6.231," a wonderful subject on programming
linguistics and the lambda calculus taught at MIT in the late 1960s by Jack Wozencraft and Arthur
Evans, Jr.

We owe a great debt to Robert Fano, who reorganized MIT’s introductory curriculum in electrical
engineering and computer science to emphasize the principles of engineering design. He led us in
starting out on this enterprise and wrote the first set of subject notes from which this book evolved.

Much of the style and aesthetics of programming that we try to teach were developed in conjunction
with Guy Lewis Steele Jr., who collaborated with Gerald Jay Sussman in the initial development of the
Scheme language. In addition, David Turner, Peter Henderson, Dan Friedman, David Wise, and Will
Clinger have taught us many of the techniques of the functional programming community that appear
in this book.

Joel Moses taught us about structuring large systems. His experience with the Macsyma system for
symbolic computation provided the insight that one should avoid complexities of control and
concentrate on organizing the data to reflect the real structure of the world being modeled.

Marvin Minsky and Seymour Papert formed many of our attitudes about programming and its place in
our intellectual lives. To them we owe the understanding that computation provides a means of
expression for exploring ideas that would otherwise be too complex to deal with precisely. They
emphasize that a student’s ability to write and modify programs provides a powerful medium in which
exploring becomes a natural activity.

We also strongly agree with Alan Perlis that programming is lots of fun and we had better be careful to
support the joy of programming. Part of this joy derives from observing great masters at work. We are
fortunate to have been apprentice programmers at the feet of Bill Gosper and Richard Greenblatt.

It is difficult to identify all the people who have contributed to the development of our curriculum. We
thank all the lecturers, recitation instructors, and tutors who have worked with us over the past fifteen
years and put in many extra hours on our subject, especially Bill Siebert, Albert Meyer, Joe Stoy,
Randy Davis, Louis Braida, Eric Grimson, Rod Brooks, Lynn Stein, and Peter Szolovits. We would
like to specially acknowledge the outstanding teaching contributions of Franklyn Turbak, now at
Wellesley; his work in undergraduate instruction set a standard that we can all aspire to. We are
grateful to Jerry Saltzer and Jim Miller for helping us grapple with the mysteries of concurrency, and
to Peter Szolovits and David McAllester for their contributions to the exposition of nondeterministic
evaluation in chapter 4.

Many people have put in significant effort presenting this material at other universities. Some of the
people we have worked closely with are Jacob Katzenelson at the Technion, Hardy Mayer at the
University of California at Irvine, Joe Stoy at Oxford, Elisha Sacks at Purdue, and Jan Komorowski at
the Norwegian University of Science and Technology. We are exceptionally proud of our colleagues
who have received major teaching awards for their adaptations of this subject at other universities,
including Kenneth Yip at Yale, Brian Harvey at the University of California at Berkeley, and Dan
Huttenlocher at Cornell.

Al Moyé arranged for us to teach this material to engineers at Hewlett-Packard, and for the production
of videotapes of these lectures. We would like to thank the talented instructors -- in particular Jim
Miller, Bill Siebert, and Mike Eisenberg -- who have designed continuing education courses
incorporating these tapes and taught them at universities and industry all over the world.

Many educators in other countries have put in significant work translating the first edition. Michel
Briand, Pierre Chamard, and André Pic produced a French edition; Susanne Daniels-Herold produced
a German edition; and Fumio Motoyoshi produced a Japanese edition. We do not know who produced
the Chinese edition, but we consider it an honor to have been selected as the subject of an
"unauthorized" translation.

It is hard to enumerate all the people who have made technical contributions to the development of the
Scheme systems we use for instructional purposes. In addition to Guy Steele, principal wizards have
included Chris Hanson, Joe Bowbeer, Jim Miller, Guillermo Rozas, and Stephen Adams. Others who
have put in significant time are Richard Stallman, Alan Bawden, Kent Pitman, Jon Taft, Neil Mayle,
John Lamping, Gwyn Osnos, Tracy Larrabee, George Carrette, Soma Chaudhuri, Bill Chiarchiaro,
Steven Kirsch, Leigh Klotz, Wayne Noss, Todd Cass, Patrick O’Donnell, Kevin Theobald, Daniel
Weise, Kenneth Sinclair, Anthony Courtemanche, Henry M. Wu, Andrew Berlin, and Ruth Shyu.

Beyond the MIT implementation, we would like to thank the many people who worked on the IEEE
Scheme standard, including William Clinger and Jonathan Rees, who edited the R4RS, and Chris
Haynes, David Bartley, Chris Hanson, and Jim Miller, who prepared the IEEE standard.

Dan Friedman has been a long-time leader of the Scheme community. The community’s broader work
goes beyond issues of language design to encompass significant educational innovations, such as the
high-school curriculum based on EdScheme by Schemer’s Inc., and the wonderful books by Mike
Eisenberg and by Brian Harvey and Matthew Wright.

We appreciate the work of those who contributed to making this a real book, especially Terry Ehling,
Larry Cohen, and Paul Bethge at the MIT Press. Ella Mazel found the wonderful cover image. For the
second edition we are particularly grateful to Bernard and Ella Mazel for help with the book design,
and to David Jones, TeX wizard extraordinaire. We also are indebted to those readers who made
penetrating comments on the new draft: Jacob Katzenelson, Hardy Mayer, Jim Miller, and especially
Brian Harvey, who did unto this book as Julie did unto his book Simply Scheme.

Finally, we would like to acknowledge the support of the organizations that have encouraged this work
over the years, including support from Hewlett-Packard, made possible by Ira Goldstein and Joel
Birnbaum, and support from DARPA, made possible by Bob Kahn.

Chapter 1

Building Abstractions with Procedures

The acts of the mind, wherein it exerts its power over
simple ideas, are chiefly these three: 1. Combining
several simple ideas into one compound one, and thus all
complex ideas are made. 2. The second is bringing two
ideas, whether simple or complex, together, and setting
them by one another so as to take a view of them at once,
without uniting them into one, by which it gets all its
ideas of relations. 3. The third is separating them from all
other ideas that accompany them in their real existence:
this is called abstraction, and thus all its general ideas are
made.

John Locke, An Essay Concerning Human Understanding
(1690)

We are about to study the idea of a computational process. Computational processes are abstract
beings that inhabit computers. As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules called a program. People create programs to
direct processes. In effect, we conjure the spirits of the computer with our spells.

A computational process is indeed much like a sorcerer’s idea of a spirit. It cannot be seen or touched.
It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can
answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm
in a factory. The programs we use to conjure processes are like a sorcerer’s spells. They are carefully
composed from symbolic expressions in arcane and esoteric programming languages that prescribe the
tasks we want our processes to perform.

A computational process, in a correctly working computer, executes programs precisely and
accurately. Thus, like the sorcerer’s apprentice, novice programmers must learn to understand and to
anticipate the consequences of their conjuring. Even small errors (usually called bugs or glitches) in
programs can have complex and unanticipated consequences.

Fortunately, learning to program is considerably less dangerous than learning sorcery, because the
spirits we deal with are conveniently contained in a secure way. Real-world programming, however,
requires care, expertise, and wisdom. A small bug in a computer-aided design program, for example,
can lead to the catastrophic collapse of an airplane or a dam or the self-destruction of an industrial
robot.

Master software engineers have the ability to organize programs so that they can be reasonably sure
that the resulting processes will perform the tasks intended. They can visualize the behavior of their
systems in advance. They know how to structure programs so that unanticipated problems do not lead
to catastrophic consequences, and when problems do arise, they can debug their programs.
Well-designed computational systems, like well-designed automobiles or nuclear reactors, are
designed in a modular manner, so that the parts can be constructed, replaced, and debugged separately.

Programming in Lisp

We need an appropriate language for describing processes, and we will use for this purpose the
programming language Lisp. Just as our everyday thoughts are usually expressed in our natural
language (such as English, French, or Japanese), and descriptions of quantitative phenomena are
expressed with mathematical notations, our procedural thoughts will be expressed in Lisp. Lisp was
invented in the late 1950s as a formalism for reasoning about the use of certain kinds of logical
expressions, called recursion equations, as a model for computation. The language was conceived by
John McCarthy and is based on his paper "Recursive Functions of Symbolic Expressions and Their
Computation by Machine" (McCarthy 1960).

Despite its inception as a mathematical formalism, Lisp is a practical programming language. A Lisp
interpreter is a machine that carries out processes described in the Lisp language. The first Lisp
interpreter was implemented by McCarthy with the help of colleagues and students in the Artificial
Intelligence Group of the MIT Research Laboratory of Electronics and in the MIT Computation
Center.[1] Lisp, whose name is an acronym for LISt Processing, was designed to provide
symbol-manipulating capabilities for attacking programming problems such as the symbolic
differentiation and integration of algebraic expressions. It included for this purpose new data objects
known as atoms and lists, which most strikingly set it apart from all other languages of the period.

Lisp was not the product of a concerted design effort. Instead, it evolved informally in an experimental
manner in response to users’ needs and to pragmatic implementation considerations. Lisp’s informal
evolution has continued through the years, and the community of Lisp users has traditionally resisted
attempts to promulgate any "official" definition of the language. This evolution, together with the
flexibility and elegance of the initial conception, has enabled Lisp, which is the second oldest language
in widespread use today (only Fortran is older), to continually adapt to encompass the most modern
ideas about program design. Thus, Lisp is by now a family of dialects, which, while sharing most of
the original features, may differ from one another in significant ways. The dialect of Lisp used in this
book is called Scheme.[2]

Because of its experimental character and its emphasis on symbol manipulation, Lisp was at first very
inefficient for numerical computations, at least in comparison with Fortran. Over the years, however,
Lisp compilers have been developed that translate programs into machine code that can perform
numerical computations reasonably efficiently. And for special applications, Lisp has been used with
great effectiveness.[3] Although Lisp has not yet overcome its old reputation as hopelessly inefficient,
Lisp is now used in many applications where efficiency is not the central concern. For example, Lisp
has become a language of choice for operating-system shell languages and for extension languages for
editors and computer-aided design systems.

If Lisp is not a mainstream language, why are we using it as the framework for our discussion of
programming? Because the language possesses unique features that make it an excellent medium for
studying important programming constructs and data structures and for relating them to the linguistic
features that support them. The most significant of these features is the fact that Lisp descriptions of
processes, called procedures, can themselves be represented and manipulated as Lisp data. The
importance of this is that there are powerful program-design techniques that rely on the ability to blur
the traditional distinction between "passive" data and "active" processes. As we shall discover,
Lisp’s flexibility in handling procedures as data makes it one of the most convenient languages in
existence for exploring these techniques. The ability to represent procedures as data also makes Lisp
an excellent language for writing programs that must manipulate other programs as data, such as the
interpreters and compilers that support computer languages. Above and beyond these considerations,
programming in Lisp is great fun.

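As a small foretaste of this flexibility (higher-order procedures are developed properly in section 1.3), here is a sketch in ordinary Scheme. The procedure name twice is invented for this illustration, not taken from the book:

(define (twice f x)   ; f is itself a procedure, received as ordinary data
  (f (f x)))

(define (square x) (* x x))

(twice square 3)      ; applies square twice: (square (square 3)) => 81
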
[1] The Lisp 1 Programmer’s Manual appeared in 1960, and the Lisp 1.5 Programmer’s Manual
(McCarthy 1965) was published in 1962. The early history of Lisp is described in McCarthy 1978.

[2] The two dialects in which most major Lisp programs of the 1970s were written are MacLisp (Moon
1978; Pitman 1983), developed at the MIT Project MAC, and Interlisp (Teitelman 1974), developed at
Bolt Beranek and Newman Inc. and the Xerox Palo Alto Research Center. Portable Standard Lisp
(Hearn 1969; Griss 1981) was a Lisp dialect designed to be easily portable between different
machines. MacLisp spawned a number of subdialects, such as Franz Lisp, which was developed at the
University of California at Berkeley, and Zetalisp (Moon 1981), which was based on a special-purpose
processor designed at the MIT Artificial Intelligence Laboratory to run Lisp very efficiently. The Lisp
dialect used in this book, called Scheme (Steele 1975), was invented in 1975 by Guy Lewis Steele Jr.
and Gerald Jay Sussman of the MIT Artificial Intelligence Laboratory and later reimplemented for
instructional use at MIT. Scheme became an IEEE standard in 1990 (IEEE 1990). The Common Lisp
dialect (Steele 1982, Steele 1990) was developed by the Lisp community to combine features from the
earlier Lisp dialects to make an industrial standard for Lisp. Common Lisp became an ANSI standard
in 1994 (ANSI 1994).

[3] One such special application was a breakthrough computation of scientific importance -- an
integration of the motion of the Solar System that extended previous results by nearly two orders of
magnitude, and demonstrated that the dynamics of the Solar System is chaotic. This computation was
made possible by new integration algorithms, a special-purpose compiler, and a special-purpose
computer, all implemented with the aid of software tools written in Lisp (Abelson et al. 1992; Sussman
and Wisdom 1992).

1.1 The Elements of Programming

A powerful programming language is more than just a means for instructing a computer to perform
tasks. The language also serves as a framework within which we organize our ideas about processes.
Thus, when we describe a language, we should pay particular attention to the means that the language
provides for combining simple ideas to form more complex ideas. Every powerful language has three
mechanisms for accomplishing this:

primitive expressions, which represent the simplest entities the language is concerned with,

means of combination, by which compound elements are built from simpler ones, and

means of abstraction, by which compound elements can be named and manipulated as units.

In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that
they are really not so distinct.) Informally, data is "stuff" that we want to manipulate, and procedures
are descriptions of the rules for manipulating the data. Thus, any powerful programming language
should be able to describe primitive data and primitive procedures and should have methods for
combining and abstracting procedures and data.

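As a one-line preview of how each mechanism looks in Scheme (each is taken up in detail in the subsections that follow; the particular numbers and the name dozen are chosen only for illustration):

486                     ; a primitive expression
(* 6 2)                 ; a means of combination
(define dozen (* 6 2))  ; a means of abstraction: naming a compound element
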
In this chapter we will deal only with simple numerical data so that we can focus on the rules for
building procedures.[4] In later chapters we will see that these same rules allow us to build procedures
to manipulate compound data as well.

1.1.1 Expressions

One easy way to get started at programming is to examine some typical interactions with an interpreter
for the Scheme dialect of Lisp. Imagine that you are sitting at a computer terminal. You type an
expression, and the interpreter responds by displaying the result of its evaluating that expression.

One kind of primitive expression you might type is a number. (More precisely, the expression that you
type consists of the numerals that represent the number in base 10.) If you present Lisp with a number

486

the interpreter will respond by printing[5]

486

Expressions representing numbers may be combined with an expression representing a primitive
procedure (such as + or *) to form a compound expression that represents the application of the
procedure to those numbers. For example:

(+ 137 349)
486
(- 1000 334)
666
(* 5 99)
495
(/ 10 5)
2
(+ 2.7 10)
12.7

Expressions such as these, formed by delimiting a list of expressions within parentheses in order to
denote procedure application, are called combinations. The leftmost element in the list is called the
operator, and the other elements are called operands. The value of a combination is obtained by
applying the procedure specified by the operator to the arguments that are the values of the operands.

The convention of placing the operator to the left of the operands is known as prefix notation, and it
may be somewhat confusing at first because it departs significantly from the customary mathematical
convention. Prefix notation has several advantages, however. One of them is that it can accommodate
procedures that may take an arbitrary number of arguments, as in the following examples:

(+ 21 35 12 7)
75
(* 25 4 12)
1200

No ambiguity can arise, because the operator is always the leftmost element and the entire
combination is delimited by the parentheses.

A second advantage of prefix notation is that it extends in a straightforward way to allow combinations
to be nested, that is, to have combinations whose elements are themselves combinations:

(+ (* 3 5) (- 10 6))
19

There is no limit (in principle) to the depth of such nesting and to the overall complexity of the
expressions that the Lisp interpreter can evaluate. It is we humans who get confused by still relatively
simple expressions such as

(+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))

which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an
expression in the form

(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))

following a formatting convention known as pretty-printing, in which each long combination is
written so that the operands are aligned vertically. The resulting indentations display clearly the
structure of the expression.[6]

Even with complex expressions, the interpreter always operates in the same basic cycle: It reads an
expression from the terminal, evaluates the expression, and prints the result. This mode of operation is
often expressed by saying that the interpreter runs in a read-eval-print loop. Observe in particular that
it is not necessary to explicitly instruct the interpreter to print the value of the expression.[7]

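The read-eval-print loop itself can be written as a very short Scheme program. The sketch below uses the eval and interaction-environment procedures standardized later in R5RS (they are not part of the IEEE standard this book targets), so treat it as an illustration of the cycle rather than a definition of it:

(define (read-eval-print-loop)
  (display "> ")                                   ; prompt for input
  (write (eval (read) (interaction-environment)))  ; read an expression and evaluate it
  (newline)                                        ; print the result
  (read-eval-print-loop))                          ; and do it all again
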
1.1.2 Naming and the Environment

A critical aspect of a programming language is the means it provides for using names to refer to
computational objects. We say that the name identifies a variable whose value is the object.

In the Scheme dialect of Lisp, we name things with define. Typing

(define size 2)

causes the interpreter to associate the value 2 with the name size.[8] Once the name size has been
associated with the number 2, we can refer to the value 2 by name:

size
2
(* 5 size)
10

Here are further examples of the use of define:

(define pi 3.14159)
(define radius 10)
(* pi (* radius radius))
314.159
(define circumference (* 2 pi radius))
circumference
62.8318

Define is our language’s simplest means of abstraction, for it allows us to use simple names to refer
to the results of compound operations, such as the circumference computed above. In general,
computational objects may have very complex structures, and it would be extremely inconvenient to
have to remember and repeat their details each time we want to use them. Indeed, complex programs
are constructed by building, step by step, computational objects of increasing complexity. The
interpreter makes this step-by-step program construction particularly convenient because name-object
associations can be created incrementally in successive interactions. This feature encourages the
incremental development and testing of programs and is largely responsible for the fact that a Lisp
program usually consists of a large number of relatively simple procedures.

It should be clear that the possibility of associating values with symbols and later retrieving them
means that the interpreter must maintain some sort of memory that keeps track of the name-object
pairs. This memory is called the environment (more precisely the global environment, since we will
see later that a computation may involve a number of different environments).[9]

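One deliberately naive way to picture this memory is as a list of name-object pairs, as sketched below. The names environment-define! and environment-lookup are invented for this illustration, the sketch uses assignment (set!), which is not introduced until chapter 3, and error is assumed to signal an error as in most Scheme implementations; no claim is made that any real interpreter stores its bindings this way.

(define the-global-environment '())        ; initially, no associations

(define (environment-define! name object)  ; record a new name-object pair
  (set! the-global-environment
        (cons (cons name object) the-global-environment)))

(define (environment-lookup name)          ; retrieve the object bound to a name
  (let ((binding (assq name the-global-environment)))
    (if binding
        (cdr binding)
        (error "Unbound variable" name))))

(environment-define! 'size 2)
(environment-lookup 'size)                 ; => 2
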
1.1.3 Evaluating Combinations

One of our goals in this chapter is to isolate issues about thinking procedurally. As a case in point, let
us consider that, in evaluating combinations, the interpreter is itself following a procedure.

To evaluate a combination, do the following:

1. Evaluate the subexpressions of the combination.

2. Apply the procedure that is the value of the leftmost subexpression (the operator) to the
   arguments that are the values of the other subexpressions (the operands).

Even this simple rule illustrates some important points about processes in general. First, observe that
the first step dictates that in order to accomplish the evaluation process for a combination we must first
perform the evaluation process on each element of the combination. Thus, the evaluation rule is
recursive in nature; that is, it includes, as one of its steps, the need to invoke the rule itself.[10]

Notice how succinctly the idea of recursion can be used to express what, in the case of a deeply nested
combination, would otherwise be viewed as a rather complicated process. For example, evaluating

(* (+ 2 (* 4 6))
   (+ 3 5 7))

requires that the evaluation rule be applied to four different combinations. We can obtain a picture of
this process by representing the combination in the form of a tree, as shown in figure 1.1. Each
combination is represented by a node with branches corresponding to the operator and the operands of
the combination stemming from it. The terminal nodes (that is, nodes with no branches stemming from
them) represent either operators or numbers. Viewing evaluation in terms of the tree, we can imagine
that the values of the operands percolate upward, starting from the terminal nodes and then combining
at higher and higher levels. In general, we shall see that recursion is a very powerful technique for
dealing with hierarchical, treelike objects. In fact, the "percolate values upward" form of the
evaluation rule is an example of a general kind of process known as tree accumulation.

[Figure 1.1: Tree representation, showing the value of each subcombination.]

Next, observe that the repeated application of the first step brings us to the point where we need to
evaluate, not combinations, but primitive expressions such as numerals, built-in operators, or other
names. We take care of the primitive cases by stipulating that

the values of numerals are the numbers that they name,

the values of built-in operators are the machine instruction sequences that carry out the
corresponding operations, and

the values of other names are the objects associated with those names in the environment.

We may regard the second rule as a special case of the third one by stipulating that symbols such as +
and * are also included in the global environment, and are associated with the sequences of machine
instructions that are their "values." The key point to notice is the role of the environment in
determining the meaning of the symbols in expressions. In an interactive language such as Lisp, it is
meaningless to speak of the value of an expression such as (+ x 1) without specifying any
information about the environment that would provide a meaning for the symbol x (or even for the
symbol +). As we shall see in chapter 3, the general notion of the environment as providing a context
in which evaluation takes place will play an important role in our understanding of program execution.

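To see the rule in miniature, here is a toy evaluator that handles only the cases just described: numerals, two built-in operators, and combinations of these. The name toy-eval and the two-entry operator table are invented for this sketch; the book constructs a full evaluator in chapter 4.

(define operator-table                       ; a tiny "global environment"
  (list (cons '+ +) (cons '* *)))            ; associating symbols with procedures

(define (toy-eval exp)
  (cond ((number? exp) exp)                  ; a numeral names a number
        ((assq exp operator-table) => cdr)   ; an operator’s "value" is a procedure
        ((pair? exp)                         ; a combination: evaluate the operator
         (apply (toy-eval (car exp))         ; and the operands (recursively!),
                (map toy-eval (cdr exp))))   ; then apply
        (else (error "Unknown expression" exp))))

(toy-eval '(* (+ 2 (* 4 6)) (+ 3 5 7)))      ; => 390
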
Notice that the evaluation rule given above does not handle definitions. For instance, evaluating
(define x 3) does not apply define to two arguments, one of which is the value of the symbol
x and the other of which is 3, since the purpose of the define is precisely to associate x with a value.
(That is, (define x 3) is not a combination.)

Such exceptions to the general evaluation rule are called special forms. Define is the only example
of a special form that we have seen so far, but we will meet others shortly. Each special form has its
own evaluation rule. The various kinds of expressions (each with its associated evaluation rule)
constitute the syntax of the programming language. In comparison with most other programming
languages, Lisp has a very simple syntax; that is, the evaluation rule for expressions can be described
by a simple general rule together with specialized rules for a small number of special forms.[11]

805
806 1.1.4 Compound Procedures
807 We have identified in Lisp some of the elements that must appear in any powerful programming
808 language:
809 Numbers and arithmetic operations are primitive data and procedures.
810 Nesting of combinations provides a means of combining operations.
811 Definitions that associate names with values provide a limited means of abstraction.
812 Now we will learn about procedure definitions, a much more powerful abstraction technique by which
813 a compound operation can be given a name and then referred to as a unit.
814 We begin by examining how to express the idea of ‘‘squaring.’’ We might say, ‘‘To square something,
815 multiply it by itself.’’ This is expressed in our language as
816 (define (square x) (* x x))
817 We can understand this in the following way:
(define (square   x)         (*        x     x))
   |       |      |           |        |     |
   To    square something,  multiply  it  by itself.
We have here a compound procedure, which has been given the name square. The procedure
834 represents the operation of multiplying something by itself. The thing to be multiplied is given a local
835 name, x, which plays the same role that a pronoun plays in natural language. Evaluating the definition
836 creates this compound procedure and associates it with the name square. 12
837 The general form of a procedure definition is
838 (define (<name> <formal parameters>) <body>)
839 The <name> is a symbol to be associated with the procedure definition in the environment. 13 The
840 <formal parameters> are the names used within the body of the procedure to refer to the
841 corresponding arguments of the procedure. The <body> is an expression that will yield the value of the
842 procedure application when the formal parameters are replaced by the actual arguments to which the
843 procedure is applied. 14 The <name> and the <formal parameters> are grouped within parentheses,
844 just as they would be in an actual call to the procedure being defined.
845 Having defined square, we can now use it:
846 (square 21)
847 441
848 (square (+ 2 5))
849 49
850 (square (square 3))
851 81
We can also use square as a building block in defining other procedures. For example, x² + y² can
853 be expressed as
854 (+ (square x) (square y))
855 We can easily define a procedure sum-of-squares that, given any two numbers as arguments,
856 produces the sum of their squares:
857 (define (sum-of-squares x y)
858 (+ (square x) (square y)))
859 (sum-of-squares 3 4)
860 25
861 Now we can use sum-of-squares as a building block in constructing further procedures:
862 (define (f a)
863 (sum-of-squares (+ a 1) (* a 2)))
864 (f 5)
865 136
866 Compound procedures are used in exactly the same way as primitive procedures. Indeed, one could
867 not tell by looking at the definition of sum-of-squares given above whether square was built
868 into the interpreter, like + and *, or defined as a compound procedure.
869
1.1.5 The Substitution Model for Procedure Application
871 To evaluate a combination whose operator names a compound procedure, the interpreter follows much
872 the same process as for combinations whose operators name primitive procedures, which we described
873 in section 1.1.3. That is, the interpreter evaluates the elements of the combination and applies the
874 procedure (which is the value of the operator of the combination) to the arguments (which are the
875 values of the operands of the combination).
876 We can assume that the mechanism for applying primitive procedures to arguments is built into the
877 interpreter. For compound procedures, the application process is as follows:
878 To apply a compound procedure to arguments, evaluate the body of the procedure with each
879 formal parameter replaced by the corresponding argument.
880 To illustrate this process, let’s evaluate the combination
881 (f 5)
882 where f is the procedure defined in section 1.1.4. We begin by retrieving the body of f:
883 (sum-of-squares (+ a 1) (* a 2))
884 Then we replace the formal parameter a by the argument 5:
885 (sum-of-squares (+ 5 1) (* 5 2))
886 Thus the problem reduces to the evaluation of a combination with two operands and an operator
887 sum-of-squares. Evaluating this combination involves three subproblems. We must evaluate the
888 operator to get the procedure to be applied, and we must evaluate the operands to get the arguments.
889 Now (+ 5 1) produces 6 and (* 5 2) produces 10, so we must apply the sum-of-squares
890 procedure to 6 and 10. These values are substituted for the formal parameters x and y in the body of
891 sum-of-squares, reducing the expression to
892 (+ (square 6) (square 10))
893 If we use the definition of square, this reduces to
894 (+ (* 6 6) (* 10 10))
895 which reduces by multiplication to
896 (+ 36 100)
897 and finally to
898 136
899 The process we have just described is called the substitution model for procedure application. It can be
900 taken as a model that determines the ‘‘meaning’’ of procedure application, insofar as the procedures in
901 this chapter are concerned. However, there are two points that should be stressed:
902
The purpose of the substitution is to help us think about procedure application, not to provide a
904 description of how the interpreter really works. Typical interpreters do not evaluate procedure
905 applications by manipulating the text of a procedure to substitute values for the formal
906 parameters. In practice, the ‘‘substitution’’ is accomplished by using a local environment for the
907 formal parameters. We will discuss this more fully in chapters 3 and 4 when we examine the
908 implementation of an interpreter in detail.
909 Over the course of this book, we will present a sequence of increasingly elaborate models of how
910 interpreters work, culminating with a complete implementation of an interpreter and compiler in
911 chapter 5. The substitution model is only the first of these models -- a way to get started thinking
912 formally about the evaluation process. In general, when modeling phenomena in science and
913 engineering, we begin with simplified, incomplete models. As we examine things in greater
914 detail, these simple models become inadequate and must be replaced by more refined models.
915 The substitution model is no exception. In particular, when we address in chapter 3 the use of
916 procedures with ‘‘mutable data,’’ we will see that the substitution model breaks down and must
917 be replaced by a more complicated model of procedure application. 15
918
919 Applicative order versus normal order
920 According to the description of evaluation given in section 1.1.3, the interpreter first evaluates the
921 operator and operands and then applies the resulting procedure to the resulting arguments. This is not
922 the only way to perform evaluation. An alternative evaluation model would not evaluate the operands
923 until their values were needed. Instead it would first substitute operand expressions for parameters
924 until it obtained an expression involving only primitive operators, and would then perform the
925 evaluation. If we used this method, the evaluation of
926 (f 5)
927 would proceed according to the sequence of expansions
928 (sum-of-squares (+ 5 1) (* 5 2))
(+ (square (+ 5 1)) (square (* 5 2)))
(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))
followed by the reductions
(+ (* 6 6) (* 10 10))
(+ 36 100)
136
945
946 This gives the same answer as our previous evaluation model, but the process is different. In
947 particular, the evaluations of (+ 5 1) and (* 5 2) are each performed twice here, corresponding
948 to the reduction of the expression
949 (* x x)
950 with x replaced respectively by (+ 5 1) and (* 5 2).
951 This alternative ‘‘fully expand and then reduce’’ evaluation method is known as normal-order
952 evaluation, in contrast to the ‘‘evaluate the arguments and then apply’’ method that the interpreter
953 actually uses, which is called applicative-order evaluation. It can be shown that, for procedure
954 applications that can be modeled using substitution (including all the procedures in the first two
955
chapters of this book) and that yield legitimate values, normal-order and applicative-order evaluation
957 produce the same value. (See exercise 1.5 for an instance of an ‘‘illegitimate’’ value where
958 normal-order and applicative-order evaluation do not give the same result.)
959 Lisp uses applicative-order evaluation, partly because of the additional efficiency obtained from
960 avoiding multiple evaluations of expressions such as those illustrated with (+ 5 1) and (* 5 2)
961 above and, more significantly, because normal-order evaluation becomes much more complicated to
962 deal with when we leave the realm of procedures that can be modeled by substitution. On the other
963 hand, normal-order evaluation can be an extremely valuable tool, and we will investigate some of its
964 implications in chapters 3 and 4. 16
965
966 1.1.6 Conditional Expressions and Predicates
967 The expressive power of the class of procedures that we can define at this point is very limited,
968 because we have no way to make tests and to perform different operations depending on the result of a
969 test. For instance, we cannot define a procedure that computes the absolute value of a number by
970 testing whether the number is positive, negative, or zero and taking different actions in the different
971 cases according to the rule
|x| =  x    if x > 0
|x| =  0    if x = 0
|x| = -x    if x < 0
973 This construct is called a case analysis, and there is a special form in Lisp for notating such a case
974 analysis. It is called cond (which stands for ‘‘conditional’’), and it is used as follows:
(define (abs x)
  (cond ((> x 0) x)
        ((= x 0) 0)
        ((< x 0) (- x))))
979 The general form of a conditional expression is
(cond (<p1> <e1>)
      (<p2> <e2>)
      ...
      (<pn> <en>))
983 consisting of the symbol cond followed by parenthesized pairs of expressions (<p> <e>) called
984 clauses. The first expression in each pair is a predicate -- that is, an expression whose value is
985 interpreted as either true or false. 17
Conditional expressions are evaluated as follows. The predicate <p1> is evaluated first. If its value is
false, then <p2> is evaluated. If <p2>’s value is also false, then <p3> is evaluated. This process
988 continues until a predicate is found whose value is true, in which case the interpreter returns the value
989 of the corresponding consequent expression <e> of the clause as the value of the conditional
990 expression. If none of the <p>’s is found to be true, the value of the cond is undefined.
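For example, with the abs procedure defined above, evaluating (abs -3) first evaluates (> -3 0), which is false; then (= -3 0), also false; then (< -3 0), which is true, so the value of the cond is the value of (- -3), namely 3.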
991
The word predicate is used for procedures that return true or false, as well as for expressions that
993 evaluate to true or false. The absolute-value procedure abs makes use of the primitive predicates >, <,
994 and =. 18 These take two numbers as arguments and test whether the first number is, respectively,
995 greater than, less than, or equal to the second number, returning true or false accordingly.
996 Another way to write the absolute-value procedure is
(define (abs x)
  (cond ((< x 0) (- x))
        (else x)))
1000 which could be expressed in English as ‘‘If x is less than zero return - x; otherwise return x.’’ Else is
1001 a special symbol that can be used in place of the <p> in the final clause of a cond. This causes the
1002 cond to return as its value the value of the corresponding <e> whenever all previous clauses have
1003 been bypassed. In fact, any expression that always evaluates to a true value could be used as the <p>
1004 here.
1005 Here is yet another way to write the absolute-value procedure:
(define (abs x)
  (if (< x 0)
      (- x)
      x))
1010 This uses the special form if, a restricted type of conditional that can be used when there are
1011 precisely two cases in the case analysis. The general form of an if expression is
1012 (if <predicate> <consequent> <alternative>)
1013 To evaluate an if expression, the interpreter starts by evaluating the <predicate> part of the
1014 expression. If the <predicate> evaluates to a true value, the interpreter then evaluates the
1015 <consequent> and returns its value. Otherwise it evaluates the <alternative> and returns its value. 19
1016 In addition to primitive predicates such as <, =, and >, there are logical composition operations, which
1017 enable us to construct compound predicates. The three most frequently used are these:
(and <e1> ... <en>)
1019 The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e>
1020 evaluates to false, the value of the and expression is false, and the rest of the <e>’s are not
1021 evaluated. If all <e>’s evaluate to true values, the value of the and expression is the value of the
1022 last one.
(or <e1> ... <en>)
1024 The interpreter evaluates the expressions <e> one at a time, in left-to-right order. If any <e>
1025 evaluates to a true value, that value is returned as the value of the or expression, and the rest of
1026 the <e>’s are not evaluated. If all <e>’s evaluate to false, the value of the or expression is false.
1027 (not <e>)
1028
The value of a not expression is true when the expression <e> evaluates to false, and false
1030 otherwise.
1031 Notice that and and or are special forms, not procedures, because the subexpressions are not
1032 necessarily all evaluated. Not is an ordinary procedure.
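A small sketch of why this matters (assuming x might be zero):

(and (not (= x 0)) (< (/ 1 x) 10))

If and were an ordinary procedure, both operands would be evaluated before and was applied, and (/ 1 x) would signal an error when x is 0. Because and is a special form, it returns false as soon as (not (= x 0)) evaluates to false, and (/ 1 x) is never evaluated.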
1033 As an example of how these are used, the condition that a number x be in the range 5 < x < 10 may be
1034 expressed as
1035 (and (> x 5) (< x 10))
1036 As another example, we can define a predicate to test whether one number is greater than or equal to
1037 another as
1038 (define (>= x y)
1039 (or (> x y) (= x y)))
1040 or alternatively as
1041 (define (>= x y)
1042 (not (< x y)))
1043 Exercise 1.1. Below is a sequence of expressions. What is the result printed by the interpreter in
1044 response to each expression? Assume that the sequence is to be evaluated in the order in which it is
1045 presented.
1046 10
1047 (+ 5 3 4)
1048 (- 9 1)
1049 (/ 6 2)
1050 (+ (* 2 4) (- 4 6))
1051 (define a 3)
1052 (define b (+ a 1))
1053 (+ a b (* a b))
1054 (= a b)
(if (and (> b a) (< b (* a b)))
    b
    a)
(cond ((= a 4) 6)
      ((= b 4) (+ 6 7 a))
      (else 25))
(+ 2 (if (> b a) b a))
(* (cond ((> a b) a)
         ((< a b) b)
         (else -1))
   (+ a 1))
1066 Exercise 1.2. Translate the following expression into prefix form
5 + 4 + (2 - (3 - (6 + 4/5)))
-----------------------------
       3(6 - 2)(2 - 7)
Exercise 1.3. Define a procedure that takes three numbers as arguments and returns the sum of the
1069 squares of the two larger numbers.
1070 Exercise 1.4. Observe that our model of evaluation allows for combinations whose operators are
1071 compound expressions. Use this observation to describe the behavior of the following procedure:
1072 (define (a-plus-abs-b a b)
1073 ((if (> b 0) + -) a b))
1074 Exercise 1.5. Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is
1075 using applicative-order evaluation or normal-order evaluation. He defines the following two
1076 procedures:
1077 (define (p) (p))
(define (test x y)
  (if (= x 0)
      0
      y))
1082 Then he evaluates the expression
1083 (test 0 (p))
1084 What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What
1085 behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer.
1086 (Assume that the evaluation rule for the special form if is the same whether the interpreter is using
1087 normal or applicative order: The predicate expression is evaluated first, and the result determines
1088 whether to evaluate the consequent or the alternative expression.)
1089
1090 1.1.7 Example: Square Roots by Newton’s Method
1091 Procedures, as introduced above, are much like ordinary mathematical functions. They specify a value
1092 that is determined by one or more parameters. But there is an important difference between
1093 mathematical functions and computer procedures. Procedures must be effective.
1094 As a case in point, consider the problem of computing square roots. We can define the square-root
1095 function as
sqrt(x) = the y such that y ≥ 0 and y² = x
1097 This describes a perfectly legitimate mathematical function. We could use it to recognize whether one
1098 number is the square root of another, or to derive facts about square roots in general. On the other
1099 hand, the definition does not describe a procedure. Indeed, it tells us almost nothing about how to
1100 actually find the square root of a given number. It will not help matters to rephrase this definition in
1101 pseudo-Lisp:
(define (sqrt x)
  (the y (and (>= y 0)
              (= (square y) x))))
1105
This only begs the question.
1107 The contrast between function and procedure is a reflection of the general distinction between
1108 describing properties of things and describing how to do things, or, as it is sometimes referred to, the
1109 distinction between declarative knowledge and imperative knowledge. In mathematics we are usually
1110 concerned with declarative (what is) descriptions, whereas in computer science we are usually
1111 concerned with imperative (how to) descriptions. 20
1112 How does one compute square roots? The most common way is to use Newton’s method of successive
1113 approximations, which says that whenever we have a guess y for the value of the square root of a
1114 number x, we can perform a simple manipulation to get a better guess (one closer to the actual square
1115 root) by averaging y with x/y. 21 For example, we can compute the square root of 2 as follows.
1116 Suppose our initial guess is 1:
Guess       Quotient                 Average
1           (2/1) = 2                ((2 + 1)/2) = 1.5
1.5         (2/1.5) = 1.3333         ((1.3333 + 1.5)/2) = 1.4167
1.4167      (2/1.4167) = 1.4118      ((1.4167 + 1.4118)/2) = 1.4142
1.4142      ...                      ...
1147 Continuing this process, we obtain better and better approximations to the square root.
1148 Now let’s formalize the process in terms of procedures. We start with a value for the radicand (the
1149 number whose square root we are trying to compute) and a value for the guess. If the guess is good
1150 enough for our purposes, we are done; if not, we must repeat the process with an improved guess. We
1151 write this basic strategy as a procedure:
(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))
1157 A guess is improved by averaging it with the quotient of the radicand and the old guess:
1158 (define (improve guess x)
1159 (average guess (/ x guess)))
1160 where
1161
(define (average x y)
  (/ (+ x y) 2))
1164 We also have to say what we mean by ‘‘good enough.’’ The following will do for illustration, but it is
1165 not really a very good test. (See exercise 1.7.) The idea is to improve the answer until it is close
1166 enough so that its square differs from the radicand by less than a predetermined tolerance (here
1167 0.001): 22
1168 (define (good-enough? guess x)
1169 (< (abs (- (square guess) x)) 0.001))
1170 Finally, we need a way to get started. For instance, we can always guess that the square root of any
1171 number is 1: 23
1172 (define (sqrt x)
1173 (sqrt-iter 1.0 x))
1174 If we type these definitions to the interpreter, we can use sqrt just as we can use any procedure:
1175 (sqrt 9)
1176 3.00009155413138
1177 (sqrt (+ 100 37))
1178 11.704699917758145
1179 (sqrt (+ (sqrt 2) (sqrt 3)))
1180 1.7739279023207892
1181 (square (sqrt 1000))
1182 1000.000369924366
1183 The sqrt program also illustrates that the simple procedural language we have introduced so far is
1184 sufficient for writing any purely numerical program that one could write in, say, C or Pascal. This
1185 might seem surprising, since we have not included in our language any iterative (looping) constructs
1186 that direct the computer to do something over and over again. Sqrt-iter, on the other hand,
1187 demonstrates how iteration can be accomplished using no special construct other than the ordinary
1188 ability to call a procedure. 24
1189 Exercise 1.6. Alyssa P. Hacker doesn’t see why if needs to be provided as a special form. ‘‘Why
1190 can’t I just define it as an ordinary procedure in terms of cond?’’ she asks. Alyssa’s friend Eva Lu
1191 Ator claims this can indeed be done, and she defines a new version of if:
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))
1195 Eva demonstrates the program for Alyssa:
1196 (new-if (= 2 3) 0 5)
1197 5
1198 (new-if (= 1 1) 0 5)
1199 0
1200
Delighted, Alyssa uses new-if to rewrite the square-root program:
(define (sqrt-iter guess x)
  (new-if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x)
                     x)))
1207 What happens when Alyssa attempts to use this to compute square roots? Explain.
1208 Exercise 1.7. The good-enough? test used in computing square roots will not be very effective for
1209 finding the square roots of very small numbers. Also, in real computers, arithmetic operations are
1210 almost always performed with limited precision. This makes our test inadequate for very large
1211 numbers. Explain these statements, with examples showing how the test fails for small and large
1212 numbers. An alternative strategy for implementing good-enough? is to watch how guess changes
1213 from one iteration to the next and to stop when the change is a very small fraction of the guess. Design
1214 a square-root procedure that uses this kind of end test. Does this work better for small and large
1215 numbers?
1216 Exercise 1.8. Newton’s method for cube roots is based on the fact that if y is an approximation to the
1217 cube root of x, then a better approximation is given by the value
(x/y² + 2y)/3
1219 Use this formula to implement a cube-root procedure analogous to the square-root procedure. (In
1220 section 1.3.4 we will see how to implement Newton’s method in general as an abstraction of these
1221 square-root and cube-root procedures.)
1222
1223 1.1.8 Procedures as Black-Box Abstractions
1224 Sqrt is our first example of a process defined by a set of mutually defined procedures. Notice that the
1225 definition of sqrt-iter is recursive; that is, the procedure is defined in terms of itself. The idea of
1226 being able to define a procedure in terms of itself may be disturbing; it may seem unclear how such a
1227 ‘‘circular’’ definition could make sense at all, much less specify a well-defined process to be carried
1228 out by a computer. This will be addressed more carefully in section 1.2. But first let’s consider some
1229 other important points illustrated by the sqrt example.
1230 Observe that the problem of computing square roots breaks up naturally into a number of
1231 subproblems: how to tell whether a guess is good enough, how to improve a guess, and so on. Each of
1232 these tasks is accomplished by a separate procedure. The entire sqrt program can be viewed as a
1233 cluster of procedures (shown in figure 1.2) that mirrors the decomposition of the problem into
1234 subproblems.
1235
Figure 1.2: Procedural decomposition of the sqrt program.
1238 The importance of this decomposition strategy is not simply that one is dividing the program into
1239 parts. After all, we could take any large program and divide it into parts -- the first ten lines, the next
1240 ten lines, the next ten lines, and so on. Rather, it is crucial that each procedure accomplishes an
1241 identifiable task that can be used as a module in defining other procedures. For example, when we
1242 define the good-enough? procedure in terms of square, we are able to regard the square
1243 procedure as a ‘‘black box.’’ We are not at that moment concerned with how the procedure computes
1244 its result, only with the fact that it computes the square. The details of how the square is computed can
1245 be suppressed, to be considered at a later time. Indeed, as far as the good-enough? procedure is
1246 concerned, square is not quite a procedure but rather an abstraction of a procedure, a so-called
1247 procedural abstraction. At this level of abstraction, any procedure that computes the square is equally
1248 good.
1249 Thus, considering only the values they return, the following two procedures for squaring a number
1250 should be indistinguishable. Each takes a numerical argument and produces the square of that number
1251 as the value. 25
1252 (define (square x) (* x x))
1253 (define (square x)
1254 (exp (double (log x))))
1255 (define (double x) (+ x x))
1256 So a procedure definition should be able to suppress detail. The users of the procedure may not have
1257 written the procedure themselves, but may have obtained it from another programmer as a black box.
1258 A user should not need to know how the procedure is implemented in order to use it.
1259
1260 Local names
1261 One detail of a procedure’s implementation that should not matter to the user of the procedure is the
1262 implementer’s choice of names for the procedure’s formal parameters. Thus, the following procedures
1263 should not be distinguishable:
1264
(define (square x) (* x x))
1266 (define (square y) (* y y))
1267 This principle -- that the meaning of a procedure should be independent of the parameter names used
1268 by its author -- seems on the surface to be self-evident, but its consequences are profound. The
1269 simplest consequence is that the parameter names of a procedure must be local to the body of the
1270 procedure. For example, we used square in the definition of good-enough? in our square-root
1271 procedure:
1272 (define (good-enough? guess x)
1273 (< (abs (- (square guess) x)) 0.001))
1274 The intention of the author of good-enough? is to determine if the square of the first argument is
1275 within a given tolerance of the second argument. We see that the author of good-enough? used the
1276 name guess to refer to the first argument and x to refer to the second argument. The argument of
1277 square is guess. If the author of square used x (as above) to refer to that argument, we see that
1278 the x in good-enough? must be a different x than the one in square. Running the procedure
1279 square must not affect the value of x that is used by good-enough?, because that value of x may
1280 be needed by good-enough? after square is done computing.
1281 If the parameters were not local to the bodies of their respective procedures, then the parameter x in
1282 square could be confused with the parameter x in good-enough?, and the behavior of
1283 good-enough? would depend upon which version of square we used. Thus, square would not
1284 be the black box we desired.
1285 A formal parameter of a procedure has a very special role in the procedure definition, in that it doesn’t
1286 matter what name the formal parameter has. Such a name is called a bound variable, and we say that
1287 the procedure definition binds its formal parameters. The meaning of a procedure definition is
1288 unchanged if a bound variable is consistently renamed throughout the definition. 26 If a variable is not
1289 bound, we say that it is free. The set of expressions for which a binding defines a name is called the
1290 scope of that name. In a procedure definition, the bound variables declared as the formal parameters of
1291 the procedure have the body of the procedure as their scope.
1292 In the definition of good-enough? above, guess and x are bound variables but <, -, abs, and
1293 square are free. The meaning of good-enough? should be independent of the names we choose
1294 for guess and x so long as they are distinct and different from <, -, abs, and square. (If we
1295 renamed guess to abs we would have introduced a bug by capturing the variable abs. It would
1296 have changed from free to bound.) The meaning of good-enough? is not independent of the names
1297 of its free variables, however. It surely depends upon the fact (external to this definition) that the
1298 symbol abs names a procedure for computing the absolute value of a number. Good-enough? will
1299 compute a different function if we substitute cos for abs in its definition.
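A sketch of the renaming bug described in the parenthetical above:

(define (good-enough? abs x)            ; guess renamed to abs -- a bug
  (< (abs (- (square abs) x)) 0.001))   ; abs is now bound to a number, not to
                                        ; the absolute-value procedure

Applying this version of good-enough? would fail, because the body tries to use the number abs as if it were a procedure.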
1300
1301 Internal definitions and block structure
1302 We have one kind of name isolation available to us so far: The formal parameters of a procedure are
1303 local to the body of the procedure. The square-root program illustrates another way in which we would
1304 like to control the use of names. The existing program consists of separate procedures:
(define (sqrt x)
  (sqrt-iter 1.0 x))
(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))
(define (improve guess x)
  (average guess (/ x guess)))
1316 The problem with this program is that the only procedure that is important to users of sqrt is sqrt.
1317 The other procedures (sqrt-iter, good-enough?, and improve) only clutter up their minds.
1318 They may not define any other procedure called good-enough? as part of another program to work
1319 together with the square-root program, because sqrt needs it. The problem is especially severe in the
1320 construction of large systems by many separate programmers. For example, in the construction of a
1321 large library of numerical procedures, many numerical functions are computed as successive
1322 approximations and thus might have procedures named good-enough? and improve as auxiliary
1323 procedures. We would like to localize the subprocedures, hiding them inside sqrt so that sqrt could
1324 coexist with other successive approximations, each having its own private good-enough?
1325 procedure. To make this possible, we allow a procedure to have internal definitions that are local to
1326 that procedure. For example, in the square-root problem we can write
(define (sqrt x)
  (define (good-enough? guess x)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess x)
    (average guess (/ x guess)))
  (define (sqrt-iter guess x)
    (if (good-enough? guess x)
        guess
        (sqrt-iter (improve guess x) x)))
  (sqrt-iter 1.0 x))
1337 Such nesting of definitions, called block structure, is basically the right solution to the simplest
1338 name-packaging problem. But there is a better idea lurking here. In addition to internalizing the
1339 definitions of the auxiliary procedures, we can simplify them. Since x is bound in the definition of
1340 sqrt, the procedures good-enough?, improve, and sqrt-iter, which are defined internally
1341 to sqrt, are in the scope of x. Thus, it is not necessary to pass x explicitly to each of these
1342 procedures. Instead, we allow x to be a free variable in the internal definitions, as shown below. Then
1343 x gets its value from the argument with which the enclosing procedure sqrt is called. This discipline
1344 is called lexical scoping. 27
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
1355
We will use block structure extensively to help us break up large programs into tractable pieces. 28
1357 The idea of block structure originated with the programming language Algol 60. It appears in most
1358 advanced programming languages and is an important tool for helping to organize the construction of
1359 large programs.
1360 4 The characterization of numbers as ‘‘simple data’’ is a barefaced bluff. In fact, the treatment of
1362 numbers is one of the trickiest and most confusing aspects of any programming language. Some
1363 typical issues involved are these: Some computer systems distinguish integers, such as 2, from real
1364 numbers, such as 2.71. Is the real number 2.00 different from the integer 2? Are the arithmetic
1365 operations used for integers the same as the operations used for real numbers? Does 6 divided by 2
1366 produce 3, or 3.0? How large a number can we represent? How many decimal places of accuracy can
1367 we represent? Is the range of integers the same as the range of real numbers? Above and beyond these
1368 questions, of course, lies a collection of issues concerning roundoff and truncation errors -- the entire
1369 science of numerical analysis. Since our focus in this book is on large-scale program design rather than
1370 on numerical techniques, we are going to ignore these problems. The numerical examples in this
1371 chapter will exhibit the usual roundoff behavior that one observes when using arithmetic operations
1372 that preserve a limited number of decimal places of accuracy in noninteger operations.
1373 5 Throughout this book, when we wish to emphasize the distinction between the input typed by the
1375 user and the response printed by the interpreter, we will show the latter in slanted characters.
1376 6 Lisp systems typically provide features to aid the user in formatting expressions. Two especially
1378 useful features are one that automatically indents to the proper pretty-print position whenever a new
1379 line is started and one that highlights the matching left parenthesis whenever a right parenthesis is
1380 typed.
1381 7 Lisp obeys the convention that every expression has a value. This convention, together with the old
1383 reputation of Lisp as an inefficient language, is the source of the quip by Alan Perlis (paraphrasing
1384 Oscar Wilde) that ‘‘Lisp programmers know the value of everything but the cost of nothing.’’
1385 8 In this book, we do not show the interpreter’s response to evaluating definitions, since this is highly
1387 implementation-dependent.
1388 9 Chapter 3 will show that this notion of environment is crucial, both for understanding how the
1390 interpreter works and for implementing interpreters.
1391 10 It may seem strange that the evaluation rule says, as part of the first step, that we should evaluate
1393 the leftmost element of a combination, since at this point that can only be an operator such as + or *
1394 representing a built-in primitive procedure such as addition or multiplication. We will see later that it
1395 is useful to be able to work with combinations whose operators are themselves compound expressions.
1396 11 Special syntactic forms that are simply convenient alternative surface structures for things that can
1398 be written in more uniform ways are sometimes called syntactic sugar, to use a phrase coined by Peter
1399 Landin. In comparison with users of other languages, Lisp programmers, as a rule, are less concerned
1400 with matters of syntax. (By contrast, examine any Pascal manual and notice how much of it is devoted
1401 to descriptions of syntax.) This disdain for syntax is due partly to the flexibility of Lisp, which makes
1402 it easy to change surface syntax, and partly to the observation that many ‘‘convenient’’ syntactic
1403 constructs, which make the language less uniform, end up causing more trouble than they are worth
1404 when programs become large and complex. In the words of Alan Perlis, ‘‘Syntactic sugar causes
1405 cancer of the semicolon.’’
1406
12 Observe that there are two different operations being combined here: we are creating the procedure,
1409 and we are giving it the name square. It is possible, indeed important, to be able to separate these
1410 two notions -- to create procedures without naming them, and to give names to procedures that have
1411 already been created. We will see how to do this in section 1.3.2.
1412 13 Throughout this book, we will describe the general syntax of expressions by using italic symbols
1414 delimited by angle brackets -- e.g., <name> -- to denote the ‘‘slots’’ in the expression to be filled in
1415 when such an expression is actually used.
1416 14 More generally, the body of the procedure can be a sequence of expressions. In this case, the
1418 interpreter evaluates each expression in the sequence in turn and returns the value of the final
1419 expression as the value of the procedure application.
1420 15 Despite the simplicity of the substitution idea, it turns out to be surprisingly complicated to give a
1422 rigorous mathematical definition of the substitution process. The problem arises from the possibility of
1423 confusion between the names used for the formal parameters of a procedure and the (possibly
1424 identical) names used in the expressions to which the procedure may be applied. Indeed, there is a long
1425 history of erroneous definitions of substitution in the literature of logic and programming semantics.
1426 See Stoy 1977 for a careful discussion of substitution.
1427 16 In chapter 3 we will introduce stream processing, which is a way of handling apparently ‘‘infinite’’
1429 data structures by incorporating a limited form of normal-order evaluation. In section 4.2 we will
1430 modify the Scheme interpreter to produce a normal-order variant of Scheme.
1431 17 ‘‘Interpreted as either true or false’’ means this: In Scheme, there are two distinguished values that
1433 are denoted by the constants #t and #f. When the interpreter checks a predicate’s value, it interprets
1434 #f as false. Any other value is treated as true. (Thus, providing #t is logically unnecessary, but it is
1435 convenient.) In this book we will use names true and false, which are associated with the values
1436 #t and #f respectively.
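A quick illustration of the convention:

(if 0 1 2)
1

Because 0 is not #f, the interpreter treats it as true and evaluates the consequent.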
1437 18 Abs also uses the ‘‘minus’’ operator -, which, when used with a single operand, as in (- x),
1439 indicates negation.
1440 19 A minor difference between if and cond is that the <e> part of each cond clause may be a
1442 sequence of expressions. If the corresponding <p> is found to be true, the expressions <e> are
1443 evaluated in sequence and the value of the final expression in the sequence is returned as the value of
1444 the cond. In an if expression, however, the <consequent> and <alternative> must be single
1445 expressions.
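A small sketch of this difference (the display call is purely illustrative):

(cond ((< x 0)
       (display "negative argument")
       (- x))
      (else x))

When x is negative, both expressions in the first clause are evaluated in order, and the value of the final one, (- x), becomes the value of the cond.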
1446 20 Declarative and imperative descriptions are intimately related, as indeed are mathematics and
1448 computer science. For instance, to say that the answer produced by a program is ‘‘correct’’ is to make
1449 a declarative statement about the program. There is a large amount of research aimed at establishing
1450 techniques for proving that programs are correct, and much of the technical difficulty of this subject
1451 has to do with negotiating the transition between imperative statements (from which programs are
1452 constructed) and declarative statements (which can be used to deduce things). In a related vein, an
1453 important current area in programming-language design is the exploration of so-called very high-level
1454 languages, in which one actually programs in terms of declarative statements. The idea is to make
1455 interpreters sophisticated enough so that, given ‘‘what is’’ knowledge specified by the programmer,
1456 they can generate ‘‘how to’’ knowledge automatically. This cannot be done in general, but there are
1457 important areas where progress has been made. We shall revisit this idea in chapter 4.
1458
21 This square-root algorithm is actually a special case of Newton’s method, which is a general
1461 technique for finding roots of equations. The square-root algorithm itself was developed by Heron of
1462 Alexandria in the first century A.D. We will see how to express the general Newton’s method as a Lisp
1463 procedure in section 1.3.4.
1464 22 We will usually give predicates names ending with question marks, to help us remember that they
1466 are predicates. This is just a stylistic convention. As far as the interpreter is concerned, the question
1467 mark is just an ordinary character.
1468 23 Observe that we express our initial guess as 1.0 rather than 1. This would not make any difference
1470 in many Lisp implementations. MIT Scheme, however, distinguishes between exact integers and
1471 decimal values, and dividing two integers produces a rational number rather than a decimal. For
1472 example, dividing 10 by 6 yields 5/3, while dividing 10.0 by 6.0 yields 1.6666666666666667. (We
1473 will learn how to implement arithmetic on rational numbers in section 2.1.1.) If we start with an initial
1474 guess of 1 in our square-root program, and x is an exact integer, all subsequent values produced in the
1475 square-root computation will be rational numbers rather than decimals. Mixed operations on rational
1476 numbers and decimals always yield decimals, so starting with an initial guess of 1.0 forces all
1477 subsequent values to be decimals.
1478 24 Readers who are worried about the efficiency issues involved in using procedure calls to implement
1480 iteration should note the remarks on ‘‘tail recursion’’ in section 1.2.1.
1481 25 It is not even clear which of these procedures is a more efficient implementation. This depends
1483 upon the hardware available. There are machines for which the ‘‘obvious’’ implementation is the less
1484 efficient one. Consider a machine that has extensive tables of logarithms and antilogarithms stored in a
1485 very efficient manner.
1486 26 The concept of consistent renaming is actually subtle and difficult to define formally. Famous
1488 logicians have made embarrassing errors here.
1489 27 Lexical scoping dictates that free variables in a procedure are taken to refer to bindings made by
1491 enclosing procedure definitions; that is, they are looked up in the environment in which the procedure
1492 was defined. We will see how this works in detail in chapter 3 when we study environments and the
1493 detailed behavior of the interpreter.
1494 28 Embedded definitions must come first in a procedure body. The management is not responsible for
1496 the consequences of running programs that intertwine definition and use.
1500
1501 1.2 Procedures and the Processes They Generate
1502 We have now considered the elements of programming: We have used primitive arithmetic operations,
1503 we have combined these operations, and we have abstracted these composite operations by defining
1504 them as compound procedures. But that is not enough to enable us to say that we know how to
1505 program. Our situation is analogous to that of someone who has learned the rules for how the pieces
1506 move in chess but knows nothing of typical openings, tactics, or strategy. Like the novice chess player,
1507 we don’t yet know the common patterns of usage in the domain. We lack the knowledge of which
1508 moves are worth making (which procedures are worth defining). We lack the experience to predict the
1509 consequences of making a move (executing a procedure).
1510 The ability to visualize the consequences of the actions under consideration is crucial to becoming an
1511 expert programmer, just as it is in any synthetic, creative activity. In becoming an expert photographer,
1512 for example, one must learn how to look at a scene and know how dark each region will appear on a
1513 print for each possible choice of exposure and development conditions. Only then can one reason
1514 backward, planning framing, lighting, exposure, and development to obtain the desired effects. So it is
1515 with programming, where we are planning the course of action to be taken by a process and where we
1516 control the process by means of a program. To become experts, we must learn to visualize the
1517 processes generated by various types of procedures. Only after we have developed such a skill can we
1518 learn to reliably construct programs that exhibit the desired behavior.
1519 A procedure is a pattern for the local evolution of a computational process. It specifies how each stage
1520 of the process is built upon the previous stage. We would like to be able to make statements about the
1521 overall, or global, behavior of a process whose local evolution has been specified by a procedure. This
1522 is very difficult to do in general, but we can at least try to describe some typical patterns of process
1523 evolution.
1524 In this section we will examine some common ‘‘shapes’’ for processes generated by simple
1525 procedures. We will also investigate the rates at which these processes consume the important
1526 computational resources of time and space. The procedures we will consider are very simple. Their
1527 role is like that played by test patterns in photography: as oversimplified prototypical patterns, rather
1528 than practical examples in their own right.
1529
1530 1.2.1 Linear Recursion and Iteration
1531
Figure 1.3: A linear recursive process for computing 6!.
1534 We begin by considering the factorial function, defined by
n! = n · (n - 1) · (n - 2) ··· 3 · 2 · 1
1536 There are many ways to compute factorials. One way is to make use of the observation that n! is equal
1537 to n times (n - 1)! for any positive integer n:
n! = n · [(n - 1) · (n - 2) ··· 3 · 2 · 1] = n · (n - 1)!
1539 Thus, we can compute n! by computing (n - 1)! and multiplying the result by n. If we add the
1540 stipulation that 1! is equal to 1, this observation translates directly into a procedure:
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
1545 We can use the substitution model of section 1.1.5 to watch this procedure in action computing 6!, as
1546 shown in figure 1.3.
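Written out as reductions, the expansion and contraction that figure 1.3 depicts is:

(factorial 6)
(* 6 (factorial 5))
(* 6 (* 5 (factorial 4)))
(* 6 (* 5 (* 4 (factorial 3))))
(* 6 (* 5 (* 4 (* 3 (factorial 2)))))
(* 6 (* 5 (* 4 (* 3 (* 2 (factorial 1))))))
(* 6 (* 5 (* 4 (* 3 (* 2 1)))))
(* 6 (* 5 (* 4 (* 3 2))))
(* 6 (* 5 (* 4 6)))
(* 6 (* 5 24))
(* 6 120)
720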
1547 Now let’s take a different perspective on computing factorials. We could describe a rule for computing
1548 n! by specifying that we first multiply 1 by 2, then multiply the result by 3, then by 4, and so on until
1549 we reach n. More formally, we maintain a running product, together with a counter that counts from 1
1550 up to n. We can describe the computation by saying that the counter and the product simultaneously
1551 change from one step to the next according to the rule
product ← counter · product
counter ← counter + 1
1560 and stipulating that n! is the value of the product when the counter exceeds n.
1561
Figure 1.4: A linear iterative process for computing 6!.
1564 Once again, we can recast our description as a procedure for computing factorials: 29
1565 (define (factorial n)
1566 (fact-iter 1 1 n))
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))
1573 As before, we can use the substitution model to visualize the process of computing 6!, as shown in
1574 figure 1.4.
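Written out, the process that figure 1.4 depicts is:

(factorial 6)
(fact-iter 1 1 6)
(fact-iter 1 2 6)
(fact-iter 2 3 6)
(fact-iter 6 4 6)
(fact-iter 24 5 6)
(fact-iter 120 6 6)
(fact-iter 720 7 6)
720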
1575 Compare the two processes. From one point of view, they seem hardly different at all. Both compute
1576 the same mathematical function on the same domain, and each requires a number of steps proportional
1577 to n to compute n!. Indeed, both processes even carry out the same sequence of multiplications,
1578 obtaining the same sequence of partial products. On the other hand, when we consider the ‘‘shapes’’ of
1579 the two processes, we find that they evolve quite differently.
1580 Consider the first process. The substitution model reveals a shape of expansion followed by
1581 contraction, indicated by the arrow in figure 1.3. The expansion occurs as the process builds up a chain
1582 of deferred operations (in this case, a chain of multiplications). The contraction occurs as the
1583 operations are actually performed. This type of process, characterized by a chain of deferred
1584 operations, is called a recursive process. Carrying out this process requires that the interpreter keep
1585 track of the operations to be performed later on. In the computation of n!, the length of the chain of
1586 deferred multiplications, and hence the amount of information needed to keep track of it, grows
1587 linearly with n (is proportional to n), just like the number of steps. Such a process is called a linear
1588 recursive process.
1589 By contrast, the second process does not grow and shrink. At each step, all we need to keep track of,
1590 for any n, are the current values of the variables product, counter, and max-count. We call this
1591 an iterative process. In general, an iterative process is one whose state can be summarized by a fixed
1592 number of state variables, together with a fixed rule that describes how the state variables should be
1593 updated as the process moves from state to state and an (optional) end test that specifies conditions
1594 under which the process should terminate. In computing n!, the number of steps required grows
1595 linearly with n. Such a process is called a linear iterative process.
1596
The contrast between the two processes can be seen in another way. In the iterative case, the program
1598 variables provide a complete description of the state of the process at any point. If we stopped the
1599 computation between steps, all we would need to do to resume the computation is to supply the
1600 interpreter with the values of the three program variables. Not so with the recursive process. In this
1601 case there is some additional ‘‘hidden’’ information, maintained by the interpreter and not contained in
1602 the program variables, which indicates ‘‘where the process is’’ in negotiating the chain of deferred
1603 operations. The longer the chain, the more information must be maintained. 30
1604 In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive
1605 process with the notion of a recursive procedure. When we describe a procedure as recursive, we are
1606 referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the
1607 procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive,
1608 we are speaking about how the process evolves, not about the syntax of how a procedure is written. It
1609 may seem disturbing that we refer to a recursive procedure such as fact-iter as generating an
1610 iterative process. However, the process really is iterative: Its state is captured completely by its three
1611 state variables, and an interpreter need keep track of only three variables in order to execute the
1612 process.
1613 One reason that the distinction between process and procedure may be confusing is that most
1614 implementations of common languages (including Ada, Pascal, and C) are designed in such a way that
1615 the interpretation of any recursive procedure consumes an amount of memory that grows with the
1616 number of procedure calls, even when the process described is, in principle, iterative. As a
1617 consequence, these languages can describe iterative processes only by resorting to special-purpose
1618 ‘‘looping constructs’’ such as do, repeat, until, for, and while. The implementation of
1619 Scheme we shall consider in chapter 5 does not share this defect. It will execute an iterative process in
1620 constant space, even if the iterative process is described by a recursive procedure. An implementation
1621 with this property is called tail-recursive. With a tail-recursive implementation, iteration can be
1622 expressed using the ordinary procedure call mechanism, so that special iteration constructs are useful
1623 only as syntactic sugar. 31
1624 Exercise 1.9. Each of the following two procedures defines a method for adding two positive integers
1625 in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its
1626 argument by 1.
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))
(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by each procedure in evaluating
(+ 4 5). Are these processes iterative or recursive?
1637 Exercise 1.10. The following procedure computes a mathematical function called Ackermann’s
1638 function.
1639
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
1652 What are the values of the following expressions?
1653 (A 1 10)
1654 (A 2 4)
1655 (A 3 3)
1656 Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
1687 Give concise mathematical definitions for the functions computed by the procedures f, g, and h for
positive integer values of n. For example, (k n) computes 5n².
1689
1690 1.2.2 Tree Recursion
1691 Another common pattern of computation is called tree recursion. As an example, consider computing
1692 the sequence of Fibonacci numbers, in which each number is the sum of the preceding two:
0, 1, 1, 2, 3, 5, 8, 13, 21, ...
1694 In general, the Fibonacci numbers can be defined by the rule
Fib(n) = 0                          if n = 0
Fib(n) = 1                          if n = 1
Fib(n) = Fib(n - 1) + Fib(n - 2)    otherwise
1696 We can immediately translate this definition into a recursive procedure for computing Fibonacci
1697 numbers:
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
1703
Figure 1.5: The tree-recursive process generated in computing (fib 5).
1706 Consider the pattern of this computation. To compute (fib 5), we compute (fib 4) and (fib
1707 3). To compute (fib 4), we compute (fib 3) and (fib 2). In general, the evolved process
1708 looks like a tree, as shown in figure 1.5. Notice that the branches split into two at each level (except at
1709 the bottom); this reflects the fact that the fib procedure calls itself twice each time it is invoked.
1710 This procedure is instructive as a prototypical tree recursion, but it is a terrible way to compute
1711 Fibonacci numbers because it does so much redundant computation. Notice in figure 1.5 that the entire
1712 computation of (fib 3) -- almost half the work -- is duplicated. In fact, it is not hard to show that
1713 the number of times the procedure will compute (fib 1) or (fib 0) (the number of leaves in the
1714 above tree, in general) is precisely Fib(n + 1). To get an idea of how bad this is, one can show that the
1715 value of Fib(n) grows exponentially with n. More precisely (see exercise 1.13), Fib(n) is the closest
1716 integer to n / 5, where
1717
1718 is the golden ratio, which satisfies the equation
1719
1720 Thus, the process uses a number of steps that grows exponentially with the input. On the other hand,
1721 the space required grows only linearly with the input, because we need keep track only of which nodes
1722 are above us in the tree at any point in the computation. In general, the number of steps required by a
1723 tree-recursive process will be proportional to the number of nodes in the tree, while the space required
1724 will be proportional to the maximum depth of the tree.
1725
We can also formulate an iterative process for computing the Fibonacci numbers. The idea is to use a
pair of integers a and b, initialized to Fib(1) = 1 and Fib(0) = 0, and to repeatedly apply the
simultaneous transformations

a ← a + b
b ← a
1730 It is not hard to show that, after applying this transformation n times, a and b will be equal,
1731 respectively, to Fib(n + 1) and Fib(n). Thus, we can compute Fibonacci numbers iteratively using the
1732 procedure
(define (fib n)
  (fib-iter 1 0 n))
(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
1739 This second method for computing Fib(n) is a linear iteration. The difference in number of steps
1740 required by the two methods -- one linear in n, one growing as fast as Fib(n) itself -- is enormous, even
1741 for small inputs.
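To make the iterative process concrete, here is a hand trace of the state variables for (fib 5) (our illustration, not from the text):

a     b     count
1     0     5
1     1     4
2     1     3
3     2     2
5     3     1
8     5     0

When count reaches 0, fib-iter returns b = 5, which is indeed Fib(5).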
1742 One should not conclude from this that tree-recursive processes are useless. When we consider
1743 processes that operate on hierarchically structured data rather than numbers, we will find that tree
1744 recursion is a natural and powerful tool. 32 But even in numerical operations, tree-recursive processes
1745 can be useful in helping us to understand and design programs. For instance, although the first fib
1746 procedure is much less efficient than the second one, it is more straightforward, being little more than
1747 a translation into Lisp of the definition of the Fibonacci sequence. To formulate the iterative algorithm
1748 required noticing that the computation could be recast as an iteration with three state variables.
1749
1750 Example: Counting change
1751 It takes only a bit of cleverness to come up with the iterative Fibonacci algorithm. In contrast, consider
the following problem: How many different ways can we make change of $1.00, given half-dollars,
1753 quarters, dimes, nickels, and pennies? More generally, can we write a procedure to compute the
1754 number of ways to change any given amount of money?
1755 This problem has a simple solution as a recursive procedure. Suppose we think of the types of coins
1756 available as arranged in some order. Then the following relation holds:
1757 The number of ways to change amount a using n kinds of coins equals
1758 the number of ways to change amount a using all but the first kind of coin, plus
1759 the number of ways to change amount a - d using all n kinds of coins, where d is the
1760 denomination of the first kind of coin.
1761 To see why this is true, observe that the ways to make change can be divided into two groups: those
1762 that do not use any of the first kind of coin, and those that do. Therefore, the total number of ways to
1763 make change for some amount is equal to the number of ways to make change for the amount without
1764 using any of the first kind of coin, plus the number of ways to make change assuming that we do use
1765 the first kind of coin. But the latter number is equal to the number of ways to make change for the
1766
1767 \famount that remains after using a coin of the first kind.
1768 Thus, we can recursively reduce the problem of changing a given amount to the problem of changing
1769 smaller amounts using fewer kinds of coins. Consider this reduction rule carefully, and convince
1770 yourself that we can use it to describe an algorithm if we specify the following degenerate cases: 33
1771 If a is exactly 0, we should count that as 1 way to make change.
1772 If a is less than 0, we should count that as 0 ways to make change.
1773 If n is 0, we should count that as 0 ways to make change.
1774 We can easily translate this description into a recursive procedure:
(define (count-change amount)
  (cc amount 5))
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))
1791 (The first-denomination procedure takes as input the number of kinds of coins available and
1792 returns the denomination of the first kind. Here we are thinking of the coins as arranged in order from
1793 largest to smallest, but any order would do as well.) We can now answer our original question about
1794 changing a dollar:
1795 (count-change 100)
1796 292
1797 Count-change generates a tree-recursive process with redundancies similar to those in our first
1798 implementation of fib. (It will take quite a while for that 292 to be computed.) On the other hand, it
1799 is not obvious how to design a better algorithm for computing the result, and we leave this problem as
1800 a challenge. The observation that a tree-recursive process may be highly inefficient but often easy to
1801 specify and understand has led people to propose that one could get the best of both worlds by
1802 designing a ‘‘smart compiler’’ that could transform tree-recursive procedures into more efficient
1803 procedures that compute the same result. 34
Exercise 1.11. A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) +
3f(n - 3) if n ≥ 3. Write a procedure that computes f by means of a recursive process. Write a procedure
1806 that computes f by means of an iterative process.
1807
1808 \fExercise 1.12. The following pattern of numbers is called Pascal’s triangle.
            1
          1   1
        1   2   1
      1   3   3   1
    1   4   6   4   1
          . . .
1810 The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the
1811 two numbers above it. 35 Write a procedure that computes elements of Pascal’s triangle by means of a
1812 recursive process.
Exercise 1.13. Prove that Fib(n) is the closest integer to φⁿ/√5, where φ = (1 + √5)/2. Hint: Let ψ =
(1 - √5)/2. Use induction and the definition of the Fibonacci numbers (see section 1.2.2) to prove that
Fib(n) = (φⁿ - ψⁿ)/√5.
1816
1817 1.2.3 Orders of Growth
1818 The previous examples illustrate that processes can differ considerably in the rates at which they
1819 consume computational resources. One convenient way to describe this difference is to use the notion
1820 of order of growth to obtain a gross measure of the resources required by a process as the inputs
1821 become larger.
1822 Let n be a parameter that measures the size of the problem, and let R(n) be the amount of resources the
1823 process requires for a problem of size n. In our previous examples we took n to be the number for
1824 which a given function is to be computed, but there are other possibilities. For instance, if our goal is
1825 to compute an approximation to the square root of a number, we might take n to be the number of
digits of accuracy required. For matrix multiplication we might take n to be the number of rows in the
1827 matrices. In general there are a number of properties of the problem with respect to which it will be
1828 desirable to analyze a given process. Similarly, R(n) might measure the number of internal storage
1829 registers used, the number of elementary machine operations performed, and so on. In computers that
1830 do only a fixed number of operations at a time, the time required will be proportional to the number of
1831 elementary machine operations performed.
We say that R(n) has order of growth Θ(f(n)), written R(n) = Θ(f(n)) (pronounced ‘‘theta of f(n)’’), if
there are positive constants k₁ and k₂ independent of n such that

k₁f(n) ≤ R(n) ≤ k₂f(n)

for any sufficiently large value of n. (In other words, for large n, the value R(n) is sandwiched between
k₁f(n) and k₂f(n).)
For instance, with the linear recursive process for computing factorial described in section 1.2.1 the
number of steps grows proportionally to the input n. Thus, the steps required for this process grow as
Θ(n). We also saw that the space required grows as Θ(n). For the iterative factorial, the number of
steps is still Θ(n) but the space is Θ(1) -- that is, constant. 36 The tree-recursive Fibonacci
computation requires Θ(φⁿ) steps and space Θ(n), where φ is the golden ratio described in
section 1.2.2.
1843
\fOrders of growth provide only a crude description of the behavior of a process. For example, a process
requiring n² steps and a process requiring 1000n² steps and a process requiring 3n² + 10n + 17 steps
all have Θ(n²) order of growth. On the other hand, order of growth provides a useful indication of
how we may expect the behavior of the process to change as we change the size of the problem. For a
Θ(n) (linear) process, doubling the size will roughly double the amount of resources used. For an
exponential process, each increment in problem size will multiply the resource utilization by a
constant factor. In the remainder of section 1.2 we will examine two algorithms whose order of growth
is logarithmic, so that doubling the problem size increases the resource requirement by a constant
amount.
1853 Exercise 1.14. Draw the tree illustrating the process generated by the count-change procedure of
1854 section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of
1855 steps used by this process as the amount to be changed increases?
Exercise 1.15. The sine of an angle (specified in radians) can be computed by making use of the
approximation sin x ≈ x if x is sufficiently small, and the trigonometric identity

sin x = 3 sin(x/3) - 4 sin³(x/3)

to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered
‘‘sufficiently small’’ if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in
the following procedures:
(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
1868 a. How many times is the procedure p applied when (sine 12.15) is evaluated?
1869 b. What is the order of growth in space and number of steps (as a function of a) used by the process
1870 generated by the sine procedure when (sine a) is evaluated?
1871
1872 1.2.4 Exponentiation
1873 Consider the problem of computing the exponential of a given number. We would like a procedure
that takes as arguments a base b and a positive integer exponent n and computes bⁿ. One way to do
this is via the recursive definition

bⁿ = b · bⁿ⁻¹
b⁰ = 1
1877 which translates readily into the procedure
(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))
1882
\fThis is a linear recursive process, which requires Θ(n) steps and Θ(n) space. Just as with factorial, we
can readily formulate an equivalent linear iteration:
(define (expt b n)
  (expt-iter b n 1))
(define (expt-iter b counter product)
  (if (= counter 0)
      product
      (expt-iter b
                 (- counter 1)
                 (* b product))))
This version requires Θ(n) steps and Θ(1) space.
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than
computing b⁸ as

b · (b · (b · (b · (b · (b · (b · b))))))

we can compute it using three multiplications:

b² = b · b
b⁴ = b² · b²
b⁸ = b⁴ · b⁴

This method works fine for exponents that are powers of 2. We can also take advantage of successive
squaring in computing exponentials in general if we use the rule

bⁿ = (b^(n/2))²     if n is even
bⁿ = b · bⁿ⁻¹       if n is odd
1910 We can express this method as a procedure:
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
1915 where the predicate to test whether an integer is even is defined in terms of the primitive procedure
1916 remainder by
(define (even? n)
  (= (remainder n 2) 0))
The process evolved by fast-expt grows logarithmically with n in both space and number of steps.
To see this, observe that computing b²ⁿ using fast-expt requires only one more multiplication
than computing bⁿ. The size of the exponent we can compute therefore doubles (approximately) with
every new multiplication we are allowed. Thus, the number of multiplications required for an exponent
of n grows about as fast as the logarithm of n to the base 2. The process has Θ(log n) growth. 37
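For example (an illustrative evaluation, not from the text):

(fast-expt 2 10)
1024

Here only five multiplications are performed (three squarings and two multiplications by b), compared with the ten that expt would use.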
1924
\fThe difference between Θ(log n) growth and Θ(n) growth becomes striking as n becomes large. For
1926 example, fast-expt for n = 1000 requires only 14 multiplications. 38 It is also possible to use the
1927 idea of successive squaring to devise an iterative algorithm that computes exponentials with a
1928 logarithmic number of steps (see exercise 1.16), although, as is often the case with iterative
1929 algorithms, this is not written down so straightforwardly as the recursive algorithm. 39
1930 Exercise 1.16. Design a procedure that evolves an iterative exponentiation process that uses
1931 successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the
observation that (b^(n/2))² = (b²)^(n/2), keep, along with the exponent n and the base b, an additional state
variable a, and define the state transformation in such a way that the product a·bⁿ is unchanged from
1934 state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of
1935 a at the end of the process. In general, the technique of defining an invariant quantity that remains
1936 unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
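For reference, one way the invariant technique can play out is sketched below (the name fast-expt-iter and this particular decomposition are ours, not the book’s solution):

(define (fast-expt-iter b n)
  (define (iter b n a)
    ;; invariant: a * b^n always equals the original b^n
    (cond ((= n 0) a)
          ((even? n) (iter (square b) (/ n 2) a))
          (else (iter b (- n 1) (* a b)))))
  (iter b n 1))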
1937 Exercise 1.17. The exponentiation algorithms in this section are based on performing exponentiation
1938 by means of repeated multiplication. In a similar way, one can perform integer multiplication by
1939 means of repeated addition. The following multiplication procedure (in which it is assumed that our
1940 language can only add, not multiply) is analogous to the expt procedure:
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))
1945 This algorithm takes a number of steps that is linear in b. Now suppose we include, together with
1946 addition, operations double, which doubles an integer, and halve, which divides an (even) integer
1947 by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic
1948 number of steps.
1949 Exercise 1.18. Using the results of exercises 1.16 and 1.17, devise a procedure that generates an
1950 iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a
1951 logarithmic number of steps. 40
1952 Exercise 1.19. There is a clever algorithm for computing the Fibonacci numbers in a logarithmic
1953 number of steps. Recall the transformation of the state variables a and b in the fib-iter process of
section 1.2.2: a ← a + b and b ← a. Call this transformation T, and observe that applying T over and
over again n times, starting with 1 and 0, produces the pair Fib(n + 1) and Fib(n). In other words, the
Fibonacci numbers are produced by applying Tⁿ, the nth power of the transformation T, starting with
the pair (1,0). Now consider T to be the special case of p = 0 and q = 1 in a family of transformations
T_pq, where T_pq transforms the pair (a,b) according to a ← bq + aq + ap and b ← bp + aq. Show that if
we apply such a transformation T_pq twice, the effect is the same as using a single transformation T_p’q’
1960 of the same form, and compute p’ and q’ in terms of p and q. This gives us an explicit way to square
1961 these transformations, and thus we can compute T n using successive squaring, as in the fast-expt
1962 procedure. Put this all together to complete the following procedure, which runs in a logarithmic
1963 number of steps: 41
(define (fib n)
  (fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   <??>            ; compute p’
                   <??>            ; compute q’
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
1982
1983 1.2.5 Greatest Common Divisors
1984 The greatest common divisor (GCD) of two integers a and b is defined to be the largest integer that
1985 divides both a and b with no remainder. For example, the GCD of 16 and 28 is 4. In chapter 2, when
1986 we investigate how to implement rational-number arithmetic, we will need to be able to compute
1987 GCDs in order to reduce rational numbers to lowest terms. (To reduce a rational number to lowest
1988 terms, we must divide both the numerator and the denominator by their GCD. For example, 16/28
1989 reduces to 4/7.) One way to find the GCD of two integers is to factor them and search for common
1990 factors, but there is a famous algorithm that is much more efficient.
1991 The idea of the algorithm is based on the observation that, if r is the remainder when a is divided by b,
1992 then the common divisors of a and b are precisely the same as the common divisors of b and r. Thus,
1993 we can use the equation
GCD(a,b) = GCD(b,r)
1995 to successively reduce the problem of computing a GCD to the problem of computing the GCD of
1996 smaller and smaller pairs of integers. For example,
GCD(206,40) = GCD(40,6)
            = GCD(6,4)
            = GCD(4,2)
            = GCD(2,0)
            = 2
1998 reduces GCD(206,40) to GCD(2,0), which is 2. It is possible to show that starting with any two
1999 positive integers and performing repeated reductions will always eventually produce a pair where the
2000 second number is 0. Then the GCD is the other number in the pair. This method for computing the
2001 GCD is known as Euclid’s Algorithm. 42
2002 It is easy to express Euclid’s Algorithm as a procedure:
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
2007
2008 \fThis generates an iterative process, whose number of steps grows as the logarithm of the numbers
2009 involved.
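For example, evaluating the procedure on the pair used above (an illustrative check):

(gcd 206 40)
2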
2010 The fact that the number of steps required by Euclid’s Algorithm has logarithmic growth bears an
2011 interesting relation to the Fibonacci numbers:
2012 Lamé’s Theorem: If Euclid’s Algorithm requires k steps to compute the GCD of some pair, then the
2013 smaller number in the pair must be greater than or equal to the kth Fibonacci number. 43
2014 We can use this theorem to get an order-of-growth estimate for Euclid’s Algorithm. Let n be the
smaller of the two inputs to the procedure. If the process takes k steps, then we must have n ≥ Fib(k) ≈
φᵏ/√5. Therefore the number of steps k grows as the logarithm (to the base φ) of n. Hence, the order
of growth is Θ(log n).
2018 Exercise 1.20. The process that a procedure generates is of course dependent on the rules used by the
2019 interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to
2020 interpret this procedure using normal-order evaluation, as discussed in section 1.1.5. (The
2021 normal-order-evaluation rule for if is described in exercise 1.5.) Using the substitution method (for
2022 normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the
2023 remainder operations that are actually performed. How many remainder operations are actually
2024 performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?
2025
2026 1.2.6 Example: Testing for Primality
This section describes two methods for checking the primality of an integer n, one with order of
growth Θ(√n), and a ‘‘probabilistic’’ algorithm with order of growth Θ(log n). The exercises at the
2029 end of this section suggest programming projects based on these algorithms.
2030
2031 Searching for divisors
2032 Since ancient times, mathematicians have been fascinated by problems concerning prime numbers, and
2033 many people have worked on the problem of determining ways to test if numbers are prime. One way
2034 to test if a number is prime is to find the number’s divisors. The following program finds the smallest
2035 integral divisor (greater than 1) of a given number n. It does this in a straightforward way, by testing n
2036 for divisibility by successive integers starting with 2.
(define (smallest-divisor n)
  (find-divisor n 2))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
  (= (remainder b a) 0))
2045 We can test whether a number is prime as follows: n is prime if and only if n is its own smallest
2046 divisor.
(define (prime? n)
  (= n (smallest-divisor n)))
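For instance (an illustrative evaluation, not from the text):

(smallest-divisor 91)
7

since 91 = 7 × 13; such a number fails the prime? test because it is not its own smallest divisor.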
2049
\fThe end test for find-divisor is based on the fact that if n is not prime it must have a divisor less
than or equal to √n. 44 This means that the algorithm need only test divisors between 1 and √n.
Consequently, the number of steps required to identify n as prime will have order of growth Θ(√n).
2053
2054 The Fermat test
The Θ(log n) primality test is based on a result from number theory known as Fermat’s Little
2056 Theorem. 45
2057 Fermat’s Little Theorem: If n is a prime number and a is any positive integer less than n, then a
2058 raised to the nth power is congruent to a modulo n.
2059 (Two numbers are said to be congruent modulo n if they both have the same remainder when divided
2060 by n. The remainder of a number a when divided by n is also referred to as the remainder of a modulo
2061 n, or simply as a modulo n.)
If n is not prime, then, in general, most of the numbers a < n will not satisfy the above relation. This
leads to the following algorithm for testing primality: Given a number n, pick a random number a < n
and compute the remainder of aⁿ modulo n. If the result is not equal to a, then n is certainly not prime.
2065 If it is a, then chances are good that n is prime. Now pick another random number a and test it with the
2066 same method. If it also satisfies the equation, then we can be even more confident that n is prime. By
2067 trying more and more values of a, we can increase our confidence in the result. This algorithm is
2068 known as the Fermat test.
2069 To implement the Fermat test, we need a procedure that computes the exponential of a number modulo
2070 another number:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
2079 This is very similar to the fast-expt procedure of section 1.2.4. It uses successive squaring, so that
2080 the number of steps grows logarithmically with the exponent. 46
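As a small illustration of the theorem (our example, not the book’s): for the prime n = 5 and a = 3, a raised to the nth power is 3⁵ = 243, and 243 is congruent to 3 modulo 5:

(expmod 3 5 5)
3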
2081 The Fermat test is performed by choosing at random a number a between 1 and n - 1 inclusive and
2082 checking whether the remainder modulo n of the nth power of a is equal to a. The random number a is
2083 chosen using the procedure random, which we assume is included as a primitive in Scheme. Random
2084 returns a nonnegative integer less than its integer input. Hence, to obtain a random number between 1
2085 and n - 1, we call random with an input of n - 1 and add 1 to the result:
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
2090
2091 \fThe following procedure runs the test a given number of times, as specified by a parameter. Its value is
2092 true if the test succeeds every time, and false otherwise.
(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))
2097
2098 Probabilistic methods
2099 The Fermat test differs in character from most familiar algorithms, in which one computes an answer
2100 that is guaranteed to be correct. Here, the answer obtained is only probably correct. More precisely, if
2101 n ever fails the Fermat test, we can be certain that n is not prime. But the fact that n passes the test,
2102 while an extremely strong indication, is still not a guarantee that n is prime. What we would like to say
2103 is that for any number n, if we perform the test enough times and find that n always passes the test,
2104 then the probability of error in our primality test can be made as small as we like.
2105 Unfortunately, this assertion is not quite correct. There do exist numbers that fool the Fermat test:
numbers n that are not prime and yet have the property that aⁿ is congruent to a modulo n for all
2107 integers a < n. Such numbers are extremely rare, so the Fermat test is quite reliable in practice. 47
2108 There are variations of the Fermat test that cannot be fooled. In these tests, as with the Fermat method,
one tests the primality of an integer n by choosing a random integer a < n and checking some condition
2110 that depends upon n and a. (See exercise 1.28 for an example of such a test.) On the other hand, in
2111 contrast to the Fermat test, one can prove that, for any n, the condition does not hold for most of the
integers a < n unless n is prime. Thus, if n passes the test for some random choice of a, the chances are
2113 better than even that n is prime. If n passes the test for two random choices of a, the chances are better
2114 than 3 out of 4 that n is prime. By running the test with more and more randomly chosen values of a
2115 we can make the probability of error as small as we like.
2116 The existence of tests for which one can prove that the chance of error becomes arbitrarily small has
2117 sparked interest in algorithms of this type, which have come to be known as probabilistic algorithms.
2118 There is a great deal of research activity in this area, and probabilistic algorithms have been fruitfully
2119 applied to many fields. 48
2120 Exercise 1.21. Use the smallest-divisor procedure to find the smallest divisor of each of the
2121 following numbers: 199, 1999, 19999.
2122 Exercise 1.22. Most Lisp implementations include a primitive called runtime that returns an integer
2123 that specifies the amount of time the system has been running (measured, for example, in
2124 microseconds). The following timed-prime-test procedure, when called with an integer n, prints
2125 n and checks to see if n is prime. If n is prime, the procedure prints three asterisks followed by the
2126 amount of time used in performing the test.
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))
(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
2138 Using this procedure, write a procedure search-for-primes that checks the primality of
2139 consecutive odd integers in a specified range. Use your procedure to find the three smallest primes
2140 larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed
to test each prime. Since the testing algorithm has order of growth of Θ(√n), you should expect that
testing for primes around 10,000 should take about √10 times as long as testing for primes around
1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the
√n prediction? Is your result compatible with the notion that programs on your machine run in time
2145 proportional to the number of steps required for the computation?
2146 Exercise 1.23. The smallest-divisor procedure shown at the start of this section does lots of
2147 needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to
2148 see if it is divisible by any larger even numbers. This suggests that the values used for
2149 test-divisor should not be 2, 3, 4, 5, 6, ..., but rather 2, 3, 5, 7, 9, .... To implement this
2150 change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input
2151 plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of
2152 (+ test-divisor 1). With timed-prime-test incorporating this modified version of
2153 smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this
2154 modification halves the number of test steps, you should expect it to run about twice as fast. Is this
2155 expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how
2156 do you explain the fact that it is different from 2?
2157 Exercise 1.24. Modify the timed-prime-test procedure of exercise 1.22 to use fast-prime?
(the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test
has Θ(log n) growth, how would you expect the time to test primes near 1,000,000 to compare with
2160 the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy
2161 you find?
2162 Exercise 1.25. Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod.
2163 After all, she says, since we already know how to compute exponentials, we could have simply written
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
2166 Is she correct? Would this procedure serve as well for our fast prime tester? Explain.
2167 Exercise 1.26. Louis Reasoner is having great difficulty doing exercise 1.24. His fast-prime? test
2168 seems to run more slowly than his prime? test. Louis calls his friend Eva Lu Ator over to help. When
2169 they examine Louis’s code, they find that he has rewritten the expmod procedure to use an explicit
2170 multiplication, rather than calling square:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
2180
2181 \f‘‘I don’t see what difference that could make,’’ says Louis. ‘‘I do.’’ says Eva. ‘‘By writing the
procedure like that, you have transformed the Θ(log n) process into a Θ(n) process.’’ Explain.
2183 Exercise 1.27. Demonstrate that the Carmichael numbers listed in footnote 47 really do fool the
Fermat test. That is, write a procedure that takes an integer n and tests whether aⁿ is congruent to a
modulo n for every a < n, and try your procedure on the given Carmichael numbers.
2186 Exercise 1.28. One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test
2187 (Miller 1976; Rabin 1980). This starts from an alternate form of Fermat’s Little Theorem, which states
that if n is a prime number and a is any positive integer less than n, then a raised to the (n - 1)st power
is congruent to 1 modulo n. To test the primality of a number n by the Miller-Rabin test, we pick a
random number a < n and raise a to the (n - 1)st power modulo n using the expmod procedure.
However, whenever we perform the squaring step in expmod, we check to see if we have discovered
a ‘‘nontrivial square root of 1 modulo n,’’ that is, a number not equal to 1 or n - 1 whose square is
equal to 1 modulo n. It is possible to prove that if such a nontrivial square root of 1 exists, then n is not
prime. It is also possible to prove that if n is an odd number that is not prime, then, for at least half the
numbers a < n, computing aⁿ⁻¹ in this way will reveal a nontrivial square root of 1 modulo n. (This is
2196 why the Miller-Rabin test cannot be fooled.) Modify the expmod procedure to signal if it discovers a
2197 nontrivial square root of 1, and use this to implement the Miller-Rabin test with a procedure analogous
2198 to fermat-test. Check your procedure by testing various known primes and non-primes. Hint:
2199 One convenient way to make expmod signal is to have it return 0.
2200 29 In a real program we would probably use the block structure introduced in the last section to hide
2201
2202 the definition of fact-iter:
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))
2210 We avoided doing this here so as to minimize the number of things to think about at once.
2211 30 When we discuss the implementation of procedures on register machines in chapter 5, we will see
2212
2213 that any iterative process can be realized ‘‘in hardware’’ as a machine that has a fixed set of registers
2214 and no auxiliary memory. In contrast, realizing a recursive process requires a machine that uses an
2215 auxiliary data structure known as a stack.
2216 31 Tail recursion has long been known as a compiler optimization trick. A coherent semantic basis for
2217
2218 tail recursion was provided by Carl Hewitt (1977), who explained it in terms of the
2219 ‘‘message-passing’’ model of computation that we shall discuss in chapter 3. Inspired by this, Gerald
2220 Jay Sussman and Guy Lewis Steele Jr. (see Steele 1975) constructed a tail-recursive interpreter for
2221 Scheme. Steele later showed how tail recursion is a consequence of the natural way to compile
2222 procedure calls (Steele 1977). The IEEE standard for Scheme requires that Scheme implementations
2223 be tail-recursive.
2224 32 An example of this was hinted at in section 1.1.3: The interpreter itself evaluates expressions using
2225
2226 a tree-recursive process.
2227
2228 \f33 For example, work through in detail how the reduction rule applies to the problem of making
2229
2230 change for 10 cents using pennies and nickels.
2231 34 One approach to coping with redundant computations is to arrange matters so that we automatically
2232
2233 construct a table of values as they are computed. Each time we are asked to apply the procedure to
2234 some argument, we first look to see if the value is already stored in the table, in which case we avoid
2235 performing the redundant computation. This strategy, known as tabulation or memoization, can be
2236 implemented in a straightforward way. Tabulation can sometimes be used to transform processes that
2237 require an exponential number of steps (such as count-change) into processes whose space and
2238 time requirements grow linearly with the input. See exercise 3.27.
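As a concrete sketch of this tabulation strategy (our illustration, not code from the text; note that it relies on assignment with set!, which is not introduced until chapter 3):

(define memo-fib
  (let ((table '()))                      ; cache of (n result) entries
    (lambda (n)
      (let ((cached (assoc n table)))
        (if cached
            (cadr cached)                 ; reuse a previously computed value
            (let ((result (if (< n 2)
                              n
                              (+ (memo-fib (- n 1))
                                 (memo-fib (- n 2))))))
              (set! table (cons (list n result) table))
              result))))))

Each distinct argument is now computed only once, so the process requires a number of steps that grows linearly with n.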
2239 35 The elements of Pascal’s triangle are called the binomial coefficients, because the nth row consists
2240
of the coefficients of the terms in the expansion of (x + y)ⁿ. This pattern for computing the coefficients
2242 appeared in Blaise Pascal’s 1653 seminal work on probability theory, Traité du triangle arithmétique.
2243 According to Knuth (1973), the same pattern appears in the Szu-yuen Yü-chien (‘‘The Precious Mirror
2244 of the Four Elements’’), published by the Chinese mathematician Chu Shih-chieh in 1303, in the
2245 works of the twelfth-century Persian poet and mathematician Omar Khayyam, and in the works of the
2246 twelfth-century Hindu mathematician Bháscara Áchárya.
2247 36 These statements mask a great deal of oversimplification. For instance, if we count process steps as
2248
2249 ‘‘machine operations’’ we are making the assumption that the number of machine operations needed
2250 to perform, say, a multiplication is independent of the size of the numbers to be multiplied, which is
2251 false if the numbers are sufficiently large. Similar remarks hold for the estimates of space. Like the
2252 design and description of a process, the analysis of a process can be carried out at various levels of
2253 abstraction.
37 More precisely, the number of multiplications required is equal to 1 less than the log base 2 of n
plus the number of ones in the binary representation of n. This total is always less than twice the log
base 2 of n. The arbitrary constants k₁ and k₂ in the definition of order notation imply that, for a
logarithmic process, the base to which logarithms are taken does not matter, so all such processes are
described as Θ(log n).
2260 38 You may wonder why anyone would care about raising numbers to the 1000th power. See
2261
2262 section 1.2.6.
2263 39 This iterative algorithm is ancient. It appears in the Chandah-sutra by Áchárya Pingala, written
2264
2265 before 200 B.C. See Knuth 1981, section 4.6.3, for a full discussion and analysis of this and other
2266 methods of exponentiation.
2267 40 This algorithm, which is sometimes known as the ‘‘Russian peasant method’’ of multiplication, is
2268
2269 ancient. Examples of its use are found in the Rhind Papyrus, one of the two oldest mathematical
2270 documents in existence, written about 1700 B.C. (and copied from an even older document) by an
2271 Egyptian scribe named A’h-mose.
2272 41 This exercise was suggested to us by Joe Stoy, based on an example in Kaldewaij 1990.
2273 42 Euclid’s Algorithm is so called because it appears in Euclid’s Elements (Book 7, ca. 300 B.C.).
2274
2275 According to Knuth (1973), it can be considered the oldest known nontrivial algorithm. The ancient
2276 Egyptian method of multiplication (exercise 1.18) is surely older, but, as Knuth explains, Euclid’s
2277 algorithm is the oldest known to have been presented as a general algorithm, rather than as a set of
2278 illustrative examples.
2279
\f43 This theorem was proved in 1845 by Gabriel Lamé, a French mathematician and engineer known
chiefly for his contributions to mathematical physics. To prove the theorem, we consider pairs (aₖ, bₖ),
where aₖ ≥ bₖ, for which Euclid’s Algorithm terminates in k steps. The proof is based on the claim
that, if (aₖ₊₁, bₖ₊₁) → (aₖ, bₖ) → (aₖ₋₁, bₖ₋₁) are three successive pairs in the reduction process,
then we must have bₖ₊₁ ≥ bₖ + bₖ₋₁. To verify the claim, consider that a reduction step is defined by
applying the transformation aₖ₋₁ = bₖ, bₖ₋₁ = remainder of aₖ divided by bₖ. The second equation
means that aₖ = qbₖ + bₖ₋₁ for some positive integer q. And since q must be at least 1 we have
aₖ = qbₖ + bₖ₋₁ ≥ bₖ + bₖ₋₁. But in the previous reduction step we have bₖ₊₁ = aₖ. Therefore,
bₖ₊₁ = aₖ ≥ bₖ + bₖ₋₁. This verifies the claim. Now we can prove the theorem by induction on k, the
number of steps that the algorithm requires to terminate. The result is true for k = 1, since this merely
requires that b be at least as large as Fib(1) = 1. Now, assume that the result is true for all integers less
than or equal to k and establish the result for k + 1. Let (aₖ₊₁, bₖ₊₁) → (aₖ, bₖ) → (aₖ₋₁, bₖ₋₁) be
successive pairs in the reduction process. By our induction hypotheses, we have bₖ₋₁ ≥ Fib(k - 1) and
bₖ ≥ Fib(k). Thus, applying the claim we just proved together with the definition of the Fibonacci
numbers gives bₖ₊₁ ≥ bₖ + bₖ₋₁ ≥ Fib(k) + Fib(k - 1) = Fib(k + 1), which completes the proof of
Lamé’s Theorem.
44 If d is a divisor of n, then so is n/d. But d and n/d cannot both be greater than √n.
2301 45 Pierre de Fermat (1601-1665) is considered to be the founder of modern number theory. He
2302
2303 obtained many important number-theoretic results, but he usually announced just the results, without
2304 providing his proofs. Fermat’s Little Theorem was stated in a letter he wrote in 1640. The first
2305 published proof was given by Euler in 1736 (and an earlier, identical proof was discovered in the
2306 unpublished manuscripts of Leibniz). The most famous of Fermat’s results -- known as Fermat’s Last
2307 Theorem -- was jotted down in 1637 in his copy of the book Arithmetic (by the third-century Greek
2308 mathematician Diophantus) with the remark ‘‘I have discovered a truly remarkable proof, but this
2309 margin is too small to contain it.’’ Finding a proof of Fermat’s Last Theorem became one of the most
2310 famous challenges in number theory. A complete solution was finally given in 1995 by Andrew Wiles
2311 of Princeton University.
46 The reduction steps in the cases where the exponent e is greater than 1 are based on the fact that,
for any integers x, y, and m, we can find the remainder of x times y modulo m by computing separately
the remainders of x modulo m and y modulo m, multiplying these, and then taking the remainder of the
result modulo m. For instance, in the case where e is even, we compute the remainder of b^(e/2) modulo
m, square this, and take the remainder modulo m. This technique is useful because it means we can
perform our computation without ever having to deal with numbers much larger than m. (Compare
exercise 1.25.)
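A quick numeric check of this fact (our example): with x = 7, y = 9, and m = 5, both orders of operations give the same remainder:

(remainder (* 7 9) 5)
3
(remainder (* (remainder 7 5) (remainder 9 5)) 5)
3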
2320 47 Numbers that fool the Fermat test are called Carmichael numbers, and little is known about them
2321
2322 other than that they are extremely rare. There are 255 Carmichael numbers below 100,000,000. The
2323 smallest few are 561, 1105, 1729, 2465, 2821, and 6601. In testing primality of very large numbers
2324 chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the
2325 chance that cosmic radiation will cause the computer to make an error in carrying out a ‘‘correct’’
2326 algorithm. Considering an algorithm to be inadequate for the first reason but not for the second
2327 illustrates the difference between mathematics and engineering.
2328 48 One of the most striking applications of probabilistic prime testing has been to the field of
2329
2330 cryptography. Although it is now computationally infeasible to factor an arbitrary 200-digit number,
2331 the primality of such a number can be checked in a few seconds with the Fermat test. This fact forms
2332
2333 \fthe basis of a technique for constructing ‘‘unbreakable codes’’ suggested by Rivest, Shamir, and
2334 Adleman (1977). The resulting RSA algorithm has become a widely used technique for enhancing the
2335 security of electronic communications. Because of this and related developments, the study of prime
2336 numbers, once considered the epitome of a topic in ‘‘pure’’ mathematics to be studied only for its own
2337 sake, now turns out to have important practical applications to cryptography, electronic funds transfer,
2338 and information retrieval.
\f
2342
2343 1.3 Formulating Abstractions with Higher-Order Procedures
2344 We have seen that procedures are, in effect, abstractions that describe compound operations on
2345 numbers independent of the particular numbers. For example, when we
2346 (define (cube x) (* x x x))
2347 we are not talking about the cube of a particular number, but rather about a method for obtaining the
2348 cube of any number. Of course we could get along without ever defining this procedure, by always
2349 writing expressions such as
2350 (* 3 3 3)
2351 (* x x x)
2352 (* y y y)
2353 and never mentioning cube explicitly. This would place us at a serious disadvantage, forcing us to
2354 work always at the level of the particular operations that happen to be primitives in the language
2355 (multiplication, in this case) rather than in terms of higher-level operations. Our programs would be
2356 able to compute cubes, but our language would lack the ability to express the concept of cubing. One
2357 of the things we should demand from a powerful programming language is the ability to build
2358 abstractions by assigning names to common patterns and then to work in terms of the abstractions
2359 directly. Procedures provide this ability. This is why all but the most primitive programming
2360 languages include mechanisms for defining procedures.
2361 Yet even in numerical processing we will be severely limited in our ability to create abstractions if we
2362 are restricted to procedures whose parameters must be numbers. Often the same programming pattern
2363 will be used with a number of different procedures. To express such patterns as concepts, we will need
2364 to construct procedures that can accept procedures as arguments or return procedures as values.
2365 Procedures that manipulate procedures are called higher-order procedures. This section shows how
2366 higher-order procedures can serve as powerful abstraction mechanisms, vastly increasing the
2367 expressive power of our language.
2368
2369 1.3.1 Procedures as Arguments
2370 Consider the following three procedures. The first computes the sum of the integers from a through b:
(define (sum-integers a b)
  (if (> a b)
      0
      (+ a (sum-integers (+ a 1) b))))
2375 The second computes the sum of the cubes of the integers in the given range:
(define (sum-cubes a b)
  (if (> a b)
      0
      (+ (cube a) (sum-cubes (+ a 1) b))))
2380
\fThe third computes the sum of a sequence of terms in the series

1/(1·3) + 1/(5·7) + 1/(9·11) + ···

which converges to π/8 (very slowly): 49
(define (pi-sum a b)
  (if (> a b)
      0
      (+ (/ 1.0 (* a (+ a 2))) (pi-sum (+ a 4) b))))
2391 These three procedures clearly share a common underlying pattern. They are for the most part
2392 identical, differing only in the name of the procedure, the function of a used to compute the term to be
2393 added, and the function that provides the next value of a. We could generate each of the procedures by
2394 filling in slots in the same template:
(define (<name> a b)
  (if (> a b)
      0
      (+ (<term> a)
         (<name> (<next> a) b))))
2400 The presence of such a common pattern is strong evidence that there is a useful abstraction waiting to
2401 be brought to the surface. Indeed, mathematicians long ago identified the abstraction of summation of
2402 a series and invented ‘‘sigma notation,’’ for example
  b
  ∑ f(n) = f(a) + ··· + f(b)
 n=a
2404 to express this concept. The power of sigma notation is that it allows mathematicians to deal with the
2405 concept of summation itself rather than only with particular sums -- for example, to formulate general
2406 results about sums that are independent of the particular series being summed.
2407 Similarly, as program designers, we would like our language to be powerful enough so that we can
2408 write a procedure that expresses the concept of summation itself rather than only procedures that
2409 compute particular sums. We can do so readily in our procedural language by taking the common
2410 template shown above and transforming the ‘‘slots’’ into formal parameters:
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))
2416 Notice that sum takes as its arguments the lower and upper bounds a and b together with the
2417 procedures term and next. We can use sum just as we would any procedure. For example, we can
2418 use it (along with a procedure inc that increments its argument by 1) to define sum-cubes:
(define (inc n) (+ n 1))
(define (sum-cubes a b)
  (sum cube a inc b))
2422
2423 \fUsing this, we can compute the sum of the cubes of the integers from 1 to 10:
2424 (sum-cubes 1 10)
2425 3025
2426 With the aid of an identity procedure to compute the term, we can define sum-integers in terms of
2427 sum:
(define (identity x) x)
(define (sum-integers a b)
  (sum identity a inc b))
2431 Then we can add up the integers from 1 to 10:
2432 (sum-integers 1 10)
2433 55
2434 We can also define pi-sum in the same way: 50
(define (pi-sum a b)
  (define (pi-term x)
    (/ 1.0 (* x (+ x 2))))
  (define (pi-next x)
    (+ x 4))
  (sum pi-term a pi-next b))
Using these procedures, we can compute an approximation to π:
2445 (* 8 (pi-sum 1 1000))
2446 3.139592655589783
2447 Once we have sum, we can use it as a building block in formulating further concepts. For instance, the
2448 definite integral of a function f between the limits a and b can be approximated numerically using the
2449 formula
[f(a + dx/2) + f(a + dx + dx/2) + f(a + 2dx + dx/2) + ···] dx
2451 for small values of dx. We can express this directly as a procedure:
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b)
     dx))
2456 (integral cube 0 1 0.01)
2457 .24998750000000042
2458 (integral cube 0 1 0.001)
2459 .249999875000001
2460 (The exact value of the integral of cube between 0 and 1 is 1/4.)
2461
2462 \fExercise 1.29. Simpson’s Rule is a more accurate method of numerical integration than the method
2463 illustrated above. Using Simpson’s Rule, the integral of a function f between a and b is approximated
as

(h/3)[y₀ + 4y₁ + 2y₂ + 4y₃ + 2y₄ + ··· + 2yₙ₋₂ + 4yₙ₋₁ + yₙ]

where h = (b - a)/n, for some even integer n, and yₖ = f(a + kh). (Increasing n increases the accuracy of
2467 the approximation.) Define a procedure that takes as arguments f, a, b, and n and returns the value of
2468 the integral, computed using Simpson’s Rule. Use your procedure to integrate cube between 0 and 1
2469 (with n = 100 and n = 1000), and compare the results to those of the integral procedure shown
2470 above.
2471 Exercise 1.30. The sum procedure above generates a linear recursion. The procedure can be rewritten
2472 so that the sum is performed iteratively. Show how to do this by filling in the missing expressions in
2473 the following definition:
(define (sum term a next b)
  (define (iter a result)
    (if <??>
        <??>
        (iter <??> <??>)))
  (iter <??> <??>))
2480 Exercise 1.31.
2481 a. The sum procedure is only the simplest of a vast number of similar abstractions that can be
2482 captured as higher-order procedures. 51 Write an analogous procedure called product that returns
2483 the product of the values of a function at points over a given range. Show how to define factorial
in terms of product. Also use product to compute approximations to π using the formula 52

π/4 = (2·4·4·6·6·8 ···)/(3·3·5·5·7·7 ···)
2486 b. If your product procedure generates a recursive process, write one that generates an iterative
2487 process. If it generates an iterative process, write one that generates a recursive process.
2488 Exercise 1.32. a. Show that sum and product (exercise 1.31) are both special cases of a still more
2489 general notion called accumulate that combines a collection of terms, using some general
2490 accumulation function:
2491 (accumulate combiner null-value term a next b)
2492 Accumulate takes as arguments the same term and range specifications as sum and product,
2493 together with a combiner procedure (of two arguments) that specifies how the current term is to be
2494 combined with the accumulation of the preceding terms and a null-value that specifies what base
2495 value to use when the terms run out. Write accumulate and show how sum and product can both
2496 be defined as simple calls to accumulate.
2497 b. If your accumulate procedure generates a recursive process, write one that generates an iterative
2498 process. If it generates an iterative process, write one that generates a recursive process.
2499
2500 \fExercise 1.33. You can obtain an even more general version of accumulate (exercise 1.32) by
2501 introducing the notion of a filter on the terms to be combined. That is, combine only those terms
2502 derived from values in the range that satisfy a specified condition. The resulting
2503 filtered-accumulate abstraction takes the same arguments as accumulate, together with an
2504 additional predicate of one argument that specifies the filter. Write filtered-accumulate as a
2505 procedure. Show how to express the following using filtered-accumulate:
2506 a. the sum of the squares of the prime numbers in the interval a to b (assuming that you have a
2507 prime? predicate already written)
2508 b. the product of all the positive integers less than n that are relatively prime to n (i.e., all positive
2509 integers i < n such that GCD(i,n) = 1).
2510
2511 1.3.2 Constructing Procedures Using Lambda
2512 In using sum as in section 1.3.1, it seems terribly awkward to have to define trivial procedures such as
2513 pi-term and pi-next just so we can use them as arguments to our higher-order procedure. Rather
2514 than define pi-next and pi-term, it would be more convenient to have a way to directly specify
2515 ‘‘the procedure that returns its input incremented by 4’’ and ‘‘the procedure that returns the reciprocal
2516 of its input times its input plus 2.’’ We can do this by introducing the special form lambda, which
2517 creates procedures. Using lambda we can describe what we want as
2518 (lambda (x) (+ x 4))
2519 and
2520 (lambda (x) (/ 1.0 (* x (+ x 2))))
2521 Then our pi-sum procedure can be expressed without defining any auxiliary procedures as
(define (pi-sum a b)
  (sum (lambda (x) (/ 1.0 (* x (+ x 2))))
       a
       (lambda (x) (+ x 4))
       b))
2527 Again using lambda, we can write the integral procedure without having to define the auxiliary
2528 procedure add-dx:
(define (integral f a b dx)
  (* (sum f
          (+ a (/ dx 2.0))
          (lambda (x) (+ x dx))
          b)
     dx))
2535 In general, lambda is used to create procedures in the same way as define, except that no name is
2536 specified for the procedure:
2537 (lambda (<formal-parameters>) <body>)
2538
2539 \fThe resulting procedure is just as much a procedure as one that is created using define. The only
2540 difference is that it has not been associated with any name in the environment. In fact,
2541 (define (plus4 x) (+ x 4))
2542 is equivalent to
2543 (define plus4 (lambda (x) (+ x 4)))
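Either definition yields a procedure that behaves identically; for example (an illustrative call):

(plus4 3)
7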
2544 We can read a lambda expression as follows:
    (lambda                  (x)                 (+ x 4))
  the procedure      of an argument x     that adds x and 4
2562 Like any expression that has a procedure as its value, a lambda expression can be used as the
2563 operator in a combination such as
2564 ((lambda (x y z) (+ x y (square z))) 1 2 3)
2565 12
2566 or, more generally, in any context where we would normally use a procedure name. 53
2567
2568 Using let to create local variables
2569 Another use of lambda is in creating local variables. We often need local variables in our procedures
2570 other than those that have been bound as formal parameters. For example, suppose we wish to
2571 compute the function
f(x,y) = x(1 + xy)² + y(1 - y) + (1 + xy)(1 - y)
2573 which we could also express as
a = 1 + xy
b = 1 - y

f(x,y) = xa² + yb + ab
2575 In writing a procedure to compute f, we would like to include as local variables not only x and y but
2576 also the names of intermediate quantities like a and b. One way to accomplish this is to use an
2577 auxiliary procedure to bind the local variables:
(define (f x y)
  (define (f-helper a b)
    (+ (* x (square a))
       (* y b)
       (* a b)))
  (f-helper (+ 1 (* x y))
            (- 1 y)))
2585 Of course, we could use a lambda expression to specify an anonymous procedure for binding our
2586 local variables. The body of f then becomes a single call to that procedure:
2587
\f(define (f x y)
  ((lambda (a b)
     (+ (* x (square a))
        (* y b)
        (* a b)))
   (+ 1 (* x y))
   (- 1 y)))
2595 This construct is so useful that there is a special form called let to make its use more convenient.
2596 Using let, the f procedure could be written as
(define (f x y)
  (let ((a (+ 1 (* x y)))
        (b (- 1 y)))
    (+ (* x (square a))
       (* y b)
       (* a b))))
2603 The general form of a let expression is
(let ((<var₁> <exp₁>)
      (<var₂> <exp₂>)
      ...
      (<varₙ> <expₙ>))
  <body>)
2608 which can be thought of as saying
let <var₁> have the value <exp₁> and
    <var₂> have the value <exp₂> and
    ...
    <varₙ> have the value <expₙ>
in <body>
2618
2619 The first part of the let expression is a list of name-expression pairs. When the let is evaluated,
2620 each name is associated with the value of the corresponding expression. The body of the let is
2621 evaluated with these names bound as local variables. The way this happens is that the let expression
2622 is interpreted as an alternate syntax for
((lambda (<var₁> ... <varₙ>)
   <body>)
 <exp₁>
 ...
 <expₙ>)
2627
2628 \fNo new mechanism is required in the interpreter in order to provide local variables. A let expression
2629 is simply syntactic sugar for the underlying lambda application.
2630 We can see from this equivalence that the scope of a variable specified by a let expression is the
2631 body of the let. This implies that:
2632 Let allows one to bind variables as locally as possible to where they are to be used. For example,
2633 if the value of x is 5, the value of the expression
(+ (let ((x 3))
     (+ x (* x 10)))
   x)
2637 is 38. Here, the x in the body of the let is 3, so the value of the let expression is 33. On the
2638 other hand, the x that is the second argument to the outermost + is still 5.
2639 The variables’ values are computed outside the let. This matters when the expressions that
2640 provide the values for the local variables depend upon variables having the same names as the
2641 local variables themselves. For example, if the value of x is 2, the expression
(let ((x 3)
      (y (+ x 2)))
  (* x y))
2645 will have the value 12 because, inside the body of the let, x will be 3 and y will be 4 (which is
2646 the outer x plus 2).
2647 Sometimes we can use internal definitions to get the same effect as with let. For example, we could
2648 have defined the procedure f above as
2649 (define (f x y)
2650 (define a (+ 1 (* x y)))
2651 (define b (- 1 y))
2652 (+ (* x (square a))
2653 (* y b)
2654 (* a b)))
2655 We prefer, however, to use let in situations like this and to use internal define only for internal
2656 procedures. 54
2657 Exercise 1.34. Suppose we define the procedure
2658 (define (f g)
2659 (g 2))
2660 Then we have
2661 (f square)
2662 4
2663 (f (lambda (z) (* z (+ z 1))))
2664 6
2665
What happens if we (perversely) ask the interpreter to evaluate the combination (f f)? Explain.
2667
2668 1.3.3 Procedures as General Methods
2669 We introduced compound procedures in section 1.1.4 as a mechanism for abstracting patterns of
2670 numerical operations so as to make them independent of the particular numbers involved. With
2671 higher-order procedures, such as the integral procedure of section 1.3.1, we began to see a more
2672 powerful kind of abstraction: procedures used to express general methods of computation, independent
2673 of the particular functions involved. In this section we discuss two more elaborate examples -- general
2674 methods for finding zeros and fixed points of functions -- and show how these methods can be
2675 expressed directly as procedures.
2676
2677 Finding roots of equations by the half-interval method
2678 The half-interval method is a simple but powerful technique for finding roots of an equation f(x) = 0,
2679 where f is a continuous function. The idea is that, if we are given points a and b such that f(a) < 0 <
2680 f(b), then f must have at least one zero between a and b. To locate a zero, let x be the average of a and
2681 b and compute f(x). If f(x) > 0, then f must have a zero between a and x. If f(x) < 0, then f must have a
2682 zero between x and b. Continuing in this way, we can identify smaller and smaller intervals on which f
2683 must have a zero. When we reach a point where the interval is small enough, the process stops. Since
2684 the interval of uncertainty is reduced by half at each step of the process, the number of steps required
grows as Θ(log(L/T)), where L is the length of the original interval and T is the error tolerance (that
2686 is, the size of the interval we will consider ‘‘small enough’’). Here is a procedure that implements this
2687 strategy:
2688 (define (search f neg-point pos-point)
2689 (let ((midpoint (average neg-point pos-point)))
2690 (if (close-enough? neg-point pos-point)
2691 midpoint
2692 (let ((test-value (f midpoint)))
2693 (cond ((positive? test-value)
2694 (search f neg-point midpoint))
2695 ((negative? test-value)
2696 (search f midpoint pos-point))
2697 (else midpoint))))))
2698 We assume that we are initially given the function f together with points at which its values are
2699 negative and positive. We first compute the midpoint of the two given points. Next we check to see if
2700 the given interval is small enough, and if so we simply return the midpoint as our answer. Otherwise,
2701 we compute as a test value the value of f at the midpoint. If the test value is positive, then we continue
2702 the process with a new interval running from the original negative point to the midpoint. If the test
2703 value is negative, we continue with the interval from the midpoint to the positive point. Finally, there
2704 is the possibility that the test value is 0, in which case the midpoint is itself the root we are searching
2705 for.
2706 To test whether the endpoints are ‘‘close enough’’ we can use a procedure similar to the one used in
2707 section 1.1.7 for computing square roots: 55
2708 (define (close-enough? x y)
2709 (< (abs (- x y)) 0.001))
2710
Search is awkward to use directly, because we can accidentally give it points at which f’s values do
2712 not have the required sign, in which case we get a wrong answer. Instead we will use search via the
2713 following procedure, which checks to see which of the endpoints has a negative function value and
2714 which has a positive value, and calls the search procedure accordingly. If the function has the same
2715 sign on the two given points, the half-interval method cannot be used, in which case the procedure
2716 signals an error. 56
2717 (define (half-interval-method f a b)
2718 (let ((a-value (f a))
2719 (b-value (f b)))
2720 (cond ((and (negative? a-value) (positive? b-value))
2721 (search f a b))
2722 ((and (negative? b-value) (positive? a-value))
2723 (search f b a))
2724 (else
2725 (error "Values are not of opposite sign" a b)))))
The following example uses the half-interval method to approximate π as the root between 2 and 4 of
sin x = 0:
2731 (half-interval-method sin 2.0 4.0)
2732 3.14111328125
Here is another example, using the half-interval method to search for a root of the equation
x^3 - 2x - 3 = 0 between 1 and 2:
2735 (half-interval-method (lambda (x) (- (* x x x) (* 2 x) 3))
2736 1.0
2737 2.0)
2738 1.89306640625
2739
2740 Finding fixed points of functions
2741 A number x is called a fixed point of a function f if x satisfies the equation f(x) = x. For some functions
2742 f we can locate a fixed point by beginning with an initial guess and applying f repeatedly,
f(x), f(f(x)), f(f(f(x))), ...
2744 until the value does not change very much. Using this idea, we can devise a procedure
2745 fixed-point that takes as inputs a function and an initial guess and produces an approximation to a
2746 fixed point of the function. We apply the function repeatedly until we find two successive values
2747 whose difference is less than some prescribed tolerance:
2748 (define tolerance 0.00001)
2749 (define (fixed-point f first-guess)
2750 (define (close-enough? v1 v2)
2751 (< (abs (- v1 v2)) tolerance))
2752 (define (try guess)
2753 (let ((next (f guess)))
2754 (if (close-enough? guess next)
2755 next
2756 (try next))))
2757
(try first-guess))
2759 For example, we can use this method to approximate the fixed point of the cosine function, starting
2760 with 1 as an initial approximation: 57
2761 (fixed-point cos 1.0)
2762 .7390822985224023
2763 Similarly, we can find a solution to the equation y = sin y + cos y:
2764 (fixed-point (lambda (y) (+ (sin y) (cos y)))
2765 1.0)
2766 1.2587315962971173
2767 The fixed-point process is reminiscent of the process we used for finding square roots in section 1.1.7.
2768 Both are based on the idea of repeatedly improving a guess until the result satisfies some criterion. In
2769 fact, we can readily formulate the square-root computation as a fixed-point search. Computing the
square root of some number x requires finding a y such that y^2 = x. Putting this equation into the
equivalent form y = x/y, we recognize that we are looking for a fixed point of the function y ↦ x/y, 58
and we can therefore try to compute square roots as
2773 (define (sqrt x)
2774 (fixed-point (lambda (y) (/ x y))
2775 1.0))
Unfortunately, this fixed-point search does not converge. Consider an initial guess y_1. The next guess
is y_2 = x/y_1 and the next guess is y_3 = x/y_2 = x/(x/y_1) = y_1. This results in an infinite loop in which
the two guesses y_1 and y_2 repeat over and over, oscillating about the answer.
2779 One way to control such oscillations is to prevent the guesses from changing so much. Since the
2780 answer is always between our guess y and x/y, we can make a new guess that is not as far from y as x/y
2781 by averaging y with x/y, so that the next guess after y is (1/2)(y + x/y) instead of x/y. The process of
making such a sequence of guesses is simply the process of looking for a fixed point of
y ↦ (1/2)(y + x/y):
2784 (define (sqrt x)
2785 (fixed-point (lambda (y) (average y (/ x y)))
2786 1.0))
2787 (Note that y = (1/2)(y + x/y) is a simple transformation of the equation y = x/y; to derive it, add y to
2788 both sides of the equation and divide by 2.)
2789 With this modification, the square-root procedure works. In fact, if we unravel the definitions, we can
2790 see that the sequence of approximations to the square root generated here is precisely the same as the
2791 one generated by our original square-root procedure of section 1.1.7. This approach of averaging
successive approximations to a solution, a technique that we call average damping, often aids the
2793 convergence of fixed-point searches.
Exercise 1.35. Show that the golden ratio φ (section 1.2.2) is a fixed point of the transformation
x ↦ 1 + 1/x, and use this fact to compute φ by means of the fixed-point procedure.
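One way to carry out the computation (a sketch -- the exercise asks you to derive it yourself) is to
hand the transformation directly to the fixed-point procedure defined above:

(fixed-point (lambda (x) (+ 1 (/ 1 x)))
             1.0)

which should converge to a value near 1.618.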
Exercise 1.36. Modify fixed-point so that it prints the sequence of approximations it generates,
using the newline and display primitives shown in exercise 1.22. Then find a solution to x^x =
1000 by finding a fixed point of x ↦ log(1000)/log(x). (Use Scheme’s primitive log procedure,
2803 which computes natural logarithms.) Compare the number of steps this takes with and without average
2804 damping. (Note that you cannot start fixed-point with a guess of 1, as this would cause division
2805 by log(1) = 0.)
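A sketch of one possible modification, assuming display and newline behave as in exercise 1.22;
the only change from the fixed-point above is printing each guess as try receives it:

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (display guess)      ; print the current approximation
    (newline)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))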
Exercise 1.37. a. An infinite continued fraction is an expression of the form

f = N_1/(D_1 + N_2/(D_2 + N_3/(D_3 + ...)))

As an example, one can show that the infinite continued fraction expansion with the N_i and the D_i all
equal to 1 produces 1/φ, where φ is the golden ratio (described in section 1.2.2). One way to
approximate an infinite continued fraction is to truncate the expansion after a given number of terms.
Such a truncation -- a so-called k-term finite continued fraction -- has the form

f_k = N_1/(D_1 + N_2/(... + N_k/D_k))

Suppose that n and d are procedures of one argument (the term index i) that return the N_i and D_i of
2814 the terms of the continued fraction. Define a procedure cont-frac such that evaluating
2815 (cont-frac n d k) computes the value of the k-term finite continued fraction. Check your
procedure by approximating 1/φ using
2817 (cont-frac (lambda (i) 1.0)
2818 (lambda (i) 1.0)
2819 k)
2820 for successive values of k. How large must you make k in order to get an approximation that is
2821 accurate to 4 decimal places?
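For concreteness, here is one possible recursive formulation of cont-frac, offered as a sketch
rather than as the definitive solution (the internal name frac is our own):

(define (cont-frac n d k)
  (define (frac i)
    (if (> i k)
        0   ; past the last term, contribute nothing
        (/ (n i) (+ (d i) (frac (+ i 1))))))
  (frac 1))

With n and d both (lambda (i) 1.0), the result should approach 1/φ, about .6180, as k grows.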
2822 b. If your cont-frac procedure generates a recursive process, write one that generates an iterative
2823 process. If it generates an iterative process, write one that generates a recursive process.
2824 Exercise 1.38. In 1737, the Swiss mathematician Leonhard Euler published a memoir De
2825 Fractionibus Continuis, which included a continued fraction expansion for e - 2, where e is the base of
the natural logarithms. In this fraction, the N_i are all 1, and the D_i are successively 1, 2, 1, 1, 4, 1, 1,
2827 6, 1, 1, 8, .... Write a program that uses your cont-frac procedure from exercise 1.37 to
2828 approximate e, based on Euler’s expansion.
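One way to express Euler’s D_i sequence as a procedure (a sketch; euler-d is a name of our own
choosing). Every third term, starting with the second, is the next even number; all other terms are 1:

(define (euler-d i)
  (if (= (remainder i 3) 2)
      (* 2 (/ (+ i 1) 3))   ; positions 2, 5, 8, ... give 2, 4, 6, ...
      1))

(+ 2 (cont-frac (lambda (i) 1.0) euler-d k)) should then approximate e for suitably large k.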
2829 Exercise 1.39. A continued fraction representation of the tangent function was published in 1770 by
the German mathematician J.H. Lambert:

tan x = x/(1 - x^2/(3 - x^2/(5 - ...)))

where x is in radians. Define a procedure (tan-cf x k) that computes an approximation to the
tangent function based on Lambert’s formula, where k specifies the number of terms to compute, as in
2834 exercise 1.37.
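Assuming a cont-frac like the sketch under exercise 1.37, one concise formulation (again a sketch)
takes N_1 = x, N_i = -x^2 for i > 1, and D_i = 2i - 1:

(define (tan-cf x k)
  (cont-frac (lambda (i) (if (= i 1) x (- (* x x))))
             (lambda (i) (- (* 2 i) 1))
             k))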
2835
2836 1.3.4 Procedures as Returned Values
2837 The above examples demonstrate how the ability to pass procedures as arguments significantly
2838 enhances the expressive power of our programming language. We can achieve even more expressive
2839 power by creating procedures whose returned values are themselves procedures.
2840 We can illustrate this idea by looking again at the fixed-point example described at the end of
2841 section 1.3.3. We formulated a new version of the square-root procedure as a fixed-point search,
starting with the observation that √x is a fixed point of the function y ↦ x/y. Then we used average
2843 damping to make the approximations converge. Average damping is a useful general technique in
2844 itself. Namely, given a function f, we consider the function whose value at x is equal to the average of
2845 x and f(x).
2846 We can express the idea of average damping by means of the following procedure:
2847 (define (average-damp f)
2848 (lambda (x) (average x (f x))))
2849 Average-damp is a procedure that takes as its argument a procedure f and returns as its value a
2850 procedure (produced by the lambda) that, when applied to a number x, produces the average of x and
2851 (f x). For example, applying average-damp to the square procedure produces a procedure
whose value at some number x is the average of x and x^2. Applying this resulting procedure to 10
2853 returns the average of 10 and 100, or 55: 59
2854 ((average-damp square) 10)
2855 55
2856 Using average-damp, we can reformulate the square-root procedure as follows:
2857 (define (sqrt x)
2858 (fixed-point (average-damp (lambda (y) (/ x y)))
2859 1.0))
2860 Notice how this formulation makes explicit the three ideas in the method: fixed-point search, average
damping, and the function y ↦ x/y. It is instructive to compare this formulation of the square-root
2862 method with the original version given in section 1.1.7. Bear in mind that these procedures express the
2863 same process, and notice how much clearer the idea becomes when we express the process in terms of
2864 these abstractions. In general, there are many ways to formulate a process as a procedure. Experienced
2865 programmers know how to choose procedural formulations that are particularly perspicuous, and
2866 where useful elements of the process are exposed as separate entities that can be reused in other
2867 applications. As a simple example of reuse, notice that the cube root of x is a fixed point of the
function y ↦ x/y^2, so we can immediately generalize our square-root procedure to one that extracts
cube roots: 60
2871 (define (cube-root x)
2872 (fixed-point (average-damp (lambda (y) (/ x (square y))))
2873 1.0))
2874
2875 Newton’s method
2876 When we first introduced the square-root procedure, in section 1.1.7, we mentioned that this was a
special case of Newton’s method. If x ↦ g(x) is a differentiable function, then a solution of the
equation g(x) = 0 is a fixed point of the function x ↦ f(x) where

f(x) = x - g(x)/Dg(x)

2880 and Dg(x) is the derivative of g evaluated at x. Newton’s method is the use of the fixed-point method
2881 we saw above to approximate a solution of the equation by finding a fixed point of the function f. 61
2882 For many functions g and for sufficiently good initial guesses for x, Newton’s method converges very
2883 rapidly to a solution of g(x) = 0. 62
2884 In order to implement Newton’s method as a procedure, we must first express the idea of derivative.
2885 Note that ‘‘derivative,’’ like average damping, is something that transforms a function into another
function. For instance, the derivative of the function x ↦ x^3 is the function x ↦ 3x^2. In general, if g is
2887 a function and dx is a small number, then the derivative Dg of g is the function whose value at any
number x is given (in the limit of small dx) by

Dg(x) = (g(x + dx) - g(x))/dx

2890 Thus, we can express the idea of derivative (taking dx to be, say, 0.00001) as the procedure
2891 (define (deriv g)
2892 (lambda (x)
2893 (/ (- (g (+ x dx)) (g x))
2894 dx)))
2895 along with the definition
2896 (define dx 0.00001)
2897 Like average-damp, deriv is a procedure that takes a procedure as argument and returns a
procedure as value. For example, to approximate the derivative of x ↦ x^3 at 5 (whose exact value is
2899 75) we can evaluate
2900 (define (cube x) (* x x x))
2901 ((deriv cube) 5)
2902 75.00014999664018
2903 With the aid of deriv, we can express Newton’s method as a fixed-point process:
2904
(define (newton-transform g)
2906 (lambda (x)
2907 (- x (/ (g x) ((deriv g) x)))))
2908 (define (newtons-method g guess)
2909 (fixed-point (newton-transform g) guess))
2910 The newton-transform procedure expresses the formula at the beginning of this section, and
2911 newtons-method is readily defined in terms of this. It takes as arguments a procedure that
2912 computes the function for which we want to find a zero, together with an initial guess. For instance, to
find the square root of x, we can use Newton’s method to find a zero of the function y ↦ y^2 - x starting
2914 with an initial guess of 1. 63 This provides yet another form of the square-root procedure:
2915 (define (sqrt x)
2916 (newtons-method (lambda (y) (- (square y) x))
2917 1.0))
2918
2919 Abstractions and first-class procedures
2920 We’ve seen two ways to express the square-root computation as an instance of a more general method,
2921 once as a fixed-point search and once using Newton’s method. Since Newton’s method was itself
2922 expressed as a fixed-point process, we actually saw two ways to compute square roots as fixed points.
2923 Each method begins with a function and finds a fixed point of some transformation of the function. We
2924 can express this general idea itself as a procedure:
2925 (define (fixed-point-of-transform g transform guess)
2926 (fixed-point (transform g) guess))
2927 This very general procedure takes as its arguments a procedure g that computes some function, a
2928 procedure that transforms g, and an initial guess. The returned result is a fixed point of the
2929 transformed function.
2930 Using this abstraction, we can recast the first square-root computation from this section (where we
look for a fixed point of the average-damped version of y ↦ x/y) as an instance of this general method:
2932 (define (sqrt x)
2933 (fixed-point-of-transform (lambda (y) (/ x y))
2934 average-damp
2935 1.0))
2936 Similarly, we can express the second square-root computation from this section (an instance of
Newton’s method that finds a fixed point of the Newton transform of y ↦ y^2 - x) as
2938 (define (sqrt x)
2939 (fixed-point-of-transform (lambda (y) (- (square y) x))
2940 newton-transform
2941 1.0))
2942 We began section 1.3 with the observation that compound procedures are a crucial abstraction
2943 mechanism, because they permit us to express general methods of computing as explicit elements in
2944 our programming language. Now we’ve seen how higher-order procedures permit us to manipulate
2945 these general methods to create further abstractions.
2946
As programmers, we should be alert to opportunities to identify the underlying abstractions in our
2948 programs and to build upon them and generalize them to create more powerful abstractions. This is not
2949 to say that one should always write programs in the most abstract way possible; expert programmers
2950 know how to choose the level of abstraction appropriate to their task. But it is important to be able to
2951 think in terms of these abstractions, so that we can be ready to apply them in new contexts. The
2952 significance of higher-order procedures is that they enable us to represent these abstractions explicitly
2953 as elements in our programming language, so that they can be handled just like other computational
2954 elements.
2955 In general, programming languages impose restrictions on the ways in which computational elements
2956 can be manipulated. Elements with the fewest restrictions are said to have first-class status. Some of
2957 the ‘‘rights and privileges’’ of first-class elements are: 64
2958 They may be named by variables.
2959 They may be passed as arguments to procedures.
2960 They may be returned as the results of procedures.
2961 They may be included in data structures. 65
2962 Lisp, unlike other common programming languages, awards procedures full first-class status. This
2963 poses challenges for efficient implementation, but the resulting gain in expressive power is
2964 enormous. 66
2965 Exercise 1.40. Define a procedure cubic that can be used together with the newtons-method
2966 procedure in expressions of the form
2967 (newtons-method (cubic a b c) 1)
to approximate zeros of the cubic x^3 + ax^2 + bx + c.
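A minimal sketch of one possible cubic, assuming square and cube as defined elsewhere in this
chapter:

(define (cubic a b c)
  (lambda (x)
    (+ (cube x) (* a (square x)) (* b x) c)))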
2969 Exercise 1.41. Define a procedure double that takes a procedure of one argument as argument and
2970 returns a procedure that applies the original procedure twice. For example, if inc is a procedure that
2971 adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is
2972 returned by
2973 (((double (double double)) inc) 5)
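One possible double (a sketch; working out the value of the expression above is the point of the
exercise):

(define (double f)
  (lambda (x) (f (f x))))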
2974 Exercise 1.42. Let f and g be two one-argument functions. The composition f after g is defined to be
the function x ↦ f(g(x)). Define a procedure compose that implements composition. For example, if
2976 inc is a procedure that adds 1 to its argument,
2977 ((compose square inc) 6)
2978 49
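One possible definition, given as a sketch:

(define (compose f g)
  (lambda (x) (f (g x))))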
2979 Exercise 1.43. If f is a numerical function and n is a positive integer, then we can form the nth
2980 repeated application of f, which is defined to be the function whose value at x is f(f(...(f(x))...)).
For example, if f is the function x ↦ x + 1, then the nth repeated application of f is the function
x ↦ x + n. If f is the operation of squaring a number, then the nth repeated application of f is the function
that raises its argument to the (2^n)th power. Write a procedure that takes as inputs a procedure that computes
2984 f and a positive integer n and returns the procedure that computes the nth repeated application of f.
2985 Your procedure should be able to be used as follows:
2986
((repeated square 2) 5)
2988 625
2989 Hint: You may find it convenient to use compose from exercise 1.42.
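Following the hint, one possible sketch builds repeated out of compose from exercise 1.42:

(define (repeated f n)
  (if (= n 1)
      f
      (compose f (repeated f (- n 1)))))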
2990 Exercise 1.44. The idea of smoothing a function is an important concept in signal processing. If f is a
2991 function and dx is some small number, then the smoothed version of f is the function whose value at a
2992 point x is the average of f(x - dx), f(x), and f(x + dx). Write a procedure smooth that takes as input a
2993 procedure that computes f and returns a procedure that computes the smoothed f. It is sometimes
2994 valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to
obtain the n-fold smoothed function. Show how to generate the n-fold smoothed function of any
2996 given function using smooth and repeated from exercise 1.43.
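A sketch of one possible smooth, with dx a small number as in the deriv discussion above; the
n-fold smoothed function is then ((repeated smooth n) f):

(define (smooth f)
  (lambda (x)
    (/ (+ (f (- x dx)) (f x) (f (+ x dx)))   ; average of f at x - dx, x, x + dx
       3)))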
2997 Exercise 1.45. We saw in section 1.3.3 that attempting to compute square roots by naively finding a
fixed point of y ↦ x/y does not converge, and that this can be fixed by average damping. The same
method works for finding cube roots as fixed points of the average-damped y ↦ x/y^2. Unfortunately,
the process does not work for fourth roots -- a single average damp is not enough to make a
fixed-point search for y ↦ x/y^3 converge. On the other hand, if we average damp twice (i.e., use the
average damp of the average damp of y ↦ x/y^3) the fixed-point search does converge. Do some
experiments to determine how many average damps are required to compute nth roots as a fixed-point
search based upon repeated average damping of y ↦ x/y^(n-1). Use this to implement a simple procedure
3005 for computing nth roots using fixed-point, average-damp, and the repeated procedure of
3006 exercise 1.43. Assume that any arithmetic operations you need are available as primitives.
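Experiments (not carried out here) suggest that floor(log2 n) average damps suffice for n of 2 or
more. A sketch along those lines, in which log2-floor and nth-root are names of our own and
fixed-point, average-damp, and repeated are as above:

(define (log2-floor n)
  ; how many times n can be halved before reaching 1
  (if (< n 2)
      0
      (+ 1 (log2-floor (quotient n 2)))))
(define (nth-root x n)
  (fixed-point ((repeated average-damp (log2-floor n))
                (lambda (y) (/ x (expt y (- n 1)))))
               1.0))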
3007 Exercise 1.46. Several of the numerical methods described in this chapter are instances of an
3008 extremely general computational strategy known as iterative improvement. Iterative improvement says
3009 that, to compute something, we start with an initial guess for the answer, test if the guess is good
3010 enough, and otherwise improve the guess and continue the process using the improved guess as the
3011 new guess. Write a procedure iterative-improve that takes two procedures as arguments: a
3012 method for telling whether a guess is good enough and a method for improving a guess.
3013 Iterative-improve should return as its value a procedure that takes a guess as argument and
3014 keeps improving the guess until it is good enough. Rewrite the sqrt procedure of section 1.1.7 and
3015 the fixed-point procedure of section 1.3.3 in terms of iterative-improve.
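As a sketch of the general shape (not the only possible formulation), iterative-improve can
return an internal procedure that loops until the test succeeds; sqrt then supplies the test and the
improver, with square and average as in section 1.1.7:

(define (iterative-improve good-enough? improve)
  (define (iterate guess)
    (if (good-enough? guess)
        guess
        (iterate (improve guess))))
  iterate)
(define (sqrt x)
  ((iterative-improve
    (lambda (guess) (< (abs (- (square guess) x)) 0.001))
    (lambda (guess) (average guess (/ x guess))))
   1.0))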
49 This series, usually written in the equivalent form π/4 = 1 - (1/3) + (1/5) - (1/7) + ···, is due to
3017
3018 Leibniz. We’ll see how to use this as the basis for some fancy numerical tricks in section 3.5.3.
3019 50 Notice that we have used block structure (section 1.1.8) to embed the definitions of pi-next and
3020
3021 pi-term within pi-sum, since these procedures are unlikely to be useful for any other purpose. We
3022 will see how to get rid of them altogether in section 1.3.2.
3023 51 The intent of exercises 1.31-1.33 is to demonstrate the expressive power that is attained by using an
3024
3025 appropriate abstraction to consolidate many seemingly disparate operations. However, though
3026 accumulation and filtering are elegant ideas, our hands are somewhat tied in using them at this point
3027 since we do not yet have data structures to provide suitable means of combination for these
3028 abstractions. We will return to these ideas in section 2.2.3 when we show how to use sequences as
3029 interfaces for combining filters and accumulators to build even more powerful abstractions. We will
3030 see there how these methods really come into their own as a powerful and elegant approach to
3031 designing programs.
3032
52 This formula was discovered by the seventeenth-century English mathematician John Wallis.
3034 53 It would be clearer and less intimidating to people learning Lisp if a name more obvious than
3035
3036 lambda, such as make-procedure, were used. But the convention is firmly entrenched. The
notation is adopted from the λ calculus, a mathematical formalism introduced by the mathematical
logician Alonzo Church (1941). Church developed the λ calculus to provide a rigorous foundation for
studying the notions of function and function application. The λ calculus has become a basic tool for
3040 mathematical investigations of the semantics of programming languages.
3041 54 Understanding internal definitions well enough to be sure a program means what we intend it to
3042
3043 mean requires a more elaborate model of the evaluation process than we have presented in this
3044 chapter. The subtleties do not arise with internal definitions of procedures, however. We will return to
3045 this issue in section 4.1.6, after we learn more about evaluation.
3046 55 We have used 0.001 as a representative ‘‘small’’ number to indicate a tolerance for the acceptable
3047
3048 error in a calculation. The appropriate tolerance for a real calculation depends upon the problem to be
3049 solved and the limitations of the computer and the algorithm. This is often a very subtle consideration,
3050 requiring help from a numerical analyst or some other kind of magician.
3051 56 This can be accomplished using error, which takes as arguments a number of items that are
3052
3053 printed as error messages.
3054 57 Try this during a boring lecture: Set your calculator to radians mode and then repeatedly press the
3055
3056 cos button until you obtain the fixed point.
58 ↦ (pronounced ‘‘maps to’’) is the mathematician’s way of writing lambda. y ↦ x/y means
(lambda (y) (/ x y)), that is, the function whose value at y is x/y.
3064 59 Observe that this is a combination whose operator is itself a combination. Exercise 1.4 already
3065
3066 demonstrated the ability to form such combinations, but that was only a toy example. Here we begin to
3067 see the real need for such combinations -- when applying a procedure that is obtained as the value
3068 returned by a higher-order procedure.
3069 60 See exercise 1.45 for a further generalization.
3070 61 Elementary calculus books usually describe Newton’s method in terms of the sequence of
3071
approximations x_(n+1) = x_n - g(x_n)/Dg(x_n). Having language for talking about processes and using the
3073 idea of fixed points simplifies the description of the method.
3074 62 Newton’s method does not always converge to an answer, but it can be shown that in favorable
3075
3076 cases each iteration doubles the number-of-digits accuracy of the approximation to the solution. In
3077 such cases, Newton’s method will converge much more rapidly than the half-interval method.
3078 63 For finding square roots, Newton’s method converges rapidly to the correct solution from any
3079
3080 starting point.
3081 64 The notion of first-class status of programming-language elements is due to the British computer
3082
3083 scientist Christopher Strachey (1916-1975).
3084 65 We’ll see examples of this after we introduce data structures in chapter 2.
3085
66 The major implementation cost of first-class procedures is that allowing procedures to be returned
3087
3088 as values requires reserving storage for a procedure’s free variables even while the procedure is not
3089 executing. In the Scheme implementation we will study in section 4.1, these variables are stored in the
3090 procedure’s environment.
3095 Chapter 2
3096 Building Abstractions with Data
3097 We now come to the decisive step of mathematical
3098 abstraction: we forget about what the symbols stand for.
3099 ...[The mathematician] need not be idle; there are many
3100 operations which he may carry out with these symbols,
3101 without ever having to look at the things they stand for.
3102 Hermann Weyl, The Mathematical Way of Thinking
3103 We concentrated in chapter 1 on computational processes and on the role of procedures in program
3104 design. We saw how to use primitive data (numbers) and primitive operations (arithmetic operations),
3105 how to combine procedures to form compound procedures through composition, conditionals, and the
3106 use of parameters, and how to abstract procedures by using define. We saw that a procedure can be
3107 regarded as a pattern for the local evolution of a process, and we classified, reasoned about, and
3108 performed simple algorithmic analyses of some common patterns for processes as embodied in
3109 procedures. We also saw that higher-order procedures enhance the power of our language by enabling
3110 us to manipulate, and thereby to reason in terms of, general methods of computation. This is much of
3111 the essence of programming.
3112 In this chapter we are going to look at more complex data. All the procedures in chapter 1 operate on
3113 simple numerical data, and simple data are not sufficient for many of the problems we wish to address
3114 using computation. Programs are typically designed to model complex phenomena, and more often
3115 than not one must construct computational objects that have several parts in order to model real-world
3116 phenomena that have several aspects. Thus, whereas our focus in chapter 1 was on building
3117 abstractions by combining procedures to form compound procedures, we turn in this chapter to another
3118 key aspect of any programming language: the means it provides for building abstractions by
3119 combining data objects to form compound data.
3120 Why do we want compound data in a programming language? For the same reasons that we want
3121 compound procedures: to elevate the conceptual level at which we can design our programs, to
3122 increase the modularity of our designs, and to enhance the expressive power of our language. Just as
3123 the ability to define procedures enables us to deal with processes at a higher conceptual level than that
3124 of the primitive operations of the language, the ability to construct compound data objects enables us
3125 to deal with data at a higher conceptual level than that of the primitive data objects of the language.
3126 Consider the task of designing a system to perform arithmetic with rational numbers. We could
3127 imagine an operation add-rat that takes two rational numbers and produces their sum. In terms of
3128 simple data, a rational number can be thought of as two integers: a numerator and a denominator.
3129 Thus, we could design a program in which each rational number would be represented by two integers
3130 (a numerator and a denominator) and where add-rat would be implemented by two procedures (one
3131 producing the numerator of the sum and one producing the denominator). But this would be awkward,
3132 because we would then need to explicitly keep track of which numerators corresponded to which
3133 denominators. In a system intended to perform many operations on many rational numbers, such
3134 bookkeeping details would clutter the programs substantially, to say nothing of what they would do to
3135
our minds. It would be much better if we could ‘‘glue together’’ a numerator and denominator to form
3137 a pair -- a compound data object -- that our programs could manipulate in a way that would be
3138 consistent with regarding a rational number as a single conceptual unit.
3139 The use of compound data also enables us to increase the modularity of our programs. If we can
3140 manipulate rational numbers directly as objects in their own right, then we can separate the part of our
3141 program that deals with rational numbers per se from the details of how rational numbers may be
3142 represented as pairs of integers. The general technique of isolating the parts of a program that deal
3143 with how data objects are represented from the parts of a program that deal with how data objects are
3144 used is a powerful design methodology called data abstraction. We will see how data abstraction
3145 makes programs much easier to design, maintain, and modify.
3146 The use of compound data leads to a real increase in the expressive power of our programming
3147 language. Consider the idea of forming a ‘‘linear combination’’ ax + by. We might like to write a
3148 procedure that would accept a, b, x, and y as arguments and return the value of ax + by. This presents
3149 no difficulty if the arguments are to be numbers, because we can readily define the procedure
3150 (define (linear-combination a b x y)
3151 (+ (* a x) (* b y)))
3152 But suppose we are not concerned only with numbers. Suppose we would like to express, in
3153 procedural terms, the idea that one can form linear combinations whenever addition and multiplication
3154 are defined -- for rational numbers, complex numbers, polynomials, or whatever. We could express
3155 this as a procedure of the form
3156 (define (linear-combination a b x y)
3157 (add (mul a x) (mul b y)))
3158 where add and mul are not the primitive procedures + and * but rather more complex things that will
3159 perform the appropriate operations for whatever kinds of data we pass in as the arguments a, b, x, and
3160 y. The key point is that the only thing linear-combination should need to know about a, b, x,
3161 and y is that the procedures add and mul will perform the appropriate manipulations. From the
3162 perspective of the procedure linear-combination, it is irrelevant what a, b, x, and y are and
3163 even more irrelevant how they might happen to be represented in terms of more primitive data. This
3164 same example shows why it is important that our programming language provide the ability to
3165 manipulate compound objects directly: Without this, there is no way for a procedure such as
3166 linear-combination to pass its arguments along to add and mul without having to know their
3167 detailed structure. 1 We begin this chapter by implementing the rational-number arithmetic system
3168 mentioned above. This will form the background for our discussion of compound data and data
3169 abstraction. As with compound procedures, the main issue to be addressed is that of abstraction as a
3170 technique for coping with complexity, and we will see how data abstraction enables us to erect suitable
3171 abstraction barriers between different parts of a program.
3172 We will see that the key to forming compound data is that a programming language should provide
3173 some kind of ‘‘glue’’ so that data objects can be combined to form more complex data objects. There
3174 are many possible kinds of glue. Indeed, we will discover how to form compound data using no
3175 special ‘‘data’’ operations at all, only procedures. This will further blur the distinction between
3176 ‘‘procedure’’ and ‘‘data,’’ which was already becoming tenuous toward the end of chapter 1. We will
3177 also explore some conventional techniques for representing sequences and trees. One key idea in
3178 dealing with compound data is the notion of closure -- that the glue we use for combining data objects
3179 should allow us to combine not only primitive data objects, but compound data objects as well.
3180 Another key idea is that compound data objects can serve as conventional interfaces for combining
3181
program modules in mix-and-match ways. We illustrate some of these ideas by presenting a simple
3183 graphics language that exploits closure.
3184 We will then augment the representational power of our language by introducing symbolic expressions
3185 -- data whose elementary parts can be arbitrary symbols rather than only numbers. We explore various
3186 alternatives for representing sets of objects. We will find that, just as a given numerical function can
3187 be computed by many different computational processes, there are many ways in which a given data
3188 structure can be represented in terms of simpler objects, and the choice of representation can have
3189 significant impact on the time and space requirements of processes that manipulate the data. We will
3190 investigate these ideas in the context of symbolic differentiation, the representation of sets, and the
3191 encoding of information.
3192 Next we will take up the problem of working with data that may be represented differently by different
3193 parts of a program. This leads to the need to implement generic operations, which must handle many
3194 different types of data. Maintaining modularity in the presence of generic operations requires more
3195 powerful abstraction barriers than can be erected with simple data abstraction alone. In particular, we
3196 introduce data-directed programming as a technique that allows individual data representations to be
3197 designed in isolation and then combined additively (i.e., without modification). To illustrate the power
3198 of this approach to system design, we close the chapter by applying what we have learned to the
3199 implementation of a package for performing symbolic arithmetic on polynomials, in which the
3200 coefficients of the polynomials can be integers, rational numbers, complex numbers, and even other
3201 polynomials.
3202 1 The ability to directly manipulate procedures provides an analogous increase in the expressive
3203
3204 power of a programming language. For example, in section 1.3.1 we introduced the sum procedure,
3205 which takes a procedure term as an argument and computes the sum of the values of term over
3206 some specified interval. In order to define sum, it is crucial that we be able to speak of a procedure
3207 such as term as an entity in its own right, without regard for how term might be expressed with
3208 more primitive operations. Indeed, if we did not have the notion of ‘‘a procedure,’’ it is doubtful that
3209 we would ever even think of the possibility of defining an operation such as sum. Moreover, insofar as
3210 performing the summation is concerned, the details of how term may be constructed from more
3211 primitive operations are irrelevant.
3216 2.1 Introduction to Data Abstraction
3217 In section 1.1.8, we noted that a procedure used as an element in creating a more complex procedure
3218 could be regarded not only as a collection of particular operations but also as a procedural abstraction.
3219 That is, the details of how the procedure was implemented could be suppressed, and the particular
3220 procedure itself could be replaced by any other procedure with the same overall behavior. In other
3221 words, we could make an abstraction that would separate the way the procedure would be used from
3222 the details of how the procedure would be implemented in terms of more primitive procedures. The
3223 analogous notion for compound data is called data abstraction. Data abstraction is a methodology that
3224 enables us to isolate how a compound data object is used from the details of how it is constructed from
3225 more primitive data objects.
3226 The basic idea of data abstraction is to structure the programs that are to use compound data objects so
3227 that they operate on ‘‘abstract data.’’ That is, our programs should use data in such a way as to make
3228 no assumptions about the data that are not strictly necessary for performing the task at hand. At the
3229 same time, a ‘‘concrete’’ data representation is defined independent of the programs that use the data.
3230 The interface between these two parts of our system will be a set of procedures, called selectors and
3231 constructors, that implement the abstract data in terms of the concrete representation. To illustrate this
3232 technique, we will consider how to design a set of procedures for manipulating rational numbers.
3233
3234 2.1.1 Example: Arithmetic Operations for Rational Numbers
3235 Suppose we want to do arithmetic with rational numbers. We want to be able to add, subtract,
3236 multiply, and divide them and to test whether two rational numbers are equal.
3237 Let us begin by assuming that we already have a way of constructing a rational number from a
3238 numerator and a denominator. We also assume that, given a rational number, we have a way of
3239 extracting (or selecting) its numerator and its denominator. Let us further assume that the constructor
3240 and selectors are available as procedures:
3241 (make-rat <n> <d>) returns the rational number whose numerator is the integer <n> and
3242 whose denominator is the integer <d>.
3243 (numer <x>) returns the numerator of the rational number <x>.
3244 (denom <x>) returns the denominator of the rational number <x>.
3245 We are using here a powerful strategy of synthesis: wishful thinking. We haven’t yet said how a
3246 rational number is represented, or how the procedures numer, denom, and make-rat should be
3247 implemented. Even so, if we did have these three procedures, we could then add, subtract, multiply,
3248 divide, and test equality by using the following relations:
n_1/d_1 + n_2/d_2 = (n_1 d_2 + n_2 d_1)/(d_1 d_2)
n_1/d_1 - n_2/d_2 = (n_1 d_2 - n_2 d_1)/(d_1 d_2)
(n_1/d_1)(n_2/d_2) = (n_1 n_2)/(d_1 d_2)
(n_1/d_1)/(n_2/d_2) = (n_1 d_2)/(d_1 n_2)
n_1/d_1 = n_2/d_2 if and only if n_1 d_2 = n_2 d_1

We can express these rules as procedures:
(define (add-rat x y)
  (make-rat (+ (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))
(define (sub-rat x y)
  (make-rat (- (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))
(define (mul-rat x y)
  (make-rat (* (numer x) (numer y))
            (* (denom x) (denom y))))
(define (div-rat x y)
  (make-rat (* (numer x) (denom y))
            (* (denom x) (numer y))))
(define (equal-rat? x y)
  (= (* (numer x) (denom y))
     (* (numer y) (denom x))))

3282 Now we have the operations on rational numbers defined in terms of the selector and constructor
3283 procedures numer, denom, and make-rat. But we haven’t yet defined these. What we need is some
3284 way to glue together a numerator and a denominator to form a rational number.
3285
3286 Pairs
3287 To enable us to implement the concrete level of our data abstraction, our language provides a
3288 compound structure called a pair, which can be constructed with the primitive procedure cons. This
3289 procedure takes two arguments and returns a compound data object that contains the two arguments as
3290 parts. Given a pair, we can extract the parts using the primitive procedures car and cdr. 2 Thus, we
3291 can use cons, car, and cdr as follows:
3292 (define x (cons 1 2))
3293 (car x)
3294 1
3295 (cdr x)
3296 2
3297 Notice that a pair is a data object that can be given a name and manipulated, just like a primitive data
3298 object. Moreover, cons can be used to form pairs whose elements are pairs, and so on:
3299
(define x (cons 1 2))
(define y (cons 3 4))
(define z (cons x y))
(car (car z))
1
(car (cdr z))
3

3314 In section 2.2 we will see how this ability to combine pairs means that pairs can be used as
3315 general-purpose building blocks to create all sorts of complex data structures. The single
3316 compound-data primitive pair, implemented by the procedures cons, car, and cdr, is the only glue
3317 we need. Data objects constructed from pairs are called list-structured data.
3318
3319 Representing rational numbers
3320 Pairs offer a natural way to complete the rational-number system. Simply represent a rational number
3321 as a pair of two integers: a numerator and a denominator. Then make-rat, numer, and denom are
3322 readily implemented as follows: 3
3323 (define (make-rat n d) (cons n d))
3324 (define (numer x) (car x))
3325 (define (denom x) (cdr x))
3326 Also, in order to display the results of our computations, we can print rational numbers by printing the
3327 numerator, a slash, and the denominator: 4
3328 (define (print-rat x)
3329 (newline)
3330 (display (numer x))
3331 (display "/")
3332 (display (denom x)))
3333 Now we can try our rational-number procedures:
3334 (define one-half (make-rat 1 2))
3335 (print-rat one-half)
3336 1/2
3337 (define one-third (make-rat 1 3))
3338 (print-rat (add-rat one-half one-third))
3339 5/6
3340 (print-rat (mul-rat one-half one-third))
3341 1/6
3342 (print-rat (add-rat one-third one-third))
3343 6/9
3344 As the final example shows, our rational-number implementation does not reduce rational numbers to
3345 lowest terms. We can remedy this by changing make-rat. If we have a gcd procedure like the one
3346 in section 1.2.5 that produces the greatest common divisor of two integers, we can use gcd to reduce
3347 the numerator and the denominator to lowest terms before constructing the pair:
3348
(define (make-rat n d)
3350 (let ((g (gcd n d)))
3351 (cons (/ n g) (/ d g))))
3352 Now we have
3353 (print-rat (add-rat one-third one-third))
3354 2/3
3355 as desired. This modification was accomplished by changing the constructor make-rat without
3356 changing any of the procedures (such as add-rat and mul-rat) that implement the actual
3357 operations.
3358 Exercise 2.1. Define a better version of make-rat that handles both positive and negative
3359 arguments. Make-rat should normalize the sign so that if the rational number is positive, both the
3360 numerator and denominator are positive, and if the rational number is negative, only the numerator is
3361 negative.
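One possible sketch: divide through by the gcd as before, and then move any overall minus sign to
the numerator (Scheme’s gcd returns a nonnegative result, so the signs of n and d survive the
division):

(define (make-rat n d)
  (let ((g (gcd n d)))
    (let ((n (/ n g))
          (d (/ d g)))
      (if (< d 0)
          (cons (- n) (- d))   ; flip both signs when the denominator is negative
          (cons n d)))))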
3362
3363 2.1.2 Abstraction Barriers
3364 Before continuing with more examples of compound data and data abstraction, let us consider some of
3365 the issues raised by the rational-number example. We defined the rational-number operations in terms
3366 of a constructor make-rat and selectors numer and denom. In general, the underlying idea of data
3367 abstraction is to identify for each type of data object a basic set of operations in terms of which all
3368 manipulations of data objects of that type will be expressed, and then to use only those operations in
3369 manipulating the data.
3370 We can envision the structure of the rational-number system as shown in figure 2.1. The horizontal
3371 lines represent abstraction barriers that isolate different ‘‘levels’’ of the system. At each level, the
3372 barrier separates the programs (above) that use the data abstraction from the programs (below) that
3373 implement the data abstraction. Programs that use rational numbers manipulate them solely in terms of
3374 the procedures supplied ‘‘for public use’’ by the rational-number package: add-rat, sub-rat,
3375 mul-rat, div-rat, and equal-rat?. These, in turn, are implemented solely in terms of the
3376 constructor and selectors make-rat, numer, and denom, which themselves are implemented in
3377 terms of pairs. The details of how pairs are implemented are irrelevant to the rest of the
3378 rational-number package so long as pairs can be manipulated by the use of cons, car, and cdr. In
3379 effect, procedures at each level are the interfaces that define the abstraction barriers and connect the
3380 different levels.
3381
Figure 2.1: Data-abstraction barriers in the rational-number package.
3384 This simple idea has many advantages. One advantage is that it makes programs much easier to
3385 maintain and to modify. Any complex data structure can be represented in a variety of ways with the
3386 primitive data structures provided by a programming language. Of course, the choice of representation
3387 influences the programs that operate on it; thus, if the representation were to be changed at some later
3388 time, all such programs might have to be modified accordingly. This task could be time-consuming
3389 and expensive in the case of large programs unless the dependence on the representation were to be
3390 confined by design to a very few program modules.
3391 For example, an alternate way to address the problem of reducing rational numbers to lowest terms is
3392 to perform the reduction whenever we access the parts of a rational number, rather than when we
3393 construct it. This leads to different constructor and selector procedures:
3394 (define (make-rat n d)
3395 (cons n d))
3396 (define (numer x)
3397 (let ((g (gcd (car x) (cdr x))))
3398 (/ (car x) g)))
3399 (define (denom x)
3400 (let ((g (gcd (car x) (cdr x))))
3401 (/ (cdr x) g)))
3402 The difference between this implementation and the previous one lies in when we compute the gcd. If
3403 in our typical use of rational numbers we access the numerators and denominators of the same rational
3404 numbers many times, it would be preferable to compute the gcd when the rational numbers are
3405 constructed. If not, we may be better off waiting until access time to compute the gcd. In any case,
3406 when we change from one representation to the other, the procedures add-rat, sub-rat, and so on
3407 do not have to be modified at all.
3408 Constraining the dependence on the representation to a few interface procedures helps us design
3409 programs as well as modify them, because it allows us to maintain the flexibility to consider alternate
3410 implementations. To continue with our simple example, suppose we are designing a rational-number
3411 package and we can’t decide initially whether to perform the gcd at construction time or at selection
3412
time. The data-abstraction methodology gives us a way to defer that decision without losing the ability
3414 to make progress on the rest of the system.
3415 Exercise 2.2. Consider the problem of representing line segments in a plane. Each segment is
3416 represented as a pair of points: a starting point and an ending point. Define a constructor
3417 make-segment and selectors start-segment and end-segment that define the representation
3418 of segments in terms of points. Furthermore, a point can be represented as a pair of numbers: the x
3419 coordinate and the y coordinate. Accordingly, specify a constructor make-point and selectors
3420 x-point and y-point that define this representation. Finally, using your selectors and
3421 constructors, define a procedure midpoint-segment that takes a line segment as argument and
3422 returns its midpoint (the point whose coordinates are the average of the coordinates of the endpoints).
3423 To try your procedures, you’ll need a way to print points:
3424 (define (print-point p)
3425 (newline)
3426 (display "(")
3427 (display (x-point p))
3428 (display ",")
3429 (display (y-point p))
3430 (display ")"))
3431 Exercise 2.3. Implement a representation for rectangles in a plane. (Hint: You may want to make use
3432 of exercise 2.2.) In terms of your constructors and selectors, create procedures that compute the
3433 perimeter and the area of a given rectangle. Now implement a different representation for rectangles.
3434 Can you design your system with suitable abstraction barriers, so that the same perimeter and area
3435 procedures will work using either representation?
3436
3437 2.1.3 What Is Meant by Data?
3438 We began the rational-number implementation in section 2.1.1 by implementing the rational-number
3439 operations add-rat, sub-rat, and so on in terms of three unspecified procedures: make-rat,
3440 numer, and denom. At that point, we could think of the operations as being defined in terms of data
3441 objects -- numerators, denominators, and rational numbers -- whose behavior was specified by the
3442 latter three procedures.
3443 But exactly what is meant by data? It is not enough to say ‘‘whatever is implemented by the given
3444 selectors and constructors.’’ Clearly, not every arbitrary set of three procedures can serve as an
3445 appropriate basis for the rational-number implementation. We need to guarantee that, if we construct a
3446 rational number x from a pair of integers n and d, then extracting the numer and the denom of x and
3447 dividing them should yield the same result as dividing n by d. In other words, make-rat, numer,
3448 and denom must satisfy the condition that, for any integer n and any non-zero integer d, if x is
3449 (make-rat n d), then
3450
(numer x)/(denom x) = n/d
3452 suitable basis for a rational-number representation. In general, we can think of data as defined by some
3453 collection of selectors and constructors, together with specified conditions that these procedures must
3454 fulfill in order to be a valid representation. 5
3455
This point of view can serve to define not only ‘‘high-level’’ data objects, such as rational numbers,
3457 but lower-level objects as well. Consider the notion of a pair, which we used in order to define our
3458 rational numbers. We never actually said what a pair was, only that the language supplied procedures
3459 cons, car, and cdr for operating on pairs. But the only thing we need to know about these three
3460 operations is that if we glue two objects together using cons we can retrieve the objects using car
3461 and cdr. That is, the operations satisfy the condition that, for any objects x and y, if z is (cons x
3462 y) then (car z) is x and (cdr z) is y. Indeed, we mentioned that these three procedures are
3463 included as primitives in our language. However, any triple of procedures that satisfies the above
3464 condition can be used as the basis for implementing pairs. This point is illustrated strikingly by the fact
3465 that we could implement cons, car, and cdr without using any data structures at all but only using
3466 procedures. Here are the definitions:
3467 (define (cons x y)
3468 (define (dispatch m)
3469 (cond ((= m 0) x)
3470 ((= m 1) y)
3471 (else (error "Argument not 0 or 1 -- CONS" m))))
3472 dispatch)
3473 (define (car z) (z 0))
3474 (define (cdr z) (z 1))
3475 This use of procedures corresponds to nothing like our intuitive notion of what data should be.
3476 Nevertheless, all we need to do to show that this is a valid way to represent pairs is to verify that these
3477 procedures satisfy the condition given above.
3478 The subtle point to notice is that the value returned by (cons x y) is a procedure -- namely the
3479 internally defined procedure dispatch, which takes one argument and returns either x or y
3480 depending on whether the argument is 0 or 1. Correspondingly, (car z) is defined to apply z to 0.
3481 Hence, if z is the procedure formed by (cons x y), then z applied to 0 will yield x. Thus, we have
3482 shown that (car (cons x y)) yields x, as desired. Similarly, (cdr (cons x y)) applies the
3483 procedure returned by (cons x y) to 1, which returns y. Therefore, this procedural implementation
3484 of pairs is a valid implementation, and if we access pairs using only cons, car, and cdr we cannot
3485 distinguish this implementation from one that uses ‘‘real’’ data structures.
3486 The point of exhibiting the procedural representation of pairs is not that our language works this way
3487 (Scheme, and Lisp systems in general, implement pairs directly, for efficiency reasons) but that it
3488 could work this way. The procedural representation, although obscure, is a perfectly adequate way to
3489 represent pairs, since it fulfills the only conditions that pairs need to fulfill. This example also
3490 demonstrates that the ability to manipulate procedures as objects automatically provides the ability to
3491 represent compound data. This may seem a curiosity now, but procedural representations of data will
3492 play a central role in our programming repertoire. This style of programming is often called message
3493 passing, and we will be using it as a basic tool in chapter 3 when we address the issues of modeling
3494 and simulation.
3495 Exercise 2.4. Here is an alternative procedural representation of pairs. For this representation, verify
3496 that (car (cons x y)) yields x for any objects x and y.
3497 (define (cons x y)
3498 (lambda (m) (m x y)))
3499 (define (car z)
3500 (z (lambda (p q) p)))
3501
3502 \fWhat is the corresponding definition of cdr? (Hint: To verify that this works, make use of the
3503 substitution model of section 1.1.5.)
3504 Exercise 2.5. Show that we can represent pairs of nonnegative integers using only numbers and
arithmetic operations if we represent the pair a and b as the integer that is the product 2^a 3^b. Give the
3506 corresponding definitions of the procedures cons, car, and cdr.
3507 Exercise 2.6. In case representing pairs as procedures wasn’t mind-boggling enough, consider that, in
3508 a language that can manipulate procedures, we can get by without numbers (at least insofar as
3509 nonnegative integers are concerned) by implementing 0 and the operation of adding 1 as
3510 (define zero (lambda (f) (lambda (x) x)))
3511 (define (add-1 n)
3512 (lambda (f) (lambda (x) (f ((n f) x)))))
3513 This representation is known as Church numerals, after its inventor, Alonzo Church, the logician who
invented the λ-calculus.
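As an aside, one convenient way to inspect a Church numeral is to apply it to an ordinary increment procedure and the ordinary 0. The helper below (church->int, a name introduced here purely for illustration) recovers the integer that a Church numeral encodes:

(define (church->int n)
  ((n (lambda (k) (+ k 1))) 0))
(church->int zero)
0
(church->int (add-1 (add-1 zero)))
2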
3515 Define one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate
3516 (add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated
3517 application of add-1).
3518
3519 2.1.4 Extended Exercise: Interval Arithmetic
3520 Alyssa P. Hacker is designing a system to help people solve engineering problems. One feature she
3521 wants to provide in her system is the ability to manipulate inexact quantities (such as measured
3522 parameters of physical devices) with known precision, so that when computations are done with such
3523 approximate quantities the results will be numbers of known precision.
3524 Electrical engineers will be using Alyssa’s system to compute electrical quantities. It is sometimes
necessary for them to compute the value of a parallel equivalent resistance Rp of two resistors R1 and
R2 using the formula

    Rp = 1/(1/R1 + 1/R2)
3528 Resistance values are usually known only up to some tolerance guaranteed by the manufacturer of the
3529 resistor. For example, if you buy a resistor labeled ‘‘6.8 ohms with 10% tolerance’’ you can only be
3530 sure that the resistor has a resistance between 6.8 - 0.68 = 6.12 and 6.8 + 0.68 = 7.48 ohms. Thus, if
3531 you have a 6.8-ohm 10% resistor in parallel with a 4.7-ohm 5% resistor, the resistance of the
3532 combination can range from about 2.58 ohms (if the two resistors are at the lower bounds) to about
3533 2.97 ohms (if the two resistors are at the upper bounds).
3534 Alyssa’s idea is to implement ‘‘interval arithmetic’’ as a set of arithmetic operations for combining
3535 ‘‘intervals’’ (objects that represent the range of possible values of an inexact quantity). The result of
3536 adding, subtracting, multiplying, or dividing two intervals is itself an interval, representing the range
3537 of the result.
3538 Alyssa postulates the existence of an abstract object called an ‘‘interval’’ that has two endpoints: a
3539 lower bound and an upper bound. She also presumes that, given the endpoints of an interval, she can
3540 construct the interval using the data constructor make-interval. Alyssa first writes a procedure for
3541
3542 \fadding two intervals. She reasons that the minimum value the sum could be is the sum of the two
3543 lower bounds and the maximum value it could be is the sum of the two upper bounds:
3544 (define (add-interval x y)
3545 (make-interval (+ (lower-bound x) (lower-bound y))
3546 (+ (upper-bound x) (upper-bound y))))
3547 Alyssa also works out the product of two intervals by finding the minimum and the maximum of the
3548 products of the bounds and using them as the bounds of the resulting interval. (Min and max are
3549 primitives that find the minimum or maximum of any number of arguments.)
(define (mul-interval x y)
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))
3562
3563 To divide two intervals, Alyssa multiplies the first by the reciprocal of the second. Note that the
3564 bounds of the reciprocal interval are the reciprocal of the upper bound and the reciprocal of the lower
3565 bound, in that order.
3566 (define (div-interval x y)
3567 (mul-interval x
3568 (make-interval (/ 1.0 (upper-bound y))
3569 (/ 1.0 (lower-bound y)))))
3570 Exercise 2.7. Alyssa’s program is incomplete because she has not specified the implementation of the
3571 interval abstraction. Here is a definition of the interval constructor:
3572 (define (make-interval a b) (cons a b))
3573 Define selectors upper-bound and lower-bound to complete the implementation.
3574 Exercise 2.8. Using reasoning analogous to Alyssa’s, describe how the difference of two intervals
3575 may be computed. Define a corresponding subtraction procedure, called sub-interval.
3576 Exercise 2.9. The width of an interval is half of the difference between its upper and lower bounds.
3577 The width is a measure of the uncertainty of the number specified by the interval. For some arithmetic
3578 operations the width of the result of combining two intervals is a function only of the widths of the
3579 argument intervals, whereas for others the width of the combination is not a function of the widths of
3580 the argument intervals. Show that the width of the sum (or difference) of two intervals is a function
3581 only of the widths of the intervals being added (or subtracted). Give examples to show that this is not
3582 true for multiplication or division.
3583 Exercise 2.10. Ben Bitdiddle, an expert systems programmer, looks over Alyssa’s shoulder and
3584 comments that it is not clear what it means to divide by an interval that spans zero. Modify Alyssa’s
3585 code to check for this condition and to signal an error if it occurs.
3586
3587 \fExercise 2.11. In passing, Ben also cryptically comments: ‘‘By testing the signs of the endpoints of
3588 the intervals, it is possible to break mul-interval into nine cases, only one of which requires more
3589 than two multiplications.’’ Rewrite this procedure using Ben’s suggestion.
3590 After debugging her program, Alyssa shows it to a potential user, who complains that her program
3591 solves the wrong problem. He wants a program that can deal with numbers represented as a center
value and an additive tolerance; for example, he wants to work with intervals such as 3.5 ± 0.15 rather
3593 than [3.35, 3.65]. Alyssa returns to her desk and fixes this problem by supplying an alternate
3594 constructor and alternate selectors:
3595 (define (make-center-width c w)
3596 (make-interval (- c w) (+ c w)))
3597 (define (center i)
3598 (/ (+ (lower-bound i) (upper-bound i)) 2))
3599 (define (width i)
3600 (/ (- (upper-bound i) (lower-bound i)) 2))
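For example, assuming the cons-based make-interval of exercise 2.7, with lower-bound and upper-bound as the matching selectors, a session might look like this:

(define i (make-center-width 10 2))
(lower-bound i)
8
(upper-bound i)
12
(center i)
10
(width i)
2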
3601 Unfortunately, most of Alyssa’s users are engineers. Real engineering situations usually involve
3602 measurements with only a small uncertainty, measured as the ratio of the width of the interval to the
3603 midpoint of the interval. Engineers usually specify percentage tolerances on the parameters of devices,
3604 as in the resistor specifications given earlier.
3605 Exercise 2.12. Define a constructor make-center-percent that takes a center and a percentage
3606 tolerance and produces the desired interval. You must also define a selector percent that produces
3607 the percentage tolerance for a given interval. The center selector is the same as the one shown
3608 above.
3609 Exercise 2.13. Show that under the assumption of small percentage tolerances there is a simple
3610 formula for the approximate percentage tolerance of the product of two intervals in terms of the
3611 tolerances of the factors. You may simplify the problem by assuming that all numbers are positive.
3612 After considerable work, Alyssa P. Hacker delivers her finished system. Several years later, after she
3613 has forgotten all about it, she gets a frenzied call from an irate user, Lem E. Tweakit. It seems that
3614 Lem has noticed that the formula for parallel resistors can be written in two algebraically equivalent
3615 ways:
    R1 R2 / (R1 + R2)

and

    1/(1/R1 + 1/R2)
3619 He has written the following two programs, each of which computes the parallel-resistors formula
3620 differently:
3621 (define (par1 r1 r2)
3622 (div-interval (mul-interval r1 r2)
3623 (add-interval r1 r2)))
(define (par2 r1 r2)
  (let ((one (make-interval 1 1)))
    (div-interval one
                  (add-interval (div-interval one r1)
                                (div-interval one r2)))))
3630 Lem complains that Alyssa’s program gives different answers for the two ways of computing. This is a
3631 serious complaint.
3632 Exercise 2.14. Demonstrate that Lem is right. Investigate the behavior of the system on a variety of
3633 arithmetic expressions. Make some intervals A and B, and use them in computing the expressions A/A
3634 and A/B. You will get the most insight by using intervals whose width is a small percentage of the
3635 center value. Examine the results of the computation in center-percent form (see exercise 2.12).
3636 Exercise 2.15. Eva Lu Ator, another user, has also noticed the different intervals computed by
3637 different but algebraically equivalent expressions. She says that a formula to compute with intervals
3638 using Alyssa’s system will produce tighter error bounds if it can be written in such a form that no
3639 variable that represents an uncertain number is repeated. Thus, she says, par2 is a ‘‘better’’ program
3640 for parallel resistances than par1. Is she right? Why?
3641 Exercise 2.16. Explain, in general, why equivalent algebraic expressions may lead to different
3642 answers. Can you devise an interval-arithmetic package that does not have this shortcoming, or is this
3643 task impossible? (Warning: This problem is very difficult.)
3644 2 The name cons stands for ‘‘construct.’’ The names car and cdr derive from the original
3645
3646 implementation of Lisp on the IBM 704. That machine had an addressing scheme that allowed one to
3647 reference the ‘‘address’’ and ‘‘decrement’’ parts of a memory location. Car stands for ‘‘Contents of
3648 Address part of Register’’ and cdr (pronounced ‘‘could-er’’) stands for ‘‘Contents of Decrement part
3649 of Register.’’
3650 3 Another way to define the selectors and constructor is
3651
3652 (define make-rat cons)
3653 (define numer car)
3654 (define denom cdr)
3655 The first definition associates the name make-rat with the value of the expression cons, which is
3656 the primitive procedure that constructs pairs. Thus make-rat and cons are names for the same
3657 primitive constructor.
3658 Defining selectors and constructors in this way is efficient: Instead of make-rat calling cons,
3659 make-rat is cons, so there is only one procedure called, not two, when make-rat is called. On
3660 the other hand, doing this defeats debugging aids that trace procedure calls or put breakpoints on
3661 procedure calls: You may want to watch make-rat being called, but you certainly don’t want to
3662 watch every call to cons.
3663 We have chosen not to use this style of definition in this book.
3664 4 Display is the Scheme primitive for printing data. The Scheme primitive newline starts a new
3665
3666 line for printing. Neither of these procedures returns a useful value, so in the uses of print-rat
3667 below, we show only what print-rat prints, not what the interpreter prints as the value returned by
3668 print-rat.
3669
3670 \f5 Surprisingly, this idea is very difficult to formulate rigorously. There are two approaches to giving
3671
3672 such a formulation. One, pioneered by C. A. R. Hoare (1972), is known as the method of abstract
3673 models. It formalizes the ‘‘procedures plus conditions’’ specification as outlined in the
3674 rational-number example above. Note that the condition on the rational-number representation was
3675 stated in terms of facts about integers (equality and division). In general, abstract models define new
3676 kinds of data objects in terms of previously defined types of data objects. Assertions about data objects
3677 can therefore be checked by reducing them to assertions about previously defined data objects.
3678 Another approach, introduced by Zilles at MIT, by Goguen, Thatcher, Wagner, and Wright at IBM
3679 (see Thatcher, Wagner, and Wright 1978), and by Guttag at Toronto (see Guttag 1977), is called
3680 algebraic specification. It regards the ‘‘procedures’’ as elements of an abstract algebraic system whose
3681 behavior is specified by axioms that correspond to our ‘‘conditions,’’ and uses the techniques of
3682 abstract algebra to check assertions about data objects. Both methods are surveyed in the paper by
3683 Liskov and Zilles (1975).
3687
3688 2.2 Hierarchical Data and the Closure Property
3689 As we have seen, pairs provide a primitive ‘‘glue’’ that we can use to construct compound data
3690 objects. Figure 2.2 shows a standard way to visualize a pair -- in this case, the pair formed by (cons
3691 1 2). In this representation, which is called box-and-pointer notation, each object is shown as a
3692 pointer to a box. The box for a primitive object contains a representation of the object. For example,
3693 the box for a number contains a numeral. The box for a pair is actually a double box, the left part
3694 containing (a pointer to) the car of the pair and the right part containing the cdr.
3695 We have already seen that cons can be used to combine not only numbers but pairs as well. (You
3696 made use of this fact, or should have, in doing exercises 2.2 and 2.3.) As a consequence, pairs provide
3697 a universal building block from which we can construct all sorts of data structures. Figure 2.3 shows
3698 two ways to use pairs to combine the numbers 1, 2, 3, and 4.
3699
Figure 2.2: Box-and-pointer representation of (cons 1 2).
3702
Figure 2.3: Two ways to combine 1, 2, 3, and 4 using pairs.
3705 The ability to create pairs whose elements are pairs is the essence of list structure’s importance as a
3706 representational tool. We refer to this ability as the closure property of cons. In general, an operation
3707 for combining data objects satisfies the closure property if the results of combining things with that
3708 operation can themselves be combined using the same operation. 6 Closure is the key to power in any
3709 means of combination because it permits us to create hierarchical structures -- structures made up of
3710 parts, which themselves are made up of parts, and so on.
3711
3712 \fFrom the outset of chapter 1, we’ve made essential use of closure in dealing with procedures, because
3713 all but the very simplest programs rely on the fact that the elements of a combination can themselves
3714 be combinations. In this section, we take up the consequences of closure for compound data. We
3715 describe some conventional techniques for using pairs to represent sequences and trees, and we exhibit
3716 a graphics language that illustrates closure in a vivid way. 7
3717
3718 2.2.1 Representing Sequences
3719
Figure 2.4: The sequence 1, 2, 3, 4 represented as a chain of pairs.
3722 One of the useful structures we can build with pairs is a sequence -- an ordered collection of data
3723 objects. There are, of course, many ways to represent sequences in terms of pairs. One particularly
3724 straightforward representation is illustrated in figure 2.4, where the sequence 1, 2, 3, 4 is represented
3725 as a chain of pairs. The car of each pair is the corresponding item in the chain, and the cdr of the
3726 pair is the next pair in the chain. The cdr of the final pair signals the end of the sequence by pointing
3727 to a distinguished value that is not a pair, represented in box-and-pointer diagrams as a diagonal line
3728 and in programs as the value of the variable nil. The entire sequence is constructed by nested cons
3729 operations:
3730 (cons 1
3731 (cons 2
3732 (cons 3
3733 (cons 4 nil))))
3734 Such a sequence of pairs, formed by nested conses, is called a list, and Scheme provides a primitive
3735 called list to help in constructing lists. 8 The above sequence could be produced by (list 1 2 3
3736 4). In general,
(list <a1> <a2> ... <an>)
3738 is equivalent to
(cons <a1> (cons <a2> (cons ... (cons <an> nil) ...)))
3740 Lisp systems conventionally print lists by printing the sequence of elements, enclosed in parentheses.
3741 Thus, the data object in figure 2.4 is printed as (1 2 3 4):
3742 (define one-through-four (list 1 2 3 4))
3743 one-through-four
3744 (1 2 3 4)
3745
3746 \fBe careful not to confuse the expression (list 1 2 3 4) with the list (1 2 3 4), which is the
3747 result obtained when the expression is evaluated. Attempting to evaluate the expression (1 2 3 4)
3748 will signal an error when the interpreter tries to apply the procedure 1 to arguments 2, 3, and 4.
3749 We can think of car as selecting the first item in the list, and of cdr as selecting the sublist
3750 consisting of all but the first item. Nested applications of car and cdr can be used to extract the
3751 second, third, and subsequent items in the list. 9 The constructor cons makes a list like the original
3752 one, but with an additional item at the beginning.
3753 (car one-through-four)
3754 1
3755 (cdr one-through-four)
3756 (2 3 4)
3757 (car (cdr one-through-four))
3758 2
3759 (cons 10 one-through-four)
3760 (10 1 2 3 4)
3761 (cons 5 one-through-four)
3762 (5 1 2 3 4)
3763 The value of nil, used to terminate the chain of pairs, can be thought of as a sequence of no elements,
3764 the empty list. The word nil is a contraction of the Latin word nihil, which means ‘‘nothing.’’ 10
3765
3766 List operations
3767 The use of pairs to represent sequences of elements as lists is accompanied by conventional
3768 programming techniques for manipulating lists by successively ‘‘cdring down’’ the lists. For
3769 example, the procedure list-ref takes as arguments a list and a number n and returns the nth item
3770 of the list. It is customary to number the elements of the list beginning with 0. The method for
3771 computing list-ref is the following:
3772 For n = 0, list-ref should return the car of the list.
3773 Otherwise, list-ref should return the (n - 1)st item of the cdr of the list.
3774 (define (list-ref items n)
3775 (if (= n 0)
3776 (car items)
3777 (list-ref (cdr items) (- n 1))))
3778 (define squares (list 1 4 9 16 25))
3779 (list-ref squares 3)
3780 16
3781 Often we cdr down the whole list. To aid in this, Scheme includes a primitive predicate null?,
3782 which tests whether its argument is the empty list. The procedure length, which returns the number
3783 of items in a list, illustrates this typical pattern of use:
3784 (define (length items)
3785 (if (null? items)
3786 0
3787 (+ 1 (length (cdr items)))))
3788 (define odds (list 1 3 5 7))
3789
3790 \f(length odds)
3791 4
3792 The length procedure implements a simple recursive plan. The reduction step is:
3793 The length of any list is 1 plus the length of the cdr of the list.
3794 This is applied successively until we reach the base case:
3795 The length of the empty list is 0.
3796 We could also compute length in an iterative style:
3797 (define (length items)
3798 (define (length-iter a count)
3799 (if (null? a)
3800 count
3801 (length-iter (cdr a) (+ 1 count))))
3802 (length-iter items 0))
3803 Another conventional programming technique is to ‘‘cons up’’ an answer list while cdring down a
3804 list, as in the procedure append, which takes two lists as arguments and combines their elements to
3805 make a new list:
3806 (append squares odds)
3807 (1 4 9 16 25 1 3 5 7)
3808 (append odds squares)
3809 (1 3 5 7 1 4 9 16 25)
3810 Append is also implemented using a recursive plan. To append lists list1 and list2, do the
3811 following:
3812 If list1 is the empty list, then the result is just list2.
3813 Otherwise, append the cdr of list1 and list2, and cons the car of list1 onto the
3814 result:
3815 (define (append list1 list2)
3816 (if (null? list1)
3817 list2
3818 (cons (car list1) (append (cdr list1) list2))))
3819 Exercise 2.17. Define a procedure last-pair that returns the list that contains only the last
3820 element of a given (nonempty) list:
3821 (last-pair (list 23 72 149 34))
3822 (34)
3823 Exercise 2.18. Define a procedure reverse that takes a list as argument and returns a list of the
3824 same elements in reverse order:
3825
3826 \f(reverse (list 1 4 9 16 25))
3827 (25 16 9 4 1)
3828 Exercise 2.19. Consider the change-counting program of section 1.2.2. It would be nice to be able to
3829 easily change the currency used by the program, so that we could compute the number of ways to
3830 change a British pound, for example. As the program is written, the knowledge of the currency is
3831 distributed partly into the procedure first-denomination and partly into the procedure
3832 count-change (which knows that there are five kinds of U.S. coins). It would be nicer to be able to
3833 supply a list of coins to be used for making change.
3834 We want to rewrite the procedure cc so that its second argument is a list of the values of the coins to
3835 use rather than an integer specifying which coins to use. We could then have lists that defined each
3836 kind of currency:
3837 (define us-coins (list 50 25 10 5 1))
3838 (define uk-coins (list 100 50 20 10 5 2 1 0.5))
3839 We could then call cc as follows:
3840 (cc 100 us-coins)
3841 292
3842 To do this will require changing the program cc somewhat. It will still have the same form, but it will
3843 access its second argument differently, as follows:
3844 (define (cc amount coin-values)
3845 (cond ((= amount 0) 1)
3846 ((or (< amount 0) (no-more? coin-values)) 0)
3847 (else
3848 (+ (cc amount
3849 (except-first-denomination coin-values))
3850 (cc (- amount
3851 (first-denomination coin-values))
3852 coin-values)))))
3853 Define the procedures first-denomination, except-first-denomination, and
3854 no-more? in terms of primitive operations on list structures. Does the order of the list
3855 coin-values affect the answer produced by cc? Why or why not?
3856 Exercise 2.20. The procedures +, *, and list take arbitrary numbers of arguments. One way to
3857 define such procedures is to use define with dotted-tail notation. In a procedure definition, a
3858 parameter list that has a dot before the last parameter name indicates that, when the procedure is
3859 called, the initial parameters (if any) will have as values the initial arguments, as usual, but the final
3860 parameter’s value will be a list of any remaining arguments. For instance, given the definition
3861 (define (f x y . z) <body>)
3862 the procedure f can be called with two or more arguments. If we evaluate
3863 (f 1 2 3 4 5 6)
3864
3865 \fthen in the body of f, x will be 1, y will be 2, and z will be the list (3 4 5 6). Given the definition
3866 (define (g . w) <body>)
3867 the procedure g can be called with zero or more arguments. If we evaluate
3868 (g 1 2 3 4 5 6)
3869 then in the body of g, w will be the list (1 2 3 4 5 6). 11
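As a small illustration of dotted-tail notation (a toy procedure introduced here, not part of the exercise), here is a variadic procedure that simply counts its arguments using the length procedure from earlier in this section:

(define (count-args . args) (length args))
(count-args 10 20 30)
3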
3870 Use this notation to write a procedure same-parity that takes one or more integers and returns a
3871 list of all the arguments that have the same even-odd parity as the first argument. For example,
3872 (same-parity 1 2 3 4 5 6 7)
3873 (1 3 5 7)
3874 (same-parity 2 3 4 5 6 7)
3875 (2 4 6)
3876
3877 Mapping over lists
3878 One extremely useful operation is to apply some transformation to each element in a list and generate
3879 the list of results. For instance, the following procedure scales each number in a list by a given factor:
3880 (define (scale-list items factor)
3881 (if (null? items)
3882 nil
3883 (cons (* (car items) factor)
3884 (scale-list (cdr items) factor))))
3885 (scale-list (list 1 2 3 4 5) 10)
3886 (10 20 30 40 50)
3887 We can abstract this general idea and capture it as a common pattern expressed as a higher-order
3888 procedure, just as in section 1.3. The higher-order procedure here is called map. Map takes as
3889 arguments a procedure of one argument and a list, and returns a list of the results produced by
3890 applying the procedure to each element in the list: 12
3891 (define (map proc items)
3892 (if (null? items)
3893 nil
3894 (cons (proc (car items))
3895 (map proc (cdr items)))))
3896 (map abs (list -10 2.5 -11.6 17))
3897 (10 2.5 11.6 17)
3898 (map (lambda (x) (* x x))
3899 (list 1 2 3 4))
3900 (1 4 9 16)
3901 Now we can give a new definition of scale-list in terms of map:
3902 (define (scale-list items factor)
3903 (map (lambda (x) (* x factor))
3904 items))
3905
3906 \fMap is an important construct, not only because it captures a common pattern, but because it
3907 establishes a higher level of abstraction in dealing with lists. In the original definition of
3908 scale-list, the recursive structure of the program draws attention to the element-by-element
3909 processing of the list. Defining scale-list in terms of map suppresses that level of detail and
3910 emphasizes that scaling transforms a list of elements to a list of results. The difference between the
3911 two definitions is not that the computer is performing a different process (it isn’t) but that we think
3912 about the process differently. In effect, map helps establish an abstraction barrier that isolates the
3913 implementation of procedures that transform lists from the details of how the elements of the list are
3914 extracted and combined. Like the barriers shown in figure 2.1, this abstraction gives us the flexibility
3915 to change the low-level details of how sequences are implemented, while preserving the conceptual
3916 framework of operations that transform sequences to sequences. Section 2.2.3 expands on this use of
3917 sequences as a framework for organizing programs.
3918 Exercise 2.21. The procedure square-list takes a list of numbers as argument and returns a list
3919 of the squares of those numbers.
3920 (square-list (list 1 2 3 4))
3921 (1 4 9 16)
3922 Here are two different definitions of square-list. Complete both of them by filling in the missing
3923 expressions:
3924 (define (square-list items)
3925 (if (null? items)
3926 nil
3927 (cons <??> <??>)))
3928 (define (square-list items)
3929 (map <??> <??>))
3930 Exercise 2.22. Louis Reasoner tries to rewrite the first square-list procedure of exercise 2.21 so
3931 that it evolves an iterative process:
3932 (define (square-list items)
3933 (define (iter things answer)
3934 (if (null? things)
3935 answer
3936 (iter (cdr things)
3937 (cons (square (car things))
3938 answer))))
3939 (iter items nil))
3940 Unfortunately, defining square-list this way produces the answer list in the reverse order of the
3941 one desired. Why?
3942 Louis then tries to fix his bug by interchanging the arguments to cons:
(define (square-list items)
  (define (iter things answer)
    (if (null? things)
        answer
        (iter (cdr things)
              (cons answer
                    (square (car things))))))
  (iter items nil))
3952 This doesn’t work either. Explain.
3953 Exercise 2.23. The procedure for-each is similar to map. It takes as arguments a procedure and a
3954 list of elements. However, rather than forming a list of the results, for-each just applies the
3955 procedure to each of the elements in turn, from left to right. The values returned by applying the
3956 procedure to the elements are not used at all -- for-each is used with procedures that perform an
3957 action, such as printing. For example,
3958 (for-each (lambda (x) (newline) (display x))
3959 (list 57 321 88))
3960 57
3961 321
3962 88
3963 The value returned by the call to for-each (not illustrated above) can be something arbitrary, such
3964 as true. Give an implementation of for-each.
3965
3966 2.2.2 Hierarchical Structures
3967 The representation of sequences in terms of lists generalizes naturally to represent sequences whose
3968 elements may themselves be sequences. For example, we can regard the object ((1 2) 3 4)
3969 constructed by
3970 (cons (list 1 2) (list 3 4))
3971 as a list of three items, the first of which is itself a list, (1 2). Indeed, this is suggested by the form in
3972 which the result is printed by the interpreter. Figure 2.5 shows the representation of this structure in
3973 terms of pairs.
3974
Figure 2.5: Structure formed by (cons (list 1 2) (list 3 4)).
3977 Another way to think of sequences whose elements are sequences is as trees. The elements of the
3978 sequence are the branches of the tree, and elements that are themselves sequences are subtrees.
3979 Figure 2.6 shows the structure in figure 2.5 viewed as a tree.
3980
Figure 2.6: The list structure in figure 2.5 viewed as a tree.
3983 Recursion is a natural tool for dealing with tree structures, since we can often reduce operations on
3984 trees to operations on their branches, which reduce in turn to operations on the branches of the
3985 branches, and so on, until we reach the leaves of the tree. As an example, compare the length
3986 procedure of section 2.2.1 with the count-leaves procedure, which returns the total number of
3987 leaves of a tree:
3988 (define x (cons (list 1 2) (list 3 4)))
3989 (length x)
3990 3
3991 (count-leaves x)
3992 4
3993 (list x x)
3994 (((1 2) 3 4) ((1 2) 3 4))
3995 (length (list x x))
3996 2
3997 (count-leaves (list x x))
3998 8
3999 To implement count-leaves, recall the recursive plan for computing length:
4000 Length of a list x is 1 plus length of the cdr of x.
4001 Length of the empty list is 0.
4002 Count-leaves is similar. The value for the empty list is the same:
4003 Count-leaves of the empty list is 0.
4004 But in the reduction step, where we strip off the car of the list, we must take into account that the
4005 car may itself be a tree whose leaves we need to count. Thus, the appropriate reduction step is
4006 Count-leaves of a tree x is count-leaves of the car of x plus count-leaves of the
4007 cdr of x.
4008 Finally, by taking cars we reach actual leaves, so we need another base case:
4009 Count-leaves of a leaf is 1.
4010
4011 \fTo aid in writing recursive procedures on trees, Scheme provides the primitive predicate pair?,
4012 which tests whether its argument is a pair. Here is the complete procedure: 13
4013 (define (count-leaves x)
4014 (cond ((null? x) 0)
4015 ((not (pair? x)) 1)
4016 (else (+ (count-leaves (car x))
4017 (count-leaves (cdr x))))))
4018 Exercise 2.24. Suppose we evaluate the expression (list 1 (list 2 (list 3 4))). Give
4019 the result printed by the interpreter, the corresponding box-and-pointer structure, and the interpretation
4020 of this as a tree (as in figure 2.6).
4021 Exercise 2.25. Give combinations of cars and cdrs that will pick 7 from each of the following lists:
4022 (1 3 (5 7) 9)
4023 ((7))
4024 (1 (2 (3 (4 (5 (6 7))))))
4025 Exercise 2.26. Suppose we define x and y to be two lists:
4026 (define x (list 1 2 3))
4027 (define y (list 4 5 6))
4028 What result is printed by the interpreter in response to evaluating each of the following expressions:
4029 (append x y)
4030 (cons x y)
4031 (list x y)
4032 Exercise 2.27. Modify your reverse procedure of exercise 2.18 to produce a deep-reverse
4033 procedure that takes a list as argument and returns as its value the list with its elements reversed and
4034 with all sublists deep-reversed as well. For example,
4035 (define x (list (list 1 2) (list 3 4)))
4036 x
4037 ((1 2) (3 4))
4038 (reverse x)
4039 ((3 4) (1 2))
4040 (deep-reverse x)
4041 ((4 3) (2 1))
4042 Exercise 2.28. Write a procedure fringe that takes as argument a tree (represented as a list) and
4043 returns a list whose elements are all the leaves of the tree arranged in left-to-right order. For example,
4044 (define x (list (list 1 2) (list 3 4)))
4045 (fringe x)
4046 (1 2 3 4)
4047 (fringe (list x x))
4048 (1 2 3 4 1 2 3 4)
4049
4050 \fExercise 2.29. A binary mobile consists of two branches, a left branch and a right branch. Each
4051 branch is a rod of a certain length, from which hangs either a weight or another binary mobile. We can
4052 represent a binary mobile using compound data by constructing it from two branches (for example,
4053 using list):
4054 (define (make-mobile left right)
4055 (list left right))
4056 A branch is constructed from a length (which must be a number) together with a structure,
4057 which may be either a number (representing a simple weight) or another mobile:
4058 (define (make-branch length structure)
4059 (list length structure))
4060 a. Write the corresponding selectors left-branch and right-branch, which return the
4061 branches of a mobile, and branch-length and branch-structure, which return the
4062 components of a branch.
4063 b. Using your selectors, define a procedure total-weight that returns the total weight of a mobile.
4064 c. A mobile is said to be balanced if the torque applied by its top-left branch is equal to that applied
4065 by its top-right branch (that is, if the length of the left rod multiplied by the weight hanging from that
4066 rod is equal to the corresponding product for the right side) and if each of the submobiles hanging off
4067 its branches is balanced. Design a predicate that tests whether a binary mobile is balanced.
4068 d. Suppose we change the representation of mobiles so that the constructors are
(define (make-mobile left right)
  (cons left right))
(define (make-branch length structure)
  (cons length structure))
4078
4079 How much do you need to change your programs to convert to the new representation?
4080
4081 Mapping over trees
4082 Just as map is a powerful abstraction for dealing with sequences, map together with recursion is a
4083 powerful abstraction for dealing with trees. For instance, the scale-tree procedure, analogous to
4084 scale-list of section 2.2.1, takes as arguments a numeric factor and a tree whose leaves are
4085 numbers. It returns a tree of the same shape, where each number is multiplied by the factor. The
4086 recursive plan for scale-tree is similar to the one for count-leaves:
4087 (define (scale-tree tree factor)
4088 (cond ((null? tree) nil)
4089 ((not (pair? tree)) (* tree factor))
4090 (else (cons (scale-tree (car tree) factor)
4091 (scale-tree (cdr tree) factor)))))
4092 (scale-tree (list 1 (list 2 (list 3 4) 5) (list 6 7))
4093 10)
4094 (10 (20 (30 40) 50) (60 70))
4095
4096 \fAnother way to implement scale-tree is to regard the tree as a sequence of sub-trees and use map.
4097 We map over the sequence, scaling each sub-tree in turn, and return the list of results. In the base case,
4098 where the tree is a leaf, we simply multiply by the factor:
4099 (define (scale-tree tree factor)
4100 (map (lambda (sub-tree)
4101 (if (pair? sub-tree)
4102 (scale-tree sub-tree factor)
4103 (* sub-tree factor)))
4104 tree))
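This version should produce the same result as the first one; for instance (a sample interaction):

(scale-tree (list 1 (list 2 (list 3 4) 5) (list 6 7))
            10)
(10 (20 (30 40) 50) (60 70))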
4105 Many tree operations can be implemented by similar combinations of sequence operations and
4106 recursion.
4107 Exercise 2.30. Define a procedure square-tree analogous to the square-list procedure of
exercise 2.21. That is, square-tree should behave as follows:
4109 (square-tree
4110 (list 1
4111 (list 2 (list 3 4) 5)
4112 (list 6 7)))
4113 (1 (4 (9 16) 25) (36 49))
4114 Define square-tree both directly (i.e., without using any higher-order procedures) and also by
4115 using map and recursion.
4116 Exercise 2.31. Abstract your answer to exercise 2.30 to produce a procedure tree-map with the
4117 property that square-tree could be defined as
4118 (define (square-tree tree) (tree-map square tree))
4119 Exercise 2.32. We can represent a set as a list of distinct elements, and we can represent the set of all
4120 subsets of the set as a list of lists. For example, if the set is (1 2 3), then the set of all subsets is
4121 (() (3) (2) (2 3) (1) (1 3) (1 2) (1 2 3)). Complete the following definition of a
4122 procedure that generates the set of subsets of a set and give a clear explanation of why it works:
4123 (define (subsets s)
4124 (if (null? s)
4125 (list nil)
4126 (let ((rest (subsets (cdr s))))
4127 (append rest (map <??> rest)))))
4128
4129 2.2.3 Sequences as Conventional Interfaces
4130 In working with compound data, we’ve stressed how data abstraction permits us to design programs
4131 without becoming enmeshed in the details of data representations, and how abstraction preserves for
4132 us the flexibility to experiment with alternative representations. In this section, we introduce another
4133 powerful design principle for working with data structures -- the use of conventional interfaces.
4134 In section 1.3 we saw how program abstractions, implemented as higher-order procedures, can capture
4135 common patterns in programs that deal with numerical data. Our ability to formulate analogous
4136 operations for working with compound data depends crucially on the style in which we manipulate our
4137
4138 \fdata structures. Consider, for example, the following procedure, analogous to the count-leaves
4139 procedure of section 2.2.2, which takes a tree as argument and computes the sum of the squares of the
4140 leaves that are odd:
4141 (define (sum-odd-squares tree)
4142 (cond ((null? tree) 0)
4143 ((not (pair? tree))
4144 (if (odd? tree) (square tree) 0))
4145 (else (+ (sum-odd-squares (car tree))
4146 (sum-odd-squares (cdr tree))))))
4147 On the surface, this procedure is very different from the following one, which constructs a list of all
4148 the even Fibonacci numbers Fib(k), where k is less than or equal to a given integer n:
4149 (define (even-fibs n)
4150 (define (next k)
4151 (if (> k n)
4152 nil
4153 (let ((f (fib k)))
4154 (if (even? f)
4155 (cons f (next (+ k 1)))
4156 (next (+ k 1))))))
4157 (next 0))
4158 Despite the fact that these two procedures are structurally very different, a more abstract description of
4159 the two computations reveals a great deal of similarity. The first program
4160 enumerates the leaves of a tree;
4161 filters them, selecting the odd ones;
4162 squares each of the selected ones; and
4163 accumulates the results using +, starting with 0.
4164 The second program
4165 enumerates the integers from 0 to n;
4166 computes the Fibonacci number for each integer;
4167 filters them, selecting the even ones; and
4168 accumulates the results using cons, starting with the empty list.
4169 A signal-processing engineer would find it natural to conceptualize these processes in terms of signals
4170 flowing through a cascade of stages, each of which implements part of the program plan, as shown in
4171 figure 2.7. In sum-odd-squares, we begin with an enumerator, which generates a ‘‘signal’’
4172 consisting of the leaves of a given tree. This signal is passed through a filter, which eliminates all but
4173 the odd elements. The resulting signal is in turn passed through a map, which is a ‘‘transducer’’ that
4174 applies the square procedure to each element. The output of the map is then fed to an accumulator,
4175 which combines the elements using +, starting from an initial 0. The plan for even-fibs is
4176 analogous.
4177
Figure 2.7: The signal-flow plans for the procedures sum-odd-squares (top) and even-fibs
(bottom) reveal the commonality between the two programs.
4182 Unfortunately, the two procedure definitions above fail to exhibit this signal-flow structure. For
4183 instance, if we examine the sum-odd-squares procedure, we find that the enumeration is
4184 implemented partly by the null? and pair? tests and partly by the tree-recursive structure of the
4185 procedure. Similarly, the accumulation is found partly in the tests and partly in the addition used in the
4186 recursion. In general, there are no distinct parts of either procedure that correspond to the elements in
4187 the signal-flow description. Our two procedures decompose the computations in a different way,
4188 spreading the enumeration over the program and mingling it with the map, the filter, and the
4189 accumulation. If we could organize our programs to make the signal-flow structure manifest in the
4190 procedures we write, this would increase the conceptual clarity of the resulting code.
4191
4192 Sequence Operations
4193 The key to organizing programs so as to more clearly reflect the signal-flow structure is to concentrate
4194 on the ‘‘signals’’ that flow from one stage in the process to the next. If we represent these signals as
4195 lists, then we can use list operations to implement the processing at each of the stages. For instance,
4196 we can implement the mapping stages of the signal-flow diagrams using the map procedure from
4197 section 2.2.1:
4198 (map square (list 1 2 3 4 5))
4199 (1 4 9 16 25)
4200 Filtering a sequence to select only those elements that satisfy a given predicate is accomplished by
4201 (define (filter predicate sequence)
4202 (cond ((null? sequence) nil)
4203 ((predicate (car sequence))
4204 (cons (car sequence)
4205 (filter predicate (cdr sequence))))
4206 (else (filter predicate (cdr sequence)))))
4207 For example,
4208 (filter odd? (list 1 2 3 4 5))
4209 (1 3 5)
4210
4211 \fAccumulations can be implemented by
4212 (define (accumulate op initial sequence)
4213 (if (null? sequence)
4214 initial
4215 (op (car sequence)
4216 (accumulate op initial (cdr sequence)))))
4217 (accumulate + 0 (list 1 2 3 4 5))
4218 15
4219 (accumulate * 1 (list 1 2 3 4 5))
4220 120
4221 (accumulate cons nil (list 1 2 3 4 5))
4222 (1 2 3 4 5)
4223 All that remains to implement signal-flow diagrams is to enumerate the sequence of elements to be
4224 processed. For even-fibs, we need to generate the sequence of integers in a given range, which we
4225 can do as follows:
4226 (define (enumerate-interval low high)
4227 (if (> low high)
4228 nil
4229 (cons low (enumerate-interval (+ low 1) high))))
4230 (enumerate-interval 2 7)
4231 (2 3 4 5 6 7)
4232 To enumerate the leaves of a tree, we can use 14
4233 (define (enumerate-tree tree)
4234 (cond ((null? tree) nil)
4235 ((not (pair? tree)) (list tree))
4236 (else (append (enumerate-tree (car tree))
4237 (enumerate-tree (cdr tree))))))
4238 (enumerate-tree (list 1 (list 2 (list 3 4)) 5))
4239 (1 2 3 4 5)
4240 Now we can reformulate sum-odd-squares and even-fibs as in the signal-flow diagrams. For
4241 sum-odd-squares, we enumerate the sequence of leaves of the tree, filter this to keep only the odd
4242 numbers in the sequence, square each element, and sum the results:
4243 (define (sum-odd-squares tree)
4244 (accumulate +
4245 0
4246 (map square
4247 (filter odd?
4248 (enumerate-tree tree)))))
4249 For even-fibs, we enumerate the integers from 0 to n, generate the Fibonacci number for each of
4250 these integers, filter the resulting sequence to keep only the even elements, and accumulate the results
4251 into a list:
4252
4253 \f(define (even-fibs n)
4254 (accumulate cons
4255 nil
4256 (filter even?
4257 (map fib
4258 (enumerate-interval 0 n)))))
4259 The value of expressing programs as sequence operations is that this helps us make program designs
4260 that are modular, that is, designs that are constructed by combining relatively independent pieces. We
4261 can encourage modular design by providing a library of standard components together with a
4262 conventional interface for connecting the components in flexible ways.
4263 Modular construction is a powerful strategy for controlling complexity in engineering design. In real
4264 signal-processing applications, for example, designers regularly build systems by cascading elements
4265 selected from standardized families of filters and transducers. Similarly, sequence operations provide a
4266 library of standard program elements that we can mix and match. For instance, we can reuse pieces
4267 from the sum-odd-squares and even-fibs procedures in a program that constructs a list of the
4268 squares of the first n + 1 Fibonacci numbers:
4269 (define (list-fib-squares n)
4270 (accumulate cons
4271 nil
4272 (map square
4273 (map fib
4274 (enumerate-interval 0 n)))))
4275 (list-fib-squares 10)
4276 (0 1 1 4 9 25 64 169 441 1156 3025)
4277 We can rearrange the pieces and use them in computing the product of the odd integers in a sequence:
4278 (define (product-of-squares-of-odd-elements sequence)
4279 (accumulate *
4280 1
4281 (map square
4282 (filter odd? sequence))))
4283 (product-of-squares-of-odd-elements (list 1 2 3 4 5))
4284 225
4285 We can also formulate conventional data-processing applications in terms of sequence operations.
4286 Suppose we have a sequence of personnel records and we want to find the salary of the highest-paid
4287 programmer. Assume that we have a selector salary that returns the salary of a record, and a
4288 predicate programmer? that tests if a record is for a programmer. Then we can write
4289 (define (salary-of-highest-paid-programmer records)
4290 (accumulate max
4291 0
4292 (map salary
4293 (filter programmer? records))))
4294
4295 \fThese examples give just a hint of the vast range of operations that can be expressed as sequence
4296 operations. 15
4297 Sequences, implemented here as lists, serve as a conventional interface that permits us to combine
4298 processing modules. Additionally, when we uniformly represent structures as sequences, we have
4299 localized the data-structure dependencies in our programs to a small number of sequence operations.
4300 By changing these, we can experiment with alternative representations of sequences, while leaving the
4301 overall design of our programs intact. We will exploit this capability in section 3.5, when we
4302 generalize the sequence-processing paradigm to admit infinite sequences.
4303 Exercise 2.33. Fill in the missing expressions to complete the following definitions of some basic
4304 list-manipulation operations as accumulations:
4305 (define (map p sequence)
4306 (accumulate (lambda (x y) <??>) nil sequence))
4307 (define (append seq1 seq2)
4308 (accumulate cons <??> <??>))
4309 (define (length sequence)
4310 (accumulate <??> 0 sequence))
4311 Exercise 2.34. Evaluating a polynomial in x at a given value of x can be formulated as an
accumulation. We evaluate the polynomial

    a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0

using a well-known algorithm called Horner’s rule, which structures the computation as

    (... (a_n x + a_(n-1)) x + ... + a_1) x + a_0

In other words, we start with a_n, multiply by x, add a_(n-1), multiply by x, and so on, until we
reach a_0. 16 Fill in the following template to produce a procedure that evaluates a polynomial using
Horner’s rule. Assume that the coefficients of the polynomial are arranged in a sequence, from a_0
through a_n.
4320 (define (horner-eval x coefficient-sequence)
4321 (accumulate (lambda (this-coeff higher-terms) <??>)
4322 0
4323 coefficient-sequence))
For example, to compute 1 + 3x + 5x^3 + x^5 at x = 2 you would evaluate
4325 (horner-eval 2 (list 1 3 0 5 0 1))
4326 Exercise 2.35. Redefine count-leaves from section 2.2.2 as an accumulation:
4327 (define (count-leaves t)
4328 (accumulate <??> <??> (map <??> <??>)))
4329 Exercise 2.36. The procedure accumulate-n is similar to accumulate except that it takes as its
4330 third argument a sequence of sequences, which are all assumed to have the same number of elements.
4331 It applies the designated accumulation procedure to combine all the first elements of the sequences, all
4332 the second elements of the sequences, and so on, and returns a sequence of the results. For instance, if
4333
4334 \fs is a sequence containing four sequences, ((1 2 3) (4 5 6) (7 8 9) (10 11 12)),
4335 then the value of (accumulate-n + 0 s) should be the sequence (22 26 30). Fill in the
4336 missing expressions in the following definition of accumulate-n:
4337 (define (accumulate-n op init seqs)
4338 (if (null? (car seqs))
4339 nil
4340 (cons (accumulate op init <??>)
4341 (accumulate-n op init <??>))))
Exercise 2.37. Suppose we represent vectors v = (v_i) as sequences of numbers, and matrices m =
(m_ij) as sequences of vectors (the rows of the matrix). For example, the matrix

    | 1 2 3 4 |
    | 4 5 6 6 |
    | 6 7 8 9 |

is represented as the sequence ((1 2 3 4) (4 5 6 6) (6 7 8 9)). With this representation,
we can use sequence operations to concisely express the basic matrix and vector operations. These
operations (which are described in any book on matrix algebra) are the following:

    (dot-product v w)       returns the sum Σ_i v_i w_i
    (matrix-*-vector m v)   returns the vector t, where t_i = Σ_j m_ij v_j
    (matrix-*-matrix m n)   returns the matrix p, where p_ij = Σ_k m_ik n_kj
    (transpose m)           returns the matrix n, where n_ij = m_ji

4349 We can define the dot product as 17
4350 (define (dot-product v w)
4351 (accumulate + 0 (map * v w)))
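For example, assuming a map that accepts two lists and applies the procedure to corresponding pairs of elements (as standard Scheme’s map does), we would have:

(dot-product (list 1 2 3) (list 4 5 6))
32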
4352 Fill in the missing expressions in the following procedures for computing the other matrix operations.
4353 (The procedure accumulate-n is defined in exercise 2.36.)
4354 (define (matrix-*-vector m v)
4355 (map <??> m))
4356 (define (transpose mat)
4357 (accumulate-n <??> <??> mat))
4358 (define (matrix-*-matrix m n)
4359 (let ((cols (transpose n)))
4360 (map <??> m)))
4361 Exercise 2.38. The accumulate procedure is also known as fold-right, because it combines
4362 the first element of the sequence with the result of combining all the elements to the right. There is
4363 also a fold-left, which is similar to fold-right, except that it combines elements working in
4364 the opposite direction:
4365
4366 \f(define (fold-left op initial sequence)
4367 (define (iter result rest)
4368 (if (null? rest)
4369 result
4370 (iter (op result (car rest))
4371 (cdr rest))))
4372 (iter initial sequence))
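For instance (sample interactions; note that with an operator like -, for which grouping matters, the leftward direction of combination shows through):

(fold-left + 0 (list 1 2 3 4 5))
15
(fold-left - 0 (list 1 2 3))
-6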
4373 What are the values of
4374 (fold-right / 1 (list 1 2 3))
4375 (fold-left / 1 (list 1 2 3))
4376 (fold-right list nil (list 1 2 3))
4377 (fold-left list nil (list 1 2 3))
4378 Give a property that op should satisfy to guarantee that fold-right and fold-left will produce
4379 the same values for any sequence.
4380 Exercise 2.39. Complete the following definitions of reverse (exercise 2.18) in terms of
4381 fold-right and fold-left from exercise 2.38:
4382 (define (reverse sequence)
4383 (fold-right (lambda (x y) <??>) nil sequence))
4384 (define (reverse sequence)
4385 (fold-left (lambda (x y) <??>) nil sequence))
4386
4387 Nested Mappings
4388 We can extend the sequence paradigm to include many computations that are commonly expressed
4389 using nested loops. 18 Consider this problem: Given a positive integer n, find all ordered pairs of
distinct positive integers i and j, where 1 ≤ j < i ≤ n, such that i + j is prime. For example, if n is 6, then
the pairs are the following:

    i      2   3   4   4   5   6   6
    j      1   2   1   3   2   1   5
    i + j  3   5   5   7   7   7   11

4393 A natural way to organize this computation is to generate the sequence of all ordered pairs of positive
4394 integers less than or equal to n, filter to select those pairs whose sum is prime, and then, for each pair
(i, j) that passes through the filter, produce the triple (i, j, i + j).
Here is a way to generate the sequence of pairs: For each integer i ≤ n, enumerate the integers j < i, and
4397 for each such i and j generate the pair (i,j). In terms of sequence operations, we map along the
4398 sequence (enumerate-interval 1 n). For each i in this sequence, we map along the sequence
4399 (enumerate-interval 1 (- i 1)). For each j in this latter sequence, we generate the pair
4400 (list i j). This gives us a sequence of pairs for each i. Combining all the sequences for all the i
4401 (by accumulating with append) produces the required sequence of pairs: 19
(accumulate append
            nil
            (map (lambda (i)
                   (map (lambda (j) (list i j))
                        (enumerate-interval 1 (- i 1))))
                 (enumerate-interval 1 n)))
4409 The combination of mapping and accumulating with append is so common in this sort of program
4410 that we will isolate it as a separate procedure:
4411 (define (flatmap proc seq)
4412 (accumulate append nil (map proc seq)))
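Using flatmap, the pair-generating fragment above can be checked for a small case (a sample interaction):

(flatmap (lambda (i)
           (map (lambda (j) (list i j))
                (enumerate-interval 1 (- i 1))))
         (enumerate-interval 1 3))
((2 1) (3 1) (3 2))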
4413 Now filter this sequence of pairs to find those whose sum is prime. The filter predicate is called for
4414 each element of the sequence; its argument is a pair and it must extract the integers from the pair.
4415 Thus, the predicate to apply to each element in the sequence is
4416 (define (prime-sum? pair)
4417 (prime? (+ (car pair) (cadr pair))))
4418 Finally, generate the sequence of results by mapping over the filtered pairs using the following
4419 procedure, which constructs a triple consisting of the two elements of the pair along with their sum:
4420 (define (make-pair-sum pair)
4421 (list (car pair) (cadr pair) (+ (car pair) (cadr pair))))
4422 Combining all these steps yields the complete procedure:
4423 (define (prime-sum-pairs n)
4424 (map make-pair-sum
4425 (filter prime-sum?
4426 (flatmap
4427 (lambda (i)
4428 (map (lambda (j) (list i j))
4429 (enumerate-interval 1 (- i 1))))
4430 (enumerate-interval 1 n)))))
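Assuming a prime? predicate such as the one developed in section 1.2, a sample run for n = 6 reproduces the table given above:

(prime-sum-pairs 6)
((2 1 3) (3 2 5) (4 1 5) (4 3 7) (5 2 7) (6 1 7) (6 5 11))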
4431 Nested mappings are also useful for sequences other than those that enumerate intervals. Suppose we
4432 wish to generate all the permutations of a set S; that is, all the ways of ordering the items in the set. For
instance, the permutations of {1,2,3} are {1,2,3}, {1,3,2}, {2,1,3}, {2,3,1}, {3,1,2}, and {3,2,1}.
4434 Here is a plan for generating the permutations of S: For each item x in S, recursively generate the
4435 sequence of permutations of S - x, 20 and adjoin x to the front of each one. This yields, for each x in S,
4436 the sequence of permutations of S that begin with x. Combining these sequences for all x gives all the
4437 permutations of S: 21
(define (permutations s)
  (if (null? s)                         ; empty set?
      (list nil)                        ; sequence containing empty set
      (flatmap (lambda (x)
                 (map (lambda (p) (cons x p))
                      (permutations (remove x s))))
               s)))
4447 Notice how this strategy reduces the problem of generating permutations of S to the problem of
4448 generating the permutations of sets with fewer elements than S. In the terminal case, we work our way
4449 down to the empty list, which represents a set of no elements. For this, we generate (list nil),
4450
4451 \fwhich is a sequence with one item, namely the set with no elements. The remove procedure used in
4452 permutations returns all the items in a given sequence except for a given item. This can be
4453 expressed as a simple filter:
4454 (define (remove item sequence)
4455 (filter (lambda (x) (not (= x item)))
4456 sequence))
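For instance (a sample interaction):

(permutations (list 1 2 3))
((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))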
4457 Exercise 2.40. Define a procedure unique-pairs that, given an integer n, generates the sequence
of pairs (i,j) with 1 ≤ j < i ≤ n. Use unique-pairs to simplify the definition of prime-sum-pairs
4459 given above.
4460 Exercise 2.41. Write a procedure to find all ordered triples of distinct positive integers i, j, and k less
4461 than or equal to a given integer n that sum to a given integer s.
4462 Exercise 2.42.
4463
Figure 2.8: A solution to the eight-queens puzzle.
4466 The ‘‘eight-queens puzzle’’ asks how to place eight queens on a chessboard so that no queen is in
4467 check from any other (i.e., no two queens are in the same row, column, or diagonal). One possible
4468 solution is shown in figure 2.8. One way to solve the puzzle is to work across the board, placing a
4469 queen in each column. Once we have placed k - 1 queens, we must place the kth queen in a position
4470 where it does not check any of the queens already on the board. We can formulate this approach
4471 recursively: Assume that we have already generated the sequence of all possible ways to place k - 1
4472 queens in the first k - 1 columns of the board. For each of these ways, generate an extended set of
4473 positions by placing a queen in each row of the kth column. Now filter these, keeping only the
4474 positions for which the queen in the kth column is safe with respect to the other queens. This produces
4475 the sequence of all ways to place k queens in the first k columns. By continuing this process, we will
4476 produce not only one solution, but all solutions to the puzzle.
4477
We implement this solution as a procedure queens, which returns a sequence of all solutions to the
problem of placing n queens on an n × n chessboard. Queens has an internal procedure queen-cols
4480 that returns the sequence of all ways to place queens in the first k columns of the board.
(define (queens board-size)
  (define (queen-cols k)
    (if (= k 0)
        (list empty-board)
        (filter
         (lambda (positions) (safe? k positions))
         (flatmap
          (lambda (rest-of-queens)
            (map (lambda (new-row)
                   (adjoin-position new-row k rest-of-queens))
                 (enumerate-interval 1 board-size)))
          (queen-cols (- k 1))))))
  (queen-cols board-size))
4494 In this procedure rest-of-queens is a way to place k - 1 queens in the first k - 1 columns, and
4495 new-row is a proposed row in which to place the queen for the kth column. Complete the program by
4496 implementing the representation for sets of board positions, including the procedure
4497 adjoin-position, which adjoins a new row-column position to a set of positions, and
4498 empty-board, which represents an empty set of positions. You must also write the procedure
safe?, which determines for a set of positions whether the queen in the kth column is safe with
4500 respect to the others. (Note that we need only check whether the new queen is safe -- the other queens
4501 are already guaranteed safe with respect to each other.)
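For concreteness, here is one possible representation, offered only as a sketch (the exercise admits many others): each position is a two-element list (row column), and a board is a list of positions with the most recently placed queen at the front.

(define empty-board '())

(define (adjoin-position new-row k rest-of-queens)
  (cons (list new-row k) rest-of-queens))

(define (safe? k positions)
  ;; The queen just placed in column k is first in the list; the
  ;; columns of the older queens are all distinct by construction.
  (let ((new (car positions)))
    (define (checks? q)
      (or (= (car q) (car new))                ; same row
          (= (abs (- (car q) (car new)))       ; same diagonal
             (abs (- (cadr q) (cadr new))))))
    (define (all-clear? rest)
      (cond ((null? rest) true)
            ((checks? (car rest)) false)
            (else (all-clear? (cdr rest)))))
    (all-clear? (cdr positions))))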
4502 Exercise 2.43. Louis Reasoner is having a terrible time doing exercise 2.42. His queens procedure
4503 seems to work, but it runs extremely slowly. (Louis never does manage to wait long enough for it to
solve even the 6 × 6 case.) When Louis asks Eva Lu Ator for help, she points out that he has
4505 interchanged the order of the nested mappings in the flatmap, writing it as
(flatmap
 (lambda (new-row)
   (map (lambda (rest-of-queens)
          (adjoin-position new-row k rest-of-queens))
        (queen-cols (- k 1))))
 (enumerate-interval 1 board-size))
4512 Explain why this interchange makes the program run slowly. Estimate how long it will take Louis’s
4513 program to solve the eight-queens puzzle, assuming that the program in exercise 2.42 solves the puzzle
4514 in time T.
4515
4516 2.2.4 Example: A Picture Language
4517 This section presents a simple language for drawing pictures that illustrates the power of data
4518 abstraction and closure, and also exploits higher-order procedures in an essential way. The language is
4519 designed to make it easy to experiment with patterns such as the ones in figure 2.9, which are
4520 composed of repeated elements that are shifted and scaled. 22 In this language, the data objects being
4521 combined are represented as procedures rather than as list structure. Just as cons, which satisfies the
4522 closure property, allowed us to easily build arbitrarily complicated list structure, the operations in this
language, which also satisfy the closure property, allow us to easily build arbitrarily complicated
4525 patterns.
4526
Figure 2.9: Designs generated with the picture language.
4529
4530 The picture language
4531 When we began our study of programming in section 1.1, we emphasized the importance of describing
4532 a language by focusing on the language’s primitives, its means of combination, and its means of
4533 abstraction. We’ll follow that framework here.
4534 Part of the elegance of this picture language is that there is only one kind of element, called a painter.
4535 A painter draws an image that is shifted and scaled to fit within a designated parallelogram-shaped
4536 frame. For example, there’s a primitive painter we’ll call wave that makes a crude line drawing, as
4537 shown in figure 2.10. The actual shape of the drawing depends on the frame -- all four images in
4538 figure 2.10 are produced by the same wave painter, but with respect to four different frames. Painters
4539 can be more elaborate than this: The primitive painter called rogers paints a picture of MIT’s
4540 founder, William Barton Rogers, as shown in figure 2.11. 23 The four images in figure 2.11 are drawn
4541 with respect to the same four frames as the wave images in figure 2.10.
4542 To combine images, we use various operations that construct new painters from given painters. For
4543 example, the beside operation takes two painters and produces a new, compound painter that draws
4544 the first painter’s image in the left half of the frame and the second painter’s image in the right half of
4545 the frame. Similarly, below takes two painters and produces a compound painter that draws the first
4546 painter’s image below the second painter’s image. Some operations transform a single painter to
4547 produce a new painter. For example, flip-vert takes a painter and produces a painter that draws its
4548 image upside-down, and flip-horiz produces a painter that draws the original painter’s image
4549 left-to-right reversed.
4550
Figure 2.10: Images produced by the wave painter, with respect to four different frames. The frames, shown with dotted lines, are not part of the images.
4555
Figure 2.11: Images of William Barton Rogers, founder and first president of MIT, painted with respect to the same four frames as in figure 2.10 (original image reprinted with the permission of the MIT Museum).
4562 Figure 2.12 shows the drawing of a painter called wave4 that is built up in two stages starting from
4563 wave:
(define wave2 (beside wave (flip-vert wave)))
(define wave4 (below wave2 wave2))

Figure 2.12: Creating a complex figure, starting from the wave painter of figure 2.10.
4575 In building up a complex image in this manner we are exploiting the fact that painters are closed under
4576 the language’s means of combination. The beside or below of two painters is itself a painter;
4577 therefore, we can use it as an element in making more complex painters. As with building up list
4578 structure using cons, the closure of our data under the means of combination is crucial to the ability
4579 to create complex structures while using only a few operations.
4580 Once we can combine painters, we would like to be able to abstract typical patterns of combining
4581 painters. We will implement the painter operations as Scheme procedures. This means that we don’t
4582 need a special abstraction mechanism in the picture language: Since the means of combination are
4583 ordinary Scheme procedures, we automatically have the capability to do anything with painter
4584 operations that we can do with procedures. For example, we can abstract the pattern in wave4 as
(define (flipped-pairs painter)
  (let ((painter2 (beside painter (flip-vert painter))))
    (below painter2 painter2)))
4588 and define wave4 as an instance of this pattern:
4589 (define wave4 (flipped-pairs wave))
4590 We can also define recursive operations. Here’s one that makes painters split and branch towards the
4591 right as shown in figures 2.13 and 2.14:
(define (right-split painter n)
  (if (= n 0)
      painter
      (let ((smaller (right-split painter (- n 1))))
        (beside painter (below smaller smaller)))))
4597
Figure 2.13: Recursive plans for right-split and corner-split.
4604 We can produce balanced patterns by branching upwards as well as towards the right (see
4605 exercise 2.44 and figures 2.13 and 2.14):
(define (corner-split painter n)
  (if (= n 0)
      painter
      (let ((up (up-split painter (- n 1)))
            (right (right-split painter (- n 1))))
        (let ((top-left (beside up up))
              (bottom-right (below right right))
              (corner (corner-split painter (- n 1))))
          (beside (below painter top-left)
                  (below bottom-right corner))))))
4616
Figure 2.14: The recursive operations right-split and corner-split applied to the painters wave and rogers, as (right-split wave 4), (corner-split wave 4), (right-split rogers 4), and (corner-split rogers 4). Combining four corner-split figures produces symmetric square-limit designs as shown in figure 2.9.
4631 By placing four copies of a corner-split appropriately, we obtain a pattern called
4632 square-limit, whose application to wave and rogers is shown in figure 2.9:
(define (square-limit painter n)
  (let ((quarter (corner-split painter n)))
    (let ((half (beside (flip-horiz quarter) quarter)))
      (below (flip-vert half) half))))
4637 Exercise 2.44. Define the procedure up-split used by corner-split. It is similar to
4638 right-split, except that it switches the roles of below and beside.
4639
Higher-order operations
4641 In addition to abstracting patterns of combining painters, we can work at a higher level, abstracting
4642 patterns of combining painter operations. That is, we can view the painter operations as elements to
4643 manipulate and can write means of combination for these elements -- procedures that take painter
4644 operations as arguments and create new painter operations.
4645 For example, flipped-pairs and square-limit each arrange four copies of a painter’s image
4646 in a square pattern; they differ only in how they orient the copies. One way to abstract this pattern of
4647 painter combination is with the following procedure, which takes four one-argument painter operations
4648 and produces a painter operation that transforms a given painter with those four operations and
4649 arranges the results in a square. Tl, tr, bl, and br are the transformations to apply to the top left
4650 copy, the top right copy, the bottom left copy, and the bottom right copy, respectively.
(define (square-of-four tl tr bl br)
  (lambda (painter)
    (let ((top (beside (tl painter) (tr painter)))
          (bottom (beside (bl painter) (br painter))))
      (below bottom top))))
4656 Then flipped-pairs can be defined in terms of square-of-four as follows: 24
(define (flipped-pairs painter)
  (let ((combine4 (square-of-four identity flip-vert
                                  identity flip-vert)))
    (combine4 painter)))
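This definition assumes an identity operation on painters; if it is not already provided, it is simply the procedure that returns its argument unchanged:

(define (identity x) x)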
4661 and square-limit can be expressed as 25
(define (square-limit painter n)
  (let ((combine4 (square-of-four flip-horiz identity
                                  rotate180 flip-vert)))
    (combine4 (corner-split painter n))))
4666 Exercise 2.45. Right-split and up-split can be expressed as instances of a general splitting
4667 operation. Define a procedure split with the property that evaluating
4668 (define right-split (split beside below))
4669 (define up-split (split below beside))
4670 produces procedures right-split and up-split with the same behaviors as the ones already
4671 defined.
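One way to approach this exercise (a sketch, not the only solution) is to have split return a recursive procedure that closes over the two combiners:

(define (split combine-main combine-smaller)
  (define (splitter painter n)
    (if (= n 0)
        painter
        (let ((smaller (splitter painter (- n 1))))
          (combine-main painter
                        (combine-smaller smaller smaller)))))
  splitter)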
4672
4673 Frames
4674 Before we can show how to implement painters and their means of combination, we must first
4675 consider frames. A frame can be described by three vectors -- an origin vector and two edge vectors.
4676 The origin vector specifies the offset of the frame’s origin from some absolute origin in the plane, and
4677 the edge vectors specify the offsets of the frame’s corners from its origin. If the edges are
4678 perpendicular, the frame will be rectangular. Otherwise the frame will be a more general
4679 parallelogram.
4680
Figure 2.15 shows a frame and its associated vectors. In accordance with data abstraction, we need not
4682 be specific yet about how frames are represented, other than to say that there is a constructor
4683 make-frame, which takes three vectors and produces a frame, and three corresponding selectors
4684 origin-frame, edge1-frame, and edge2-frame (see exercise 2.47).
4685
Figure 2.15: A frame is described by three vectors -- an origin and two edges.
We will use coordinates in the unit square (0 ≤ x, y ≤ 1) to specify images. With each frame, we
associate a frame coordinate map, which will be used to shift and scale images to fit the frame. The
map transforms the unit square into the frame by mapping the vector v = (x, y) to the vector sum

Origin(Frame) + x · Edge1(Frame) + y · Edge2(Frame)
4692 For example, (0,0) is mapped to the origin of the frame, (1,1) to the vertex diagonally opposite the
4693 origin, and (0.5,0.5) to the center of the frame. We can create a frame’s coordinate map with the
4694 following procedure: 26
(define (frame-coord-map frame)
  (lambda (v)
    (add-vect
     (origin-frame frame)
     (add-vect (scale-vect (xcor-vect v)
                           (edge1-frame frame))
               (scale-vect (ycor-vect v)
                           (edge2-frame frame))))))
4703 Observe that applying frame-coord-map to a frame returns a procedure that, given a vector,
4704 returns a vector. If the argument vector is in the unit square, the result vector will be in the frame. For
4705 example,
4706 ((frame-coord-map a-frame) (make-vect 0 0))
returns the same vector as
4709 (origin-frame a-frame)
4710 Exercise 2.46. A two-dimensional vector v running from the origin to a point can be represented as a
4711 pair consisting of an x-coordinate and a y-coordinate. Implement a data abstraction for vectors by
4712 giving a constructor make-vect and corresponding selectors xcor-vect and ycor-vect. In
4713 terms of your selectors and constructor, implement procedures add-vect, sub-vect, and
4714 scale-vect that perform the operations vector addition, vector subtraction, and multiplying a
4715 vector by a scalar:
(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)
(x1, y1) - (x2, y2) = (x1 - x2, y1 - y2)
s · (x, y) = (sx, sy)
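One straightforward representation, shown here only as a sketch (pairs are one choice among many):

(define (make-vect x y) (cons x y))
(define (xcor-vect v) (car v))
(define (ycor-vect v) (cdr v))

(define (add-vect v1 v2)
  (make-vect (+ (xcor-vect v1) (xcor-vect v2))
             (+ (ycor-vect v1) (ycor-vect v2))))

(define (sub-vect v1 v2)
  (make-vect (- (xcor-vect v1) (xcor-vect v2))
             (- (ycor-vect v1) (ycor-vect v2))))

(define (scale-vect s v)          ; scalar first, matching frame-coord-map
  (make-vect (* s (xcor-vect v))
             (* s (ycor-vect v))))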
4717 Exercise 2.47. Here are two possible constructors for frames:
(define (make-frame origin edge1 edge2)
  (list origin edge1 edge2))

(define (make-frame origin edge1 edge2)
  (cons origin (cons edge1 edge2)))
4728 For each constructor supply the appropriate selectors to produce an implementation for frames.
4729
4730 Painters
4731 A painter is represented as a procedure that, given a frame as argument, draws a particular image
4732 shifted and scaled to fit the frame. That is to say, if p is a painter and f is a frame, then we produce p’s
4733 image in f by calling p with f as argument.
4734 The details of how primitive painters are implemented depend on the particular characteristics of the
4735 graphics system and the type of image to be drawn. For instance, suppose we have a procedure
4736 draw-line that draws a line on the screen between two specified points. Then we can create
4737 painters for line drawings, such as the wave painter in figure 2.10, from lists of line segments as
4738 follows: 27
(define (segments->painter segment-list)
  (lambda (frame)
    (for-each
     (lambda (segment)
       (draw-line
        ((frame-coord-map frame) (start-segment segment))
        ((frame-coord-map frame) (end-segment segment))))
     segment-list)))
4747 The segments are given using coordinates with respect to the unit square. For each segment in the list,
4748 the painter transforms the segment endpoints with the frame coordinate map and draws a line between
4749 the transformed points.
4750
Representing painters as procedures erects a powerful abstraction barrier in the picture language. We
4752 can create and intermix all sorts of primitive painters, based on a variety of graphics capabilities. The
4753 details of their implementation do not matter. Any procedure can serve as a painter, provided that it
4754 takes a frame as argument and draws something scaled to fit the frame. 28
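As a tiny (hypothetical) illustration of this point: even a procedure that accepts a frame and draws nothing at all qualifies as a painter, and it can be freely combined with any other painter:

(define (blank-painter frame) 'done)   ; draws nothing, but is a valid painter

(define wave-beside-blank (beside wave blank-painter))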
4755 Exercise 2.48. A directed line segment in the plane can be represented as a pair of vectors -- the
4756 vector running from the origin to the start-point of the segment, and the vector running from the origin
4757 to the end-point of the segment. Use your vector representation from exercise 2.46 to define a
4758 representation for segments with a constructor make-segment and selectors start-segment and
4759 end-segment.
4760 Exercise 2.49. Use segments->painter to define the following primitive painters:
4761 a. The painter that draws the outline of the designated frame.
4762 b. The painter that draws an ‘‘X’’ by connecting opposite corners of the frame.
4763 c. The painter that draws a diamond shape by connecting the midpoints of the sides of the frame.
4764 d. The wave painter.
4765
4766 Transforming and combining painters
4767 An operation on painters (such as flip-vert or beside) works by creating a painter that invokes
4768 the original painters with respect to frames derived from the argument frame. Thus, for example,
4769 flip-vert doesn’t have to know how a painter works in order to flip it -- it just has to know how to
4770 turn a frame upside down: The flipped painter just uses the original painter, but in the inverted frame.
4771 Painter operations are based on the procedure transform-painter, which takes as arguments a
4772 painter and information on how to transform a frame and produces a new painter. The transformed
4773 painter, when called on a frame, transforms the frame and calls the original painter on the transformed
4774 frame. The arguments to transform-painter are points (represented as vectors) that specify the
4775 corners of the new frame: When mapped into the frame, the first point specifies the new frame’s origin
4776 and the other two specify the ends of its edge vectors. Thus, arguments within the unit square specify a
4777 frame contained within the original frame.
(define (transform-painter painter origin corner1 corner2)
  (lambda (frame)
    (let ((m (frame-coord-map frame)))
      (let ((new-origin (m origin)))
        (painter
         (make-frame new-origin
                     (sub-vect (m corner1) new-origin)
                     (sub-vect (m corner2) new-origin)))))))
4786 Here’s how to flip painter images vertically:
(define (flip-vert painter)
  (transform-painter painter
                     (make-vect 0.0 1.0)   ; new origin
                     (make-vect 1.0 1.0)   ; new end of edge1
                     (make-vect 0.0 0.0))) ; new end of edge2
4794
Using transform-painter, we can easily define new transformations. For example, we can
define a painter that shrinks its image to the upper-right quarter of the frame it is given:

(define (shrink-to-upper-right painter)
  (transform-painter painter
                     (make-vect 0.5 0.5)
                     (make-vect 1.0 0.5)
                     (make-vect 0.5 1.0)))
4802 Other transformations rotate images counterclockwise by 90 degrees 29
(define (rotate90 painter)
  (transform-painter painter
                     (make-vect 1.0 0.0)
                     (make-vect 1.0 1.0)
                     (make-vect 0.0 0.0)))
4808 or squash images towards the center of the frame: 30
(define (squash-inwards painter)
  (transform-painter painter
                     (make-vect 0.0 0.0)
                     (make-vect 0.65 0.35)
                     (make-vect 0.35 0.65)))
4814 Frame transformation is also the key to defining means of combining two or more painters. The
4815 beside procedure, for example, takes two painters, transforms them to paint in the left and right
4816 halves of an argument frame respectively, and produces a new, compound painter. When the
4817 compound painter is given a frame, it calls the first transformed painter to paint in the left half of the
4818 frame and calls the second transformed painter to paint in the right half of the frame:
(define (beside painter1 painter2)
  (let ((split-point (make-vect 0.5 0.0)))
    (let ((paint-left
           (transform-painter painter1
                              (make-vect 0.0 0.0)
                              split-point
                              (make-vect 0.0 1.0)))
          (paint-right
           (transform-painter painter2
                              split-point
                              (make-vect 1.0 0.0)
                              (make-vect 0.5 1.0))))
      (lambda (frame)
        (paint-left frame)
        (paint-right frame)))))
4840
4841 Observe how the painter data abstraction, and in particular the representation of painters as procedures,
4842 makes beside easy to implement. The beside procedure need not know anything about the details
4843 of the component painters other than that each painter will draw something in its designated frame.
4844
Exercise 2.50. Define the transformation flip-horiz, which flips painters horizontally, and
4846 transformations that rotate painters counterclockwise by 180 degrees and 270 degrees.
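As a starting point (a sketch of one possibility, mirroring the flip-vert definition above):

(define (flip-horiz painter)
  (transform-painter painter
                     (make-vect 1.0 0.0)   ; new origin
                     (make-vect 0.0 0.0)   ; new end of edge1
                     (make-vect 1.0 1.0))) ; new end of edge2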
4847 Exercise 2.51. Define the below operation for painters. Below takes two painters as arguments. The
4848 resulting painter, given a frame, draws with the first painter in the bottom of the frame and with the
4849 second painter in the top. Define below in two different ways -- first by writing a procedure that is
4850 analogous to the beside procedure given above, and again in terms of beside and suitable rotation
4851 operations (from exercise 2.50).
4852
4853 Levels of language for robust design
4854 The picture language exercises some of the critical ideas we’ve introduced about abstraction with
4855 procedures and data. The fundamental data abstractions, painters, are implemented using procedural
4856 representations, which enables the language to handle different basic drawing capabilities in a uniform
4857 way. The means of combination satisfy the closure property, which permits us to easily build up
4858 complex designs. Finally, all the tools for abstracting procedures are available to us for abstracting
4859 means of combination for painters.
4860 We have also obtained a glimpse of another crucial idea about languages and program design. This is
4861 the approach of stratified design, the notion that a complex system should be structured as a sequence
4862 of levels that are described using a sequence of languages. Each level is constructed by combining
4863 parts that are regarded as primitive at that level, and the parts constructed at each level are used as
4864 primitives at the next level. The language used at each level of a stratified design has primitives,
4865 means of combination, and means of abstraction appropriate to that level of detail.
4866 Stratified design pervades the engineering of complex systems. For example, in computer engineering,
4867 resistors and transistors are combined (and described using a language of analog circuits) to produce
4868 parts such as and-gates and or-gates, which form the primitives of a language for digital-circuit
4869 design. 31 These parts are combined to build processors, bus structures, and memory systems, which
4870 are in turn combined to form computers, using languages appropriate to computer architecture.
4871 Computers are combined to form distributed systems, using languages appropriate for describing
4872 network interconnections, and so on.
4873 As a tiny example of stratification, our picture language uses primitive elements (primitive painters)
4874 that are created using a language that specifies points and lines to provide the lists of line segments for
4875 segments->painter, or the shading details for a painter like rogers. The bulk of our
4876 description of the picture language focused on combining these primitives, using geometric combiners
4877 such as beside and below. We also worked at a higher level, regarding beside and below as
4878 primitives to be manipulated in a language whose operations, such as square-of-four, capture
4879 common patterns of combining geometric combiners.
4880 Stratified design helps make programs robust, that is, it makes it likely that small changes in a
4881 specification will require correspondingly small changes in the program. For instance, suppose we
4882 wanted to change the image based on wave shown in figure 2.9. We could work at the lowest level to
4883 change the detailed appearance of the wave element; we could work at the middle level to change the
4884 way corner-split replicates the wave; we could work at the highest level to change how
4885 square-limit arranges the four copies of the corner. In general, each level of a stratified design
4886 provides a different vocabulary for expressing the characteristics of the system, and a different kind of
4887 ability to change it.
4888
Exercise 2.52. Make changes to the square limit of wave shown in figure 2.9 by working at each of
4890 the levels described above. In particular:
4891 a. Add some segments to the primitive wave painter of exercise 2.49 (to add a smile, for example).
4892 b. Change the pattern constructed by corner-split (for example, by using only one copy of the
4893 up-split and right-split images instead of two).
4894 c. Modify the version of square-limit that uses square-of-four so as to assemble the
4895 corners in a different pattern. (For example, you might make the big Mr. Rogers look outward from
4896 each corner of the square.)
4897 6 The use of the word ‘‘closure’’ here comes from abstract algebra, where a set of elements is said to
4899 be closed under an operation if applying the operation to elements in the set produces an element that
4900 is again an element of the set. The Lisp community also (unfortunately) uses the word ‘‘closure’’ to
4901 describe a totally unrelated concept: A closure is an implementation technique for representing
4902 procedures with free variables. We do not use the word ‘‘closure’’ in this second sense in this book.
4903 7 The notion that a means of combination should satisfy closure is a straightforward idea.
4905 Unfortunately, the data combiners provided in many popular programming languages do not satisfy
4906 closure, or make closure cumbersome to exploit. In Fortran or Basic, one typically combines data
4907 elements by assembling them into arrays -- but one cannot form arrays whose elements are themselves
4908 arrays. Pascal and C admit structures whose elements are structures. However, this requires that the
4909 programmer manipulate pointers explicitly, and adhere to the restriction that each field of a structure
4910 can contain only elements of a prespecified form. Unlike Lisp with its pairs, these languages have no
4911 built-in general-purpose glue that makes it easy to manipulate compound data in a uniform way. This
4912 limitation lies behind Alan Perlis’s comment in his foreword to this book: ‘‘In Pascal the plethora of
4913 declarable data structures induces a specialization within functions that inhibits and penalizes casual
4914 cooperation. It is better to have 100 functions operate on one data structure than to have 10 functions
4915 operate on 10 data structures.’’
4916 8 In this book, we use list to mean a chain of pairs terminated by the end-of-list marker. In contrast,
4918 the term list structure refers to any data structure made out of pairs, not just to lists.
4919 9 Since nested applications of car and cdr are cumbersome to write, Lisp dialects provide
4921 abbreviations for them -- for instance,
(define (cadr x) (car (cdr x)))
4923 The names of all such procedures start with c and end with r. Each a between them stands for a car
4924 operation and each d for a cdr operation, to be applied in the same order in which they appear in the
4925 name. The names car and cdr persist because simple combinations like cadr are pronounceable.
4926 10 It’s remarkable how much energy in the standardization of Lisp dialects has been dissipated in
4928 arguments that are literally over nothing: Should nil be an ordinary name? Should the value of nil
4929 be a symbol? Should it be a list? Should it be a pair? In Scheme, nil is an ordinary name, which we
4930 use in this section as a variable whose value is the end-of-list marker (just as true is an ordinary
4931 variable that has a true value). Other dialects of Lisp, including Common Lisp, treat nil as a special
4932 symbol. The authors of this book, who have endured too many language standardization brawls, would
4933 like to avoid the entire issue. Once we have introduced quotation in section 2.3, we will denote the
4934 empty list as ’() and dispense with the variable nil entirely.
4935
11 To define f and g using lambda we would write
4938 (define f (lambda (x y . z) <body>))
4939 (define g (lambda w <body>))
4940 12 Scheme standardly provides a map procedure that is more general than the one described here. This
4942 more general map takes a procedure of n arguments, together with n lists, and applies the procedure to
4943 all the first elements of the lists, all the second elements of the lists, and so on, returning a list of the
4944 results. For example:
4945 (map + (list 1 2 3) (list 40 50 60) (list 700 800 900))
4946 (741 852 963)
(map (lambda (x y) (+ x (* 2 y)))
     (list 1 2 3)
     (list 4 5 6))
4950 (9 12 15)
4951 13 The order of the first two clauses in the cond matters, since the empty list satisfies null? and
4953 also is not a pair.
4954 14 This is, in fact, precisely the fringe procedure from exercise 2.28. Here we’ve renamed it to
4956 emphasize that it is part of a family of general sequence-manipulation procedures.
4957 15 Richard Waters (1979) developed a program that automatically analyzes traditional Fortran
4959 programs, viewing them in terms of maps, filters, and accumulations. He found that fully 90 percent of
4960 the code in the Fortran Scientific Subroutine Package fits neatly into this paradigm. One of the reasons
4961 for the success of Lisp as a programming language is that lists provide a standard medium for
4962 expressing ordered collections so that they can be manipulated using higher-order operations. The
4963 programming language APL owes much of its power and appeal to a similar choice. In APL all data
4964 are represented as arrays, and there is a universal and convenient set of generic operators for all sorts
4965 of array operations.
4966 16 According to Knuth (1981), this rule was formulated by W. G. Horner early in the nineteenth
4968 century, but the method was actually used by Newton over a hundred years earlier. Horner’s rule
4969 evaluates the polynomial using fewer additions and multiplications than does the straightforward
method of first computing a_n x^n, then adding a_(n-1) x^(n-1), and so on. In fact, it is possible to prove that
4971 any algorithm for evaluating arbitrary polynomials must use at least as many additions and
4972 multiplications as does Horner’s rule, and thus Horner’s rule is an optimal algorithm for polynomial
4973 evaluation. This was proved (for the number of additions) by A. M. Ostrowski in a 1954 paper that
4974 essentially founded the modern study of optimal algorithms. The analogous statement for
4975 multiplications was proved by V. Y. Pan in 1966. The book by Borodin and Munro (1975) provides an
4976 overview of these and other results about optimal algorithms.
4977 17 This definition uses the extended version of map described in footnote 12.
4978 18 This approach to nested mappings was shown to us by David Turner, whose languages KRC and
4980 Miranda provide elegant formalisms for dealing with these constructs. The examples in this section
4981 (see also exercise 2.42) are adapted from Turner 1981. In section 3.5.3, we’ll see how this approach
4982 generalizes to infinite sequences.
4983
19 We’re representing a pair here as a list of two elements rather than as a Lisp pair. Thus, the ‘‘pair’’
4986 (i,j) is represented as (list i j), not (cons i j).
4987 20 The set S - x is the set of all elements of S, excluding x.
4988 21 Semicolons in Scheme code are used to introduce comments. Everything from the semicolon to the
4990 end of the line is ignored by the interpreter. In this book we don’t use many comments; we try to make
4991 our programs self-documenting by using descriptive names.
4992 22 The picture language is based on the language Peter Henderson created to construct images like
4994 M.C. Escher’s ‘‘Square Limit’’ woodcut (see Henderson 1982). The woodcut incorporates a repeated
4995 scaled pattern, similar to the arrangements drawn using the square-limit procedure in this
4996 section.
4997 23 William Barton Rogers (1804-1882) was the founder and first president of MIT. A geologist and
4999 talented teacher, he taught at William and Mary College and at the University of Virginia. In 1859 he
5000 moved to Boston, where he had more time for research, worked on a plan for establishing a
5001 ‘‘polytechnic institute,’’ and served as Massachusetts’s first State Inspector of Gas Meters.
5002 When MIT was established in 1861, Rogers was elected its first president. Rogers espoused an ideal of
5003 ‘‘useful learning’’ that was different from the university education of the time, with its overemphasis
5004 on the classics, which, as he wrote, ‘‘stand in the way of the broader, higher and more practical
5005 instruction and discipline of the natural and social sciences.’’ This education was likewise to be
5006 different from narrow trade-school education. In Rogers’s words:
5007 The world-enforced distinction between the practical and the scientific worker is utterly futile,
5008 and the whole experience of modern times has demonstrated its utter worthlessness.
5009 Rogers served as president of MIT until 1870, when he resigned due to ill health. In 1878 the second
5010 president of MIT, John Runkle, resigned under the pressure of a financial crisis brought on by the
5011 Panic of 1873 and strain of fighting off attempts by Harvard to take over MIT. Rogers returned to hold
5012 the office of president until 1881.
5013 Rogers collapsed and died while addressing MIT’s graduating class at the commencement exercises of
5014 1882. Runkle quoted Rogers’s last words in a memorial address delivered that same year:
5015 ‘‘As I stand here today and see what the Institute is, ... I call to mind the beginnings of science.
5016 I remember one hundred and fifty years ago Stephen Hales published a pamphlet on the subject of
5017 illuminating gas, in which he stated that his researches had demonstrated that 128 grains of
5018 bituminous coal -- ’’
5019 ‘‘Bituminous coal,’’ these were his last words on earth. Here he bent forward, as if consulting
5020 some notes on the table before him, then slowly regaining an erect position, threw up his hands,
5021 and was translated from the scene of his earthly labors and triumphs to ‘‘the tomorrow of death,’’
5022 where the mysteries of life are solved, and the disembodied spirit finds unending satisfaction in
5023 contemplating the new and still unfathomable mysteries of the infinite future.
5024 In the words of Francis A. Walker (MIT’s third president):
5025 All his life he had borne himself most faithfully and heroically, and he died as so good a knight
5026 would surely have wished, in harness, at his post, and in the very part and act of public duty.
5027
24 Equivalently, we could write

(define flipped-pairs
  (square-of-four identity flip-vert identity flip-vert))
5032 25 Rotate180 rotates a painter by 180 degrees (see exercise 2.50). Instead of rotate180 we
5034 could say (compose flip-vert flip-horiz), using the compose procedure from
5035 exercise 1.42.
5036 26 Frame-coord-map uses the vector operations described in exercise 2.46 below, which we
5038 assume have been implemented using some representation for vectors. Because of data abstraction, it
5039 doesn’t matter what this vector representation is, so long as the vector operations behave correctly.
5040 27 Segments->painter uses the representation for line segments described in exercise 2.48
5042 below. It also uses the for-each procedure described in exercise 2.23.
5043 28 For example, the rogers painter of figure 2.11 was constructed from a gray-level image. For each
5045 point in a given frame, the rogers painter determines the point in the image that is mapped to it
5046 under the frame coordinate map, and shades it accordingly. By allowing different types of painters, we
5047 are capitalizing on the abstract data idea discussed in section 2.1.3, where we argued that a
5048 rational-number representation could be anything at all that satisfies an appropriate condition. Here
5049 we’re using the fact that a painter can be implemented in any way at all, so long as it draws something
5050 in the designated frame. Section 2.1.3 also showed how pairs could be implemented as procedures.
5051 Painters are our second example of a procedural representation for data.
5052 29 Rotate90 is a pure rotation only for square frames, because it also stretches and shrinks the
5054 image to fit into the rotated frame.
5055 30 The diamond-shaped images in figures 2.10 and 2.11 were created with squash-inwards
5057 applied to wave and rogers.
5058 31 Section 3.3.4 describes one such language.
5063
5064 2.3 Symbolic Data
5065 All the compound data objects we have used so far were constructed ultimately from numbers. In this
5066 section we extend the representational capability of our language by introducing the ability to work
5067 with arbitrary symbols as data.
5068
5069 2.3.1 Quotation
5070 If we can form compound data using symbols, we can have lists such as
5071 (a b c d)
5072 (23 45 17)
5073 ((Norah 12) (Molly 9) (Anna 7) (Lauren 6) (Charlotte 4))
5074 Lists containing symbols can look just like the expressions of our language:
5075 (* (+ 23 45) (+ x 9))
5076 (define (fact n) (if (= n 1) 1 (* n (fact (- n 1)))))
5077 In order to manipulate symbols we need a new element in our language: the ability to quote a data
5078 object. Suppose we want to construct the list (a b). We can’t accomplish this with (list a b),
5079 because this expression constructs a list of the values of a and b rather than the symbols themselves.
5080 This issue is well known in the context of natural languages, where words and sentences may be
5081 regarded either as semantic entities or as character strings (syntactic entities). The common practice in
5082 natural languages is to use quotation marks to indicate that a word or a sentence is to be treated
5083 literally as a string of characters. For instance, the first letter of ‘‘John’’ is clearly ‘‘J.’’ If we tell
5084 somebody ‘‘say your name aloud,’’ we expect to hear that person’s name. However, if we tell
5085 somebody ‘‘say ‘your name’ aloud,’’ we expect to hear the words ‘‘your name.’’ Note that we are
5086 forced to nest quotation marks to describe what somebody else might say. 32
5087 We can follow this same practice to identify lists and symbols that are to be treated as data objects
5088 rather than as expressions to be evaluated. However, our format for quoting differs from that of natural
5089 languages in that we place a quotation mark (traditionally, the single quote symbol ’) only at the
5090 beginning of the object to be quoted. We can get away with this in Scheme syntax because we rely on
5091 blanks and parentheses to delimit objects. Thus, the meaning of the single quote character is to quote
5092 the next object. 33
5093 Now we can distinguish between symbols and their values:
5094 (define a 1)
5095 (define b 2)
5096 (list a b)
5097 (1 2)
5098 (list ’a ’b)
5099 (a b)
5100 (list ’a b)
5101 (a 2)
5102
Quotation also allows us to type in compound objects, using the conventional printed representation
5104 for lists: 34
5105 (car ’(a b c))
5106 a
5107 (cdr ’(a b c))
5108 (b c)
5109 In keeping with this, we can obtain the empty list by evaluating ’(), and thus dispense with the
5110 variable nil.
5111 One additional primitive used in manipulating symbols is eq?, which takes two symbols as arguments
5112 and tests whether they are the same. 35 Using eq?, we can implement a useful procedure called memq.
5113 This takes two arguments, a symbol and a list. If the symbol is not contained in the list (i.e., is not eq?
5114 to any item in the list), then memq returns false. Otherwise, it returns the sublist of the list beginning
5115 with the first occurrence of the symbol:
(define (memq item x)
  (cond ((null? x) false)
        ((eq? item (car x)) x)
        (else (memq item (cdr x)))))
5120 For example, the value of
5121 (memq ’apple ’(pear banana prune))
5122 is false, whereas the value of
5123 (memq ’apple ’(x (apple sauce) y apple pear))
5124 is (apple pear).
5125 Exercise 2.53. What would the interpreter print in response to evaluating each of the following
5126 expressions?
5127 (list ’a ’b ’c)
5128 (list (list ’george))
5129 (cdr ’((x1 x2) (y1 y2)))
5130 (cadr ’((x1 x2) (y1 y2)))
5131 (pair? (car ’(a short list)))
5132 (memq ’red ’((red shoes) (blue socks)))
5133 (memq ’red ’(red shoes blue socks))
5134 Exercise 2.54. Two lists are said to be equal? if they contain equal elements arranged in the same
5135 order. For example,
5136 (equal? ’(this is a list) ’(this is a list))
5137 is true, but
5138 (equal? ’(this is a list) ’(this (is a) list))
is false. To be more precise, we can define equal? recursively in terms of the basic eq? equality of
5141 symbols by saying that a and b are equal? if they are both symbols and the symbols are eq?, or if
5142 they are both lists such that (car a) is equal? to (car b) and (cdr a) is equal? to (cdr
5143 b). Using this idea, implement equal? as a procedure. 36
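A sketch along the lines just described (one possibility; it relies on eq? for symbols and for the empty list):

(define (equal? a b)
  (cond ((and (pair? a) (pair? b))
         (and (equal? (car a) (car b))
              (equal? (cdr a) (cdr b))))
        (else (eq? a b))))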
5144 Exercise 2.55. Eva Lu Ator types to the interpreter the expression
5145 (car ’’abracadabra)
5146 To her surprise, the interpreter prints back quote. Explain.
5147
5148 2.3.2 Example: Symbolic Differentiation
5149 As an illustration of symbol manipulation and a further illustration of data abstraction, consider the
5150 design of a procedure that performs symbolic differentiation of algebraic expressions. We would like
5151 the procedure to take as arguments an algebraic expression and a variable and to return the derivative
of the expression with respect to the variable. For example, if the arguments to the procedure are
ax^2 + bx + c and x, the procedure should return 2ax + b. Symbolic differentiation is of special historical
5154 significance in Lisp. It was one of the motivating examples behind the development of a computer
5155 language for symbol manipulation. Furthermore, it marked the beginning of the line of research that
5156 led to the development of powerful systems for symbolic mathematical work, which are currently
5157 being used by a growing number of applied mathematicians and physicists.
5158 In developing the symbolic-differentiation program, we will follow the same strategy of data
5159 abstraction that we followed in developing the rational-number system of section 2.1.1. That is, we
5160 will first define a differentiation algorithm that operates on abstract objects such as ‘‘sums,’’
5161 ‘‘products,’’ and ‘‘variables’’ without worrying about how these are to be represented. Only afterward
5162 will we address the representation problem.
5163
5164 The differentiation program with abstract data
5165 In order to keep things simple, we will consider a very simple symbolic-differentiation program that
5166 handles expressions that are built up using only the operations of addition and multiplication with two
5167 arguments. Differentiation of any such expression can be carried out by applying the following
5168 reduction rules:
dc/dx = 0,  for c a constant or a variable different from x
dx/dx = 1
d(u + v)/dx = du/dx + dv/dx
d(uv)/dx = u (dv/dx) + v (du/dx)
5170 Observe that the latter two rules are recursive in nature. That is, to obtain the derivative of a sum we
5171 first find the derivatives of the terms and add them. Each of the terms may in turn be an expression
5172 that needs to be decomposed. Decomposing into smaller and smaller pieces will eventually produce
5173 pieces that are either constants or variables, whose derivatives will be either 0 or 1.
5174
To embody these rules in a procedure we indulge in a little wishful thinking, as we did in designing
5176 the rational-number implementation. If we had a means for representing algebraic expressions, we
5177 should be able to tell whether an expression is a sum, a product, a constant, or a variable. We should
be able to extract the parts of an expression. For a sum, for example, we want to be able to extract the
5179 addend (first term) and the augend (second term). We should also be able to construct expressions
5180 from parts. Let us assume that we already have procedures to implement the following selectors,
5181 constructors, and predicates:
(variable? e)            Is e a variable?
(same-variable? v1 v2)   Are v1 and v2 the same variable?
(sum? e)                 Is e a sum?
(addend e)               Addend of the sum e.
(augend e)               Augend of the sum e.
(make-sum a1 a2)         Construct the sum of a1 and a2.
(product? e)             Is e a product?
(multiplier e)           Multiplier of the product e.
(multiplicand e)         Multiplicand of the product e.
(make-product m1 m2)     Construct the product of m1 and m2.
5221
5222 Using these, and the primitive predicate number?, which identifies numbers, we can express the
5223 differentiation rules as the following procedure:
(define (deriv exp var)
  (cond ((number? exp) 0)
        ((variable? exp)
         (if (same-variable? exp var) 1 0))
        ((sum? exp)
         (make-sum (deriv (addend exp) var)
                   (deriv (augend exp) var)))
        ((product? exp)
         (make-sum
          (make-product (multiplier exp)
                        (deriv (multiplicand exp) var))
          (make-product (deriv (multiplier exp) var)
                        (multiplicand exp))))
        (else
         (error "unknown expression type -- DERIV" exp))))
5239 This deriv procedure incorporates the complete differentiation algorithm. Since it is expressed in
5240 terms of abstract data, it will work no matter how we choose to represent algebraic expressions, as
5241 long as we design a proper set of selectors and constructors. This is the issue we must address next.
5242
Representing algebraic expressions
5244 We can imagine many ways to use list structure to represent algebraic expressions. For example, we
5245 could use lists of symbols that mirror the usual algebraic notation, representing ax + b as the list (a *
5246 x + b). However, one especially straightforward choice is to use the same parenthesized prefix
5247 notation that Lisp uses for combinations; that is, to represent ax + b as (+ (* a x) b). Then our
5248 data representation for the differentiation problem is as follows:
5249 The variables are symbols. They are identified by the primitive predicate symbol?:
5250 (define (variable? x) (symbol? x))
5251 Two variables are the same if the symbols representing them are eq?:
(define (same-variable? v1 v2)
  (and (variable? v1) (variable? v2) (eq? v1 v2)))
5254 Sums and products are constructed as lists:
(define (make-sum a1 a2) (list '+ a1 a2))
(define (make-product m1 m2) (list '* m1 m2))
5257 A sum is a list whose first element is the symbol +:
(define (sum? x)
  (and (pair? x) (eq? (car x) '+)))
5260 The addend is the second item of the sum list:
5261 (define (addend s) (cadr s))
5262 The augend is the third item of the sum list:
5263 (define (augend s) (caddr s))
5264 A product is a list whose first element is the symbol *:
(define (product? x)
  (and (pair? x) (eq? (car x) '*)))
5267 The multiplier is the second item of the product list:
5268 (define (multiplier p) (cadr p))
5269 The multiplicand is the third item of the product list:
5270 (define (multiplicand p) (caddr p))
5271 Thus, we need only combine these with the algorithm as embodied by deriv in order to have a
5272 working symbolic-differentiation program. Let us look at some examples of its behavior:
5273
(deriv '(+ x 3) 'x)
(+ 1 0)
(deriv '(* x y) 'x)
(+ (* x 0) (* 1 y))
(deriv '(* (* x y) (+ x 3)) 'x)
(+ (* (* x y) (+ 1 0))
   (* (+ (* x 0) (* 1 y))
      (+ x 3)))
5282 The program produces answers that are correct; however, they are unsimplified. It is true that
d(xy)/dx = x · 0 + 1 · y
5284 but we would like the program to know that x · 0 = 0, 1 · y = y, and 0 + y = y. The answer for the
5285 second example should have been simply y. As the third example shows, this becomes a serious issue
5286 when the expressions are complex.
5287 Our difficulty is much like the one we encountered with the rational-number implementation: we
5288 haven’t reduced answers to simplest form. To accomplish the rational-number reduction, we needed to
5289 change only the constructors and the selectors of the implementation. We can adopt a similar strategy
5290 here. We won’t change deriv at all. Instead, we will change make-sum so that if both summands
5291 are numbers, make-sum will add them and return their sum. Also, if one of the summands is 0, then
5292 make-sum will return the other summand.
(define (make-sum a1 a2)
  (cond ((=number? a1 0) a2)
        ((=number? a2 0) a1)
        ((and (number? a1) (number? a2)) (+ a1 a2))
        (else (list '+ a1 a2))))
5298 This uses the procedure =number?, which checks whether an expression is equal to a given number:
(define (=number? exp num)
  (and (number? exp) (= exp num)))
5301 Similarly, we will change make-product to build in the rules that 0 times anything is 0 and 1 times
5302 anything is the thing itself:
(define (make-product m1 m2)
  (cond ((or (=number? m1 0) (=number? m2 0)) 0)
        ((=number? m1 1) m2)
        ((=number? m2 1) m1)
        ((and (number? m1) (number? m2)) (* m1 m2))
        (else (list '* m1 m2))))
5309 Here is how this version works on our three examples:
5310 (deriv ’(+ x 3) ’x)
5311 1
5312 (deriv ’(* x y) ’x)
5313 y
5314
(deriv '(* (* x y) (+ x 3)) 'x)
5316 (+ (* x y) (* y (+ x 3)))
5317 Although this is quite an improvement, the third example shows that there is still a long way to go
5318 before we get a program that puts expressions into a form that we might agree is ‘‘simplest.’’ The
5319 problem of algebraic simplification is complex because, among other reasons, a form that may be
5320 simplest for one purpose may not be for another.
5321 Exercise 2.56. Show how to extend the basic differentiator to handle more kinds of expressions. For
5322 instance, implement the differentiation rule
d(u^n)/dx = n u^(n-1) (du/dx)
5324 by adding a new clause to the deriv program and defining appropriate procedures
5325 exponentiation?, base, exponent, and make-exponentiation. (You may use the
5326 symbol ** to denote exponentiation.) Build in the rules that anything raised to the power 0 is 1 and
5327 anything raised to the power 1 is the thing itself.
5328 Exercise 2.57. Extend the differentiation program to handle sums and products of arbitrary numbers
5329 of (two or more) terms. Then the last example above could be expressed as
5330 (deriv ’(* x y (+ x 3)) ’x)
5331 Try to do this by changing only the representation for sums and products, without changing the
5332 deriv procedure at all. For example, the addend of a sum would be the first term, and the augend
5333 would be the sum of the rest of the terms.
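One possible sketch of the changed selector (multiplicand is analogous, with * in place of +):

(define (augend s)
  (if (null? (cdddr s))
      (caddr s)              ; exactly one term remains
      (cons '+ (cddr s))))   ; otherwise, the sum of the remaining terms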
5334 Exercise 2.58. Suppose we want to modify the differentiation program so that it works with ordinary
5335 mathematical notation, in which + and * are infix rather than prefix operators. Since the differentiation
5336 program is defined in terms of abstract data, we can modify it to work with different representations of
5337 expressions solely by changing the predicates, selectors, and constructors that define the representation
5338 of the algebraic expressions on which the differentiator is to operate.
5339 a. Show how to do this in order to differentiate algebraic expressions presented in infix form, such as
5340 (x + (3 * (x + (y + 2)))). To simplify the task, assume that + and * always take two
5341 arguments and that expressions are fully parenthesized.
5342 b. The problem becomes substantially harder if we allow standard algebraic notation, such as (x + 3
5343 * (x + y + 2)), which drops unnecessary parentheses and assumes that multiplication is done
5344 before addition. Can you design appropriate predicates, selectors, and constructors for this notation
5345 such that our derivative program still works?
5346
5347 2.3.3 Example: Representing Sets
5348 In the previous examples we built representations for two kinds of compound data objects: rational
5349 numbers and algebraic expressions. In one of these examples we had the choice of simplifying
5350 (reducing) the expressions at either construction time or selection time, but other than that the choice
5351 of a representation for these structures in terms of lists was straightforward. When we turn to the
5352 representation of sets, the choice of a representation is not so obvious. Indeed, there are a number of
5353 possible representations, and they differ significantly from one another in several ways.
5354
Informally, a set is simply a collection of distinct objects. To give a more precise definition we can
5356 employ the method of data abstraction. That is, we define ‘‘set’’ by specifying the operations that are
5357 to be used on sets. These are union-set, intersection-set, element-of-set?, and
5358 adjoin-set. Element-of-set? is a predicate that determines whether a given element is a
5359 member of a set. Adjoin-set takes an object and a set as arguments and returns a set that contains
5360 the elements of the original set and also the adjoined element. Union-set computes the union of two
5361 sets, which is the set containing each element that appears in either argument. Intersection-set
5362 computes the intersection of two sets, which is the set containing only elements that appear in both
5363 arguments. From the viewpoint of data abstraction, we are free to design any representation that
5364 implements these operations in a way consistent with the interpretations given above. 37
5365
5366 Sets as unordered lists
5367 One way to represent a set is as a list of its elements in which no element appears more than once. The
5368 empty set is represented by the empty list. In this representation, element-of-set? is similar to
5369 the procedure memq of section 2.3.1. It uses equal? instead of eq? so that the set elements need not
5370 be symbols:
(define (element-of-set? x set)
  (cond ((null? set) false)
        ((equal? x (car set)) true)
        (else (element-of-set? x (cdr set)))))
5375 Using this, we can write adjoin-set. If the object to be adjoined is already in the set, we just return
5376 the set. Otherwise, we use cons to add the object to the list that represents the set:
(define (adjoin-set x set)
  (if (element-of-set? x set)
      set
      (cons x set)))
5381 For intersection-set we can use a recursive strategy. If we know how to form the intersection
5382 of set2 and the cdr of set1, we only need to decide whether to include the car of set1 in this.
5383 But this depends on whether (car set1) is also in set2. Here is the resulting procedure:
(define (intersection-set set1 set2)
  (cond ((or (null? set1) (null? set2)) '())
        ((element-of-set? (car set1) set2)
         (cons (car set1)
               (intersection-set (cdr set1) set2)))
        (else (intersection-set (cdr set1) set2))))
5390 In designing a representation, one of the issues we should be concerned with is efficiency. Consider
5391 the number of steps required by our set operations. Since they all use element-of-set?, the speed
5392 of this operation has a major impact on the efficiency of the set implementation as a whole. Now, in
5393 order to check whether an object is a member of a set, element-of-set? may have to scan the
5394 entire set. (In the worst case, the object turns out not to be in the set.) Hence, if the set has n elements,
element-of-set? might take up to n steps. Thus, the number of steps required grows as Θ(n).
The number of steps required by adjoin-set, which uses this operation, also grows as Θ(n). For
intersection-set, which does an element-of-set? check for each element of set1, the
number of steps required grows as the product of the sizes of the sets involved, or Θ(n²) for two sets
of size n. The same will be true of union-set.
5400
5401 \fExercise 2.59. Implement the union-set operation for the unordered-list representation of sets.
5402 Exercise 2.60. We specified that a set would be represented as a list with no duplicates. Now suppose
5403 we allow duplicates. For instance, the set {1,2,3} could be represented as the list (2 3 2 1 3 2
5404 2). Design procedures element-of-set?, adjoin-set, union-set, and
5405 intersection-set that operate on this representation. How does the efficiency of each compare
5406 with the corresponding procedure for the non-duplicate representation? Are there applications for
5407 which you would use this representation in preference to the non-duplicate one?
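For exercise 2.59, one possible solution is sketched below (a plausible implementation, not the text's): walk down set1, keeping only the elements not already present in set2:

(define (union-set set1 set2)
  (cond ((null? set1) set2)
        ((element-of-set? (car set1) set2)
         (union-set (cdr set1) set2))
        (else (cons (car set1)
                    (union-set (cdr set1) set2)))))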
5408
5409 Sets as ordered lists
5410 One way to speed up our set operations is to change the representation so that the set elements are
5411 listed in increasing order. To do this, we need some way to compare two objects so that we can say
5412 which is bigger. For example, we could compare symbols lexicographically, or we could agree on
5413 some method for assigning a unique number to an object and then compare the elements by comparing
5414 the corresponding numbers. To keep our discussion simple, we will consider only the case where the
5415 set elements are numbers, so that we can compare elements using > and <. We will represent a set of
5416 numbers by listing its elements in increasing order. Whereas our first representation above allowed us
5417 to represent the set {1,3,6,10} by listing the elements in any order, our new representation allows only
5418 the list (1 3 6 10).
5419 One advantage of ordering shows up in element-of-set?: In checking for the presence of an
5420 item, we no longer have to scan the entire set. If we reach a set element that is larger than the item we
5421 are looking for, then we know that the item is not in the set:
(define (element-of-set? x set)
  (cond ((null? set) false)
        ((= x (car set)) true)
        ((< x (car set)) false)
        (else (element-of-set? x (cdr set)))))
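For example (an illustrative trace under the definition above), searching (1 3 6 10) for 4 compares 4 against 1, then 3, then 6, and stops there, since 6 > 4 guarantees that 4 cannot appear later in the list:

(element-of-set? 4 '(1 3 6 10))   ; false, after examining only three elements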
5427 How many steps does this save? In the worst case, the item we are looking for may be the largest one
5428 in the set, so the number of steps is the same as for the unordered representation. On the other hand, if
5429 we search for items of many different sizes we can expect that sometimes we will be able to stop
5430 searching at a point near the beginning of the list and that other times we will still need to examine
5431 most of the list. On the average we should expect to have to examine about half of the items in the set.
Thus, the average number of steps required will be about n/2. This is still Θ(n) growth, but it does
save us, on the average, a factor of 2 in number of steps over the previous implementation.
We obtain a more impressive speedup with intersection-set. In the unordered representation
this operation required Θ(n^2) steps, because we performed a complete scan of set2 for each element
5436 of set1. But with the ordered representation, we can use a more clever method. Begin by comparing
5437 the initial elements, x1 and x2, of the two sets. If x1 equals x2, then that gives an element of the
5438 intersection, and the rest of the intersection is the intersection of the cdrs of the two sets. Suppose,
5439 however, that x1 is less than x2. Since x2 is the smallest element in set2, we can immediately
5440 conclude that x1 cannot appear anywhere in set2 and hence is not in the intersection. Hence, the
5441 intersection is equal to the intersection of set2 with the cdr of set1. Similarly, if x2 is less than
5442 x1, then the intersection is given by the intersection of set1 with the cdr of set2. Here is the
5443 procedure:
5444
(define (intersection-set set1 set2)
  (if (or (null? set1) (null? set2))
      '()
      (let ((x1 (car set1)) (x2 (car set2)))
        (cond ((= x1 x2)
               (cons x1
                     (intersection-set (cdr set1)
                                       (cdr set2))))
              ((< x1 x2)
               (intersection-set (cdr set1) set2))
              ((< x2 x1)
               (intersection-set set1 (cdr set2)))))))
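For instance (a worked example under this definition), intersecting two ordered sets makes a single pass down both lists:

(intersection-set '(1 3 6 10) '(3 9 10 12))   ; (3 10)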
5457 To estimate the number of steps required by this process, observe that at each step we reduce the
5458 intersection problem to computing intersections of smaller sets -- removing the first element from
5459 set1 or set2 or both. Thus, the number of steps required is at most the sum of the sizes of set1
and set2, rather than the product of the sizes as with the unordered representation. This is Θ(n)
growth rather than Θ(n^2) -- a considerable speedup, even for sets of moderate size.
5462 Exercise 2.61. Give an implementation of adjoin-set using the ordered representation. By
5463 analogy with element-of-set? show how to take advantage of the ordering to produce a
5464 procedure that requires on the average about half as many steps as with the unordered representation.
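One possible solution to exercise 2.61 (a sketch, not the text's): scan for the insertion point and stop as soon as it is found, so on the average about half the list is examined:

(define (adjoin-set x set)
  (cond ((null? set) (list x))
        ((= x (car set)) set)
        ((< x (car set)) (cons x set))
        (else (cons (car set) (adjoin-set x (cdr set))))))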
Exercise 2.62. Give a Θ(n) implementation of union-set for sets represented as ordered lists.
5468
5469 Sets as binary trees
5470 We can do better than the ordered-list representation by arranging the set elements in the form of a
5471 tree. Each node of the tree holds one element of the set, called the ‘‘entry’’ at that node, and a link to
5472 each of two other (possibly empty) nodes. The ‘‘left’’ link points to elements smaller than the one at
5473 the node, and the ‘‘right’’ link to elements greater than the one at the node. Figure 2.16 shows some
5474 trees that represent the set {1,3,5,7,9,11}. The same set may be represented by a tree in a number of
5475 different ways. The only thing we require for a valid representation is that all elements in the left
5476 subtree be smaller than the node entry and that all elements in the right subtree be larger.
5477
Figure 2.16: Various binary trees that represent the set {1,3,5,7,9,11}.
5480
5481 \fThe advantage of the tree representation is this: Suppose we want to check whether a number x is
5482 contained in a set. We begin by comparing x with the entry in the top node. If x is less than this, we
5483 know that we need only search the left subtree; if x is greater, we need only search the right subtree.
5484 Now, if the tree is ‘‘balanced,’’ each of these subtrees will be about half the size of the original. Thus,
5485 in one step we have reduced the problem of searching a tree of size n to searching a tree of size n/2.
5486 Since the size of the tree is halved at each step, we should expect that the number of steps needed to
search a tree of size n grows as Θ(log n). 38 For large sets, this will be a significant speedup over the
5488 previous representations.
5489 We can represent trees by using lists. Each node will be a list of three items: the entry at the node, the
5490 left subtree, and the right subtree. A left or a right subtree of the empty list will indicate that there is no
5491 subtree connected there. We can describe this representation by the following procedures: 39
(define (entry tree) (car tree))
(define (left-branch tree) (cadr tree))
(define (right-branch tree) (caddr tree))
(define (make-tree entry left right)
  (list entry left right))
5504 Now we can write the element-of-set? procedure using the strategy described above:
(define (element-of-set? x set)
  (cond ((null? set) false)
        ((= x (entry set)) true)
        ((< x (entry set))
         (element-of-set? x (left-branch set)))
        ((> x (entry set))
         (element-of-set? x (right-branch set)))))
Adjoining an item to a set is implemented similarly and also requires Θ(log n) steps. To adjoin an
5513 item x, we compare x with the node entry to determine whether x should be added to the right or to
5514 the left branch, and having adjoined x to the appropriate branch we piece this newly constructed
5515 branch together with the original entry and the other branch. If x is equal to the entry, we just return
5516 the node. If we are asked to adjoin x to an empty tree, we generate a tree that has x as the entry and
5517 empty right and left branches. Here is the procedure:
(define (adjoin-set x set)
  (cond ((null? set) (make-tree x '() '()))
        ((= x (entry set)) set)
        ((< x (entry set))
         (make-tree (entry set)
                    (adjoin-set x (left-branch set))
                    (right-branch set)))
        ((> x (entry set))
         (make-tree (entry set)
                    (left-branch set)
                    (adjoin-set x (right-branch set))))))
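For instance (under the definitions above), adjoining 5, then 3, then 7 to the empty set produces this list encoding:

(adjoin-set 7 (adjoin-set 3 (adjoin-set 5 '())))
; (5 (3 () ()) (7 () ()))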
5529 The above claim that searching the tree can be performed in a logarithmic number of steps rests on the
5530 assumption that the tree is ‘‘balanced,’’ i.e., that the left and the right subtree of every tree have
5531 approximately the same number of elements, so that each subtree contains about half the elements of
5532 its parent. But how can we be certain that the trees we construct will be balanced? Even if we start
5533
5534 \fwith a balanced tree, adding elements with adjoin-set may produce an unbalanced result. Since
5535 the position of a newly adjoined element depends on how the element compares with the items already
5536 in the set, we can expect that if we add elements ‘‘randomly’’ the tree will tend to be balanced on the
5537 average. But this is not a guarantee. For example, if we start with an empty set and adjoin the numbers
5538 1 through 7 in sequence we end up with the highly unbalanced tree shown in figure 2.17. In this tree
5539 all the left subtrees are empty, so it has no advantage over a simple ordered list. One way to solve this
5540 problem is to define an operation that transforms an arbitrary tree into a balanced tree with the same
5541 elements. Then we can perform this transformation after every few adjoin-set operations to keep
5542 our set in balance. There are also other ways to solve this problem, most of which involve designing
new data structures for which searching and insertion both can be done in Θ(log n) steps. 40
5544
Figure 2.17: Unbalanced tree produced by adjoining 1 through 7 in sequence.
5547 Exercise 2.63. Each of the following two procedures converts a binary tree to a list.
(define (tree->list-1 tree)
  (if (null? tree)
      '()
      (append (tree->list-1 (left-branch tree))
              (cons (entry tree)
                    (tree->list-1 (right-branch tree))))))

(define (tree->list-2 tree)
  (define (copy-to-list tree result-list)
    (if (null? tree)
        result-list
        (copy-to-list (left-branch tree)
                      (cons (entry tree)
                            (copy-to-list (right-branch tree)
                                          result-list)))))
  (copy-to-list tree '()))
5563 a. Do the two procedures produce the same result for every tree? If not, how do the results differ?
5564 What lists do the two procedures produce for the trees in figure 2.16?
5565
5566 \fb. Do the two procedures have the same order of growth in the number of steps required to convert a
5567 balanced tree with n elements to a list? If not, which one grows more slowly?
5568 Exercise 2.64. The following procedure list->tree converts an ordered list to a balanced binary
5569 tree. The helper procedure partial-tree takes as arguments an integer n and list of at least n
5570 elements and constructs a balanced tree containing the first n elements of the list. The result returned
5571 by partial-tree is a pair (formed with cons) whose car is the constructed tree and whose cdr
5572 is the list of elements not included in the tree.
(define (list->tree elements)
  (car (partial-tree elements (length elements))))

(define (partial-tree elts n)
  (if (= n 0)
      (cons '() elts)
      (let ((left-size (quotient (- n 1) 2)))
        (let ((left-result (partial-tree elts left-size)))
          (let ((left-tree (car left-result))
                (non-left-elts (cdr left-result))
                (right-size (- n (+ left-size 1))))
            (let ((this-entry (car non-left-elts))
                  (right-result (partial-tree (cdr non-left-elts)
                                              right-size)))
              (let ((right-tree (car right-result))
                    (remaining-elts (cdr right-result)))
                (cons (make-tree this-entry left-tree right-tree)
                      remaining-elts))))))))
5590 a. Write a short paragraph explaining as clearly as you can how partial-tree works. Draw the
5591 tree produced by list->tree for the list (1 3 5 7 9 11).
5592 b. What is the order of growth in the number of steps required by list->tree to convert a list of n
5593 elements?
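For checking your answer to part a (computed from the definition above, shown as a list rather than a drawing):

(list->tree '(1 3 5 7 9 11))
; (5 (1 () (3 () ())) (9 (7 () ()) (11 () ())))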
Exercise 2.65. Use the results of exercises 2.63 and 2.64 to give Θ(n) implementations of
union-set and intersection-set for sets implemented as (balanced) binary trees. 41
5596
5597 Sets and information retrieval
5598 We have examined options for using lists to represent sets and have seen how the choice of
5599 representation for a data object can have a large impact on the performance of the programs that use
5600 the data. Another reason for concentrating on sets is that the techniques discussed here appear again
5601 and again in applications involving information retrieval.
5602 Consider a data base containing a large number of individual records, such as the personnel files for a
5603 company or the transactions in an accounting system. A typical data-management system spends a
5604 large amount of time accessing or modifying the data in the records and therefore requires an efficient
5605 method for accessing records. This is done by identifying a part of each record to serve as an
5606 identifying key. A key can be anything that uniquely identifies the record. For a personnel file, it might
5607 be an employee’s ID number. For an accounting system, it might be a transaction number. Whatever
5608 the key is, when we define the record as a data structure we should include a key selector procedure
5609 that retrieves the key associated with a given record.
5610
5611 \fNow we represent the data base as a set of records. To locate the record with a given key we use a
5612 procedure lookup, which takes as arguments a key and a data base and which returns the record that
5613 has that key, or false if there is no such record. Lookup is implemented in almost the same way as
5614 element-of-set?. For example, if the set of records is implemented as an unordered list, we
5615 could use
(define (lookup given-key set-of-records)
  (cond ((null? set-of-records) false)
        ((equal? given-key (key (car set-of-records)))
         (car set-of-records))
        (else (lookup given-key (cdr set-of-records)))))
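A minimal illustration (the record format and the key selector here are hypothetical, chosen only for the example):

(define (key record) (car record))   ; hypothetical: records are (key data) lists

(lookup 2 '((1 alyssa) (2 ben) (3 cy)))
; (2 ben)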
5621 Of course, there are better ways to represent large sets than as unordered lists. Information-retrieval
5622 systems in which records have to be ‘‘randomly accessed’’ are typically implemented by a tree-based
5623 method, such as the binary-tree representation discussed previously. In designing such a system the
5624 methodology of data abstraction can be a great help. The designer can create an initial implementation
5625 using a simple, straightforward representation such as unordered lists. This will be unsuitable for the
5626 eventual system, but it can be useful in providing a ‘‘quick and dirty’’ data base with which to test the
5627 rest of the system. Later on, the data representation can be modified to be more sophisticated. If the
5628 data base is accessed in terms of abstract selectors and constructors, this change in representation will
5629 not require any changes to the rest of the system.
5630 Exercise 2.66. Implement the lookup procedure for the case where the set of records is structured as
5631 a binary tree, ordered by the numerical values of the keys.
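One possible solution to exercise 2.66 (a sketch, assuming the binary-tree selectors entry, left-branch, and right-branch from the previous section, and numeric keys):

(define (lookup given-key set-of-records)
  (cond ((null? set-of-records) false)
        ((= given-key (key (entry set-of-records)))
         (entry set-of-records))
        ((< given-key (key (entry set-of-records)))
         (lookup given-key (left-branch set-of-records)))
        (else
         (lookup given-key (right-branch set-of-records)))))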
5632
5633 2.3.4 Example: Huffman Encoding Trees
5634 This section provides practice in the use of list structure and data abstraction to manipulate sets and
5635 trees. The application is to methods for representing data as sequences of ones and zeros (bits). For
5636 example, the ASCII standard code used to represent text in computers encodes each character as a
sequence of seven bits. Using seven bits allows us to distinguish 2^7, or 128, possible different
characters. In general, if we want to distinguish n different symbols, we will need to use log_2 n bits
5639 per symbol. If all our messages are made up of the eight symbols A, B, C, D, E, F, G, and H, we can
5640 choose a code with three bits per character, for example
A 000    C 010    E 100    G 110
B 001    D 011    F 101    H 111
5656
5657 With this code, the message
5658 BACADAEAFABBAAAGAH
5659 is encoded as the string of 54 bits
5660 001000010000011000100000101000001001000000000110000111
5661 Codes such as ASCII and the A-through-H code above are known as fixed-length codes, because they
5662 represent each symbol in the message with the same number of bits. It is sometimes advantageous to
5663 use variable-length codes, in which different symbols may be represented by different numbers of bits.
5664 For example, Morse code does not use the same number of dots and dashes for each letter of the
5665
5666 \falphabet. In particular, E, the most frequent letter, is represented by a single dot. In general, if our
5667 messages are such that some symbols appear very frequently and some very rarely, we can encode
5668 data more efficiently (i.e., using fewer bits per message) if we assign shorter codes to the frequent
5669 symbols. Consider the following alternative code for the letters A through H:
A 0      C 1010   E 1100   G 1110
B 100    D 1011   F 1101   H 1111
5685
5686 With this code, the same message as above is encoded as the string
5687 100010100101101100011010100100000111001111
5688 This string contains 42 bits, so it saves more than 20% in space in comparison with the fixed-length
5689 code shown above.
5690 One of the difficulties of using a variable-length code is knowing when you have reached the end of a
5691 symbol in reading a sequence of zeros and ones. Morse code solves this problem by using a special
5692 separator code (in this case, a pause) after the sequence of dots and dashes for each letter. Another
5693 solution is to design the code in such a way that no complete code for any symbol is the beginning (or
5694 prefix) of the code for another symbol. Such a code is called a prefix code. In the example above, A is
5695 encoded by 0 and B is encoded by 100, so no other symbol can have a code that begins with 0 or with
5696 100.
5697 In general, we can attain significant savings if we use variable-length prefix codes that take advantage
5698 of the relative frequencies of the symbols in the messages to be encoded. One particular scheme for
5699 doing this is called the Huffman encoding method, after its discoverer, David Huffman. A Huffman
5700 code can be represented as a binary tree whose leaves are the symbols that are encoded. At each
5701 non-leaf node of the tree there is a set containing all the symbols in the leaves that lie below the node.
5702 In addition, each symbol at a leaf is assigned a weight (which is its relative frequency), and each
5703 non-leaf node contains a weight that is the sum of all the weights of the leaves lying below it. The
5704 weights are not used in the encoding or the decoding process. We will see below how they are used to
5705 help construct the tree.
5706
Figure 2.18: A Huffman encoding tree.
5709 Figure 2.18 shows the Huffman tree for the A-through-H code given above. The weights at the leaves
5710 indicate that the tree was designed for messages in which A appears with relative frequency 8, B with
5711 relative frequency 3, and the other letters each with relative frequency 1.
5712 Given a Huffman tree, we can find the encoding of any symbol by starting at the root and moving
5713 down until we reach the leaf that holds the symbol. Each time we move down a left branch we add a 0
5714 to the code, and each time we move down a right branch we add a 1. (We decide which branch to
5715 follow by testing to see which branch either is the leaf node for the symbol or contains the symbol in
5716 its set.) For example, starting from the root of the tree in figure 2.18, we arrive at the leaf for D by
5717 following a right branch, then a left branch, then a right branch, then a right branch; hence, the code
5718 for D is 1011.
5719 To decode a bit sequence using a Huffman tree, we begin at the root and use the successive zeros and
5720 ones of the bit sequence to determine whether to move down the left or the right branch. Each time we
5721 come to a leaf, we have generated a new symbol in the message, at which point we start over from the
5722 root of the tree to find the next symbol. For example, suppose we are given the tree above and the
sequence 10001010. Starting at the root, we move down the right branch (since the first bit of the
5724 string is 1), then down the left branch (since the second bit is 0), then down the left branch (since the
5725 third bit is also 0). This brings us to the leaf for B, so the first symbol of the decoded message is B.
5726 Now we start again at the root, and we make a left move because the next bit in the string is 0. This
5727 brings us to the leaf for A. Then we start again at the root with the rest of the string 1010, so we move
5728 right, left, right, left and reach C. Thus, the entire message is BAC.
5729
5730 Generating Huffman trees
5731 Given an ‘‘alphabet’’ of symbols and their relative frequencies, how do we construct the ‘‘best’’ code?
5732 (In other words, which tree will encode messages with the fewest bits?) Huffman gave an algorithm
5733 for doing this and showed that the resulting code is indeed the best variable-length code for messages
5734 where the relative frequency of the symbols matches the frequencies with which the code was
5735
5736 \fconstructed. We will not prove this optimality of Huffman codes here, but we will show how Huffman
5737 trees are constructed. 42
5738 The algorithm for generating a Huffman tree is very simple. The idea is to arrange the tree so that the
5739 symbols with the lowest frequency appear farthest away from the root. Begin with the set of leaf
5740 nodes, containing symbols and their frequencies, as determined by the initial data from which the code
5741 is to be constructed. Now find two leaves with the lowest weights and merge them to produce a node
5742 that has these two nodes as its left and right branches. The weight of the new node is the sum of the
5743 two weights. Remove the two leaves from the original set and replace them by this new node. Now
5744 continue this process. At each step, merge two nodes with the smallest weights, removing them from
5745 the set and replacing them with a node that has these two as its left and right branches. The process
5746 stops when there is only one node left, which is the root of the entire tree. Here is how the Huffman
5747 tree of figure 2.18 was generated:
Initial leaves   {(A 8) (B 3) (C 1) (D 1) (E 1) (F 1) (G 1) (H 1)}
Merge            {(A 8) (B 3) ({C D} 2) (E 1) (F 1) (G 1) (H 1)}
Merge            {(A 8) (B 3) ({C D} 2) ({E F} 2) (G 1) (H 1)}
Merge            {(A 8) (B 3) ({C D} 2) ({E F} 2) ({G H} 2)}
Merge            {(A 8) (B 3) ({C D} 2) ({E F G H} 4)}
Merge            {(A 8) ({B C D} 5) ({E F G H} 4)}
Merge            {(A 8) ({B C D E F G H} 9)}
Final merge      {({A B C D E F G H} 17)}
5779
5780 The algorithm does not always specify a unique tree, because there may not be unique smallest-weight
5781 nodes at each step. Also, the choice of the order in which the two nodes are merged (i.e., which will be
5782 the right branch and which will be the left branch) is arbitrary.
5783
5784 Representing Huffman trees
5785 In the exercises below we will work with a system that uses Huffman trees to encode and decode
5786 messages and generates Huffman trees according to the algorithm outlined above. We will begin by
5787 discussing how trees are represented.
5788 Leaves of the tree are represented by a list consisting of the symbol leaf, the symbol at the leaf, and
5789 the weight:
(define (make-leaf symbol weight)
  (list 'leaf symbol weight))
(define (leaf? object)
  (eq? (car object) 'leaf))
(define (symbol-leaf x) (cadr x))
(define (weight-leaf x) (caddr x))
5796
5797 \fA general tree will be a list of a left branch, a right branch, a set of symbols, and a weight. The set of
5798 symbols will be simply a list of the symbols, rather than some more sophisticated set representation.
5799 When we make a tree by merging two nodes, we obtain the weight of the tree as the sum of the
5800 weights of the nodes, and the set of symbols as the union of the sets of symbols for the nodes. Since
5801 our symbol sets are represented as lists, we can form the union by using the append procedure we
5802 defined in section 2.2.1:
(define (make-code-tree left right)
  (list left
        right
        (append (symbols left) (symbols right))
        (+ (weight left) (weight right))))
5808 If we make a tree in this way, we have the following selectors:
(define (left-branch tree) (car tree))
(define (right-branch tree) (cadr tree))
(define (symbols tree)
  (if (leaf? tree)
      (list (symbol-leaf tree))
      (caddr tree)))
(define (weight tree)
  (if (leaf? tree)
      (weight-leaf tree)
      (cadddr tree)))
5819 The procedures symbols and weight must do something slightly different depending on whether
5820 they are called with a leaf or a general tree. These are simple examples of generic procedures
5821 (procedures that can handle more than one kind of data), which we will have much more to say about
5822 in sections 2.4 and 2.5.
5823
5824 The decoding procedure
5825 The following procedure implements the decoding algorithm. It takes as arguments a list of zeros and
5826 ones, together with a Huffman tree.
(define (decode bits tree)
  (define (decode-1 bits current-branch)
    (if (null? bits)
        '()
        (let ((next-branch
               (choose-branch (car bits) current-branch)))
          (if (leaf? next-branch)
              (cons (symbol-leaf next-branch)
                    (decode-1 (cdr bits) tree))
              (decode-1 (cdr bits) next-branch)))))
  (decode-1 bits tree))

(define (choose-branch bit branch)
  (cond ((= bit 0) (left-branch branch))
        ((= bit 1) (right-branch branch))
        (else (error "bad bit -- CHOOSE-BRANCH" bit))))
5842
5843 \fThe procedure decode-1 takes two arguments: the list of remaining bits and the current position in
5844 the tree. It keeps moving ‘‘down’’ the tree, choosing a left or a right branch according to whether the
5845 next bit in the list is a zero or a one. (This is done with the procedure choose-branch.) When it
5846 reaches a leaf, it returns the symbol at that leaf as the next symbol in the message by consing it onto
5847 the result of decoding the rest of the message, starting at the root of the tree. Note the error check in
5848 the final clause of choose-branch, which complains if the procedure finds something other than a
5849 zero or a one in the input data.
5850
5851 Sets of weighted elements
5852 In our representation of trees, each non-leaf node contains a set of symbols, which we have
5853 represented as a simple list. However, the tree-generating algorithm discussed above requires that we
5854 also work with sets of leaves and trees, successively merging the two smallest items. Since we will be
5855 required to repeatedly find the smallest item in a set, it is convenient to use an ordered representation
5856 for this kind of set.
5857 We will represent a set of leaves and trees as a list of elements, arranged in increasing order of weight.
5858 The following adjoin-set procedure for constructing sets is similar to the one described in
5859 exercise 2.61; however, items are compared by their weights, and the element being added to the set is
5860 never already in it.
(define (adjoin-set x set)
  (cond ((null? set) (list x))
        ((< (weight x) (weight (car set))) (cons x set))
        (else (cons (car set)
                    (adjoin-set x (cdr set))))))
5866 The following procedure takes a list of symbol-frequency pairs such as ((A 4) (B 2) (C 1) (D
5867 1)) and constructs an initial ordered set of leaves, ready to be merged according to the Huffman
5868 algorithm:
(define (make-leaf-set pairs)
  (if (null? pairs)
      '()
      (let ((pair (car pairs)))
        (adjoin-set (make-leaf (car pair)    ; symbol
                               (cadr pair))  ; frequency
                    (make-leaf-set (cdr pairs))))))
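For example (under the definitions above, with ties between equal weights kept in the order in which they happen to be adjoined):

(make-leaf-set '((A 4) (B 2) (C 1) (D 1)))
; ((leaf D 1) (leaf C 1) (leaf B 2) (leaf A 4))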
5877 Exercise 2.67. Define an encoding tree and a sample message:
(define sample-tree
  (make-code-tree (make-leaf 'A 4)
                  (make-code-tree
                   (make-leaf 'B 2)
                   (make-code-tree (make-leaf 'D 1)
                                   (make-leaf 'C 1)))))

(define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))
5885
5886 \fUse the decode procedure to decode the message, and give the result.
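For self-checking (this is the exercise's answer, derived directly from the definitions above):

(decode sample-message sample-tree)
; (A D A B B C A)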
5887 Exercise 2.68. The encode procedure takes as arguments a message and a tree and produces the list
5888 of bits that gives the encoded message.
(define (encode message tree)
  (if (null? message)
      '()
      (append (encode-symbol (car message) tree)
              (encode (cdr message) tree))))
5894 Encode-symbol is a procedure, which you must write, that returns the list of bits that encodes a
5895 given symbol according to a given tree. You should design encode-symbol so that it signals an
5896 error if the symbol is not in the tree at all. Test your procedure by encoding the result you obtained in
5897 exercise 2.67 with the sample tree and seeing whether it is the same as the original sample message.
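One possible encode-symbol (a sketch, not the text's solution; it uses memq to test membership in a node's symbol list and signals an error when the symbol is in neither branch):

(define (encode-symbol symbol tree)
  (cond ((leaf? tree) '())
        ((memq symbol (symbols (left-branch tree)))
         (cons 0 (encode-symbol symbol (left-branch tree))))
        ((memq symbol (symbols (right-branch tree)))
         (cons 1 (encode-symbol symbol (right-branch tree))))
        (else (error "symbol not in tree -- ENCODE-SYMBOL" symbol))))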
5898 Exercise 2.69. The following procedure takes as its argument a list of symbol-frequency pairs (where
5899 no symbol appears in more than one pair) and generates a Huffman encoding tree according to the
5900 Huffman algorithm.
(define (generate-huffman-tree pairs)
  (successive-merge (make-leaf-set pairs)))
5903 Make-leaf-set is the procedure given above that transforms the list of pairs into an ordered set of
5904 leaves. Successive-merge is the procedure you must write, using make-code-tree to
5905 successively merge the smallest-weight elements of the set until there is only one element left, which
5906 is the desired Huffman tree. (This procedure is slightly tricky, but not really complicated. If you find
5907 yourself designing a complex procedure, then you are almost certainly doing something wrong. You
5908 can take significant advantage of the fact that we are using an ordered set representation.)
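One possible successive-merge (a sketch, not the text's solution; it leans on the ordered-set representation, repeatedly merging the two smallest elements and adjoining the result back into the set):

(define (successive-merge leaf-set)
  (if (null? (cdr leaf-set))
      (car leaf-set)
      (successive-merge
       (adjoin-set (make-code-tree (car leaf-set)
                                   (cadr leaf-set))
                   (cddr leaf-set)))))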
5909 Exercise 2.70. The following eight-symbol alphabet with associated relative frequencies was
5910 designed to efficiently encode the lyrics of 1950s rock songs. (Note that the ‘‘symbols’’ of an
5911 ‘‘alphabet’’ need not be individual letters.)
A 2      NA 16    BOOM 1   SHA 3
GET 2    YIP 9    JOB 2    WAH 1
5943
5944 Use generate-huffman-tree (exercise 2.69) to generate a corresponding Huffman tree, and use
5945 encode (exercise 2.68) to encode the following message:
5946 Get a job
5947 Sha na na na na na na na na
5948 Get a job
5949
5950 \fSha na na na na na na na na
5951 Wah yip yip yip yip yip yip yip yip yip
5952 Sha boom
5953 How many bits are required for the encoding? What is the smallest number of bits that would be
5954 needed to encode this song if we used a fixed-length code for the eight-symbol alphabet?
Exercise 2.71. Suppose we have a Huffman tree for an alphabet of n symbols, and that the relative
frequencies of the symbols are 1, 2, 4, ..., 2^(n-1). Sketch the tree for n=5; for n=10. In such a tree (for
general n) how many bits are required to encode the most frequent symbol? the least frequent symbol?
5958 Exercise 2.72. Consider the encoding procedure that you designed in exercise 2.68. What is the order
5959 of growth in the number of steps needed to encode a symbol? Be sure to include the number of steps
5960 needed to search the symbol list at each node encountered. To answer this question in general is
5961 difficult. Consider the special case where the relative frequencies of the n symbols are as described in
5962 exercise 2.71, and give the order of growth (as a function of n) of the number of steps needed to
5963 encode the most frequent and least frequent symbols in the alphabet.
5964 32 Allowing quotation in a language wreaks havoc with the ability to reason about the language in
5965
5966 simple terms, because it destroys the notion that equals can be substituted for equals. For example,
5967 three is one plus two, but the word ‘‘three’’ is not the phrase ‘‘one plus two.’’ Quotation is powerful
5968 because it gives us a way to build expressions that manipulate other expressions (as we will see when
5969 we write an interpreter in chapter 4). But allowing statements in a language that talk about other
5970 statements in that language makes it very difficult to maintain any coherent principle of what ‘‘equals
5971 can be substituted for equals’’ should mean. For example, if we know that the evening star is the
5972 morning star, then from the statement ‘‘the evening star is Venus’’ we can deduce ‘‘the morning star is
5973 Venus.’’ However, given that ‘‘John knows that the evening star is Venus’’ we cannot infer that ‘‘John
5974 knows that the morning star is Venus.’’
5975 33 The single quote is different from the double quote we have been using to enclose character strings
5976
5977 to be printed. Whereas the single quote can be used to denote lists or symbols, the double quote is used
5978 only with character strings. In this book, the only use for character strings is as items to be printed.
5979 34 Strictly, our use of the quotation mark violates the general rule that all compound expressions in
5980
5981 our language should be delimited by parentheses and look like lists. We can recover this consistency
5982 by introducing a special form quote, which serves the same purpose as the quotation mark. Thus, we
5983 would type (quote a) instead of ’a, and we would type (quote (a b c)) instead of ’(a b
5984 c). This is precisely how the interpreter works. The quotation mark is just a single-character
5985 abbreviation for wrapping the next complete expression with quote to form (quote
5986 <expression>). This is important because it maintains the principle that any expression seen by
5987 the interpreter can be manipulated as a data object. For instance, we could construct the expression
5988 (car ’(a b c)), which is the same as (car (quote (a b c))), by evaluating
5989 (list ’car (list ’quote ’(a b c))).
5990 35 We can consider two symbols to be ‘‘the same’’ if they consist of the same characters in the same
5991
5992 order. Such a definition skirts a deep issue that we are not yet ready to address: the meaning of
5993 ‘‘sameness’’ in a programming language. We will return to this in chapter 3 (section 3.1.3).
5994
5995 \f36 In practice, programmers use equal? to compare lists that contain numbers as well as symbols.
5996
5997 Numbers are not considered to be symbols. The question of whether two numerically equal numbers
5998 (as tested by =) are also eq? is highly implementation-dependent. A better definition of equal?
5999 (such as the one that comes as a primitive in Scheme) would also stipulate that if a and b are both
6000 numbers, then a and b are equal? if they are numerically equal.
6001 37 If we want to be more formal, we can specify ‘‘consistent with the interpretations given above’’ to
6002
6003 mean that the operations satisfy a collection of rules such as these:
6004 For any set S and any object x, (element-of-set? x (adjoin-set x S)) is true
6005 (informally: ‘‘Adjoining an object to a set produces a set that contains the object’’).
6006 For any sets S and T and any object x, (element-of-set? x (union-set S T)) is
6007 equal to (or (element-of-set? x S) (element-of-set? x T)) (informally:
6008 ‘‘The elements of (union S T) are the elements that are in S or in T’’).
6009 For any object x, (element-of-set? x ’()) is false (informally: ‘‘No object is an
6010 element of the empty set’’).
6011 38 Halving the size of the problem at each step is the distinguishing characteristic of logarithmic
6012
6013 growth, as we saw with the fast-exponentiation algorithm of section 1.2.4 and the half-interval search
6014 method of section 1.3.3.
6015 39 We are representing sets in terms of trees, and trees in terms of lists -- in effect, a data abstraction
6016
6017 built upon a data abstraction. We can regard the procedures entry, left-branch,
6018 right-branch, and make-tree as a way of isolating the abstraction of a ‘‘binary tree’’ from the
6019 particular way we might wish to represent such a tree in terms of list structure.
6020 40 Examples of such structures include B-trees and red-black trees. There is a large literature on data
6021
6022 structures devoted to this problem. See Cormen, Leiserson, and Rivest 1990.
6023 41 Exercises 2.63-2.65 are due to Paul Hilfinger.
6024 42 See Hamming 1980 for a discussion of the mathematical properties of Huffman codes.
6025
6030 2.4 Multiple Representations for Abstract Data
6031 We have introduced data abstraction, a methodology for structuring systems in such a way that much
6032 of a program can be specified independent of the choices involved in implementing the data objects
6033 that the program manipulates. For example, we saw in section 2.1.1 how to separate the task of
6034 designing a program that uses rational numbers from the task of implementing rational numbers in
6035 terms of the computer language’s primitive mechanisms for constructing compound data. The key idea
6036 was to erect an abstraction barrier -- in this case, the selectors and constructors for rational numbers
6037 (make-rat, numer, denom) -- that isolates the way rational numbers are used from their underlying
6038 representation in terms of list structure. A similar abstraction barrier isolates the details of the
6039 procedures that perform rational arithmetic (add-rat, sub-rat, mul-rat, and div-rat) from
6040 the ‘‘higher-level’’ procedures that use rational numbers. The resulting program has the structure
6041 shown in figure 2.1.
6042 These data-abstraction barriers are powerful tools for controlling complexity. By isolating the
6043 underlying representations of data objects, we can divide the task of designing a large program into
6044 smaller tasks that can be performed separately. But this kind of data abstraction is not yet powerful
6045 enough, because it may not always make sense to speak of ‘‘the underlying representation’’ for a data
6046 object.
6047 For one thing, there might be more than one useful representation for a data object, and we might like
6048 to design systems that can deal with multiple representations. To take a simple example, complex
6049 numbers may be represented in two almost equivalent ways: in rectangular form (real and imaginary
6050 parts) and in polar form (magnitude and angle). Sometimes rectangular form is more appropriate and
6051 sometimes polar form is more appropriate. Indeed, it is perfectly plausible to imagine a system in
6052 which complex numbers are represented in both ways, and in which the procedures for manipulating
6053 complex numbers work with either representation.
6054 More importantly, programming systems are often designed by many people working over extended
6055 periods of time, subject to requirements that change over time. In such an environment, it is simply not
6056 possible for everyone to agree in advance on choices of data representation. So in addition to the
6057 data-abstraction barriers that isolate representation from use, we need abstraction barriers that isolate
6058 different design choices from each other and permit different choices to coexist in a single program.
6059 Furthermore, since large programs are often created by combining pre-existing modules that were
6060 designed in isolation, we need conventions that permit programmers to incorporate modules into larger
6061 systems additively, that is, without having to redesign or reimplement these modules.
6062 In this section, we will learn how to cope with data that may be represented in different ways by
6063 different parts of a program. This requires constructing generic procedures -- procedures that can
6064 operate on data that may be represented in more than one way. Our main technique for building
6065 generic procedures will be to work in terms of data objects that have type tags, that is, data objects that
6066 include explicit information about how they are to be processed. We will also discuss data-directed
6067 programming, a powerful and convenient implementation strategy for additively assembling systems
6068 with generic operations.
6069 We begin with the simple complex-number example. We will see how type tags and data-directed
6070 style enable us to design separate rectangular and polar representations for complex numbers while
6071 maintaining the notion of an abstract ‘‘complex-number’’ data object. We will accomplish this by
6072 defining arithmetic procedures for complex numbers (add-complex, sub-complex,
6073
6074 \fmul-complex, and div-complex) in terms of generic selectors that access parts of a complex
6075 number independent of how the number is represented. The resulting complex-number system, as
6076 shown in figure 2.19, contains two different kinds of abstraction barriers. The ‘‘horizontal’’
6077 abstraction barriers play the same role as the ones in figure 2.1. They isolate ‘‘higher-level’’
6078 operations from ‘‘lower-level’’ representations. In addition, there is a ‘‘vertical’’ barrier that gives us
6079 the ability to separately design and install alternative representations.
6080
Figure 2.19: Data-abstraction barriers in the complex-number system.
6083 In section 2.5 we will show how to use type tags and data-directed style to develop a generic
6084 arithmetic package. This provides procedures (add, mul, and so on) that can be used to manipulate all
6085 sorts of ‘‘numbers’’ and can be easily extended when a new kind of number is needed. In
6086 section 2.5.3, we’ll show how to use generic arithmetic in a system that performs symbolic algebra.
6087
6088 2.4.1 Representations for Complex Numbers
6089 We will develop a system that performs arithmetic operations on complex numbers as a simple but
6090 unrealistic example of a program that uses generic operations. We begin by discussing two plausible
6091 representations for complex numbers as ordered pairs: rectangular form (real part and imaginary part)
6092 and polar form (magnitude and angle). 43 Section 2.4.2 will show how both representations can be
6093 made to coexist in a single system through the use of type tags and generic operations.
6094 Like rational numbers, complex numbers are naturally represented as ordered pairs. The set of
6095 complex numbers can be thought of as a two-dimensional space with two orthogonal axes, the ‘‘real’’
6096 axis and the ‘‘imaginary’’ axis. (See figure 2.20.) From this point of view, the complex number z = x +
iy (where i^2 = -1) can be thought of as the point in the plane whose real coordinate is x and whose
imaginary coordinate is y. Addition of complex numbers reduces in this representation to addition of
coordinates:

Real-part(z1 + z2) = Real-part(z1) + Real-part(z2)
Imaginary-part(z1 + z2) = Imaginary-part(z1) + Imaginary-part(z2)
6101 When multiplying complex numbers, it is more natural to think in terms of representing a complex
6102 number in polar form, as a magnitude and an angle (r and A in figure 2.20). The product of two
6103 complex numbers is the vector obtained by stretching one complex number by the length of the other
and then rotating it through the angle of the other:

Magnitude(z1 · z2) = Magnitude(z1) · Magnitude(z2)
Angle(z1 · z2) = Angle(z1) + Angle(z2)
6106 \fThus, there are two different representations for complex numbers, which are appropriate for different
6107 operations. Yet, from the viewpoint of someone writing a program that uses complex numbers, the
6108 principle of data abstraction suggests that all the operations for manipulating complex numbers should
6109 be available regardless of which representation is used by the computer. For example, it is often useful
6110 to be able to find the magnitude of a complex number that is specified by rectangular coordinates.
6111 Similarly, it is often useful to be able to determine the real part of a complex number that is specified
6112 by polar coordinates.
6113
Figure 2.20: Complex numbers as points in the plane.
6116 To design such a system, we can follow the same data-abstraction strategy we followed in designing
6117 the rational-number package in section 2.1.1. Assume that the operations on complex numbers are
6118 implemented in terms of four selectors: real-part, imag-part, magnitude, and angle. Also
6119 assume that we have two procedures for constructing complex numbers: make-from-real-imag
6120 returns a complex number with specified real and imaginary parts, and make-from-mag-ang
6121 returns a complex number with specified magnitude and angle. These procedures have the property
6122 that, for any complex number z, both
6123 (make-from-real-imag (real-part z) (imag-part z))
6124 and
6125 (make-from-mag-ang (magnitude z) (angle z))
6126 produce complex numbers that are equal to z.
6127 Using these constructors and selectors, we can implement arithmetic on complex numbers using the
6128 ‘‘abstract data’’ specified by the constructors and selectors, just as we did for rational numbers in
6129 section 2.1.1. As shown in the formulas above, we can add and subtract complex numbers in terms of
6130 real and imaginary parts while multiplying and dividing complex numbers in terms of magnitudes and
6131 angles:
6132
(define (add-complex z1 z2)
  (make-from-real-imag (+ (real-part z1) (real-part z2))
                       (+ (imag-part z1) (imag-part z2))))
(define (sub-complex z1 z2)
  (make-from-real-imag (- (real-part z1) (real-part z2))
                       (- (imag-part z1) (imag-part z2))))
(define (mul-complex z1 z2)
  (make-from-mag-ang (* (magnitude z1) (magnitude z2))
                     (+ (angle z1) (angle z2))))
(define (div-complex z1 z2)
  (make-from-mag-ang (/ (magnitude z1) (magnitude z2))
                     (- (angle z1) (angle z2))))
6145 To complete the complex-number package, we must choose a representation and we must implement
6146 the constructors and selectors in terms of primitive numbers and primitive list structure. There are two
6147 obvious ways to do this: We can represent a complex number in ‘‘rectangular form’’ as a pair (real
6148 part, imaginary part) or in ‘‘polar form’’ as a pair (magnitude, angle). Which shall we choose?
6149 In order to make the different choices concrete, imagine that there are two programmers, Ben
6150 Bitdiddle and Alyssa P. Hacker, who are independently designing representations for the
6151 complex-number system. Ben chooses to represent complex numbers in rectangular form. With this
6152 choice, selecting the real and imaginary parts of a complex number is straightforward, as is
6153 constructing a complex number with given real and imaginary parts. To find the magnitude and the
6154 angle, or to construct a complex number with a given magnitude and angle, he uses the trigonometric
relations

x = r cos A        r = sqrt(x^2 + y^2)
y = r sin A        A = arctan(y, x)
6157 which relate the real and imaginary parts (x, y) to the magnitude and the angle (r, A). 44 Ben’s
6158 representation is therefore given by the following selectors and constructors:
(define (real-part z) (car z))
(define (imag-part z) (cdr z))
(define (magnitude z)
  (sqrt (+ (square (real-part z)) (square (imag-part z)))))
(define (angle z)
  (atan (imag-part z) (real-part z)))
(define (make-from-real-imag x y) (cons x y))
(define (make-from-mag-ang r a)
  (cons (* r (cos a)) (* r (sin a))))
6178
6179 Alyssa, in contrast, chooses to represent complex numbers in polar form. For her, selecting the
6180 magnitude and angle is straightforward, but she has to use the trigonometric relations to obtain the real
6181 and imaginary parts. Alyssa’s representation is:
(define (real-part z)
  (* (magnitude z) (cos (angle z))))
(define (imag-part z)
  (* (magnitude z) (sin (angle z))))
(define (magnitude z) (car z))
(define (angle z) (cdr z))
(define (make-from-real-imag x y)
  (cons (sqrt (+ (square x) (square y)))
        (atan y x)))
(define (make-from-mag-ang r a) (cons r a))
6203 The discipline of data abstraction ensures that the same implementation of add-complex,
6204 sub-complex, mul-complex, and div-complex will work with either Ben’s representation or
6205 Alyssa’s representation.
6206
6207 2.4.2 Tagged data
6208 One way to view data abstraction is as an application of the ‘‘principle of least commitment.’’ In
6209 implementing the complex-number system in section 2.4.1, we can use either Ben’s rectangular
6210 representation or Alyssa’s polar representation. The abstraction barrier formed by the selectors and
6211 constructors permits us to defer to the last possible moment the choice of a concrete representation for
6212 our data objects and thus retain maximum flexibility in our system design.
6213 The principle of least commitment can be carried to even further extremes. If we desire, we can
6214 maintain the ambiguity of representation even after we have designed the selectors and constructors,
6215 and elect to use both Ben’s representation and Alyssa’s representation. If both representations are
6216 included in a single system, however, we will need some way to distinguish data in polar form from
6217 data in rectangular form. Otherwise, if we were asked, for instance, to find the magnitude of the
6218 pair (3,4), we wouldn’t know whether to answer 5 (interpreting the number in rectangular form) or 3
6219 (interpreting the number in polar form). A straightforward way to accomplish this distinction is to
6220 include a type tag -- the symbol rectangular or polar -- as part of each complex number. Then
6221 when we need to manipulate a complex number we can use the tag to decide which selector to apply.
6222 In order to manipulate tagged data, we will assume that we have procedures type-tag and
6223 contents that extract from a data object the tag and the actual contents (the polar or rectangular
6224 coordinates, in the case of a complex number). We will also postulate a procedure attach-tag that
6225 takes a tag and contents and produces a tagged data object. A straightforward way to implement this is
6226 to use ordinary list structure:
(define (attach-tag type-tag contents)
  (cons type-tag contents))
(define (type-tag datum)
  (if (pair? datum)
      (car datum)
      (error "Bad tagged datum -- TYPE-TAG" datum)))
(define (contents datum)
  (if (pair? datum)
      (cdr datum)
      (error "Bad tagged datum -- CONTENTS" datum)))
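A quick illustration of the tagging discipline (the values shown follow directly from these definitions):

(define z (attach-tag 'rectangular (cons 3 4)))
(type-tag z)   ; rectangular
(contents z)   ; (3 . 4)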
6237 Using these procedures, we can define predicates rectangular? and polar?, which recognize
6238 polar and rectangular numbers, respectively:
6239
(define (rectangular? z)
  (eq? (type-tag z) 'rectangular))
(define (polar? z)
  (eq? (type-tag z) 'polar))
6244 With type tags, Ben and Alyssa can now modify their code so that their two different representations
6245 can coexist in the same system. Whenever Ben constructs a complex number, he tags it as rectangular.
6246 Whenever Alyssa constructs a complex number, she tags it as polar. In addition, Ben and Alyssa must
6247 make sure that the names of their procedures do not conflict. One way to do this is for Ben to append
6248 the suffix rectangular to the name of each of his representation procedures and for Alyssa to
6249 append polar to the names of hers. Here is Ben’s revised rectangular representation from
6250 section 2.4.1:
(define (real-part-rectangular z) (car z))
(define (imag-part-rectangular z) (cdr z))
(define (magnitude-rectangular z)
  (sqrt (+ (square (real-part-rectangular z))
           (square (imag-part-rectangular z)))))
(define (angle-rectangular z)
  (atan (imag-part-rectangular z)
        (real-part-rectangular z)))
(define (make-from-real-imag-rectangular x y)
  (attach-tag 'rectangular (cons x y)))
(define (make-from-mag-ang-rectangular r a)
  (attach-tag 'rectangular
              (cons (* r (cos a)) (* r (sin a)))))
6269 and here is Alyssa’s revised polar representation:
(define (real-part-polar z)
  (* (magnitude-polar z) (cos (angle-polar z))))
(define (imag-part-polar z)
  (* (magnitude-polar z) (sin (angle-polar z))))
(define (magnitude-polar z) (car z))
(define (angle-polar z) (cdr z))
(define (make-from-real-imag-polar x y)
  (attach-tag 'polar
              (cons (sqrt (+ (square x) (square y)))
                    (atan y x))))
(define (make-from-mag-ang-polar r a)
  (attach-tag 'polar (cons r a)))
6282 Each generic selector is implemented as a procedure that checks the tag of its argument and calls the
6283 appropriate procedure for handling data of that type. For example, to obtain the real part of a complex
6284 number, real-part examines the tag to determine whether to use Ben’s
6285 real-part-rectangular or Alyssa’s real-part-polar. In either case, we use contents
6286 to extract the bare, untagged datum and send this to the rectangular or polar procedure as required:
(define (real-part z)
  (cond ((rectangular? z)
         (real-part-rectangular (contents z)))
        ((polar? z)
         (real-part-polar (contents z)))
        (else (error "Unknown type -- REAL-PART" z))))
(define (imag-part z)
  (cond ((rectangular? z)
         (imag-part-rectangular (contents z)))
        ((polar? z)
         (imag-part-polar (contents z)))
        (else (error "Unknown type -- IMAG-PART" z))))
(define (magnitude z)
  (cond ((rectangular? z)
         (magnitude-rectangular (contents z)))
        ((polar? z)
         (magnitude-polar (contents z)))
        (else (error "Unknown type -- MAGNITUDE" z))))
(define (angle z)
  (cond ((rectangular? z)
         (angle-rectangular (contents z)))
        ((polar? z)
         (angle-polar (contents z)))
        (else (error "Unknown type -- ANGLE" z))))
6321
6322 To implement the complex-number arithmetic operations, we can use the same procedures
6323 add-complex, sub-complex, mul-complex, and div-complex from section 2.4.1, because
6324 the selectors they call are generic, and so will work with either representation. For example, the
6325 procedure add-complex is still
(define (add-complex z1 z2)
  (make-from-real-imag (+ (real-part z1) (real-part z2))
                       (+ (imag-part z1) (imag-part z2))))
6329 Finally, we must choose whether to construct complex numbers using Ben’s representation or Alyssa’s
6330 representation. One reasonable choice is to construct rectangular numbers whenever we have real and
6331 imaginary parts and to construct polar numbers whenever we have magnitudes and angles:
(define (make-from-real-imag x y)
  (make-from-real-imag-rectangular x y))
(define (make-from-mag-ang r a)
  (make-from-mag-ang-polar r a))
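Putting the pieces together (an illustration, assuming square is defined as in earlier sections), a number built in rectangular form answers every generic selector:

(define z (make-from-real-imag 3 4))
(real-part z)   ; 3
(magnitude z)   ; 5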
6336
Figure 2.21: Structure of the generic complex-arithmetic system.
6339 The resulting complex-number system has the structure shown in figure 2.21. The system has been
6340 decomposed into three relatively independent parts: the complex-number-arithmetic operations,
6341 Alyssa’s polar implementation, and Ben’s rectangular implementation. The polar and rectangular
6342 implementations could have been written by Ben and Alyssa working separately, and both of these can
6343 be used as underlying representations by a third programmer implementing the complex-arithmetic
6344 procedures in terms of the abstract constructor/selector interface.
6345 Since each data object is tagged with its type, the selectors operate on the data in a generic manner.
6346 That is, each selector is defined to have a behavior that depends upon the particular type of data it is
6347 applied to. Notice the general mechanism for interfacing the separate representations: Within a given
6348 representation implementation (say, Alyssa’s polar package) a complex number is an untyped pair
6349 (magnitude, angle). When a generic selector operates on a number of polar type, it strips off the tag
6350 and passes the contents on to Alyssa’s code. Conversely, when Alyssa constructs a number for general
6351 use, she tags it with a type so that it can be appropriately recognized by the higher-level procedures.
6352 This discipline of stripping off and attaching tags as data objects are passed from level to level can be
6353 an important organizational strategy, as we shall see in section 2.5.
6354
6355 2.4.3 Data-Directed Programming and Additivity
6356 The general strategy of checking the type of a datum and calling an appropriate procedure is called
dispatching on type. This is a powerful strategy for obtaining modularity in system design. On the
other hand, implementing the dispatch as in section 2.4.2 has two significant weaknesses. One
6359 weakness is that the generic interface procedures (real-part, imag-part, magnitude, and
6360 angle) must know about all the different representations. For instance, suppose we wanted to
6361 incorporate a new representation for complex numbers into our complex-number system. We would
6362 need to identify this new representation with a type, and then add a clause to each of the generic
6363 interface procedures to check for the new type and apply the appropriate selector for that
6364 representation.
6365 Another weakness of the technique is that even though the individual representations can be designed
6366 separately, we must guarantee that no two procedures in the entire system have the same name. This is
6367 why Ben and Alyssa had to change the names of their original procedures from section 2.4.1.
6368
6369 \fThe issue underlying both of these weaknesses is that the technique for implementing generic
6370 interfaces is not additive. The person implementing the generic selector procedures must modify those
6371 procedures each time a new representation is installed, and the people interfacing the individual
6372 representations must modify their code to avoid name conflicts. In each of these cases, the changes
6373 that must be made to the code are straightforward, but they must be made nonetheless, and this is a
6374 source of inconvenience and error. This is not much of a problem for the complex-number system as it
6375 stands, but suppose there were not two but hundreds of different representations for complex numbers.
6376 And suppose that there were many generic selectors to be maintained in the abstract-data interface.
6377 Suppose, in fact, that no one programmer knew all the interface procedures or all the representations.
6378 The problem is real and must be addressed in such programs as large-scale data-base-management
6379 systems.
6380 What we need is a means for modularizing the system design even further. This is provided by the
6381 programming technique known as data-directed programming. To understand how data-directed
6382 programming works, begin with the observation that whenever we deal with a set of generic operations
6383 that are common to a set of different types we are, in effect, dealing with a two-dimensional table that
6384 contains the possible operations on one axis and the possible types on the other axis. The entries in the
6385 table are the procedures that implement each operation for each type of argument presented. In the
6386 complex-number system developed in the previous section, the correspondence between operation
6387 name, data type, and actual procedure was spread out among the various conditional clauses in the
6388 generic interface procedures. But the same information could have been organized in a table, as shown
6389 in figure 2.22.
6390 Data-directed programming is the technique of designing programs to work with such a table directly.
6391 Previously, we implemented the mechanism that interfaces the complex-arithmetic code with the two
6392 representation packages as a set of procedures that each perform an explicit dispatch on type. Here we
6393 will implement the interface as a single procedure that looks up the combination of the operation name
6394 and argument type in the table to find the correct procedure to apply, and then applies it to the contents
6395 of the argument. If we do this, then to add a new representation package to the system we need not
6396 change any existing procedures; we need only add new entries to the table.
6397
Figure 2.22: Table of operations for the complex-number system.
6400 To implement this plan, assume that we have two procedures, put and get, for manipulating the
6401 operation-and-type table:
(put <op> <type> <item>)
6403 installs the <item> in the table, indexed by the <op> and the <type>.
6404
(get <op> <type>)
6406 looks up the <op>, <type> entry in the table and returns the item found there. If no item is
6407 found, get returns false.
6408 For now, we can assume that put and get are included in our language. In chapter 3 (section 3.3.3,
6409 exercise 3.24) we will see how to implement these and other operations for manipulating tables.
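For concreteness, here is one minimal way such a table could be realized. This is only a sketch, using a flat association list together with assignment (set!), which is itself not introduced until chapter 3; the name *op-table* is invented for this illustration and is not the representation developed in section 3.3.3:
(define *op-table* '())              ; list of ((op type) . item) entries

(define (put op type item)
  (set! *op-table*
        (cons (cons (list op type) item) *op-table*)))

(define (get op type)
  (let ((entry (assoc (list op type) *op-table*)))
    (if entry (cdr entry) false)))
Because assoc compares keys with equal?, this sketch works equally well when the type key is a symbol (as for the constructors below) and when it is a list of symbols (as for the selectors).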
6410 Here is how data-directed programming can be used in the complex-number system. Ben, who
6411 developed the rectangular representation, implements his code just as he did originally. He defines a
6412 collection of procedures, or a package, and interfaces these to the rest of the system by adding entries
6413 to the table that tell the system how to operate on rectangular numbers. This is accomplished by calling
6414 the following procedure:
(define (install-rectangular-package)
  ;; internal procedures
  (define (real-part z) (car z))
  (define (imag-part z) (cdr z))
  (define (make-from-real-imag x y) (cons x y))
  (define (magnitude z)
    (sqrt (+ (square (real-part z))
             (square (imag-part z)))))
  (define (angle z)
    (atan (imag-part z) (real-part z)))
  (define (make-from-mag-ang r a)
    (cons (* r (cos a)) (* r (sin a))))
  ;; interface to the rest of the system
  (define (tag x) (attach-tag 'rectangular x))
  (put 'real-part '(rectangular) real-part)
  (put 'imag-part '(rectangular) imag-part)
  (put 'magnitude '(rectangular) magnitude)
  (put 'angle '(rectangular) angle)
  (put 'make-from-real-imag 'rectangular
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'rectangular
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
6438 Notice that the internal procedures here are the same procedures from section 2.4.1 that Ben wrote
6439 when he was working in isolation. No changes are necessary in order to interface them to the rest of
6440 the system. Moreover, since these procedure definitions are internal to the installation procedure, Ben
6441 needn’t worry about name conflicts with other procedures outside the rectangular package. To
6442 interface these to the rest of the system, Ben installs his real-part procedure under the operation
6443 name real-part and the type (rectangular), and similarly for the other selectors. 45 The
6444 interface also defines the constructors to be used by the external system. 46 These are identical to
6445 Ben’s internally defined constructors, except that they attach the tag.
6446 Alyssa’s polar package is analogous:
(define (install-polar-package)
  ;; internal procedures
  (define (magnitude z) (car z))
  (define (angle z) (cdr z))
  (define (make-from-mag-ang r a) (cons r a))
  (define (real-part z)
    (* (magnitude z) (cos (angle z))))
  (define (imag-part z)
    (* (magnitude z) (sin (angle z))))
  (define (make-from-real-imag x y)
    (cons (sqrt (+ (square x) (square y)))
          (atan y x)))
  ;; interface to the rest of the system
  (define (tag x) (attach-tag 'polar x))
  (put 'real-part '(polar) real-part)
  (put 'imag-part '(polar) imag-part)
  (put 'magnitude '(polar) magnitude)
  (put 'angle '(polar) angle)
  (put 'make-from-real-imag 'polar
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'polar
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
6471 Even though Ben and Alyssa both still use their original procedures defined with the same names as
6472 each other’s (e.g., real-part), these definitions are now internal to different procedures (see
6473 section 1.1.8), so there is no name conflict.
6474 The complex-arithmetic selectors access the table by means of a general ‘‘operation’’ procedure called
6475 apply-generic, which applies a generic operation to some arguments. Apply-generic looks
6476 in the table under the name of the operation and the types of the arguments and applies the resulting
6477 procedure if one is present: 47
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (error
            "No method for these types -- APPLY-GENERIC"
            (list op type-tags))))))
6486 Using apply-generic, we can define our generic selectors as follows:
(define (real-part z) (apply-generic 'real-part z))
(define (imag-part z) (apply-generic 'imag-part z))
(define (magnitude z) (apply-generic 'magnitude z))
(define (angle z) (apply-generic 'angle z))
6497 Observe that these do not change at all if a new representation is added to the system.
6498 We can also extract from the table the constructors to be used by the programs external to the packages
6499 in making complex numbers from real and imaginary parts and from magnitudes and angles. As in
6500 section 2.4.2, we construct rectangular numbers whenever we have real and imaginary parts, and polar
6501 numbers whenever we have magnitudes and angles:
6502
(define (make-from-real-imag x y)
  ((get 'make-from-real-imag 'rectangular) x y))
(define (make-from-mag-ang r a)
  ((get 'make-from-mag-ang 'polar) r a))
6513 Exercise 2.73. Section 2.3.2 described a program that performs symbolic differentiation:
(define (deriv exp var)
  (cond ((number? exp) 0)
        ((variable? exp) (if (same-variable? exp var) 1 0))
        ((sum? exp)
         (make-sum (deriv (addend exp) var)
                   (deriv (augend exp) var)))
        ((product? exp)
         (make-sum
           (make-product (multiplier exp)
                         (deriv (multiplicand exp) var))
           (make-product (deriv (multiplier exp) var)
                         (multiplicand exp))))
        <more rules can be added here>
        (else (error "unknown expression type -- DERIV" exp))))
6528 We can regard this program as performing a dispatch on the type of the expression to be differentiated.
6529 In this situation the ‘‘type tag’’ of the datum is the algebraic operator symbol (such as +) and the
6530 operation being performed is deriv. We can transform this program into data-directed style by
6531 rewriting the basic derivative procedure as
(define (deriv exp var)
  (cond ((number? exp) 0)
        ((variable? exp) (if (same-variable? exp var) 1 0))
        (else ((get 'deriv (operator exp)) (operands exp)
               var))))
(define (operator exp) (car exp))
(define (operands exp) (cdr exp))
6539 a. Explain what was done above. Why can’t we assimilate the predicates number? and
6540 same-variable? into the data-directed dispatch?
6541 b. Write the procedures for derivatives of sums and products, and the auxiliary code required to install
6542 them in the table used by the program above.
6543 c. Choose any additional differentiation rule that you like, such as the one for exponents
6544 (exercise 2.56), and install it in this data-directed system.
6545 d. In this simple algebraic manipulator the type of an expression is the algebraic operator that binds it
6546 together. Suppose, however, we indexed the procedures in the opposite way, so that the dispatch line
6547 in deriv looked like
((get (operator exp) 'deriv) (operands exp) var)
6549
What corresponding changes to the derivative system are required?
6551 Exercise 2.74. Insatiable Enterprises, Inc., is a highly decentralized conglomerate company consisting
6552 of a large number of independent divisions located all over the world. The company’s computer
6553 facilities have just been interconnected by means of a clever network-interfacing scheme that makes
6554 the entire network appear to any user to be a single computer. Insatiable’s president, in her first
6555 attempt to exploit the ability of the network to extract administrative information from division files, is
6556 dismayed to discover that, although all the division files have been implemented as data structures in
6557 Scheme, the particular data structure used varies from division to division. A meeting of division
6558 managers is hastily called to search for a strategy to integrate the files that will satisfy headquarters’
6559 needs while preserving the existing autonomy of the divisions.
6560 Show how such a strategy can be implemented with data-directed programming. As an example,
6561 suppose that each division’s personnel records consist of a single file, which contains a set of records
6562 keyed on employees’ names. The structure of the set varies from division to division. Furthermore,
6563 each employee’s record is itself a set (structured differently from division to division) that contains
6564 information keyed under identifiers such as address and salary. In particular:
6565 a. Implement for headquarters a get-record procedure that retrieves a specified employee’s record
6566 from a specified personnel file. The procedure should be applicable to any division’s file. Explain how
6567 the individual divisions’ files should be structured. In particular, what type information must be
6568 supplied?
6569 b. Implement for headquarters a get-salary procedure that returns the salary information from a
6570 given employee’s record from any division’s personnel file. How should the record be structured in
6571 order to make this operation work?
6572 c. Implement for headquarters a find-employee-record procedure. This should search all the
6573 divisions’ files for the record of a given employee and return the record. Assume that this procedure
6574 takes as arguments an employee’s name and a list of all the divisions’ files.
6575 d. When Insatiable takes over a new company, what changes must be made in order to incorporate the
6576 new personnel information into the central system?
6577
6578 Message passing
6579 The key idea of data-directed programming is to handle generic operations in programs by dealing
6580 explicitly with operation-and-type tables, such as the table in figure 2.22. The style of programming
6581 we used in section 2.4.2 organized the required dispatching on type by having each operation take care
6582 of its own dispatching. In effect, this decomposes the operation-and-type table into rows, with each
6583 generic operation procedure representing a row of the table.
6584 An alternative implementation strategy is to decompose the table into columns and, instead of using
6585 ‘‘intelligent operations’’ that dispatch on data types, to work with ‘‘intelligent data objects’’ that
6586 dispatch on operation names. We can do this by arranging things so that a data object, such as a
6587 rectangular number, is represented as a procedure that takes as input the required operation name and
6588 performs the operation indicated. In such a discipline, make-from-real-imag could be written as
(define (make-from-real-imag x y)
  (define (dispatch op)
    (cond ((eq? op 'real-part) x)
          ((eq? op 'imag-part) y)
          ((eq? op 'magnitude)
           (sqrt (+ (square x) (square y))))
          ((eq? op 'angle) (atan y x))
          (else
           (error "Unknown op -- MAKE-FROM-REAL-IMAG" op))))
  dispatch)
6600 The corresponding apply-generic procedure, which applies a generic operation to an argument,
6601 now simply feeds the operation’s name to the data object and lets the object do the work: 48
(define (apply-generic op arg) (arg op))
6603 Note that the value returned by make-from-real-imag is a procedure -- the internal dispatch
6604 procedure. This is the procedure that is invoked when apply-generic requests an operation to be
6605 performed.
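To make the flow of control concrete, here is a small illustrative interaction (the name z is arbitrary; the value 5 follows from the definitions above):
(define z (make-from-real-imag 3 4))
(z 'real-part)                  ; the object answers the message directly: 3
(apply-generic 'magnitude z)    ; the same as (z 'magnitude), which yields 5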
6606 This style of programming is called message passing. The name comes from the image that a data
6607 object is an entity that receives the requested operation name as a ‘‘message.’’ We have already seen
6608 an example of message passing in section 2.1.3, where we saw how cons, car, and cdr could be
6609 defined with no data objects but only procedures. Here we see that message passing is not a
6610 mathematical trick but a useful technique for organizing systems with generic operations. In the
6611 remainder of this chapter we will continue to use data-directed programming, rather than message
6612 passing, to discuss generic arithmetic operations. In chapter 3 we will return to message passing, and
6613 we will see that it can be a powerful tool for structuring simulation programs.
6614 Exercise 2.75. Implement the constructor make-from-mag-ang in message-passing style. This
6615 procedure should be analogous to the make-from-real-imag procedure given above.
6616 Exercise 2.76. As a large system with generic operations evolves, new types of data objects or new
6617 operations may be needed. For each of the three strategies -- generic operations with explicit dispatch,
data-directed style, and message-passing style -- describe the changes that must be made to a system in
6619 order to add new types or new operations. Which organization would be most appropriate for a system
6620 in which new types must often be added? Which would be most appropriate for a system in which new
6621 operations must often be added?
6622 43 In actual computational systems, rectangular form is preferable to polar form most of the time
6623
6624 because of roundoff errors in conversion between rectangular and polar form. This is why the
6625 complex-number example is unrealistic. Nevertheless, it provides a clear illustration of the design of a
6626 system using generic operations and a good introduction to the more substantial systems to be
6627 developed later in this chapter.
6628 44 The arctangent function referred to here, computed by Scheme’s atan procedure, is defined so as
6629
6630 to take two arguments y and x and to return the angle whose tangent is y/x. The signs of the arguments
6631 determine the quadrant of the angle.
6632 45 We use the list (rectangular) rather than the symbol rectangular to allow for the
6633
6634 possibility of operations with multiple arguments, not all of the same type.
6635 46 The type the constructors are installed under needn’t be a list because a constructor is always used
6636
6637 to make an object of one particular type.
6638
47 Apply-generic uses the dotted-tail notation described in exercise 2.20, because different
6640
6641 generic operations may take different numbers of arguments. In apply-generic, op has as its
6642 value the first argument to apply-generic and args has as its value a list of the remaining
6643 arguments.
6644 Apply-generic also uses the primitive procedure apply, which takes two arguments, a procedure
6645 and a list. Apply applies the procedure, using the elements in the list as arguments. For example,
(apply + (list 1 2 3 4))
6647 returns 10.
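For instance, a procedure defined with dotted-tail notation simply collects any extra arguments into a list (show-op is an invented name, used only for this illustration):
(define (show-op op . args) (list op args))
(show-op 'add 1 2 3)    ; op is the symbol add; args is the list (1 2 3)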
48 One limitation of this organization is that it permits only generic procedures of one argument.
6649
6653
6654 2.5 Systems with Generic Operations
6655 In the previous section, we saw how to design systems in which data objects can be represented in
6656 more than one way. The key idea is to link the code that specifies the data operations to the several
6657 representations by means of generic interface procedures. Now we will see how to use this same idea
6658 not only to define operations that are generic over different representations but also to define
6659 operations that are generic over different kinds of arguments. We have already seen several different
6660 packages of arithmetic operations: the primitive arithmetic (+, -, *, /) built into our language, the
6661 rational-number arithmetic (add-rat, sub-rat, mul-rat, div-rat) of section 2.1.1, and the
6662 complex-number arithmetic that we implemented in section 2.4.3. We will now use data-directed
6663 techniques to construct a package of arithmetic operations that incorporates all the arithmetic packages
6664 we have already constructed.
6665 Figure 2.23 shows the structure of the system we shall build. Notice the abstraction barriers. From the
6666 perspective of someone using ‘‘numbers,’’ there is a single procedure add that operates on whatever
6667 numbers are supplied. Add is part of a generic interface that allows the separate ordinary-arithmetic,
6668 rational-arithmetic, and complex-arithmetic packages to be accessed uniformly by programs that use
6669 numbers. Any individual arithmetic package (such as the complex package) may itself be accessed
6670 through generic procedures (such as add-complex) that combine packages designed for different
6671 representations (such as rectangular and polar). Moreover, the structure of the system is additive, so
6672 that one can design the individual arithmetic packages separately and combine them to produce a
6673 generic arithmetic system.
6674
Figure 2.23: Generic arithmetic system.
6677
6678 2.5.1 Generic Arithmetic Operations
6679 The task of designing generic arithmetic operations is analogous to that of designing the generic
6680 complex-number operations. We would like, for instance, to have a generic addition procedure add
6681 that acts like ordinary primitive addition + on ordinary numbers, like add-rat on rational numbers,
6682
and like add-complex on complex numbers. We can implement add, and the other generic
6684 arithmetic operations, by following the same strategy we used in section 2.4.3 to implement the
6685 generic selectors for complex numbers. We will attach a type tag to each kind of number and cause the
6686 generic procedure to dispatch to an appropriate package according to the data type of its arguments.
6687 The generic arithmetic procedures are defined as follows:
(define (add x y) (apply-generic 'add x y))
(define (sub x y) (apply-generic 'sub x y))
(define (mul x y) (apply-generic 'mul x y))
(define (div x y) (apply-generic 'div x y))
6728 We begin by installing a package for handling ordinary numbers, that is, the primitive numbers of our
6729 language. We will tag these with the symbol scheme-number. The arithmetic operations in this
6730 package are the primitive arithmetic procedures (so there is no need to define extra procedures to
6731 handle the untagged numbers). Since these operations each take two arguments, they are installed in
6732 the table keyed by the list (scheme-number scheme-number):
(define (install-scheme-number-package)
  (define (tag x)
    (attach-tag 'scheme-number x))
  (put 'add '(scheme-number scheme-number)
       (lambda (x y) (tag (+ x y))))
  (put 'sub '(scheme-number scheme-number)
       (lambda (x y) (tag (- x y))))
  (put 'mul '(scheme-number scheme-number)
       (lambda (x y) (tag (* x y))))
  (put 'div '(scheme-number scheme-number)
       (lambda (x y) (tag (/ x y))))
  (put 'make 'scheme-number
       (lambda (x) (tag x)))
  'done)
6747 Users of the Scheme-number package will create (tagged) ordinary numbers by means of the
6748 procedure:
(define (make-scheme-number n)
  ((get 'make 'scheme-number) n))
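A brief interaction might look as follows, assuming attach-tag simply conses the tag onto the datum as in section 2.4.2, so that the tag shows up in the printed result:
(install-scheme-number-package)
(define a (make-scheme-number 3))
(define b (make-scheme-number 4))
(add a b)    ; => (scheme-number . 7)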
6751 Now that the framework of the generic arithmetic system is in place, we can readily include new kinds
6752 of numbers. Here is a package that performs rational arithmetic. Notice that, as a benefit of additivity,
6753 we can use without modification the rational-number code from section 2.1.1 as the internal
6754 procedures in the package:
(define (install-rational-package)
  ;; internal procedures
  (define (numer x) (car x))
  (define (denom x) (cdr x))
  (define (make-rat n d)
    (let ((g (gcd n d)))
      (cons (/ n g) (/ d g))))
  (define (add-rat x y)
    (make-rat (+ (* (numer x) (denom y))
                 (* (numer y) (denom x)))
              (* (denom x) (denom y))))
  (define (sub-rat x y)
    (make-rat (- (* (numer x) (denom y))
                 (* (numer y) (denom x)))
              (* (denom x) (denom y))))
  (define (mul-rat x y)
    (make-rat (* (numer x) (numer y))
              (* (denom x) (denom y))))
  (define (div-rat x y)
    (make-rat (* (numer x) (denom y))
              (* (denom x) (numer y))))
  ;; interface to rest of the system
  (define (tag x) (attach-tag 'rational x))
  (put 'add '(rational rational)
       (lambda (x y) (tag (add-rat x y))))
  (put 'sub '(rational rational)
       (lambda (x y) (tag (sub-rat x y))))
  (put 'mul '(rational rational)
       (lambda (x y) (tag (mul-rat x y))))
  (put 'div '(rational rational)
       (lambda (x y) (tag (div-rat x y))))
  (put 'make 'rational
       (lambda (n d) (tag (make-rat n d))))
  'done)
(define (make-rational n d)
  ((get 'make 'rational) n d))
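Again for concreteness, a short illustrative interaction (the printed forms assume the pair representation of section 2.1.1 with the rational tag consed on):
(install-rational-package)
(define one-half (make-rational 1 2))
(define one-third (make-rational 1 3))
(add one-half one-third)    ; => (rational 5 . 6), i.e. 5/6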
6792 We can install a similar package to handle complex numbers, using the tag complex. In creating the
6793 package, we extract from the table the operations make-from-real-imag and
6794 make-from-mag-ang that were defined by the rectangular and polar packages. Additivity permits
6795 us to use, as the internal operations, the same add-complex, sub-complex, mul-complex, and
6796 div-complex procedures from section 2.4.1.
(define (install-complex-package)
  ;; imported procedures from rectangular and polar packages
  (define (make-from-real-imag x y)
    ((get 'make-from-real-imag 'rectangular) x y))
  (define (make-from-mag-ang r a)
    ((get 'make-from-mag-ang 'polar) r a))
  ;; internal procedures
  (define (add-complex z1 z2)
    (make-from-real-imag (+ (real-part z1) (real-part z2))
                         (+ (imag-part z1) (imag-part z2))))
  (define (sub-complex z1 z2)
    (make-from-real-imag (- (real-part z1) (real-part z2))
                         (- (imag-part z1) (imag-part z2))))
  (define (mul-complex z1 z2)
    (make-from-mag-ang (* (magnitude z1) (magnitude z2))
                       (+ (angle z1) (angle z2))))
  (define (div-complex z1 z2)
    (make-from-mag-ang (/ (magnitude z1) (magnitude z2))
                       (- (angle z1) (angle z2))))
  ;; interface to rest of the system
  (define (tag z) (attach-tag 'complex z))
  (put 'add '(complex complex)
       (lambda (z1 z2) (tag (add-complex z1 z2))))
  (put 'sub '(complex complex)
       (lambda (z1 z2) (tag (sub-complex z1 z2))))
  (put 'mul '(complex complex)
       (lambda (z1 z2) (tag (mul-complex z1 z2))))
  (put 'div '(complex complex)
       (lambda (z1 z2) (tag (div-complex z1 z2))))
  (put 'make-from-real-imag 'complex
       (lambda (x y) (tag (make-from-real-imag x y))))
  (put 'make-from-mag-ang 'complex
       (lambda (r a) (tag (make-from-mag-ang r a))))
  'done)
6832 Programs outside the complex-number package can construct complex numbers either from real and
6833 imaginary parts or from magnitudes and angles. Notice how the underlying procedures, originally
6834 defined in the rectangular and polar packages, are exported to the complex package, and exported from
6835 there to the outside world.
(define (make-complex-from-real-imag x y)
  ((get 'make-from-real-imag 'complex) x y))
(define (make-complex-from-mag-ang r a)
  ((get 'make-from-mag-ang 'complex) r a))
6846 What we have here is a two-level tag system. A typical complex number, such as 3 + 4i in rectangular
6847 form, would be represented as shown in figure 2.24. The outer tag (complex) is used to direct the
6848 number to the complex package. Once within the complex package, the next tag (rectangular) is
6849 used to direct the number to the rectangular package. In a large and complicated system there might be
6850 many levels, each interfaced with the next by means of generic operations. As a data object is passed
6851 ‘‘downward,’’ the outer tag that is used to direct it to the appropriate package is stripped off (by
6852 applying contents) and the next level of tag (if any) becomes visible to be used for further
6853 dispatching.
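For example, with tags attached by consing as in section 2.4.2, the number of figure 2.24 and its successive untagged contents would print as follows:
(define z (make-complex-from-real-imag 3 4))
z                          ; => (complex rectangular 3 . 4)
(contents z)               ; => (rectangular 3 . 4) -- outer tag stripped
(contents (contents z))    ; => (3 . 4) -- the bare rectangular pair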
6854
Figure 2.24: Representation of 3 + 4i in rectangular form.
6857 In the above packages, we used add-rat, add-complex, and the other arithmetic procedures
6858 exactly as originally written. Once these definitions are internal to different installation procedures,
6859 however, they no longer need names that are distinct from each other: we could simply name them
6860
add, sub, mul, and div in both packages.
6862 Exercise 2.77. Louis Reasoner tries to evaluate the expression (magnitude z) where z is the
6863 object shown in figure 2.24. To his surprise, instead of the answer 5 he gets an error message from
6864 apply-generic, saying there is no method for the operation magnitude on the types
6865 (complex). He shows this interaction to Alyssa P. Hacker, who says ‘‘The problem is that the
6866 complex-number selectors were never defined for complex numbers, just for polar and
6867 rectangular numbers. All you have to do to make this work is add the following to the complex
6868 package:’’
(put 'real-part '(complex) real-part)
(put 'imag-part '(complex) imag-part)
(put 'magnitude '(complex) magnitude)
(put 'angle '(complex) angle)
6879 Describe in detail why this works. As an example, trace through all the procedures called in evaluating
6880 the expression (magnitude z) where z is the object shown in figure 2.24. In particular, how many
6881 times is apply-generic invoked? What procedure is dispatched to in each case?
6882 Exercise 2.78. The internal procedures in the scheme-number package are essentially nothing
6883 more than calls to the primitive procedures +, -, etc. It was not possible to use the primitives of the
6884 language directly because our type-tag system requires that each data object have a type attached to it.
6885 In fact, however, all Lisp implementations do have a type system, which they use internally. Primitive
6886 predicates such as symbol? and number? determine whether data objects have particular types.
6887 Modify the definitions of type-tag, contents, and attach-tag from section 2.4.2 so that our
6888 generic system takes advantage of Scheme’s internal type system. That is to say, the system should
6889 work as before except that ordinary numbers should be represented simply as Scheme numbers rather
6890 than as pairs whose car is the symbol scheme-number.
6891 Exercise 2.79. Define a generic equality predicate equ? that tests the equality of two numbers, and
6892 install it in the generic arithmetic package. This operation should work for ordinary numbers, rational
6893 numbers, and complex numbers.
6894 Exercise 2.80. Define a generic predicate =zero? that tests if its argument is zero, and install it in
6895 the generic arithmetic package. This operation should work for ordinary numbers, rational numbers,
6896 and complex numbers.
6897
6898 2.5.2 Combining Data of Different Types
6899 We have seen how to define a unified arithmetic system that encompasses ordinary numbers, complex
6900 numbers, rational numbers, and any other type of number we might decide to invent, but we have
6901 ignored an important issue. The operations we have defined so far treat the different data types as
6902 being completely independent. Thus, there are separate packages for adding, say, two ordinary
6903 numbers, or two complex numbers. What we have not yet considered is the fact that it is meaningful to
6904 define operations that cross the type boundaries, such as the addition of a complex number to an
6905 ordinary number. We have gone to great pains to introduce barriers between parts of our programs so
6906 that they can be developed and understood separately. We would like to introduce the cross-type
6907 operations in some carefully controlled way, so that we can support them without seriously violating
6908 our module boundaries.
6909
One way to handle cross-type operations is to design a different procedure for each possible
6911 combination of types for which the operation is valid. For example, we could extend the
6912 complex-number package so that it provides a procedure for adding complex numbers to ordinary
6913 numbers and installs this in the table using the tag (complex scheme-number): 49
;; to be included in the complex package
(define (add-complex-to-schemenum z x)
  (make-from-real-imag (+ (real-part z) x)
                       (imag-part z)))
(put 'add '(complex scheme-number)
     (lambda (z x) (tag (add-complex-to-schemenum z x))))
6920 This technique works, but it is cumbersome. With such a system, the cost of introducing a new type is
6921 not just the construction of the package of procedures for that type but also the construction and
6922 installation of the procedures that implement the cross-type operations. This can easily be much more
6923 code than is needed to define the operations on the type itself. The method also undermines our ability
to combine separate packages additively, or at least to limit the extent to which the implementors of the
6925 individual packages need to take account of other packages. For instance, in the example above, it
6926 seems reasonable that handling mixed operations on complex numbers and ordinary numbers should
6927 be the responsibility of the complex-number package. Combining rational numbers and complex
6928 numbers, however, might be done by the complex package, by the rational package, or by some third
6929 package that uses operations extracted from these two packages. Formulating coherent policies on the
6930 division of responsibility among packages can be an overwhelming task in designing systems with
6931 many packages and many cross-type operations.
6932
6933 Coercion
6934 In the general situation of completely unrelated operations acting on completely unrelated types,
6935 implementing explicit cross-type operations, cumbersome though it may be, is the best that one can
6936 hope for. Fortunately, we can usually do better by taking advantage of additional structure that may be
6937 latent in our type system. Often the different data types are not completely independent, and there may
6938 be ways by which objects of one type may be viewed as being of another type. This process is called
6939 coercion. For example, if we are asked to arithmetically combine an ordinary number with a complex
6940 number, we can view the ordinary number as a complex number whose imaginary part is zero. This
6941 transforms the problem to that of combining two complex numbers, which can be handled in the
6942 ordinary way by the complex-arithmetic package.
6943 In general, we can implement this idea by designing coercion procedures that transform an object of
6944 one type into an equivalent object of another type. Here is a typical coercion procedure, which
6945 transforms a given ordinary number to a complex number with that real part and zero imaginary part:
(define (scheme-number->complex n)
  (make-complex-from-real-imag (contents n) 0))
6948 We install these coercion procedures in a special coercion table, indexed under the names of the two
6949 types:
(put-coercion 'scheme-number 'complex scheme-number->complex)
6951
(We assume that there are put-coercion and get-coercion procedures available for
6953 manipulating this table.) Generally some of the slots in the table will be empty, because it is not
6954 generally possible to coerce an arbitrary data object of each type into all other types. For example,
6955 there is no way to coerce an arbitrary complex number to an ordinary number, so there will be no
6956 general complex->scheme-number procedure included in the table.
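As with put and get, one minimal stand-in for this table is sketched below; the name *coercion-table* is invented for the illustration and is not a definition given in the text:
(define *coercion-table* '())        ; list of ((from-type to-type) . proc)

(define (put-coercion type1 type2 proc)
  (set! *coercion-table*
        (cons (cons (list type1 type2) proc) *coercion-table*)))

(define (get-coercion type1 type2)
  (let ((entry (assoc (list type1 type2) *coercion-table*)))
    (if entry (cdr entry) false)))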
6957 Once the coercion table has been set up, we can handle coercion in a uniform manner by modifying
6958 the apply-generic procedure of section 2.4.3. When asked to apply an operation, we first check
6959 whether the operation is defined for the arguments’ types, just as before. If so, we dispatch to the
6960 procedure found in the operation-and-type table. Otherwise, we try coercion. For simplicity, we
6961 consider only the case where there are two arguments. 50 We check the coercion table to see if objects
6962 of the first type can be coerced to the second type. If so, we coerce the first argument and try the
6963 operation again. If objects of the first type cannot in general be coerced to the second type, we try the
6964 coercion the other way around to see if there is a way to coerce the second argument to the type of the
6965 first argument. Finally, if there is no known way to coerce either type to the other type, we give up.
6966 Here is the procedure:
(define (apply-generic op . args)
  (let ((type-tags (map type-tag args)))
    (let ((proc (get op type-tags)))
      (if proc
          (apply proc (map contents args))
          (if (= (length args) 2)
              (let ((type1 (car type-tags))
                    (type2 (cadr type-tags))
                    (a1 (car args))
                    (a2 (cadr args)))
                (let ((t1->t2 (get-coercion type1 type2))
                      (t2->t1 (get-coercion type2 type1)))
                  (cond (t1->t2
                         (apply-generic op (t1->t2 a1) a2))
                        (t2->t1
                         (apply-generic op a1 (t2->t1 a2)))
                        (else
                         (error "No method for these types"
                                (list op type-tags))))))
              (error "No method for these types"
                     (list op type-tags)))))))
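As an illustration, assuming the packages above are installed and the scheme-number->complex coercion has been put in the table, a mixed-type addition now goes through:
(add (make-scheme-number 2) (make-complex-from-real-imag 3 4))
; no entry for (scheme-number complex), so the first argument is
; coerced to (complex rectangular 2 . 0) and apply-generic is retried,
; this time finding the complex package's add: (complex rectangular 5 . 4)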
6988 This coercion scheme has many advantages over the method of defining explicit cross-type operations,
as outlined above. Although we still need to write coercion procedures to relate the types (possibly n²
procedures for a system with n types), we need to write only one procedure for each pair of types
6991 rather than a different procedure for each collection of types and each generic operation. 51 What we
6992 are counting on here is the fact that the appropriate transformation between types depends only on the
6993 types themselves, not on the operation to be applied.
6994 On the other hand, there may be applications for which our coercion scheme is not general enough.
6995 Even when neither of the objects to be combined can be converted to the type of the other it may still
6996 be possible to perform the operation by converting both objects to a third type. In order to deal with
6997 such complexity and still preserve modularity in our programs, it is usually necessary to build systems
6998 that take advantage of still further structure in the relations among types, as we discuss next.
6999
Hierarchies of types
7001 The coercion scheme presented above relied on the existence of natural relations between pairs of
7002 types. Often there is more ‘‘global’’ structure in how the different types relate to each other. For
7003 instance, suppose we are building a generic arithmetic system to handle integers, rational numbers,
7004 real numbers, and complex numbers. In such a system, it is quite natural to regard an integer as a
7005 special kind of rational number, which is in turn a special kind of real number, which is in turn a
7006 special kind of complex number. What we actually have is a so-called hierarchy of types, in which, for
7007 example, integers are a subtype of rational numbers (i.e., any operation that can be applied to a rational
7008 number can automatically be applied to an integer). Conversely, we say that rational numbers form a
7009 supertype of integers. The particular hierarchy we have here is of a very simple kind, in which each
7010 type has at most one supertype and at most one subtype. Such a structure, called a tower, is illustrated
7011 in figure 2.25.
7012
Figure 2.25: A tower of types.
7015 If we have a tower structure, then we can greatly simplify the problem of adding a new type to the
7016 hierarchy, for we need only specify how the new type is embedded in the next supertype above it and
7017 how it is the supertype of the type below it. For example, if we want to add an integer to a complex
7018 number, we need not explicitly define a special coercion procedure integer->complex. Instead,
7019 we define how an integer can be transformed into a rational number, how a rational number is
7020 transformed into a real number, and how a real number is transformed into a complex number. We
7021 then allow the system to transform the integer into a complex number through these steps and then add
7022 the two complex numbers.
7023 We can redesign our apply-generic procedure in the following way: For each type, we need to
7024 supply a raise procedure, which ‘‘raises’’ objects of that type one level in the tower. Then when the
7025 system is required to operate on objects of different types it can successively raise the lower types until
7026 all the objects are at the same level in the tower. (Exercises 2.83 and 2.84 concern the details of
7027 implementing such a strategy.)
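By way of illustration only (the details are deliberately left to exercises 2.83 and 2.84), one rung of such a ladder might look like this, assuming an integer type from the tower of figure 2.25 and the make-rational constructor of section 2.5.1; recall that apply-generic strips the tag, so the installed procedure receives the bare integer:
(define (raise x) (apply-generic 'raise x))
(put 'raise '(integer)
     (lambda (n) (make-rational n 1)))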
7028 Another advantage of a tower is that we can easily implement the notion that every type ‘‘inherits’’ all
7029 operations defined on a supertype. For instance, if we do not supply a special procedure for finding the
7030 real part of an integer, we should nevertheless expect that real-part will be defined for integers by
7031 virtue of the fact that integers are a subtype of complex numbers. In a tower, we can arrange for this to
7032 happen in a uniform way by modifying apply-generic. If the required operation is not directly
7033 defined for the type of the object given, we raise the object to its supertype and try again. We thus
7034
crawl up the tower, transforming our argument as we go, until we either find a level at which the
7036 desired operation can be performed or hit the top (in which case we give up).
7037 Yet another advantage of a tower over a more general hierarchy is that it gives us a simple way to
7038 ‘‘lower’’ a data object to the simplest representation. For example, if we add 2 + 3i to 4 - 3i, it would
7039 be nice to obtain the answer as the integer 6 rather than as the complex number 6 + 0i. Exercise 2.85
7040 discusses a way to implement such a lowering operation. (The trick is that we need a general way to
7041 distinguish those objects that can be lowered, such as 6 + 0i, from those that cannot, such as 6 + 2i.)
7042
Figure 2.26: Relations among types of geometric figures.
7045
7046 Inadequacies of hierarchies
7047 If the data types in our system can be naturally arranged in a tower, this greatly simplifies the
7048 problems of dealing with generic operations on different types, as we have seen. Unfortunately, this is
7049 usually not the case. Figure 2.26 illustrates a more complex arrangement of mixed types, this one
7050 showing relations among different types of geometric figures. We see that, in general, a type may have
7051 more than one subtype. Triangles and quadrilaterals, for instance, are both subtypes of polygons. In
7052 addition, a type may have more than one supertype. For example, an isosceles right triangle may be
7053 regarded either as an isosceles triangle or as a right triangle. This multiple-supertypes issue is
7054 particularly thorny, since it means that there is no unique way to ‘‘raise’’ a type in the hierarchy.
7055 Finding the ‘‘correct’’ supertype in which to apply an operation to an object may involve considerable
7056 searching through the entire type network on the part of a procedure such as apply-generic. Since
7057 there generally are multiple subtypes for a type, there is a similar problem in coercing a value ‘‘down’’
7058 the type hierarchy. Dealing with large numbers of interrelated types while still preserving modularity
7059
in the design of large systems is very difficult, and is an area of much current research. 52
7061 Exercise 2.81. Louis Reasoner has noticed that apply-generic may try to coerce the arguments
7062 to each other’s type even if they already have the same type. Therefore, he reasons, we need to put
7063 procedures in the coercion table to "coerce" arguments of each type to their own type. For example, in
7064 addition to the scheme-number->complex coercion shown above, he would do:
(define (scheme-number->scheme-number n) n)
(define (complex->complex z) z)
(put-coercion 'scheme-number 'scheme-number
              scheme-number->scheme-number)
(put-coercion 'complex 'complex complex->complex)
7070 a. With Louis’s coercion procedures installed, what happens if apply-generic is called with two
7071 arguments of type scheme-number or two arguments of type complex for an operation that is not
7072 found in the table for those types? For example, assume that we’ve defined a generic exponentiation
7073 operation:
(define (exp x y) (apply-generic 'exp x y))
7075 and have put a procedure for exponentiation in the Scheme-number package but not in any other
7076 package:
;; following added to Scheme-number package
(put 'exp '(scheme-number scheme-number)
     (lambda (x y) (tag (expt x y)))) ; using primitive expt
7080 What happens if we call exp with two complex numbers as arguments?
7081 b. Is Louis correct that something had to be done about coercion with arguments of the same type, or
7082 does apply-generic work correctly as is?
7083 c. Modify apply-generic so that it doesn’t try coercion if the two arguments have the same type.
7084 Exercise 2.82. Show how to generalize apply-generic to handle coercion in the general case of
7085 multiple arguments. One strategy is to attempt to coerce all the arguments to the type of the first
7086 argument, then to the type of the second argument, and so on. Give an example of a situation where
7087 this strategy (and likewise the two-argument version given above) is not sufficiently general. (Hint:
7088 Consider the case where there are some suitable mixed-type operations present in the table that will
7089 not be tried.)
7090 Exercise 2.83. Suppose you are designing a generic arithmetic system for dealing with the tower of
7091 types shown in figure 2.25: integer, rational, real, complex. For each type (except complex), design a
7092 procedure that raises objects of that type one level in the tower. Show how to install a generic raise
7093 operation that will work for each type (except complex).
7094 Exercise 2.84. Using the raise operation of exercise 2.83, modify the apply-generic procedure
7095 so that it coerces its arguments to have the same type by the method of successive raising, as discussed
7096 in this section. You will need to devise a way to test which of two types is higher in the tower. Do this
7097 in a manner that is ‘‘compatible’’ with the rest of the system and will not lead to problems in adding
7098 new levels to the tower.
7099
Exercise 2.85. This section mentioned a method for ‘‘simplifying’’ a data object by lowering it in the
7101 tower of types as far as possible. Design a procedure drop that accomplishes this for the tower
7102 described in exercise 2.83. The key is to decide, in some general way, whether an object can be
7103 lowered. For example, the complex number 1.5 + 0i can be lowered as far as real, the complex
7104 number 1 + 0i can be lowered as far as integer, and the complex number 2 + 3i cannot be lowered
7105 at all. Here is a plan for determining whether an object can be lowered: Begin by defining a generic
7106 operation project that ‘‘pushes’’ an object down in the tower. For example, projecting a complex
7107 number would involve throwing away the imaginary part. Then a number can be dropped if, when we
7108 project it and raise the result back to the type we started with, we end up with something equal
7109 to what we started with. Show how to implement this idea in detail, by writing a drop procedure that
7110 drops an object as far as possible. You will need to design the various projection operations 53 and
7111 install project as a generic operation in the system. You will also need to make use of a generic
7112 equality predicate, such as described in exercise 2.79. Finally, use drop to rewrite apply-generic
7113 from exercise 2.84 so that it ‘‘simplifies’’ its answers.
7114 Exercise 2.86. Suppose we want to handle complex numbers whose real parts, imaginary parts,
7115 magnitudes, and angles can be either ordinary numbers, rational numbers, or other numbers we might
7116 wish to add to the system. Describe and implement the changes to the system needed to accommodate
7117 this. You will have to define operations such as sine and cosine that are generic over ordinary
7118 numbers and rational numbers.
7119
7120 2.5.3 Example: Symbolic Algebra
7121 The manipulation of symbolic algebraic expressions is a complex process that illustrates many of the
7122 hardest problems that occur in the design of large-scale systems. An algebraic expression, in general,
7123 can be viewed as a hierarchical structure, a tree of operators applied to operands. We can construct
7124 algebraic expressions by starting with a set of primitive objects, such as constants and variables, and
7125 combining these by means of algebraic operators, such as addition and multiplication. As in other
7126 languages, we form abstractions that enable us to refer to compound objects in simple terms. Typical
7127 abstractions in symbolic algebra are ideas such as linear combination, polynomial, rational function, or
7128 trigonometric function. We can regard these as compound ‘‘types,’’ which are often useful for
7129 directing the processing of expressions. For example, we could describe the expression
x² sin(y² + 1) + x cos 2y + cos(y³ - 2y²)
7131 as a polynomial in x with coefficients that are trigonometric functions of polynomials in y whose
7132 coefficients are integers.
7133 We will not attempt to develop a complete algebraic-manipulation system here. Such systems are
7134 exceedingly complex programs, embodying deep algebraic knowledge and elegant algorithms. What
7135 we will do is look at a simple but important part of algebraic manipulation: the arithmetic of
7136 polynomials. We will illustrate the kinds of decisions the designer of such a system faces, and how to
7137 apply the ideas of abstract data and generic operations to help organize this effort.
7138
7139 Arithmetic on polynomials
7140 Our first task in designing a system for performing arithmetic on polynomials is to decide just what a
7141 polynomial is. Polynomials are normally defined relative to certain variables (the indeterminates of the
7142 polynomial). For simplicity, we will restrict ourselves to polynomials having just one indeterminate
7143 (univariate polynomials). 54 We will define a polynomial to be a sum of terms, each of which is either
7144 a coefficient, a power of the indeterminate, or a product of a coefficient and a power of the
7145
indeterminate. A coefficient is defined as an algebraic expression that is not dependent upon the
7147 indeterminate of the polynomial. For example,
5x² + 3x + 7
7149 is a simple polynomial in x, and
(y² + 1)x³ + (2y)x + 1
7151 is a polynomial in x whose coefficients are polynomials in y.
7152 Already we are skirting some thorny issues. Is the first of these polynomials the same as the
polynomial 5y² + 3y + 7, or not? A reasonable answer might be ‘‘yes, if we are considering a
7154 polynomial purely as a mathematical function, but no, if we are considering a polynomial to be a
7155 syntactic form.’’ The second polynomial is algebraically equivalent to a polynomial in y whose
7156 coefficients are polynomials in x. Should our system recognize this, or not? Furthermore, there are
7157 other ways to represent a polynomial -- for example, as a product of factors, or (for a univariate
7158 polynomial) as the set of roots, or as a listing of the values of the polynomial at a specified set of
7159 points. 55 We can finesse these questions by deciding that in our algebraic-manipulation system a
7160 ‘‘polynomial’’ will be a particular syntactic form, not its underlying mathematical meaning.
7161 Now we must consider how to go about doing arithmetic on polynomials. In this simple system, we
7162 will consider only addition and multiplication. Moreover, we will insist that two polynomials to be
7163 combined must have the same indeterminate.
7164 We will approach the design of our system by following the familiar discipline of data abstraction. We
7165 will represent polynomials using a data structure called a poly, which consists of a variable and a
7166 collection of terms. We assume that we have selectors variable and term-list that extract those
7167 parts from a poly and a constructor make-poly that assembles a poly from a given variable and a
7168 term list. A variable will be just a symbol, so we can use the same-variable? procedure of
7169 section 2.3.2 to compare variables. The following procedures define addition and multiplication of
7170 polys:
(define (add-poly p1 p2)
  (if (same-variable? (variable p1) (variable p2))
      (make-poly (variable p1)
                 (add-terms (term-list p1)
                            (term-list p2)))
      (error "Polys not in same var -- ADD-POLY"
             (list p1 p2))))
(define (mul-poly p1 p2)
  (if (same-variable? (variable p1) (variable p2))
      (make-poly (variable p1)
                 (mul-terms (term-list p1)
                            (term-list p2)))
      (error "Polys not in same var -- MUL-POLY"
             (list p1 p2))))
7185 To incorporate polynomials into our generic arithmetic system, we need to supply them with type tags.
7186 We’ll use the tag polynomial, and install appropriate operations on tagged polynomials in the
7187 operation table. We’ll embed all our code in an installation procedure for the polynomial package,
7188 similar to the ones in section 2.5.1:
7189
(define (install-polynomial-package)
  ;; internal procedures
  ;; representation of poly
  (define (make-poly variable term-list)
    (cons variable term-list))
  (define (variable p) (car p))
  (define (term-list p) (cdr p))
  <procedures same-variable? and variable? from section 2.3.2>
  ;; representation of terms and term lists
  <procedures adjoin-term ...coeff from text below>
  (define (add-poly p1 p2) ...)
  <procedures used by add-poly>
  (define (mul-poly p1 p2) ...)
  <procedures used by mul-poly>
  ;; interface to rest of the system
  (define (tag p) (attach-tag 'polynomial p))
  (put 'add '(polynomial polynomial)
       (lambda (p1 p2) (tag (add-poly p1 p2))))
  (put 'mul '(polynomial polynomial)
       (lambda (p1 p2) (tag (mul-poly p1 p2))))
  (put 'make 'polynomial
       (lambda (var terms) (tag (make-poly var terms))))
  'done)
7214 Polynomial addition is performed termwise. Terms of the same order (i.e., with the same power of the
7215 indeterminate) must be combined. This is done by forming a new term of the same order whose
7216 coefficient is the sum of the coefficients of the addends. Terms in one addend for which there are no
7217 terms of the same order in the other addend are simply accumulated into the sum polynomial being
7218 constructed.
7219 In order to manipulate term lists, we will assume that we have a constructor
7220 the-empty-termlist that returns an empty term list and a constructor adjoin-term that
7221 adjoins a new term to a term list. We will also assume that we have a predicate empty-termlist?
7222 that tells if a given term list is empty, a selector first-term that extracts the highest-order term
7223 from a term list, and a selector rest-terms that returns all but the highest-order term. To
7224 manipulate terms, we will suppose that we have a constructor make-term that constructs a term with
7225 given order and coefficient, and selectors order and coeff that return, respectively, the order and
7226 the coefficient of the term. These operations allow us to consider both terms and term lists as data
7227 abstractions, whose concrete representations we can worry about separately.
7228 Here is the procedure that constructs the term list for the sum of two polynomials: 56
(define (add-terms L1 L2)
  (cond ((empty-termlist? L1) L2)
        ((empty-termlist? L2) L1)
        (else
         (let ((t1 (first-term L1)) (t2 (first-term L2)))
           (cond ((> (order t1) (order t2))
                  (adjoin-term
                   t1 (add-terms (rest-terms L1) L2)))
                 ((< (order t1) (order t2))
                  (adjoin-term
                   t2 (add-terms L1 (rest-terms L2))))
                 (else
                  (adjoin-term
                   (make-term (order t1)
                              (add (coeff t1) (coeff t2)))
                   (add-terms (rest-terms L1)
                              (rest-terms L2)))))))))
7247 The most important point to note here is that we used the generic addition procedure add to add
7248 together the coefficients of the terms being combined. This has powerful consequences, as we will see
7249 below.
7250 In order to multiply two term lists, we multiply each term of the first list by all the terms of the other
7251 list, repeatedly using mul-term-by-all-terms, which multiplies a given term by all terms in a
7252 given term list. The resulting term lists (one for each term of the first list) are accumulated into a sum.
7253 Multiplying two terms forms a term whose order is the sum of the orders of the factors and whose
7254 coefficient is the product of the coefficients of the factors:
(define (mul-terms L1 L2)
  (if (empty-termlist? L1)
      (the-empty-termlist)
      (add-terms (mul-term-by-all-terms (first-term L1) L2)
                 (mul-terms (rest-terms L1) L2))))

(define (mul-term-by-all-terms t1 L)
  (if (empty-termlist? L)
      (the-empty-termlist)
      (let ((t2 (first-term L)))
        (adjoin-term
         (make-term (+ (order t1) (order t2))
                    (mul (coeff t1) (coeff t2)))
         (mul-term-by-all-terms t1 (rest-terms L))))))
7268 This is really all there is to polynomial addition and multiplication. Notice that, since we operate on
7269 terms using the generic procedures add and mul, our polynomial package is automatically able to
7270 handle any type of coefficient that is known about by the generic arithmetic package. If we include a
7271 coercion mechanism such as one of those discussed in section 2.5.2, then we also are automatically
able to handle operations on polynomials of different coefficient types, such as

[3x^2 + (2 + 3i)x + 7] · [x^4 + (2/3)x^2 + (5 + 3i)]

7274 Because we installed the polynomial addition and multiplication procedures add-poly and
7275 mul-poly in the generic arithmetic system as the add and mul operations for type polynomial,
our system is also automatically able to handle polynomial operations such as

[(y + 1)x^2 + (y^2 + 1)x + (y - 1)] · [(y - 2)x + (y^3 + 7)]

The reason is that when the system tries to combine coefficients, it will dispatch through add and
7279 mul. Since the coefficients are themselves polynomials (in y), these will be combined using
7280 add-poly and mul-poly. The result is a kind of ‘‘data-directed recursion’’ in which, for example,
7281 a call to mul-poly will result in recursive calls to mul-poly in order to multiply the coefficients. If
7282 the coefficients of the coefficients were themselves polynomials (as might be used to represent
7283 polynomials in three variables), the data direction would ensure that the system would follow through
7284 another level of recursive calls, and so on through as many levels as the structure of the data
7285 dictates. 57
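For instance, such a nested polynomial can be built directly with the constructors of this section (a minimal sketch; it assumes the polynomial package above is installed and uses the make-polynomial constructor defined later in this section):

(define p-a (make-polynomial 'y '((1 1) (0 1))))    ; y + 1
(define p-b (make-polynomial 'y '((1 1) (0 -2))))   ; y - 2
;; (y + 1)x^2 + (y - 2): a polynomial in x whose coefficients
;; are themselves (tagged) polynomials in y
(define pxy (make-polynomial 'x (list (list 2 p-a) (list 0 p-b))))
;; squaring pxy dispatches back into mul-poly and add-poly
;; to combine the y-coefficients
(mul pxy pxy)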
7286
7287 Representing term lists
7288 Finally, we must confront the job of implementing a good representation for term lists. A term list is,
7289 in effect, a set of coefficients keyed by the order of the term. Hence, any of the methods for
7290 representing sets, as discussed in section 2.3.3, can be applied to this task. On the other hand, our
7291 procedures add-terms and mul-terms always access term lists sequentially from highest to
7292 lowest order. Thus, we will use some kind of ordered list representation.
7293 How should we structure the list that represents a term list? One consideration is the ‘‘density’’ of the
7294 polynomials we intend to manipulate. A polynomial is said to be dense if it has nonzero coefficients in
terms of most orders. If it has many zero terms it is said to be sparse. For example,

A: x^5 + 2x^4 + 3x^2 - 2x - 5

is a dense polynomial, whereas

B: x^100 + 2x^2 + 1

is sparse.
7300 The term lists of dense polynomials are most efficiently represented as lists of the coefficients. For
7301 example, A above would be nicely represented as (1 2 0 3 -2 -5). The order of a term in this
7302 representation is the length of the sublist beginning with that term’s coefficient, decremented by 1. 58
7303 This would be a terrible representation for a sparse polynomial such as B: There would be a giant list
7304 of zeros punctuated by a few lonely nonzero terms. A more reasonable representation of the term list
7305 of a sparse polynomial is as a list of the nonzero terms, where each term is a list containing the order
7306 of the term and the coefficient for that order. In such a scheme, polynomial B is efficiently represented
7307 as ((100 1) (2 2) (0 1)). As most polynomial manipulations are performed on sparse
7308 polynomials, we will use this method. We will assume that term lists are represented as lists of terms,
7309 arranged from highest-order to lowest-order term. Once we have made this decision, implementing the
7310 selectors and constructors for terms and term lists is straightforward: 59
(define (adjoin-term term term-list)
  (if (=zero? (coeff term))
      term-list
      (cons term term-list)))

(define (the-empty-termlist) '())
(define (first-term term-list) (car term-list))
(define (rest-terms term-list) (cdr term-list))
(define (empty-termlist? term-list) (null? term-list))

(define (make-term order coeff) (list order coeff))
(define (order term) (car term))
(define (coeff term) (cadr term))
where =zero? is as defined in exercise 2.80. (See also exercise 2.87 below.)
7324 Users of the polynomial package will create (tagged) polynomials by means of the procedure:
(define (make-polynomial var terms)
  ((get 'make 'polynomial) var terms))
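For example (a sketch of an interaction, assuming the generic operations of section 2.5.1 with this package installed):

(define p (make-polynomial 'x '((2 1) (0 -1))))   ; x^2 - 1
(define q (make-polynomial 'x '((1 2) (0 1))))    ; 2x + 1
(add p q)
;; a polynomial with term list ((2 1) (1 2)); the zero constant
;; term -1 + 1 is dropped by adjoin-term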
7327 Exercise 2.87. Install =zero? for polynomials in the generic arithmetic package. This will allow
7328 adjoin-term to work for polynomials with coefficients that are themselves polynomials.
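One possible approach (a sketch, not the book's solution): a poly is zero precisely when every coefficient in its term list is zero, which we can test with the generic =zero? on the coefficients. Inside install-polynomial-package:

;; a term list is all zero if it is empty or every coefficient is zero
(define (all-terms-zero? terms)
  (or (empty-termlist? terms)
      (and (=zero? (coeff (first-term terms)))
           (all-terms-zero? (rest-terms terms)))))
(put '=zero? '(polynomial)
     (lambda (p) (all-terms-zero? (term-list p))))

With the adjoin-term given above, zero coefficients never actually appear in a term list, so testing empty-termlist? alone would usually suffice; checking each coefficient is the more defensive version.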
7329 Exercise 2.88. Extend the polynomial system to include subtraction of polynomials. (Hint: You may
7330 find it helpful to define a generic negation operation.)
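A sketch of one approach (the names are ours, not the book's): install a generic negate operation for each number type, use it to negate every coefficient of a term list, and define subtraction as adding the negation:

;; negate is the hypothetical generic negation operation
;; suggested by the hint
(define (negate-terms terms)
  (if (empty-termlist? terms)
      (the-empty-termlist)
      (let ((t (first-term terms)))
        (adjoin-term (make-term (order t) (negate (coeff t)))
                     (negate-terms (rest-terms terms))))))
(define (sub-poly p1 p2)
  (add-poly p1 (make-poly (variable p2)
                          (negate-terms (term-list p2)))))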
7331 Exercise 2.89. Define procedures that implement the term-list representation described above as
7332 appropriate for dense polynomials.
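For instance (a sketch, not the book's solution), with a dense term list represented as a list of coefficients ordered from highest to lowest, the order of the first term can be recovered from the length of the list:

;; dense representation: (1 2 0 3 -2 -5) stands for
;; x^5 + 2x^4 + 3x^2 - 2x - 5
(define (first-term term-list)
  (make-term (- (length term-list) 1) (car term-list)))
(define (rest-terms term-list) (cdr term-list))
(define (adjoin-term term term-list)
  (cond ((=zero? (coeff term)) term-list)
        ((= (order term) (length term-list))
         (cons (coeff term) term-list))
        (else (adjoin-term term (cons 0 term-list)))))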
7333 Exercise 2.90. Suppose we want to have a polynomial system that is efficient for both sparse and
7334 dense polynomials. One way to do this is to allow both kinds of term-list representations in our
7335 system. The situation is analogous to the complex-number example of section 2.4, where we allowed
7336 both rectangular and polar representations. To do this we must distinguish different types of term lists
7337 and make the operations on term lists generic. Redesign the polynomial system to implement this
7338 generalization. This is a major effort, not a local change.
7339 Exercise 2.91. A univariate polynomial can be divided by another one to produce a polynomial
quotient and a polynomial remainder. For example,

(x^5 - 1) / (x^2 - 1) = x^3 + x, remainder x - 1

7342 Division can be performed via long division. That is, divide the highest-order term of the dividend by
7343 the highest-order term of the divisor. The result is the first term of the quotient. Next, multiply the
7344 result by the divisor, subtract that from the dividend, and produce the rest of the answer by recursively
7345 dividing the difference by the divisor. Stop when the order of the divisor exceeds the order of the
7346 dividend and declare the dividend to be the remainder. Also, if the dividend ever becomes zero, return
7347 zero as both quotient and remainder.
7348 We can design a div-poly procedure on the model of add-poly and mul-poly. The procedure
7349 checks to see if the two polys have the same variable. If so, div-poly strips off the variable and
7350 passes the problem to div-terms, which performs the division operation on term lists. Div-poly
7351 finally reattaches the variable to the result supplied by div-terms. It is convenient to design
7352 div-terms to compute both the quotient and the remainder of a division. Div-terms can take two
7353 term lists as arguments and return a list of the quotient term list and the remainder term list.
7354 Complete the following definition of div-terms by filling in the missing expressions. Use this to
7355 implement div-poly, which takes two polys as arguments and returns a list of the quotient and
7356 remainder polys.
(define (div-terms L1 L2)
  (if (empty-termlist? L1)
      (list (the-empty-termlist) (the-empty-termlist))
      (let ((t1 (first-term L1))
            (t2 (first-term L2)))
        (if (> (order t2) (order t1))
            (list (the-empty-termlist) L1)
            (let ((new-c (div (coeff t1) (coeff t2)))
                  (new-o (- (order t1) (order t2))))
              (let ((rest-of-result
                     <compute rest of result recursively>
                     ))
                <form complete result>
                ))))))
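For reference, one possible way to fill in the blanks (a sketch, not the book's answer; sub-terms is the term-list subtraction from exercise 2.88):

;; divide the rest recursively after subtracting the scaled divisor,
;; then adjoin the new quotient term
(let ((rest-of-result
       (div-terms (sub-terms L1
                             (mul-term-by-all-terms
                              (make-term new-o new-c) L2))
                  L2)))
  (list (adjoin-term (make-term new-o new-c)
                     (car rest-of-result))
        (cadr rest-of-result)))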
7372
7373 Hierarchies of types in symbolic algebra
7374 Our polynomial system illustrates how objects of one type (polynomials) may in fact be complex
7375 objects that have objects of many different types as parts. This poses no real difficulty in defining
7376 generic operations. We need only install appropriate generic operations for performing the necessary
7377 manipulations of the parts of the compound types. In fact, we saw that polynomials form a kind of
7378 ‘‘recursive data abstraction,’’ in that parts of a polynomial may themselves be polynomials. Our
7379 generic operations and our data-directed programming style can handle this complication without
7380 much trouble.
7381 On the other hand, polynomial algebra is a system for which the data types cannot be naturally
7382 arranged in a tower. For instance, it is possible to have polynomials in x whose coefficients are
7383 polynomials in y. It is also possible to have polynomials in y whose coefficients are polynomials in x.
7384 Neither of these types is ‘‘above’’ the other in any natural way, yet it is often necessary to add together
7385 elements from each set. There are several ways to do this. One possibility is to convert one polynomial
7386 to the type of the other by expanding and rearranging terms so that both polynomials have the same
7387 principal variable. One can impose a towerlike structure on this by ordering the variables and thus
7388 always converting any polynomial to a ‘‘canonical form’’ with the highest-priority variable dominant
7389 and the lower-priority variables buried in the coefficients. This strategy works fairly well, except that
7390 the conversion may expand a polynomial unnecessarily, making it hard to read and perhaps less
7391 efficient to work with. The tower strategy is certainly not natural for this domain or for any domain
7392 where the user can invent new types dynamically using old types in various combining forms, such as
7393 trigonometric functions, power series, and integrals.
7394 It should not be surprising that controlling coercion is a serious problem in the design of large-scale
7395 algebraic-manipulation systems. Much of the complexity of such systems is concerned with
7396 relationships among diverse types. Indeed, it is fair to say that we do not yet completely understand
7397 coercion. In fact, we do not yet completely understand the concept of a data type. Nevertheless, what
7398 we know provides us with powerful structuring and modularity principles to support the design of
7399 large systems.
7400 Exercise 2.92. By imposing an ordering on variables, extend the polynomial package so that addition
7401 and multiplication of polynomials works for polynomials in different variables. (This is not easy!)
7402
7403 Extended exercise: Rational functions
7404 We can extend our generic arithmetic system to include rational functions. These are ‘‘fractions’’
whose numerator and denominator are polynomials, such as

(x + 1) / (x^3 - 1)

The system should be able to add, subtract, multiply, and divide rational functions, and to perform such computations as

(x + 1)/(x^3 - 1) + x/(x^2 - 1) = (x^3 + 2x^2 + 3x + 1)/(x^4 + x^3 - x - 1)

7410 (Here the sum has been simplified by removing common factors. Ordinary ‘‘cross multiplication’’
7411 would have produced a fourth-degree polynomial over a fifth-degree polynomial.)
7412 If we modify our rational-arithmetic package so that it uses generic operations, then it will do what we
7413 want, except for the problem of reducing fractions to lowest terms.
7414 Exercise 2.93. Modify the rational-arithmetic package to use generic operations, but change
7415 make-rat so that it does not attempt to reduce fractions to lowest terms. Test your system by calling
7416 make-rational on two polynomials to produce a rational function
(define p1 (make-polynomial 'x '((2 1) (0 1))))
(define p2 (make-polynomial 'x '((3 1) (0 1))))
(define rf (make-rational p2 p1))
7420 Now add rf to itself, using add. You will observe that this addition procedure does not reduce
7421 fractions to lowest terms.
7422 We can reduce polynomial fractions to lowest terms using the same idea we used with integers:
7423 modifying make-rat to divide both the numerator and the denominator by their greatest common
7424 divisor. The notion of ‘‘greatest common divisor’’ makes sense for polynomials. In fact, we can
7425 compute the GCD of two polynomials using essentially the same Euclid’s Algorithm that works for
7426 integers. 60 The integer version is
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
7431 Using this, we could make the obvious modification to define a GCD operation that works on term
7432 lists:
(define (gcd-terms a b)
  (if (empty-termlist? b)
      a
      (gcd-terms b (remainder-terms a b))))
7437 where remainder-terms picks out the remainder component of the list returned by the term-list
7438 division operation div-terms that was implemented in exercise 2.91.
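Since div-terms returns the quotient and the remainder as a two-element list, remainder-terms is presumably just a selector on that result (a one-line sketch; exercise 2.94 below asks you to implement it):

(define (remainder-terms a b)
  (cadr (div-terms a b)))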
7439 Exercise 2.94. Using div-terms, implement the procedure remainder-terms and use this to
7440 define gcd-terms as above. Now write a procedure gcd-poly that computes the polynomial GCD
7441 of two polys. (The procedure should signal an error if the two polys are not in the same variable.)
Install in the system a generic operation greatest-common-divisor that reduces to gcd-poly for polynomials and to ordinary gcd for ordinary numbers. As a test, try

(define p1 (make-polynomial 'x '((4 1) (3 -1) (2 -2) (1 2))))
(define p2 (make-polynomial 'x '((3 1) (1 -1))))
(greatest-common-divisor p1 p2)
7448 and check your result by hand.
Exercise 2.95. Define P1, P2, and P3 to be the polynomials

P1: x^2 - 2x + 1
P2: 11x^2 + 7
P3: 13x + 5

Now define Q1 to be the product of P1 and P2 and Q2 to be the product of P1 and P3, and use greatest-common-divisor (exercise 2.94) to compute the GCD of Q1 and Q2. Note that the answer is not the same as P1. This example introduces noninteger operations into the computation, causing difficulties with the GCD algorithm. 61 To understand what is happening, try tracing gcd-terms while computing the GCD or try performing the division by hand.
7456 We can solve the problem exhibited in exercise 2.95 if we use the following modification of the GCD
7457 algorithm (which really works only in the case of polynomials with integer coefficients). Before
7458 performing any polynomial division in the GCD computation, we multiply the dividend by an integer
7459 constant factor, chosen to guarantee that no fractions will arise during the division process. Our answer
7460 will thus differ from the actual GCD by an integer constant factor, but this does not matter in the case
7461 of reducing rational functions to lowest terms; the GCD will be used to divide both the numerator and
7462 denominator, so the integer constant factor will cancel out.
More precisely, if P and Q are polynomials, let O1 be the order of P (i.e., the order of the largest term of P) and let O2 be the order of Q. Let c be the leading coefficient of Q. Then it can be shown that, if we multiply P by the integerizing factor c^(1+O1-O2), the resulting polynomial can be divided by Q by using the div-terms algorithm without introducing any fractions. The operation of multiplying the dividend by this constant and then dividing is sometimes called the pseudodivision of P by Q. The remainder of the division is called the pseudoremainder.
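One way such a procedure might look (a sketch; exercise 2.96 below asks you to implement and refine it): scale the dividend by the integerizing factor c^(1+O1-O2) and take the remainder of the ordinary division.

(define (pseudoremainder-terms a b)
  (let ((o1 (order (first-term a)))
        (o2 (order (first-term b)))
        (c (coeff (first-term b))))
    (let ((factor (expt c (+ 1 (- o1 o2)))))
      ;; multiply every term of a by the constant factor, then divide
      (cadr (div-terms
             (mul-term-by-all-terms (make-term 0 factor) a)
             b)))))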
7469 Exercise 2.96. a. Implement the procedure pseudoremainder-terms, which is just like
7470 remainder-terms except that it multiplies the dividend by the integerizing factor described above
7471 before calling div-terms. Modify gcd-terms to use pseudoremainder-terms, and verify
7472 that greatest-common-divisor now produces an answer with integer coefficients on the
7473 example in exercise 2.95.
b. The GCD now has integer coefficients, but they are larger than those of P1. Modify gcd-terms so that it removes common factors from the coefficients of the answer by dividing all the coefficients by their (integer) greatest common divisor.
Thus, here is how to reduce a rational function to lowest terms (a sketch of the complete procedure follows below):

- Compute the GCD of the numerator and denominator, using the version of gcd-terms from exercise 2.96.

- When you obtain the GCD, multiply both numerator and denominator by the same integerizing factor before dividing through by the GCD, so that division by the GCD will not introduce any noninteger coefficients. As the factor you can use the leading coefficient of the GCD raised to the power 1 + O1 - O2, where O2 is the order of the GCD and O1 is the maximum of the orders of the numerator and denominator. This will ensure that dividing the numerator and denominator by the GCD will not introduce any fractions.

- The result of this operation will be a numerator and denominator with integer coefficients. The coefficients will normally be very large because of all of the integerizing factors, so the last step is to remove the redundant factors by computing the (integer) greatest common divisor of all the coefficients of the numerator and the denominator and dividing through by this factor.
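One way the pieces might fit together (a rough sketch under the assumptions above; gcd-all-coeffs and divide-coeffs are hypothetical helpers that take the integer GCD of, and divide, every coefficient in a term list -- exercise 2.97 below asks for the real thing):

;; quotient-terms is the quotient half of div-terms
(define (quotient-terms a b) (car (div-terms a b)))
(define (reduce-terms n d)
  (let ((g (gcd-terms n d)))
    (let ((factor
           (make-term 0 (expt (coeff (first-term g))
                              (+ 1 (- (max (order (first-term n))
                                           (order (first-term d)))
                                      (order (first-term g))))))))
      (let ((nn (quotient-terms (mul-term-by-all-terms factor n) g))
            (dd (quotient-terms (mul-term-by-all-terms factor d) g)))
        ;; strip out the redundant integer factor
        (let ((c (gcd (gcd-all-coeffs nn) (gcd-all-coeffs dd))))
          (list (divide-coeffs nn c) (divide-coeffs dd c)))))))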
7491 Exercise 2.97. a. Implement this algorithm as a procedure reduce-terms that takes two term lists
7492 n and d as arguments and returns a list nn, dd, which are n and d reduced to lowest terms via the
7493 algorithm given above. Also write a procedure reduce-poly, analogous to add-poly, that checks
7494 to see if the two polys have the same variable. If so, reduce-poly strips off the variable and passes
7495 the problem to reduce-terms, then reattaches the variable to the two term lists supplied by
7496 reduce-terms.
7497 b. Define a procedure analogous to reduce-terms that does what the original make-rat did for
7498 integers:
(define (reduce-integers n d)
  (let ((g (gcd n d)))
    (list (/ n g) (/ d g))))
7502 and define reduce as a generic operation that calls apply-generic to dispatch to either
7503 reduce-poly (for polynomial arguments) or reduce-integers (for scheme-number
7504 arguments). You can now easily make the rational-arithmetic package reduce fractions to lowest terms
7505 by having make-rat call reduce before combining the given numerator and denominator to form a
7506 rational number. The system now handles rational expressions in either integers or polynomials. To
7507 test your program, try the example at the beginning of this extended exercise:
(define p1 (make-polynomial 'x '((1 1) (0 1))))
(define p2 (make-polynomial 'x '((3 1) (0 -1))))
(define p3 (make-polynomial 'x '((1 1))))
(define p4 (make-polynomial 'x '((2 1) (0 -1))))
(define rf1 (make-rational p1 p2))
(define rf2 (make-rational p3 p4))
(add rf1 rf2)
7521 See if you get the correct answer, correctly reduced to lowest terms.
7522 The GCD computation is at the heart of any system that does operations on rational functions. The
7523 algorithm used above, although mathematically straightforward, is extremely slow. The slowness is
7524 due partly to the large number of division operations and partly to the enormous size of the
7525 intermediate coefficients generated by the pseudodivisions. One of the active areas in the development
7526 of algebraic-manipulation systems is the design of better algorithms for computing polynomial
7527 GCDs. 62
7528
49 We also have to supply an almost identical procedure to handle the types (scheme-number complex).
7532 50 See exercise 2.82 for generalizations.
51 If we are clever, we can usually get by with fewer than n^2 coercion procedures. For instance, if we know how to convert from type 1 to type 2 and from type 2 to type 3, then we can use this knowledge
7536 to convert from type 1 to type 3. This can greatly decrease the number of coercion procedures we need
7537 to supply explicitly when we add a new type to the system. If we are willing to build the required
7538 amount of sophistication into our system, we can have it search the ‘‘graph’’ of relations among types
7539 and automatically generate those coercion procedures that can be inferred from the ones that are
7540 supplied explicitly.
52 This statement, which also appears in the first edition of this book, is just as true now as it was when we wrote it twelve years ago. Developing a useful, general framework for expressing the
7544 relations among different types of entities (what philosophers call ‘‘ontology’’) seems intractably
7545 difficult. The main difference between the confusion that existed ten years ago and the confusion that
7546 exists now is that now a variety of inadequate ontological theories have been embodied in a plethora of
7547 correspondingly inadequate programming languages. For example, much of the complexity of
7548 object-oriented programming languages -- and the subtle and confusing differences among
7549 contemporary object-oriented languages -- centers on the treatment of generic operations on
7550 interrelated types. Our own discussion of computational objects in chapter 3 avoids these issues
7551 entirely. Readers familiar with object-oriented programming will notice that we have much to say in
7552 chapter 3 about local state, but we do not even mention ‘‘classes’’ or ‘‘inheritance.’’ In fact, we
7553 suspect that these problems cannot be adequately addressed in terms of computer-language design
7554 alone, without also drawing on work in knowledge representation and automated reasoning.
53 A real number can be projected to an integer using the round primitive, which returns the closest integer to its argument.
54 On the other hand, we will allow polynomials whose coefficients are themselves polynomials in other variables. This will give us essentially the same representational power as a full multivariate
7561 system, although it does lead to coercion problems, as discussed below.
55 For univariate polynomials, giving the value of a polynomial at a given set of points can be a particularly good representation. This makes polynomial arithmetic extremely simple. To obtain, for
7565 example, the sum of two polynomials represented in this way, we need only add the values of the
7566 polynomials at corresponding points. To transform back to a more familiar representation, we can use
7567 the Lagrange interpolation formula, which shows how to recover the coefficients of a polynomial of
7568 degree n given the values of the polynomial at n + 1 points.
56 This operation is very much like the ordered union-set operation we developed in exercise 2.62. In fact, if we think of the terms of the polynomial as a set ordered according to the power of the
7572 indeterminate, then the program that produces the term list for a sum is almost identical to
7573 union-set.
57 To make this work completely smoothly, we should also add to our generic arithmetic system the ability to coerce a ‘‘number’’ to a polynomial by regarding it as a polynomial of degree zero whose coefficient is the number. This is necessary if we are going to perform operations such as

[x^2 + (y + 1)x + 5] + [x^2 + 2x + 1]

which requires adding the coefficient y + 1 to the coefficient 2.
58 In these polynomial examples, we assume that we have implemented the generic arithmetic system using the type mechanism suggested in exercise 2.78. Thus, coefficients that are ordinary numbers will
7583 be represented as the numbers themselves rather than as pairs whose car is the symbol
7584 scheme-number.
59 Although we are assuming that term lists are ordered, we have implemented adjoin-term to simply cons the new term onto the existing term list. We can get away with this so long as we
7588 guarantee that the procedures (such as add-terms) that use adjoin-term always call it with a
7589 higher-order term than appears in the list. If we did not want to make such a guarantee, we could have
7590 implemented adjoin-term to be similar to the adjoin-set constructor for the ordered-list
7591 representation of sets (exercise 2.61).
60 The fact that Euclid’s Algorithm works for polynomials is formalized in algebra by saying that polynomials form a kind of algebraic domain called a Euclidean ring. A Euclidean ring is a domain
7595 that admits addition, subtraction, and commutative multiplication, together with a way of assigning to
7596 each element x of the ring a positive integer ‘‘measure’’ m(x) with the properties that m(xy)> m(x) for
7597 any nonzero x and y and that, given any x and y, there exists a q such that y = qx + r and either r = 0 or
7598 m(r)< m(x). From an abstract point of view, this is what is needed to prove that Euclid’s Algorithm
7599 works. For the domain of integers, the measure m of an integer is the absolute value of the integer
7600 itself. For the domain of polynomials, the measure of a polynomial is its degree.
61 In an implementation like MIT Scheme, this produces a polynomial that is indeed a divisor of Q1 and Q2, but with rational coefficients. In many other Scheme systems, in which division of integers
7605 can produce limited-precision decimal numbers, we may fail to get a valid divisor.
62 One extremely efficient and elegant method for computing polynomial GCDs was discovered by Richard Zippel (1979). The method is a probabilistic algorithm, as is the fast test for primality that we
7609 discussed in chapter 1. Zippel’s book (1993) describes this method, together with other ways to
7610 compute polynomial GCDs.
7615 Chapter 3
7616 Modularity, Objects, and State
Μεταβάλλον ἀναπαύεται
(Even while it changes, it stands still.)
Heraclitus

Plus ça change, plus c’est la même chose.
(The more things change, the more they stay the same.)
Alphonse Karr
7622 The preceding chapters introduced the basic elements from which programs are made. We saw how
7623 primitive procedures and primitive data are combined to construct compound entities, and we learned
7624 that abstraction is vital in helping us to cope with the complexity of large systems. But these tools are
7625 not sufficient for designing programs. Effective program synthesis also requires organizational
7626 principles that can guide us in formulating the overall design of a program. In particular, we need
7627 strategies to help us structure large systems so that they will be modular, that is, so that they can be
7628 divided ‘‘naturally’’ into coherent parts that can be separately developed and maintained.
7629 One powerful design strategy, which is particularly appropriate to the construction of programs for
7630 modeling physical systems, is to base the structure of our programs on the structure of the system
7631 being modeled. For each object in the system, we construct a corresponding computational object. For
7632 each system action, we define a symbolic operation in our computational model. Our hope in using
7633 this strategy is that extending the model to accommodate new objects or new actions will require no
7634 strategic changes to the program, only the addition of the new symbolic analogs of those objects or
7635 actions. If we have been successful in our system organization, then to add a new feature or debug an
7636 old one we will have to work on only a localized part of the system.
7637 To a large extent, then, the way we organize a large program is dictated by our perception of the
7638 system to be modeled. In this chapter we will investigate two prominent organizational strategies
7639 arising from two rather different ‘‘world views’’ of the structure of systems. The first organizational
7640 strategy concentrates on objects, viewing a large system as a collection of distinct objects whose
7641 behaviors may change over time. An alternative organizational strategy concentrates on the streams of
7642 information that flow in the system, much as an electrical engineer views a signal-processing system.
7643 Both the object-based approach and the stream-processing approach raise significant linguistic issues
7644 in programming. With objects, we must be concerned with how a computational object can change and
7645 yet maintain its identity. This will force us to abandon our old substitution model of computation
7646 (section 1.1.5) in favor of a more mechanistic but less theoretically tractable environment model of
7647 computation. The difficulties of dealing with objects, change, and identity are a fundamental
7648 consequence of the need to grapple with time in our computational models. These difficulties become
7649 even greater when we allow the possibility of concurrent execution of programs. The stream approach
7650 can be most fully exploited when we decouple simulated time in our model from the order of the
7651 events that take place in the computer during evaluation. We will accomplish this using a technique
7652 known as delayed evaluation.
7653
7657
7658 3.1 Assignment and Local State
7659 We ordinarily view the world as populated by independent objects, each of which has a state that
7660 changes over time. An object is said to ‘‘have state’’ if its behavior is influenced by its history. A bank
7661 account, for example, has state in that the answer to the question ‘‘Can I withdraw $100?’’ depends
7662 upon the history of deposit and withdrawal transactions. We can characterize an object’s state by one
7663 or more state variables, which among them maintain enough information about history to determine
7664 the object’s current behavior. In a simple banking system, we could characterize the state of an
7665 account by a current balance rather than by remembering the entire history of account transactions.
7666 In a system composed of many objects, the objects are rarely completely independent. Each may
7667 influence the states of others through interactions, which serve to couple the state variables of one
7668 object to those of other objects. Indeed, the view that a system is composed of separate objects is most
7669 useful when the state variables of the system can be grouped into closely coupled subsystems that are
7670 only loosely coupled to other subsystems.
7671 This view of a system can be a powerful framework for organizing computational models of the
7672 system. For such a model to be modular, it should be decomposed into computational objects that
7673 model the actual objects in the system. Each computational object must have its own local state
7674 variables describing the actual object’s state. Since the states of objects in the system being modeled
7675 change over time, the state variables of the corresponding computational objects must also change. If
7676 we choose to model the flow of time in the system by the elapsed time in the computer, then we must
7677 have a way to construct computational objects whose behaviors change as our programs run. In
7678 particular, if we wish to model state variables by ordinary symbolic names in the programming
7679 language, then the language must provide an assignment operator to enable us to change the value
7680 associated with a name.
7681
7682 3.1.1 Local State Variables
7683 To illustrate what we mean by having a computational object with time-varying state, let us model the
7684 situation of withdrawing money from a bank account. We will do this using a procedure withdraw,
7685 which takes as argument an amount to be withdrawn. If there is enough money in the account to
7686 accommodate the withdrawal, then withdraw should return the balance remaining after the
7687 withdrawal. Otherwise, withdraw should return the message Insufficient funds. For example, if we
7688 begin with $100 in the account, we should obtain the following sequence of responses using
7689 withdraw:
7690 (withdraw 25)
7691 75
7692 (withdraw 25)
7693 50
7694 (withdraw 60)
7695 "Insufficient funds"
7696 (withdraw 15)
7697 35
7698
Observe that the expression (withdraw 25), evaluated twice, yields different values. This is a new
7700 kind of behavior for a procedure. Until now, all our procedures could be viewed as specifications for
7701 computing mathematical functions. A call to a procedure computed the value of the function applied to
7702 the given arguments, and two calls to the same procedure with the same arguments always produced
7703 the same result. 1
7704 To implement withdraw, we can use a variable balance to indicate the balance of money in the
7705 account and define withdraw as a procedure that accesses balance. The withdraw procedure
7706 checks to see if balance is at least as large as the requested amount. If so, withdraw decrements
7707 balance by amount and returns the new value of balance. Otherwise, withdraw returns the
7708 Insufficient funds message. Here are the definitions of balance and withdraw:
(define balance 100)

(define (withdraw amount)
  (if (>= balance amount)
      (begin (set! balance (- balance amount))
             balance)
      "Insufficient funds"))
7715 Decrementing balance is accomplished by the expression
7716 (set! balance (- balance amount))
7717 This uses the set! special form, whose syntax is
7718 (set! <name> <new-value>)
7719 Here <name> is a symbol and <new-value> is any expression. Set! changes <name> so that its value
7720 is the result obtained by evaluating <new-value>. In the case at hand, we are changing balance so
7721 that its new value will be the result of subtracting amount from the previous value of balance. 2
7722 Withdraw also uses the begin special form to cause two expressions to be evaluated in the case
7723 where the if test is true: first decrementing balance and then returning the value of balance. In
general, evaluating the expression

(begin <exp1> <exp2> ... <expk>)

causes the expressions <exp1> through <expk> to be evaluated in sequence and the value of the final expression <expk> to be returned as the value of the entire begin form. 3
7728 Although withdraw works as desired, the variable balance presents a problem. As specified
7729 above, balance is a name defined in the global environment and is freely accessible to be examined
7730 or modified by any procedure. It would be much better if we could somehow make balance internal
7731 to withdraw, so that withdraw would be the only procedure that could access balance directly
7732 and any other procedure could access balance only indirectly (through calls to withdraw). This
7733 would more accurately model the notion that balance is a local state variable used by withdraw to
7734 keep track of the state of the account.
7735 We can make balance internal to withdraw by rewriting the definition as follows:
7736
(define new-withdraw
  (let ((balance 100))
    (lambda (amount)
      (if (>= balance amount)
          (begin (set! balance (- balance amount))
                 balance)
          "Insufficient funds"))))
7744 What we have done here is use let to establish an environment with a local variable balance,
7745 bound to the initial value 100. Within this local environment, we use lambda to create a procedure
7746 that takes amount as an argument and behaves like our previous withdraw procedure. This
7747 procedure -- returned as the result of evaluating the let expression -- is new-withdraw, which
7748 behaves in precisely the same way as withdraw but whose variable balance is not accessible by
7749 any other procedure. 4
7750 Combining set! with local variables is the general programming technique we will use for
7751 constructing computational objects with local state. Unfortunately, using this technique raises a serious
7752 problem: When we first introduced procedures, we also introduced the substitution model of
7753 evaluation (section 1.1.5) to provide an interpretation of what procedure application means. We said
7754 that applying a procedure should be interpreted as evaluating the body of the procedure with the
7755 formal parameters replaced by their values. The trouble is that, as soon as we introduce assignment
7756 into our language, substitution is no longer an adequate model of procedure application. (We will see
7757 why this is so in section 3.1.3.) As a consequence, we technically have at this point no way to
7758 understand why the new-withdraw procedure behaves as claimed above. In order to really
7759 understand a procedure such as new-withdraw, we will need to develop a new model of procedure
7760 application. In section 3.2 we will introduce such a model, together with an explanation of set! and
7761 local variables. First, however, we examine some variations on the theme established by
7762 new-withdraw.
7763 The following procedure, make-withdraw, creates ‘‘withdrawal processors.’’ The formal
7764 parameter balance in make-withdraw specifies the initial amount of money in the account. 5
(define (make-withdraw balance)
  (lambda (amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds")))
7771 Make-withdraw can be used as follows to create two objects W1 and W2:
7772 (define W1 (make-withdraw 100))
7773 (define W2 (make-withdraw 100))
7774 (W1 50)
7775 50
7776 (W2 70)
7777 30
7778 (W2 40)
7779 "Insufficient funds"
7780 (W1 40)
7781 10
7782
Observe that W1 and W2 are completely independent objects, each with its own local state variable
7784 balance. Withdrawals from one do not affect the other.
7785 We can also create objects that handle deposits as well as withdrawals, and thus we can represent
7786 simple bank accounts. Here is a procedure that returns a ‘‘bank-account object’’ with a specified initial
7787 balance:
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
7803 Each call to make-account sets up an environment with a local state variable balance. Within
7804 this environment, make-account defines procedures deposit and withdraw that access
7805 balance and an additional procedure dispatch that takes a ‘‘message’’ as input and returns one of
7806 the two local procedures. The dispatch procedure itself is returned as the value that represents the
7807 bank-account object. This is precisely the message-passing style of programming that we saw in
7808 section 2.4.3, although here we are using it in conjunction with the ability to modify local variables.
7809 Make-account can be used as follows:
7810 (define acc (make-account 100))
((acc 'withdraw) 50)
50
((acc 'withdraw) 60)
"Insufficient funds"
((acc 'deposit) 40)
90
((acc 'withdraw) 60)
30
7819 Each call to acc returns the locally defined deposit or withdraw procedure, which is then
7820 applied to the specified amount. As was the case with make-withdraw, another call to
7821 make-account
7822 (define acc2 (make-account 100))
7823 will produce a completely separate account object, which maintains its own local balance.
7824
Exercise 3.1. An accumulator is a procedure that is called repeatedly with a single numeric argument
7826 and accumulates its arguments into a sum. Each time it is called, it returns the currently accumulated
7827 sum. Write a procedure make-accumulator that generates accumulators, each maintaining an
7828 independent sum. The input to make-accumulator should specify the initial value of the sum; for
7829 example
7830 (define A (make-accumulator 5))
7831 (A 10)
7832 15
7833 (A 10)
7834 25
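One possible definition (a sketch, not the book's solution), using the same pattern as make-withdraw:

(define (make-accumulator sum)
  (lambda (amount)
    (set! sum (+ sum amount))
    sum))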
7835 Exercise 3.2. In software-testing applications, it is useful to be able to count the number of times a
7836 given procedure is called during the course of a computation. Write a procedure make-monitored
7837 that takes as input a procedure, f, that itself takes one input. The result returned by
7838 make-monitored is a third procedure, say mf, that keeps track of the number of times it has been
7839 called by maintaining an internal counter. If the input to mf is the special symbol
7840 how-many-calls?, then mf returns the value of the counter. If the input is the special symbol
7841 reset-count, then mf resets the counter to zero. For any other input, mf returns the result of
7842 calling f on that input and increments the counter. For instance, we could make a monitored version of
7843 the sqrt procedure:
7844 (define s (make-monitored sqrt))
7845 (s 100)
7846 10
(s 'how-many-calls?)
7848 1
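A sketch of one possible implementation (not the book's solution):

(define (make-monitored f)
  (let ((count 0))
    (lambda (input)
      (cond ((eq? input 'how-many-calls?) count)
            ((eq? input 'reset-count)
             (set! count 0))
            (else
             (set! count (+ count 1))
             (f input))))))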
7849 Exercise 3.3. Modify the make-account procedure so that it creates password-protected accounts.
7850 That is, make-account should take a symbol as an additional argument, as in
(define acc (make-account 100 'secret-password))
7852 The resulting account object should process a request only if it is accompanied by the password with
7853 which the account was created, and should otherwise return a complaint:
((acc 'secret-password 'withdraw) 40)
60
((acc 'some-other-password 'deposit) 50)
"Incorrect password"
7858 Exercise 3.4. Modify the make-account procedure of exercise 3.3 by adding another local state
7859 variable so that, if an account is accessed more than seven consecutive times with an incorrect
7860 password, it invokes the procedure call-the-cops.
7861
7862 3.1.2 The Benefits of Introducing Assignment
7863 As we shall see, introducing assignment into our programming language leads us into a thicket of
7864 difficult conceptual issues. Nevertheless, viewing systems as collections of objects with local state is a
7865 powerful technique for maintaining a modular design. As a simple example, consider the design of a
7866 procedure rand that, whenever it is called, returns an integer chosen at random.
7867
It is not at all clear what is meant by ‘‘chosen at random.’’ What we presumably want is for successive calls to rand to produce a sequence of numbers that has statistical properties of uniform distribution. We will not discuss methods for generating suitable sequences here. Rather, let us assume that we have a procedure rand-update that has the property that if we start with a given number x1 and form

x2 = (rand-update x1)
x3 = (rand-update x2)

then the sequence of values x1, x2, x3, ..., will have the desired statistical properties. 6
7876 We can implement rand as a procedure with a local state variable x that is initialized to some fixed
7877 value random-init. Each call to rand computes rand-update of the current value of x, returns
7878 this as the random number, and also stores this as the new value of x.
(define rand
  (let ((x random-init))
    (lambda ()
      (set! x (rand-update x))
      x)))
7884 Of course, we could generate the same sequence of random numbers without using assignment by
7885 simply calling rand-update directly. However, this would mean that any part of our program that
7886 used random numbers would have to explicitly remember the current value of x to be passed as an
7887 argument to rand-update. To realize what an annoyance this would be, consider using random
7888 numbers to implement a technique called Monte Carlo simulation.
The Monte Carlo method consists of choosing sample experiments at random from a large set and then making deductions on the basis of the probabilities estimated from tabulating the results of those experiments. For example, we can approximate π using the fact that 6/π^2 is the probability that two integers chosen at random will have no factors in common; that is, that their greatest common divisor will be 1. 7 To obtain the approximation to π, we perform a large number of experiments. In each experiment we choose two integers at random and perform a test to see if their GCD is 1. The fraction of times that the test is passed gives us our estimate of 6/π^2, and from this we obtain our approximation to π.
7897 The heart of our program is a procedure monte-carlo, which takes as arguments the number of
7898 times to try an experiment, together with the experiment, represented as a no-argument procedure that
7899 will return either true or false each time it is run. Monte-carlo runs the experiment for the
7900 designated number of trials and returns a number telling the fraction of the trials in which the
7901 experiment was found to be true.
(define (estimate-pi trials)
  (sqrt (/ 6 (monte-carlo trials cesaro-test))))

(define (cesaro-test)
  (= (gcd (rand) (rand)) 1))

(define (monte-carlo trials experiment)
  (define (iter trials-remaining trials-passed)
    (cond ((= trials-remaining 0)
           (/ trials-passed trials))
          ((experiment)
           (iter (- trials-remaining 1) (+ trials-passed 1)))
          (else
           (iter (- trials-remaining 1) trials-passed))))
  (iter trials 0))
7916 Now let us try the same computation using rand-update directly rather than rand, the way we
7917 would be forced to proceed if we did not use assignment to model local state:
(define (estimate-pi trials)
  (sqrt (/ 6 (random-gcd-test trials random-init))))

(define (random-gcd-test trials initial-x)
  (define (iter trials-remaining trials-passed x)
    (let ((x1 (rand-update x)))
      (let ((x2 (rand-update x1)))
        (cond ((= trials-remaining 0)
               (/ trials-passed trials))
              ((= (gcd x1 x2) 1)
               (iter (- trials-remaining 1)
                     (+ trials-passed 1)
                     x2))
              (else
               (iter (- trials-remaining 1)
                     trials-passed
                     x2))))))
  (iter trials 0 initial-x))
7935 While the program is still simple, it betrays some painful breaches of modularity. In our first version
7936 of the program, using rand, we can express the Monte Carlo method directly as a general
7937 monte-carlo procedure that takes as an argument an arbitrary experiment procedure. In our
7938 second version of the program, with no local state for the random-number generator,
7939 random-gcd-test must explicitly manipulate the random numbers x1 and x2 and recycle x2
7940 through the iterative loop as the new input to rand-update. This explicit handling of the random
7941 numbers intertwines the structure of accumulating test results with the fact that our particular
7942 experiment uses two random numbers, whereas other Monte Carlo experiments might use one random
7943 number or three. Even the top-level procedure estimate-pi has to be concerned with supplying an
7944 initial random number. The fact that the random-number generator’s insides are leaking out into other
7945 parts of the program makes it difficult for us to isolate the Monte Carlo idea so that it can be applied to
7946 other tasks. In the first version of the program, assignment encapsulates the state of the
7947 random-number generator within the rand procedure, so that the details of random-number
7948 generation remain independent of the rest of the program.
7949 The general phenomenon illustrated by the Monte Carlo example is this: From the point of view of one
7950 part of a complex process, the other parts appear to change with time. They have hidden time-varying
7951 local state. If we wish to write computer programs whose structure reflects this decomposition, we
7952 make computational objects (such as bank accounts and random-number generators) whose behavior
7953 changes with time. We model state with local state variables, and we model the changes of state with
7954 assignments to those variables.
7955 It is tempting to conclude this discussion by saying that, by introducing assignment and the technique
7956 of hiding state in local variables, we are able to structure systems in a more modular fashion than if all
state had to be manipulated explicitly, by passing additional parameters. Unfortunately, as we shall see, the story is not so simple.
7960 Exercise 3.5. Monte Carlo integration is a method of estimating definite integrals by means of Monte
7961 Carlo simulation. Consider computing the area of a region of space described by a predicate P(x, y)
7962 that is true for points (x, y) in the region and false for points not in the region. For example, the region
7963 contained within a circle of radius 3 centered at (5, 7) is described by the predicate that tests whether
(x - 5)^2 + (y - 7)^2 < 3^2. To estimate the area of the region described by such a predicate, begin by
7965 choosing a rectangle that contains the region. For example, a rectangle with diagonally opposite
7966 corners at (2, 4) and (8, 10) contains the circle above. The desired integral is the area of that portion of
7967 the rectangle that lies in the region. We can estimate the integral by picking, at random, points (x,y)
7968 that lie in the rectangle, and testing P(x, y) for each point to determine whether the point lies in the
7969 region. If we try this with many points, then the fraction of points that fall in the region should give an
7970 estimate of the proportion of the rectangle that lies in the region. Hence, multiplying this fraction by
7971 the area of the entire rectangle should produce an estimate of the integral.
7972 Implement Monte Carlo integration as a procedure estimate-integral that takes as arguments a
7973 predicate P, upper and lower bounds x1, x2, y1, and y2 for the rectangle, and the number of trials to
perform in order to produce the estimate. Your procedure should use the same monte-carlo procedure that was used above to estimate π. Use your estimate-integral to produce an estimate of π by measuring the area of a unit circle.
7977 You will find it useful to have a procedure that returns a number chosen at random from a given range.
7978 The following random-in-range procedure implements this in terms of the random procedure
7979 used in section 1.2.6, which returns a nonnegative number less than its input. 8
(define (random-in-range low high)
  (let ((range (- high low)))
    (+ low (random range))))
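With random-in-range in hand, estimate-integral might look like this (a sketch, not the book's solution): the experiment picks a random point in the rectangle and tests P, and the fraction of hits times the rectangle's area estimates the integral.

(define (estimate-integral P x1 x2 y1 y2 trials)
  (define (experiment)
    (P (random-in-range x1 x2)
       (random-in-range y1 y2)))
  (* (monte-carlo trials experiment)
     (- x2 x1)
     (- y2 y1)))

;; estimating pi as the area of the unit circle:
;; (estimate-integral
;;  (lambda (x y) (<= (+ (* x x) (* y y)) 1.0))
;;  -1.0 1.0 -1.0 1.0 100000)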
7983 Exercise 3.6. It is useful to be able to reset a random-number generator to produce a sequence starting
from a given value. Design a new rand procedure that is called with an argument that is either the symbol generate or the symbol reset and behaves as follows: (rand 'generate) produces a new random number; ((rand 'reset) <new-value>) resets the internal state variable to the designated <new-value>. Thus, by resetting the state, one can generate repeatable sequences. These are very handy to have when testing and debugging programs that use random numbers.
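One way the new rand might look (a sketch, not the book's solution), dispatching on the message:

(define rand
  (let ((x random-init))
    (lambda (message)
      (cond ((eq? message 'generate)
             (set! x (rand-update x))
             x)
            ((eq? message 'reset)
             ;; return a procedure awaiting the new state value
             (lambda (new-value) (set! x new-value)))
            (else (error "Unknown request -- RAND" message))))))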
7989
7990 3.1.3 The Costs of Introducing Assignment
7991 As we have seen, the set! operation enables us to model objects that have local state. However, this
7992 advantage comes at a price. Our programming language can no longer be interpreted in terms of the
7993 substitution model of procedure application that we introduced in section 1.1.5. Moreover, no simple
7994 model with ‘‘nice’’ mathematical properties can be an adequate framework for dealing with objects
7995 and assignment in programming languages.
7996 So long as we do not use assignments, two evaluations of the same procedure with the same arguments
7997 will produce the same result, so that procedures can be viewed as computing mathematical functions.
7998 Programming without any use of assignments, as we did throughout the first two chapters of this book,
7999 is accordingly known as functional programming.
8000
To understand how assignment complicates matters, consider a simplified version of the
8002 make-withdraw procedure of section 3.1.1 that does not bother to check for an insufficient amount:
(define (make-simplified-withdraw balance)
  (lambda (amount)
    (set! balance (- balance amount))
    balance))

(define W (make-simplified-withdraw 25))

(W 20)
5
(W 10)
-5
8012 Compare this procedure with the following make-decrementer procedure, which does not use
8013 set!:
(define (make-decrementer balance)
  (lambda (amount)
    (- balance amount)))
8017 Make-decrementer returns a procedure that subtracts its input from a designated amount
8018 balance, but there is no accumulated effect over successive calls, as with
8019 make-simplified-withdraw:
8020 (define D (make-decrementer 25))
8021 (D 20)
8022 5
8023 (D 10)
8024 15
8025 We can use the substitution model to explain how make-decrementer works. For instance, let us
8026 analyze the evaluation of the expression
8027 ((make-decrementer 25) 20)
8028 We first simplify the operator of the combination by substituting 25 for balance in the body of
8029 make-decrementer. This reduces the expression to
8030 ((lambda (amount) (- 25 amount)) 20)
8031 Now we apply the operator by substituting 20 for amount in the body of the lambda expression:
8032 (- 25 20)
8033 The final answer is 5.
8034 Observe, however, what happens if we attempt a similar substitution analysis with
8035 make-simplified-withdraw:
8036 ((make-simplified-withdraw 25) 20)
8037
We first simplify the operator by substituting 25 for balance in the body of
8039 make-simplified-withdraw. This reduces the expression to 9
8040 ((lambda (amount) (set! balance (- 25 amount)) 25) 20)
8041 Now we apply the operator by substituting 20 for amount in the body of the lambda expression:
8042 (set! balance (- 25 20)) 25
8043 If we adhered to the substitution model, we would have to say that the meaning of the procedure
8044 application is to first set balance to 5 and then return 25 as the value of the expression. This gets the
8045 wrong answer. In order to get the correct answer, we would have to somehow distinguish the first
8046 occurrence of balance (before the effect of the set!) from the second occurrence of balance
8047 (after the effect of the set!), and the substitution model cannot do this.
8048 The trouble here is that substitution is based ultimately on the notion that the symbols in our language
8049 are essentially names for values. But as soon as we introduce set! and the idea that the value of a
8050 variable can change, a variable can no longer be simply a name. Now a variable somehow refers to a
8051 place where a value can be stored, and the value stored at this place can change. In section 3.2 we will
8052 see how environments play this role of ‘‘place’’ in our computational model.
8053
8054 Sameness and change
8055 The issue surfacing here is more profound than the mere breakdown of a particular model of
8056 computation. As soon as we introduce change into our computational models, many notions that were
8057 previously straightforward become problematical. Consider the concept of two things being ‘‘the
8058 same.’’
8059 Suppose we call make-decrementer twice with the same argument to create two procedures:
8060 (define D1 (make-decrementer 25))
8061 (define D2 (make-decrementer 25))
8062 Are D1 and D2 the same? An acceptable answer is yes, because D1 and D2 have the same
8063 computational behavior -- each is a procedure that subtracts its input from 25. In fact, D1 could be
8064 substituted for D2 in any computation without changing the result.
8065 Contrast this with making two calls to make-simplified-withdraw:
8066 (define W1 (make-simplified-withdraw 25))
8067 (define W2 (make-simplified-withdraw 25))
8068 Are W1 and W2 the same? Surely not, because calls to W1 and W2 have distinct effects, as shown by the
8069 following sequence of interactions:
8070 (W1 20)
8071 5
8072 (W1 20)
-15
8074 (W2 20)
8075 5
8076
8077 \fEven though W1 and W2 are ‘‘equal’’ in the sense that they are both created by evaluating the same
8078 expression, (make-simplified-withdraw 25), it is not true that W1 could be substituted for
8079 W2 in any expression without changing the result of evaluating the expression.
A language that supports the concept that ‘‘equals can be substituted for equals’’ in an expression
8081 without changing the value of the expression is said to be referentially transparent. Referential
8082 transparency is violated when we include set! in our computer language. This makes it tricky to
8083 determine when we can simplify expressions by substituting equivalent expressions. Consequently,
8084 reasoning about programs that use assignment becomes drastically more difficult.
8085 Once we forgo referential transparency, the notion of what it means for computational objects to be
8086 ‘‘the same’’ becomes difficult to capture in a formal way. Indeed, the meaning of ‘‘same’’ in the real
8087 world that our programs model is hardly clear in itself. In general, we can determine that two
8088 apparently identical objects are indeed ‘‘the same one’’ only by modifying one object and then
8089 observing whether the other object has changed in the same way. But how can we tell if an object has
8090 ‘‘changed’’ other than by observing the ‘‘same’’ object twice and seeing whether some property of the
8091 object differs from one observation to the next? Thus, we cannot determine ‘‘change’’ without some a
8092 priori notion of ‘‘sameness,’’ and we cannot determine sameness without observing the effects of
8093 change.
8094 As an example of how this issue arises in programming, consider the situation where Peter and Paul
8095 have a bank account with $100 in it. There is a substantial difference between modeling this as
8096 (define peter-acc (make-account 100))
8097 (define paul-acc (make-account 100))
8098 and modeling it as
8099 (define peter-acc (make-account 100))
8100 (define paul-acc peter-acc)
8101 In the first situation, the two bank accounts are distinct. Transactions made by Peter will not affect
8102 Paul’s account, and vice versa. In the second situation, however, we have defined paul-acc to be
8103 the same thing as peter-acc. In effect, Peter and Paul now have a joint bank account, and if Peter
8104 makes a withdrawal from peter-acc Paul will observe less money in paul-acc. These two
8105 similar but distinct situations can cause confusion in building computational models. With the shared
8106 account, in particular, it can be especially confusing that there is one object (the bank account) that has
8107 two different names (peter-acc and paul-acc); if we are searching for all the places in our
8108 program where paul-acc can be changed, we must remember to look also at things that change
8109 peter-acc. 10
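The contrast is easy to see in an interaction sketch (assuming the make-account procedure of
section 3.1.1, whose withdraw operation returns the new balance). In the second, shared model a
withdrawal under either name is visible under both:
(define peter-acc (make-account 100))
(define paul-acc peter-acc)        ; paul-acc is an alias for peter-acc
((peter-acc ’withdraw) 25)
75
((paul-acc ’withdraw) 25)
50                                 ; with two distinct accounts this would be 75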
8110 With reference to the above remarks on ‘‘sameness’’ and ‘‘change,’’ observe that if Peter and Paul
8111 could only examine their bank balances, and could not perform operations that changed the balance,
8112 then the issue of whether the two accounts are distinct would be moot. In general, so long as we never
8113 modify data objects, we can regard a compound data object to be precisely the totality of its pieces.
8114 For example, a rational number is determined by giving its numerator and its denominator. But this
8115 view is no longer valid in the presence of change, where a compound data object has an ‘‘identity’’
8116 that is something different from the pieces of which it is composed. A bank account is still ‘‘the
8117 same’’ bank account even if we change the balance by making a withdrawal; conversely, we could
8118 have two different bank accounts with the same state information. This complication is a consequence,
8119 not of our programming language, but of our perception of a bank account as an object. We do not, for
8120 example, ordinarily regard a rational number as a changeable object with identity, such that we could
8121
8122 \fchange the numerator and still have ‘‘the same’’ rational number.
8123
8124 Pitfalls of imperative programming
8125 In contrast to functional programming, programming that makes extensive use of assignment is known
8126 as imperative programming. In addition to raising complications about computational models,
8127 programs written in imperative style are susceptible to bugs that cannot occur in functional programs.
8128 For example, recall the iterative factorial program from section 1.2.1:
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))
8136 Instead of passing arguments in the internal iterative loop, we could adopt a more imperative style by
8137 using explicit assignment to update the values of the variables product and counter:
(define (factorial n)
  (let ((product 1)
        (counter 1))
    (define (iter)
      (if (> counter n)
          product
          (begin (set! product (* counter product))
                 (set! counter (+ counter 1))
                 (iter))))
    (iter)))
8148 This does not change the results produced by the program, but it does introduce a subtle trap. How do
8149 we decide the order of the assignments? As it happens, the program is correct as written. But writing
8150 the assignments in the opposite order
8151 (set! counter (+ counter 1))
8152 (set! product (* counter product))
would have produced a different, incorrect result. (With n equal to 3, for instance, the reversed
version returns 24 rather than 6, because each pass multiplies product by the already incremented
counter.) In general, programming with assignment forces us to carefully consider the relative orders
of the assignments to make sure that each statement is using
the correct version of the variables that have been changed. This issue simply does not arise in
8156 functional programs. 11 The complexity of imperative programs becomes even worse if we consider
8157 applications in which several processes execute concurrently. We will return to this in section 3.4.
8158 First, however, we will address the issue of providing a computational model for expressions that
8159 involve assignment, and explore the uses of objects with local state in designing simulations.
8160 Exercise 3.7. Consider the bank account objects created by make-account, with the password
8161 modification described in exercise 3.3. Suppose that our banking system requires the ability to make
8162 joint accounts. Define a procedure make-joint that accomplishes this. Make-joint should take
8163 three arguments. The first is a password-protected account. The second argument must match the
8164 password with which the account was defined in order for the make-joint operation to proceed.
8165 The third argument is a new password. Make-joint is to create an additional access to the original
8166
8167 \faccount using the new password. For example, if peter-acc is a bank account with password
8168 open-sesame, then
(define paul-acc
  (make-joint peter-acc ’open-sesame ’rosebud))
8171 will allow one to make transactions on peter-acc using the name paul-acc and the password
8172 rosebud. You may wish to modify your solution to exercise 3.3 to accommodate this new feature.
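One possible shape for make-joint is sketched below; this is an illustration, not the text’s
answer, and it assumes that an exercise 3.3 account is called as (acc <password> <message>). Here
the password check on the original account is simply deferred: a wrong old-password will cause
every forwarded request to fail.
(define (make-joint acc old-password new-password)
  (lambda (password m)
    (if (eq? password new-password)
        (acc old-password m)                 ; forward to the original account
        (error "Incorrect password" password))))
A fuller solution would verify old-password at the moment make-joint is called, for example by
making a trial request on the account.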
8173 Exercise 3.8. When we defined the evaluation model in section 1.1.3, we said that the first step in
8174 evaluating an expression is to evaluate its subexpressions. But we never specified the order in which
8175 the subexpressions should be evaluated (e.g., left to right or right to left). When we introduce
8176 assignment, the order in which the arguments to a procedure are evaluated can make a difference to the
8177 result. Define a simple procedure f such that evaluating (+ (f 0) (f 1)) will return 0 if the
8178 arguments to + are evaluated from left to right but will return 1 if the arguments are evaluated from
8179 right to left.
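A procedure with a single bit of local state suffices (a sketch of one possibility, not the text’s
answer):
(define f
  (let ((called #f))
    (lambda (x)
      (if called
          0                          ; every call after the first returns 0
          (begin (set! called #t)
                 x)))))              ; the first call returns its argument
Evaluating the operands of (+ (f 0) (f 1)) left to right gives (+ 0 0), which is 0; right to left
gives (+ 0 1), which is 1.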
8180 1 Actually, this is not quite true. One exception was the random-number generator in section 1.2.6.
8181
8182 Another exception involved the operation/type tables we introduced in section 2.4.3, where the values
8183 of two calls to get with the same arguments depended on intervening calls to put. On the other hand,
8184 until we introduce assignment, we have no way to create such procedures ourselves.
8185 2 The value of a set! expression is implementation-dependent. Set! should be used only for its
8186
8187 effect, not for its value.
8188 The name set! reflects a naming convention used in Scheme: Operations that change the values of
8189 variables (or that change data structures, as we will see in section 3.3) are given names that end with
8190 an exclamation point. This is similar to the convention of designating predicates by names that end
8191 with a question mark.
8192 3 We have already used begin implicitly in our programs, because in Scheme the body of a
8193
8194 procedure can be a sequence of expressions. Also, the <consequent> part of each clause in a cond
8195 expression can be a sequence of expressions rather than a single expression.
8196 4 In programming-language jargon, the variable balance is said to be encapsulated within the
8197
8198 new-withdraw procedure. Encapsulation reflects the general system-design principle known as the
8199 hiding principle: One can make a system more modular and robust by protecting parts of the system
8200 from each other; that is, by providing information access only to those parts of the system that have a
8201 ‘‘need to know.’’
8202 5 In contrast with new-withdraw above, we do not have to use let to make balance a local
8203
8204 variable, since formal parameters are already local. This will be clearer after the discussion of the
8205 environment model of evaluation in section 3.2. (See also exercise 3.10.)
8206 6 One common way to implement rand-update is to use the rule that x is updated to ax + b
8207
8208 modulo m, where a, b, and m are appropriately chosen integers. Chapter 3 of Knuth 1981 includes an
8209 extensive discussion of techniques for generating sequences of random numbers and establishing their
8210 statistical properties. Notice that the rand-update procedure computes a mathematical function:
8211 Given the same input twice, it produces the same output. Therefore, the number sequence produced by
8212 rand-update certainly is not ‘‘random,’’ if by ‘‘random’’ we insist that each number in the
8213 sequence is unrelated to the preceding number. The relation between ‘‘real randomness’’ and so-called
8214
8215 \fpseudo-random sequences, which are produced by well-determined computations and yet have
8216 suitable statistical properties, is a complex question involving difficult issues in mathematics and
8217 philosophy. Kolmogorov, Solomonoff, and Chaitin have made great progress in clarifying these
8218 issues; a discussion can be found in Chaitin 1975.
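For concreteness, such a rand-update might be written as follows; the constants a, b, and m here
are placeholders for illustration, not the ‘‘appropriately chosen’’ values Knuth discusses:
(define (rand-update x)
  (let ((a 27) (b 26) (m 127))       ; illustrative constants only
    (remainder (+ (* a x) b) m)))    ; x is updated to ax + b modulo m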
8219 7 This theorem is due to E. Cesàro. See section 4.5.2 of Knuth 1981 for a discussion and a proof.
8220 8 MIT Scheme provides such a procedure. If random is given an exact integer (as in section 1.2.6) it
8221
8222 returns an exact integer, but if it is given a decimal value (as in this exercise) it returns a decimal
8223 value.
8224 9 We don’t substitute for the occurrence of balance in the set! expression because the <name> in
8225
8226 a set! is not evaluated. If we did substitute for it, we would get (set! 25 (- 25 amount)),
8227 which makes no sense.
8228 10 The phenomenon of a single computational object being accessed by more than one name is known
8229
8230 as aliasing. The joint bank account situation illustrates a very simple example of an alias. In
8231 section 3.3 we will see much more complex examples, such as ‘‘distinct’’ compound data structures
8232 that share parts. Bugs can occur in our programs if we forget that a change to an object may also, as a
8233 ‘‘side effect,’’ change a ‘‘different’’ object because the two ‘‘different’’ objects are actually a single
8234 object appearing under different aliases. These so-called side-effect bugs are so difficult to locate and
8235 to analyze that some people have proposed that programming languages be designed in such a way as
8236 to not allow side effects or aliasing (Lampson et al. 1981; Morris, Schmidt, and Wadler 1980).
8237 11 In view of this, it is ironic that introductory programming is most often taught in a highly
8238
8239 imperative style. This may be a vestige of a belief, common throughout the 1960s and 1970s, that
8240 programs that call procedures must inherently be less efficient than programs that perform
8241 assignments. (Steele (1977) debunks this argument.) Alternatively it may reflect a view that
8242 step-by-step assignment is easier for beginners to visualize than procedure call. Whatever the reason, it
8243 often saddles beginning programmers with ‘‘should I set this variable before or after that one’’
8244 concerns that can complicate programming and obscure the important ideas.
8245 [Go to first, previous, next page; contents; index]
8246
8247 \f[Go to first, previous, next page; contents; index]
8248
8249 3.2 The Environment Model of Evaluation
8250 When we introduced compound procedures in chapter 1, we used the substitution model of evaluation
8251 (section 1.1.5) to define what is meant by applying a procedure to arguments:
8252 To apply a compound procedure to arguments, evaluate the body of the procedure with each
8253 formal parameter replaced by the corresponding argument.
8254 Once we admit assignment into our programming language, such a definition is no longer adequate. In
8255 particular, section 3.1.3 argued that, in the presence of assignment, a variable can no longer be
8256 considered to be merely a name for a value. Rather, a variable must somehow designate a ‘‘place’’ in
8257 which values can be stored. In our new model of evaluation, these places will be maintained in
8258 structures called environments.
8259 An environment is a sequence of frames. Each frame is a table (possibly empty) of bindings, which
8260 associate variable names with their corresponding values. (A single frame may contain at most one
8261 binding for any variable.) Each frame also has a pointer to its enclosing environment, unless, for the
8262 purposes of discussion, the frame is considered to be global. The value of a variable with respect to an
8263 environment is the value given by the binding of the variable in the first frame in the environment that
8264 contains a binding for that variable. If no frame in the sequence specifies a binding for the variable,
8265 then the variable is said to be unbound in the environment.
8266
8267 Figure 3.1: A simple environment structure.
8269 Figure 3.1 shows a simple environment structure consisting of three frames, labeled I, II, and III. In the
8270 diagram, A, B, C, and D are pointers to environments. C and D point to the same environment. The
8271 variables z and x are bound in frame II, while y and x are bound in frame I. The value of x in
8272 environment D is 3. The value of x with respect to environment B is also 3. This is determined as
8273 follows: We examine the first frame in the sequence (frame III) and do not find a binding for x, so we
8274 proceed to the enclosing environment D and find the binding in frame I. On the other hand, the value
8275 of x in environment A is 7, because the first frame in the sequence (frame II) contains a binding of x
8276 to 7. With respect to environment A, the binding of x to 7 in frame II is said to shadow the binding of
8277 x to 3 in frame I.
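Shadowing can also be observed without diagrams; here is a small interaction sketch in which a
procedure’s parameter shadows a global variable of the same name:
(define x 3)
((lambda (x) (* x x)) 7)    ; the frame binding x to 7 shadows the global x
49
x                           ; the global binding is unaffected
3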
8278
8279 \fThe environment is crucial to the evaluation process, because it determines the context in which an
8280 expression should be evaluated. Indeed, one could say that expressions in a programming language do
8281 not, in themselves, have any meaning. Rather, an expression acquires a meaning only with respect to
8282 some environment in which it is evaluated. Even the interpretation of an expression as straightforward
8283 as (+ 1 1) depends on an understanding that one is operating in a context in which + is the symbol
8284 for addition. Thus, in our model of evaluation we will always speak of evaluating an expression with
8285 respect to some environment. To describe interactions with the interpreter, we will suppose that there
8286 is a global environment, consisting of a single frame (with no enclosing environment) that includes
8287 values for the symbols associated with the primitive procedures. For example, the idea that + is the
8288 symbol for addition is captured by saying that the symbol + is bound in the global environment to the
8289 primitive addition procedure.
8290
8291 3.2.1 The Rules for Evaluation
8292 The overall specification of how the interpreter evaluates a combination remains the same as when we
8293 first introduced it in section 1.1.3:
8294 To evaluate a combination:
8295 1. Evaluate the subexpressions of the combination. 12
8296 2. Apply the value of the operator subexpression to the values of the operand subexpressions.
8297 The environment model of evaluation replaces the substitution model in specifying what it means to
8298 apply a compound procedure to arguments.
8299 In the environment model of evaluation, a procedure is always a pair consisting of some code and a
8300 pointer to an environment. Procedures are created in one way only: by evaluating a lambda
8301 expression. This produces a procedure whose code is obtained from the text of the lambda expression
8302 and whose environment is the environment in which the lambda expression was evaluated to produce
8303 the procedure. For example, consider the procedure definition
(define (square x)
  (* x x))
8306 evaluated in the global environment. The procedure definition syntax is just syntactic sugar for an
8307 underlying implicit lambda expression. It would have been equivalent to have used
(define square
  (lambda (x) (* x x)))
8310 which evaluates (lambda (x) (* x x)) and binds square to the resulting value, all in the
8311 global environment.
8312 Figure 3.2 shows the result of evaluating this define expression. The procedure object is a pair
8313 whose code specifies that the procedure has one formal parameter, namely x, and a procedure body (*
8314 x x). The environment part of the procedure is a pointer to the global environment, since that is the
8315 environment in which the lambda expression was evaluated to produce the procedure. A new
8316 binding, which associates the procedure object with the symbol square, has been added to the global
8317 frame. In general, define creates definitions by adding bindings to frames.
8318
8319 \fFigure 3.2: Environment structure produced by evaluating (define (square x) (* x
8320 x)) in the global environment.
8323 Now that we have seen how procedures are created, we can describe how procedures are applied. The
8324 environment model specifies: To apply a procedure to arguments, create a new environment
8325 containing a frame that binds the parameters to the values of the arguments. The enclosing
8326 environment of this frame is the environment specified by the procedure. Now, within this new
8327 environment, evaluate the procedure body.
8328 To show how this rule is followed, figure 3.3 illustrates the environment structure created by
8329 evaluating the expression (square 5) in the global environment, where square is the procedure
8330 generated in figure 3.2. Applying the procedure results in the creation of a new environment, labeled
8331 E1 in the figure, that begins with a frame in which x, the formal parameter for the procedure, is bound
8332 to the argument 5. The pointer leading upward from this frame shows that the frame’s enclosing
8333 environment is the global environment. The global environment is chosen here, because this is the
8334 environment that is indicated as part of the square procedure object. Within E1, we evaluate the
8335 body of the procedure, (* x x). Since the value of x in E1 is 5, the result is (* 5 5), or 25.
8336
8337 Figure 3.3: Environment created by evaluating (square 5) in the global environment.
8339
8340 \fThe environment model of procedure application can be summarized by two rules:
8341 A procedure object is applied to a set of arguments by constructing a frame, binding the formal
8342 parameters of the procedure to the arguments of the call, and then evaluating the body of the
8343 procedure in the context of the new environment constructed. The new frame has as its enclosing
8344 environment the environment part of the procedure object being applied.
8345 A procedure is created by evaluating a lambda expression relative to a given environment. The
8346 resulting procedure object is a pair consisting of the text of the lambda expression and a pointer
8347 to the environment in which the procedure was created.
8348 We also specify that defining a symbol using define creates a binding in the current environment
8349 frame and assigns to the symbol the indicated value. 13 Finally, we specify the behavior of set!, the
8350 operation that forced us to introduce the environment model in the first place. Evaluating the
8351 expression (set! <variable> <value>) in some environment locates the binding of the
8352 variable in the environment and changes that binding to indicate the new value. That is, one finds the
8353 first frame in the environment that contains a binding for the variable and modifies that frame. If the
8354 variable is unbound in the environment, then set! signals an error.
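For example (a small interaction sketch): the set! below is evaluated in an environment whose
first frame binds only amount, so the binding that gets changed is the global one:
(define balance 100)
(define (withdraw amount)
  (set! balance (- balance amount)))   ; balance is found in the enclosing global frame
(withdraw 30)
balance
70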
8355 These evaluation rules, though considerably more complex than the substitution model, are still
8356 reasonably straightforward. Moreover, the evaluation model, though abstract, provides a correct
8357 description of how the interpreter evaluates expressions. In chapter 4 we shall see how this model can
8358 serve as a blueprint for implementing a working interpreter. The following sections elaborate the
8359 details of the model by analyzing some illustrative programs.
8360
8361 3.2.2 Applying Simple Procedures
8362 When we introduced the substitution model in section 1.1.5 we showed how the combination (f 5)
8363 evaluates to 136, given the following procedure definitions:
(define (square x)
  (* x x))
(define (sum-of-squares x y)
  (+ (square x) (square y)))
(define (f a)
  (sum-of-squares (+ a 1) (* a 2)))
8370 We can analyze the same example using the environment model. Figure 3.4 shows the three procedure
8371 objects created by evaluating the definitions of f, square, and sum-of-squares in the global
8372 environment. Each procedure object consists of some code, together with a pointer to the global
8373 environment.
8374
8375 \fFigure 3.4: Procedure objects in the global frame.
8377 In figure 3.5 we see the environment structure created by evaluating the expression (f 5). The call to
8378 f creates a new environment E1 beginning with a frame in which a, the formal parameter of f, is
8379 bound to the argument 5. In E1, we evaluate the body of f:
8380 (sum-of-squares (+ a 1) (* a 2))
8381
8382 Figure 3.5: Environments created by evaluating (f 5) using the procedures in figure 3.4.
8384 To evaluate this combination, we first evaluate the subexpressions. The first subexpression,
8385 sum-of-squares, has a value that is a procedure object. (Notice how this value is found: We first
8386 look in the first frame of E1, which contains no binding for sum-of-squares. Then we proceed to
8387 the enclosing environment, i.e. the global environment, and find the binding shown in figure 3.4.) The
8388 other two subexpressions are evaluated by applying the primitive operations + and * to evaluate the
8389 two combinations (+ a 1) and (* a 2) to obtain 6 and 10, respectively.
8390 Now we apply the procedure object sum-of-squares to the arguments 6 and 10. This results in a
8391 new environment E2 in which the formal parameters x and y are bound to the arguments. Within E2
8392 we evaluate the combination (+ (square x) (square y)). This leads us to evaluate
8393 (square x), where square is found in the global frame and x is 6. Once again, we set up a new
8394
8395 \fenvironment, E3, in which x is bound to 6, and within this we evaluate the body of square, which is
8396 (* x x). Also as part of applying sum-of-squares, we must evaluate the subexpression
8397 (square y), where y is 10. This second call to square creates another environment, E4, in which
8398 x, the formal parameter of square, is bound to 10. And within E4 we must evaluate (* x x).
8399 The important point to observe is that each call to square creates a new environment containing a
8400 binding for x. We can see here how the different frames serve to keep separate the different local
8401 variables all named x. Notice that each frame created by square points to the global environment,
8402 since this is the environment indicated by the square procedure object.
8403 After the subexpressions are evaluated, the results are returned. The values generated by the two calls
8404 to square are added by sum-of-squares, and this result is returned by f. Since our focus here is
8405 on the environment structures, we will not dwell on how these returned values are passed from call to
8406 call; however, this is also an important aspect of the evaluation process, and we will return to it in
8407 detail in chapter 5.
8408 Exercise 3.9. In section 1.2.1 we used the substitution model to analyze two procedures for
8409 computing factorials, a recursive version
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
8414 and an iterative version
(define (factorial n)
  (fact-iter 1 1 n))
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))
8423 Show the environment structures created by evaluating (factorial 6) using each version of the
8424 factorial procedure. 14
8425
8426 3.2.3 Frames as the Repository of Local State
8427 We can turn to the environment model to see how procedures and assignment can be used to represent
8428 objects with local state. As an example, consider the ‘‘withdrawal processor’’ from section 3.1.1
8429 created by calling the procedure
(define (make-withdraw balance)
  (lambda (amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds")))
8436
8437 \fLet us describe the evaluation of
8438 (define W1 (make-withdraw 100))
8439 followed by
8440 (W1 50)
8441 50
8442 Figure 3.6 shows the result of defining the make-withdraw procedure in the global environment.
8443 This produces a procedure object that contains a pointer to the global environment. So far, this is no
8444 different from the examples we have already seen, except that the body of the procedure is itself a
8445 lambda expression.
8446
8447 Figure 3.6: Result of defining make-withdraw in the global environment.
8449 The interesting part of the computation happens when we apply the procedure make-withdraw to
8450 an argument:
8451 (define W1 (make-withdraw 100))
8452 We begin, as usual, by setting up an environment E1 in which the formal parameter balance is
8453 bound to the argument 100. Within this environment, we evaluate the body of make-withdraw,
8454 namely the lambda expression. This constructs a new procedure object, whose code is as specified by
8455 the lambda and whose environment is E1, the environment in which the lambda was evaluated to
8456 produce the procedure. The resulting procedure object is the value returned by the call to
8457 make-withdraw. This is bound to W1 in the global environment, since the define itself is being
8458 evaluated in the global environment. Figure 3.7 shows the resulting environment structure.
8459
8460 \fFigure 3.7: Result of evaluating (define W1 (make-withdraw 100)).
8462 Now we can analyze what happens when W1 is applied to an argument:
8463 (W1 50)
8464 50
8465 We begin by constructing a frame in which amount, the formal parameter of W1, is bound to the
8466 argument 50. The crucial point to observe is that this frame has as its enclosing environment not the
8467 global environment, but rather the environment E1, because this is the environment that is specified by
8468 the W1 procedure object. Within this new environment, we evaluate the body of the procedure:
(if (>= balance amount)
    (begin (set! balance (- balance amount))
           balance)
    "Insufficient funds")
8473 The resulting environment structure is shown in figure 3.8. The expression being evaluated references
8474 both amount and balance. Amount will be found in the first frame in the environment, while
8475 balance will be found by following the enclosing-environment pointer to E1.
8476
8477 \fFigure 3.8: Environments created by applying the procedure object W1.
8479 When the set! is executed, the binding of balance in E1 is changed. At the completion of the call
8480 to W1, balance is 50, and the frame that contains balance is still pointed to by the procedure
8481 object W1. The frame that binds amount (in which we executed the code that changed balance) is
8482 no longer relevant, since the procedure call that constructed it has terminated, and there are no pointers
8483 to that frame from other parts of the environment. The next time W1 is called, this will build a new
8484 frame that binds amount and whose enclosing environment is E1. We see that E1 serves as the
8485 ‘‘place’’ that holds the local state variable for the procedure object W1. Figure 3.9 shows the situation
8486 after the call to W1.
8487
8488 Figure 3.9: Environments after the call to W1.
8490 Observe what happens when we create a second ‘‘withdraw’’ object by making another call to
8491 make-withdraw:
8492
8493 \f(define W2 (make-withdraw 100))
8494 This produces the environment structure of figure 3.10, which shows that W2 is a procedure object,
8495 that is, a pair with some code and an environment. The environment E2 for W2 was created by the call
8496 to make-withdraw. It contains a frame with its own local binding for balance. On the other
8497 hand, W1 and W2 have the same code: the code specified by the lambda expression in the body of
8498 make-withdraw. 15 We see here why W1 and W2 behave as independent objects. Calls to W1
8499 reference the state variable balance stored in E1, whereas calls to W2 reference the balance stored
8500 in E2. Thus, changes to the local state of one object do not affect the other object.
8501
8502 Figure 3.10: Using (define W2 (make-withdraw 100)) to create a second object.
8504 Exercise 3.10. In the make-withdraw procedure, the local variable balance is created as a
8505 parameter of make-withdraw. We could also create the local state variable explicitly, using let,
8506 as follows:
(define (make-withdraw initial-amount)
  (let ((balance initial-amount))
    (lambda (amount)
      (if (>= balance amount)
          (begin (set! balance (- balance amount))
                 balance)
          "Insufficient funds"))))
8514 Recall from section 1.3.2 that let is simply syntactic sugar for a procedure call:
8515 (let ((<var> <exp>)) <body>)
8516 is interpreted as an alternate syntax for
8517 ((lambda (<var>) <body>) <exp>)
8518 Use the environment model to analyze this alternate version of make-withdraw, drawing figures
8519 like the ones above to illustrate the interactions
8520
8521 \f(define W1 (make-withdraw 100))
8522 (W1 50)
8523 (define W2 (make-withdraw 100))
8524 Show that the two versions of make-withdraw create objects with the same behavior. How do the
8525 environment structures differ for the two versions?
8526
8527 3.2.4 Internal Definitions
8528 Section 1.1.8 introduced the idea that procedures can have internal definitions, thus leading to a block
8529 structure as in the following procedure to compute square roots:
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
8540 Now we can use the environment model to see why these internal definitions behave as desired.
8541 Figure 3.11 shows the point in the evaluation of the expression (sqrt 2) where the internal
8542 procedure good-enough? has been called for the first time with guess equal to 1.
8543
8544 Figure 3.11: Sqrt procedure with internal definitions.
8546
8547 \fObserve the structure of the environment. Sqrt is a symbol in the global environment that is bound to
8548 a procedure object whose associated environment is the global environment. When sqrt was called, a
8549 new environment E1 was formed, subordinate to the global environment, in which the parameter x is
8550 bound to 2. The body of sqrt was then evaluated in E1. Since the first expression in the body of
8551 sqrt is
(define (good-enough? guess)
  (< (abs (- (square guess) x)) 0.001))
8554 evaluating this expression defined the procedure good-enough? in the environment E1. To be more
8555 precise, the symbol good-enough? was added to the first frame of E1, bound to a procedure object
8556 whose associated environment is E1. Similarly, improve and sqrt-iter were defined as
8557 procedures in E1. For conciseness, figure 3.11 shows only the procedure object for good-enough?.
8558 After the local procedures were defined, the expression (sqrt-iter 1.0) was evaluated, still in
8559 environment E1. So the procedure object bound to sqrt-iter in E1 was called with 1 as an
8560 argument. This created an environment E2 in which guess, the parameter of sqrt-iter, is bound
8561 to 1. Sqrt-iter in turn called good-enough? with the value of guess (from E2) as the
8562 argument for good-enough?. This set up another environment, E3, in which guess (the parameter
8563 of good-enough?) is bound to 1. Although sqrt-iter and good-enough? both have a
8564 parameter named guess, these are two distinct local variables located in different frames. Also, E2
8565 and E3 both have E1 as their enclosing environment, because the sqrt-iter and good-enough?
8566 procedures both have E1 as their environment part. One consequence of this is that the symbol x that
8567 appears in the body of good-enough? will reference the binding of x that appears in E1, namely the
8568 value of x with which the original sqrt procedure was called. The environment model thus explains
8569 the two key properties that make local procedure definitions a useful technique for modularizing
8570 programs:
8571 The names of the local procedures do not interfere with names external to the enclosing
8572 procedure, because the local procedure names will be bound in the frame that the procedure
8573 creates when it is run, rather than being bound in the global environment.
8574 The local procedures can access the arguments of the enclosing procedure, simply by using
8575 parameter names as free variables. This is because the body of the local procedure is evaluated in
8576 an environment that is subordinate to the evaluation environment for the enclosing procedure.
8577 Exercise 3.11. In section 3.2.3 we saw how the environment model described the behavior of
8578 procedures with local state. Now we have seen how internal definitions work. A typical
8579 message-passing procedure contains both of these aspects. Consider the bank account procedure of
8580 section 3.1.1:
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m ’withdraw) withdraw)
          ((eq? m ’deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
8597 Show the environment structure generated by the sequence of interactions
8598 (define acc (make-account 50))
8599 ((acc ’deposit) 40)
8600 90
8601 ((acc ’withdraw) 60)
8602 30
8603 Where is the local state for acc kept? Suppose we define another account
8604 (define acc2 (make-account 100))
8605 How are the local states for the two accounts kept distinct? Which parts of the environment structure
8606 are shared between acc and acc2?
8607 12 Assignment introduces a subtlety into step 1 of the evaluation rule. As shown in exercise 3.8, the
8608
8609 presence of assignment allows us to write expressions that will produce different values depending on
8610 the order in which the subexpressions in a combination are evaluated. Thus, to be precise, we should
8611 specify an evaluation order in step 1 (e.g., left to right or right to left). However, this order should
8612 always be considered to be an implementation detail, and one should never write programs that depend
8613 on some particular order. For instance, a sophisticated compiler might optimize a program by varying
8614 the order in which subexpressions are evaluated.
8615 13 If there is already a binding for the variable in the current frame, then the binding is changed. This
8616
8617 is convenient because it allows redefinition of symbols; however, it also means that define can be
8618 used to change values, and this brings up the issues of assignment without explicitly using set!.
8619 Because of this, some people prefer redefinitions of existing symbols to signal errors or warnings.
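For example (a sketch of the behavior described in this footnote):
(define x 1)
(define x 2)    ; rebinds x in the same frame -- assignment without set!
x
2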
8620 14 The environment model will not clarify our claim in section 1.2.1 that the interpreter can execute a
8621
8622 procedure such as fact-iter in a constant amount of space using tail recursion. We will discuss tail
8623 recursion when we deal with the control structure of the interpreter in section 5.4.
8624 15 Whether W1 and W2 share the same physical code stored in the computer, or whether they each
8625
8626 keep a copy of the code, is a detail of the implementation. For the interpreter we implement in
8627 chapter 4, the code is in fact shared.
8628 [Go to first, previous, next page; contents; index]
8629
\f[Go to first, previous, next page; contents; index]

3.3 Modeling with Mutable Data
Chapter 2 dealt with compound data as a means for constructing computational objects that have
several parts, in order to model real-world objects that have several aspects. In that chapter we
introduced the discipline of data abstraction, according to which data structures are specified in
terms of constructors, which create data objects, and selectors, which access the parts of compound
data objects. But we now know that there is another aspect of data that chapter 2 did not address.
The desire to model systems composed of objects that have changing state leads us to the need to
modify compound data objects, as well as to construct and select from them. In order to model
compound objects with changing state, we will design data abstractions to include, in addition to
selectors and constructors, operations called mutators, which modify data objects. For instance,
modeling a banking system requires us to change account balances. Thus, a data structure for
representing bank accounts might admit an operation
(set-balance! <account> <new-value>)
that changes the balance of the designated account to the designated new value. Data objects for
which mutators are defined are known as mutable data objects.
Chapter 2 introduced pairs as a general-purpose ‘‘glue’’ for synthesizing compound data. We begin
this section by defining basic mutators for pairs, so that pairs can serve as building blocks for
constructing mutable data objects. These mutators greatly enhance the representational power of
pairs, enabling us to build data structures other than the sequences and trees that we worked with
in section 2.2. We also present some examples of simulations in which complex systems are modeled
as collections of objects with local state.

3.3.1 Mutable List Structure
The basic operations on pairs -- cons, car, and cdr -- can be used to construct list structure and
to select parts from list structure, but they are incapable of modifying list structure. The same
is true of the list operations we have used so far, such as append and list, since these can be
defined in terms of cons, car, and cdr. To modify list structures we need new operations.

Figure 3.12: Lists x: ((a b) c d) and y: (e f).
The primitive mutators for pairs are set-car! and set-cdr!. Set-car! takes two arguments,
the first of which must be a pair. It modifies this pair, replacing the car pointer by a pointer
to the second argument of set-car!. 16
As an example, suppose that x is bound to the list ((a b) c d) and y to the list (e f) as
illustrated in figure 3.12. Evaluating the expression (set-car! x y) modifies the pair to which
x is bound, replacing its car by the value of y. The result of the operation is shown in
figure 3.13. The structure x has been modified and would now be printed as ((e f) c d). The pairs
representing the list (a b), identified by the pointer that was replaced, are now detached from
the original structure. 17

Figure 3.13: Effect of (set-car! x y) on the lists in figure 3.12.
Compare figure 3.13 with figure 3.14, which illustrates the result of executing
(define z (cons y (cdr x)))
with x and y bound to the original lists of figure 3.12. The variable z is now bound to a new pair
created by the cons operation; the list to which x is bound is unchanged.

Figure 3.14: Effect of (define z (cons y (cdr x))) on the lists in figure 3.12.
The set-cdr! operation is similar to set-car!. The only difference is that the cdr pointer of
the pair, rather than the car pointer, is replaced. The effect of executing (set-cdr! x y) on the
lists of figure 3.12 is shown in figure 3.15. Here the cdr pointer of x has been replaced by the
pointer to (e f). Also, the list (c d), which used to be the cdr of x, is now detached from the
structure.

Figure 3.15: Effect of (set-cdr! x y) on the lists in figure 3.12.
Cons builds new list structure by creating new pairs, while set-car! and set-cdr! modify
existing pairs. Indeed, we could implement cons in terms of the two mutators, together with a
procedure get-new-pair, which returns a new pair that is not part of any existing list structure.
We obtain the new pair, set its car and cdr pointers to the designated objects, and return the
new pair as the result of the cons. 18
(define (cons x y)
  (let ((new (get-new-pair)))
    (set-car! new x)
    (set-cdr! new y)
    new))
8968 Exercise 3.12. The following procedure for appending lists was introduced in section 2.2.1:
(define (append x y)
  (if (null? x)
      y
      (cons (car x) (append (cdr x) y))))
8973 Append forms a new list by successively consing the elements of x onto y. The procedure
8974 append! is similar to append, but it is a mutator rather than a constructor. It appends the lists by
8975 splicing them together, modifying the final pair of x so that its cdr is now y. (It is an error to call
8976 append! with an empty x.)
(define (append! x y)
  (set-cdr! (last-pair x) y)
  x)
8983
8984 \fHere last-pair is a procedure that returns the last pair in its argument:
(define (last-pair x)
  (if (null? (cdr x))
      x
      (last-pair (cdr x))))
8989 Consider the interaction
8990 (define x (list ’a ’b))
8991 (define y (list ’c ’d))
8992 (define z (append x y))
8993 z
8994 (a b c d)
8995 (cdr x)
8996 <response>
8997 (define w (append! x y))
8998 w
8999 (a b c d)
9000 (cdr x)
9001 <response>
9002 What are the missing <response>s? Draw box-and-pointer diagrams to explain your answer.
9003 Exercise 3.13. Consider the following make-cycle procedure, which uses the last-pair
9004 procedure defined in exercise 3.12:
(define (make-cycle x)
  (set-cdr! (last-pair x) x)
  x)
9008 Draw a box-and-pointer diagram that shows the structure z created by
9009 (define z (make-cycle (list ’a ’b ’c)))
9010 What happens if we try to compute (last-pair z)?
9011 Exercise 3.14. The following procedure is quite useful, although obscure:
(define (mystery x)
  (define (loop x y)
    (if (null? x)
        y
        (let ((temp (cdr x)))
          (set-cdr! x y)
          (loop temp x))))
  (loop x ’()))
9020 Loop uses the ‘‘temporary’’ variable temp to hold the old value of the cdr of x, since the
9021 set-cdr! on the next line destroys the cdr. Explain what mystery does in general. Suppose v is
9022 defined by (define v (list ’a ’b ’c ’d)). Draw the box-and-pointer diagram that
9023 represents the list to which v is bound. Suppose that we now evaluate (define w (mystery
9024
9025 \fv)). Draw box-and-pointer diagrams that show the structures v and w after evaluating this expression.
9026 What would be printed as the values of v and w ?
9027
9028 Sharing and identity
9029 We mentioned in section 3.1.3 the theoretical issues of ‘‘sameness’’ and ‘‘change’’ raised by the
9030 introduction of assignment. These issues arise in practice when individual pairs are shared among
9031 different data objects. For example, consider the structure formed by
9032 (define x (list ’a ’b))
9033 (define z1 (cons x x))
9034 As shown in figure 3.16, z1 is a pair whose car and cdr both point to the same pair x. This sharing
9035 of x by the car and cdr of z1 is a consequence of the straightforward way in which cons is
9036 implemented. In general, using cons to construct lists will result in an interlinked structure of pairs in
9037 which many individual pairs are shared by many different structures.
9038
9039 Figure 3.16: The list z1 formed by (cons x x).
9041
9042 Figure 3.17: The list z2 formed by (cons (list ’a ’b) (list ’a ’b)).
9044 In contrast to figure 3.16, figure 3.17 shows the structure created by
9045 (define z2 (cons (list ’a ’b) (list ’a ’b)))
9046 In this structure, the pairs in the two (a b) lists are distinct, although the actual symbols are
9047 shared. 19
9048 When thought of as a list, z1 and z2 both represent ‘‘the same’’ list, ((a b) a b). In general,
9049 sharing is completely undetectable if we operate on lists using only cons, car, and cdr. However, if
9050 we allow mutators on list structure, sharing becomes significant. As an example of the difference that
9051 sharing can make, consider the following procedure, which modifies the car of the structure to which
9052
9053 \fit is applied:
(define (set-to-wow! x)
  (set-car! (car x) ’wow)
  x)
9057 Even though z1 and z2 are ‘‘the same’’ structure, applying set-to-wow! to them yields different
9058 results. With z1, altering the car also changes the cdr, because in z1 the car and the cdr are the
9059 same pair. With z2, the car and cdr are distinct, so set-to-wow! modifies only the car:
9060 z1
9061 ((a b) a b)
9062 (set-to-wow! z1)
9063 ((wow b) wow b)
9064 z2
9065 ((a b) a b)
9066 (set-to-wow! z2)
9067 ((wow b) a b)
9068 One way to detect sharing in list structures is to use the predicate eq?, which we introduced in
9069 section 2.3.1 as a way to test whether two symbols are equal. More generally, (eq? x y) tests
9070 whether x and y are the same object (that is, whether x and y are equal as pointers). Thus, with z1
9071 and z2 as defined in figures 3.16 and 3.17, (eq? (car z1) (cdr z1)) is true and (eq?
9072 (car z2) (cdr z2)) is false.
9073 As will be seen in the following sections, we can exploit sharing to greatly extend the repertoire of
9074 data structures that can be represented by pairs. On the other hand, sharing can also be dangerous,
9075 since modifications made to structures will also affect other structures that happen to share the
9076 modified parts. The mutation operations set-car! and set-cdr! should be used with care; unless
9077 we have a good understanding of how our data objects are shared, mutation can have unanticipated
9078 results. 20
9079 Exercise 3.15. Draw box-and-pointer diagrams to explain the effect of set-to-wow! on the
9080 structures z1 and z2 above.
9081 Exercise 3.16. Ben Bitdiddle decides to write a procedure to count the number of pairs in any list
9082 structure. ‘‘It’s easy,’’ he reasons. ‘‘The number of pairs in any structure is the number in the car
9083 plus the number in the cdr plus one more to count the current pair.’’ So Ben writes the following
9084 procedure:
(define (count-pairs x)
  (if (not (pair? x))
      0
      (+ (count-pairs (car x))
         (count-pairs (cdr x))
         1)))
9091 Show that this procedure is not correct. In particular, draw box-and-pointer diagrams representing list
9092 structures made up of exactly three pairs for which Ben’s procedure would return 3; return 4; return 7;
9093 never return at all.
9094
9095 \fExercise 3.17. Devise a correct version of the count-pairs procedure of exercise 3.16 that returns
9096 the number of distinct pairs in any structure. (Hint: Traverse the structure, maintaining an auxiliary
9097 data structure that is used to keep track of which pairs have already been counted.)
9098 Exercise 3.18. Write a procedure that examines a list and determines whether it contains a cycle, that
9099 is, whether a program that tried to find the end of the list by taking successive cdrs would go into an
9100 infinite loop. Exercise 3.13 constructed such lists.
9101 Exercise 3.19. Redo exercise 3.18 using an algorithm that takes only a constant amount of space.
9102 (This requires a very clever idea.)
9103
9104 Mutation is just assignment
9105 When we introduced compound data, we observed in section 2.1.3 that pairs can be represented purely
9106 in terms of procedures:
(define (cons x y)
  (define (dispatch m)
    (cond ((eq? m ’car) x)
          ((eq? m ’cdr) y)
          (else (error "Undefined operation -- CONS" m))))
  dispatch)
(define (car z) (z ’car))
(define (cdr z) (z ’cdr))
9115 The same observation is true for mutable data. We can implement mutable data objects as procedures
9116 using assignment and local state. For instance, we can extend the above pair implementation to handle
9117 set-car! and set-cdr! in a manner analogous to the way we implemented bank accounts using
9118 make-account in section 3.1.1:
(define (cons x y)
  (define (set-x! v) (set! x v))
  (define (set-y! v) (set! y v))
  (define (dispatch m)
    (cond ((eq? m ’car) x)
          ((eq? m ’cdr) y)
          ((eq? m ’set-car!) set-x!)
          ((eq? m ’set-cdr!) set-y!)
          (else (error "Undefined operation -- CONS" m))))
  dispatch)
(define (car z) (z ’car))
(define (cdr z) (z ’cdr))
(define (set-car! z new-value)
  ((z ’set-car!) new-value)
  z)
(define (set-cdr! z new-value)
  ((z ’set-cdr!) new-value)
  z)
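With these definitions installed in place of the primitive pair operations, the familiar
interactions still work (a quick sketch):
(define p (cons 1 2))
(car p)
1
(set-car! p 5)
(car p)
5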
9137
9138 \fAssignment is all that is needed, theoretically, to account for the behavior of mutable data. As soon as
9139 we admit set! to our language, we raise all the issues, not only of assignment, but of mutable data in
9140 general. 21
9141 Exercise 3.20. Draw environment diagrams to illustrate the evaluation of the sequence of expressions
9142 (define x (cons 1 2))
9143 (define z (cons x x))
9144 (set-car! (cdr z) 17)
9145 (car x)
9146 17
9147 using the procedural implementation of pairs given above. (Compare exercise 3.11.)
9148
9149 3.3.2 Representing Queues
9150 The mutators set-car! and set-cdr! enable us to use pairs to construct data structures that
9151 cannot be built with cons, car, and cdr alone. This section shows how to use pairs to represent a
9152 data structure called a queue. Section 3.3.3 will show how to represent data structures called tables.
9153 A queue is a sequence in which items are inserted at one end (called the rear of the queue) and deleted
9154 from the other end (the front). Figure 3.18 shows an initially empty queue in which the items a and b
9155 are inserted. Then a is removed, c and d are inserted, and b is removed. Because items are always
9156 removed in the order in which they are inserted, a queue is sometimes called a FIFO (first in, first out)
9157 buffer.
Operation                         Resulting Queue

(define q (make-queue))
(insert-queue! q ’a)              a
(insert-queue! q ’b)              a b
(delete-queue! q)                 b
(insert-queue! q ’c)              b c
(insert-queue! q ’d)              b c d
(delete-queue! q)                 c d

Figure 3.18: Queue operations.
9189 In terms of data abstraction, we can regard a queue as defined by the following set of operations:
9190 a constructor:
9191 (make-queue)
9192 returns an empty queue (a queue containing no items).
9193
9194 \ftwo selectors:
9195 (empty-queue? <queue>)
9196 tests if the queue is empty.
9197 (front-queue <queue>)
9198 returns the object at the front of the queue, signaling an error if the queue is empty; it does not
9199 modify the queue.
9200 two mutators:
9201 (insert-queue! <queue> <item>)
9202 inserts the item at the rear of the queue and returns the modified queue as its value.
9203 (delete-queue! <queue>)
9204 removes the item at the front of the queue and returns the modified queue as its value, signaling
9205 an error if the queue is empty before the deletion.
9206 Because a queue is a sequence of items, we could certainly represent it as an ordinary list; the front of
9207 the queue would be the car of the list, inserting an item in the queue would amount to appending a
9208 new element at the end of the list, and deleting an item from the queue would just be taking the cdr of
9209 the list. However, this representation is inefficient, because in order to insert an item we must scan the
9210 list until we reach the end. Since the only method we have for scanning a list is by successive cdr
operations, this scanning requires Θ(n) steps for a list of n items. A simple modification to the list
representation overcomes this disadvantage by allowing the queue operations to be implemented so
that they require Θ(1) steps; that is, so that the number of steps needed is independent of the length of
9214 the queue.
9215 The difficulty with the list representation arises from the need to scan to find the end of the list. The
9216 reason we need to scan is that, although the standard way of representing a list as a chain of pairs
9217 readily provides us with a pointer to the beginning of the list, it gives us no easily accessible pointer to
9218 the end. The modification that avoids the drawback is to represent the queue as a list, together with an
9219 additional pointer that indicates the final pair in the list. That way, when we go to insert an item, we
9220 can consult the rear pointer and so avoid scanning the list.
9221 A queue is represented, then, as a pair of pointers, front-ptr and rear-ptr, which indicate,
9222 respectively, the first and last pairs in an ordinary list. Since we would like the queue to be an
9223 identifiable object, we can use cons to combine the two pointers. Thus, the queue itself will be the
9224 cons of the two pointers. Figure 3.19 illustrates this representation.
9225
9226 Figure 3.19: Implementation of a queue as a list with front and rear pointers.
9228
To define the queue operations we use the following procedures, which enable us to select and to
modify the front and rear pointers of a queue:

(define (front-ptr queue) (car queue))
(define (rear-ptr queue) (cdr queue))
(define (set-front-ptr! queue item) (set-car! queue item))
(define (set-rear-ptr! queue item) (set-cdr! queue item))

9241 Now we can implement the actual queue operations. We will consider a queue to be empty if its front
9242 pointer is the empty list:
9243 (define (empty-queue? queue) (null? (front-ptr queue)))
9244 The make-queue constructor returns, as an initially empty queue, a pair whose car and cdr are
9245 both the empty list:
(define (make-queue) (cons '() '()))
9247 To select the item at the front of the queue, we return the car of the pair indicated by the front
9248 pointer:
(define (front-queue queue)
  (if (empty-queue? queue)
      (error "FRONT called with an empty queue" queue)
      (car (front-ptr queue))))
9253 To insert an item in a queue, we follow the method whose result is indicated in figure 3.20. We first
9254 create a new pair whose car is the item to be inserted and whose cdr is the empty list. If the queue
9255 was initially empty, we set the front and rear pointers of the queue to this new pair. Otherwise, we
9256 modify the final pair in the queue to point to the new pair, and also set the rear pointer to the new pair.
9257
9258 Figure 3.20: Result of using (insert-queue! q ’d) on the queue of figure 3.19.
(define (insert-queue! queue item)
  (let ((new-pair (cons item '())))
    (cond ((empty-queue? queue)
           (set-front-ptr! queue new-pair)
           (set-rear-ptr! queue new-pair)
           queue)
          (else
           (set-cdr! (rear-ptr queue) new-pair)
           (set-rear-ptr! queue new-pair)
           queue))))
9271 To delete the item at the front of the queue, we merely modify the front pointer so that it now points at
9272 the second item in the queue, which can be found by following the cdr pointer of the first item (see
9273 figure 3.21): 22
9274
9275 Figure 3.21: Result of using (delete-queue! q) on the queue of figure 3.20.
(define (delete-queue! queue)
  (cond ((empty-queue? queue)
         (error "DELETE! called with an empty queue" queue))
        (else
         (set-front-ptr! queue (cdr (front-ptr queue)))
         queue)))
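As a quick sanity check (an added sketch, not from the original text), we can replay the sequence
of figure 3.18 with the operations just defined; each comment shows what front-queue would
return at that point:

(define q (make-queue))
(insert-queue! q 'a)
(insert-queue! q 'b)
(front-queue q)        ; a
(delete-queue! q)
(front-queue q)        ; b
(insert-queue! q 'c)
(insert-queue! q 'd)
(delete-queue! q)
(front-queue q)        ; c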
9283 Exercise 3.21. Ben Bitdiddle decides to test the queue implementation described above. He types in
9284 the procedures to the Lisp interpreter and proceeds to try them out:
(define q1 (make-queue))
(insert-queue! q1 'a)
((a) a)
(insert-queue! q1 'b)
((a b) b)
(delete-queue! q1)
((b) b)
(delete-queue! q1)
(() b)
9294 ‘‘It’s all wrong!’’ he complains. ‘‘The interpreter’s response shows that the last item is inserted into
9295 the queue twice. And when I delete both items, the second b is still there, so the queue isn’t empty,
9296 even though it’s supposed to be.’’ Eva Lu Ator suggests that Ben has misunderstood what is
9297 happening. ‘‘It’s not that the items are going into the queue twice,’’ she explains. ‘‘It’s just that the
9298 standard Lisp printer doesn’t know how to make sense of the queue representation. If you want to see
9299 the queue printed correctly, you’ll have to define your own print procedure for queues.’’ Explain what
9300 Eva Lu is talking about. In particular, show why Ben’s examples produce the printed results that they
9301 do. Define a procedure print-queue that takes a queue as input and prints the sequence of items in
9302 the queue.
9303
Exercise 3.22. Instead of representing a queue as a pair of pointers, we can build a queue as a
procedure with local state. The local state will consist of pointers to the beginning and the end of an
ordinary list. Thus, the make-queue procedure will have the form

(define (make-queue)
  (let ((front-ptr ...)
        (rear-ptr ...))
    <definitions of internal procedures>
    (define (dispatch m) ...)
    dispatch))
9313 Complete the definition of make-queue and provide implementations of the queue operations using
9314 this representation.
9315 Exercise 3.23. A deque (‘‘double-ended queue’’) is a sequence in which items can be inserted and
9316 deleted at either the front or the rear. Operations on deques are the constructor make-deque, the
9317 predicate empty-deque?, selectors front-deque and rear-deque, and mutators
9318 front-insert-deque!, rear-insert-deque!, front-delete-deque!, and
9319 rear-delete-deque!. Show how to represent deques using pairs, and give implementations of
the operations. 23 All operations should be accomplished in Θ(1) steps.
9321
9322 3.3.3 Representing Tables
9323 When we studied various ways of representing sets in chapter 2, we mentioned in section 2.3.3 the task
9324 of maintaining a table of records indexed by identifying keys. In the implementation of data-directed
9325 programming in section 2.4.3, we made extensive use of two-dimensional tables, in which information
9326 is stored and retrieved using two keys. Here we see how to build tables as mutable list structures.
9327 We first consider a one-dimensional table, in which each value is stored under a single key. We
9328 implement the table as a list of records, each of which is implemented as a pair consisting of a key and
9329 the associated value. The records are glued together to form a list by pairs whose cars point to
9330 successive records. These gluing pairs are called the backbone of the table. In order to have a place
9331 that we can change when we add a new record to the table, we build the table as a headed list. A
9332 headed list has a special backbone pair at the beginning, which holds a dummy ‘‘record’’ -- in this
9333 case the arbitrarily chosen symbol *table*. Figure 3.22 shows the box-and-pointer diagram for the
table

a: 1
b: 2
c: 3

Figure 3.22: A table represented as a headed list.
9345 To extract information from a table we use the lookup procedure, which takes a key as argument and
9346 returns the associated value (or false if there is no value stored under that key). Lookup is defined in
9347 terms of the assoc operation, which expects a key and a list of records as arguments. Note that
9348 assoc never sees the dummy record. Assoc returns the record that has the given key as its car. 24
9349 Lookup then checks to see that the resulting record returned by assoc is not false, and returns the
9350 value (the cdr) of the record.
(define (lookup key table)
  (let ((record (assoc key (cdr table))))
    (if record
        (cdr record)
        false)))

(define (assoc key records)
  (cond ((null? records) false)
        ((equal? key (caar records)) (car records))
        (else (assoc key (cdr records)))))
9360 To insert a value in a table under a specified key, we first use assoc to see if there is already a record
9361 in the table with this key. If not, we form a new record by consing the key with the value, and insert
9362 this at the head of the table’s list of records, after the dummy record. If there already is a record with
9363 this key, we set the cdr of this record to the designated new value. The header of the table provides us
9364 with a fixed location to modify in order to insert the new record. 25
(define (insert! key value table)
  (let ((record (assoc key (cdr table))))
    (if record
        (set-cdr! record value)
        (set-cdr! table
                  (cons (cons key value) (cdr table)))))
  'ok)
9372 To construct a new table, we simply create a list containing the symbol *table*:
(define (make-table)
  (list '*table*))
9375
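To see these procedures working together (an added sketch, not part of the original text), we can
rebuild the table of figure 3.22; the name t is chosen here for illustration, and the comments show
the expected results:

(define t (make-table))
(insert! 'a 1 t)   ; ok
(insert! 'b 2 t)   ; ok
(insert! 'c 3 t)   ; ok
(lookup 'b t)      ; 2
(lookup 'd t)      ; false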
Two-dimensional tables
9377 In a two-dimensional table, each value is indexed by two keys. We can construct such a table as a
9378 one-dimensional table in which each key identifies a subtable. Figure 3.23 shows the box-and-pointer
9379 diagram for the table
math:
    +: 43
    -: 45
    *: 42
letters:
    a: 97
    b: 98
9387 which has two subtables. (The subtables don’t need a special header symbol, since the key that
9388 identifies the subtable serves this purpose.)
9389
9390 Figure 3.23: A two-dimensional table.
9392 When we look up an item, we use the first key to identify the correct subtable. Then we use the second
9393 key to identify the record within the subtable.
(define (lookup key-1 key-2 table)
  (let ((subtable (assoc key-1 (cdr table))))
    (if subtable
        (let ((record (assoc key-2 (cdr subtable))))
          (if record
              (cdr record)
              false))
        false)))
9403 To insert a new item under a pair of keys, we use assoc to see if there is a subtable stored under the
9404 first key. If not, we build a new subtable containing the single record (key-2, value) and insert it
9405 into the table under the first key. If a subtable already exists for the first key, we insert the new record
9406 into this subtable, using the insertion method for one-dimensional tables described above:
(define (insert! key-1 key-2 value table)
  (let ((subtable (assoc key-1 (cdr table))))
    (if subtable
        (let ((record (assoc key-2 (cdr subtable))))
          (if record
              (set-cdr! record value)
              (set-cdr! subtable
                        (cons (cons key-2 value)
                              (cdr subtable)))))
        (set-cdr! table
                  (cons (list key-1
                              (cons key-2 value))
                        (cdr table)))))
  'ok)
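Putting the two-dimensional operations together (an added sketch, not from the original text), we
can reproduce part of the table of figure 3.23; the name t is ours, and the comments show the
expected values:

(define t (make-table))
(insert! 'math '+ 43 t)      ; ok
(insert! 'math '- 45 t)      ; ok
(insert! 'letters 'a 97 t)   ; ok
(lookup 'math '+ t)          ; 43
(lookup 'letters 'b t)       ; false -- no record under this pair of keys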
9421
9422 Creating local tables
9423 The lookup and insert! operations defined above take the table as an argument. This enables us
9424 to use programs that access more than one table. Another way to deal with multiple tables is to have
9425 separate lookup and insert! procedures for each table. We can do this by representing a table
9426 procedurally, as an object that maintains an internal table as part of its local state. When sent an
9427 appropriate message, this ‘‘table object’’ supplies the procedure with which to operate on the internal
9428 table. Here is a generator for two-dimensional tables represented in this fashion:
(define (make-table)
  (let ((local-table (list '*table*)))
    (define (lookup key-1 key-2)
      (let ((subtable (assoc key-1 (cdr local-table))))
        (if subtable
            (let ((record (assoc key-2 (cdr subtable))))
              (if record
                  (cdr record)
                  false))
            false)))
    (define (insert! key-1 key-2 value)
      (let ((subtable (assoc key-1 (cdr local-table))))
        (if subtable
            (let ((record (assoc key-2 (cdr subtable))))
              (if record
                  (set-cdr! record value)
                  (set-cdr! subtable
                            (cons (cons key-2 value)
                                  (cdr subtable)))))
            (set-cdr! local-table
                      (cons (list key-1
                                  (cons key-2 value))
                            (cdr local-table)))))
      'ok)
    (define (dispatch m)
      (cond ((eq? m 'lookup-proc) lookup)
            ((eq? m 'insert-proc!) insert!)
            (else (error "Unknown operation -- TABLE" m))))
    dispatch))
9459 Using make-table, we could implement the get and put operations used in section 2.4.3 for
9460 data-directed programming, as follows:
(define operation-table (make-table))
(define get (operation-table 'lookup-proc))
(define put (operation-table 'insert-proc!))
9464 Get takes as arguments two keys, and put takes as arguments two keys and a value. Both operations
9465 access the same local table, which is encapsulated within the object created by the call to
9466 make-table.
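For example (an added sketch, not part of the original text), the same keys used in figure 3.23
could be stored through this interface; the comments show the expected values:

(put 'math '+ 43)    ; ok
(put 'math '- 45)    ; ok
(get 'math '+)       ; 43
(get 'math '*)       ; false -- nothing stored under this pair of keys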
9467 Exercise 3.24. In the table implementations above, the keys are tested for equality using equal?
9468 (called by assoc). This is not always the appropriate test. For instance, we might have a table with
9469 numeric keys in which we don’t need an exact match to the number we’re looking up, but only a
9470 number within some tolerance of it. Design a table constructor make-table that takes as an
9471 argument a same-key? procedure that will be used to test ‘‘equality’’ of keys. Make-table should
9472 return a dispatch procedure that can be used to access appropriate lookup and insert!
9473 procedures for a local table.
9474 Exercise 3.25. Generalizing one- and two-dimensional tables, show how to implement a table in
9475 which values are stored under an arbitrary number of keys and different values may be stored under
9476 different numbers of keys. The lookup and insert! procedures should take as input a list of keys
9477 used to access the table.
9478 Exercise 3.26. To search a table as implemented above, one needs to scan through the list of records.
9479 This is basically the unordered list representation of section 2.3.3. For large tables, it may be more
9480 efficient to structure the table in a different manner. Describe a table implementation where the (key,
9481 value) records are organized using a binary tree, assuming that keys can be ordered in some way (e.g.,
9482 numerically or alphabetically). (Compare exercise 2.66 of chapter 2.)
9483 Exercise 3.27. Memoization (also called tabulation) is a technique that enables a procedure to record,
9484 in a local table, values that have previously been computed. This technique can make a vast difference
9485 in the performance of a program. A memoized procedure maintains a table in which values of previous
9486 calls are stored using as keys the arguments that produced the values. When the memoized procedure
9487 is asked to compute a value, it first checks the table to see if the value is already there and, if so, just
9488 returns that value. Otherwise, it computes the new value in the ordinary way and stores this in the
9489 table. As an example of memoization, recall from section 1.2.2 the exponential process for computing
9490 Fibonacci numbers:
9491
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

The memoized version of the same procedure is

(define memo-fib
  (memoize (lambda (n)
             (cond ((= n 0) 0)
                   ((= n 1) 1)
                   (else (+ (memo-fib (- n 1))
                            (memo-fib (- n 2))))))))

where the memoizer is defined as

(define (memoize f)
  (let ((table (make-table)))
    (lambda (x)
      (let ((previously-computed-result (lookup x table)))
        (or previously-computed-result
            (let ((result (f x)))
              (insert! x result table)
              result))))))
9513 Draw an environment diagram to analyze the computation of (memo-fib 3). Explain why
9514 memo-fib computes the nth Fibonacci number in a number of steps proportional to n. Would the
9515 scheme still work if we had simply defined memo-fib to be (memoize fib)?
9516
9517 3.3.4 A Simulator for Digital Circuits
9518 Designing complex digital systems, such as computers, is an important engineering activity. Digital
9519 systems are constructed by interconnecting simple elements. Although the behavior of these individual
9520 elements is simple, networks of them can have very complex behavior. Computer simulation of
9521 proposed circuit designs is an important tool used by digital systems engineers. In this section we
9522 design a system for performing digital logic simulations. This system typifies a kind of program called
9523 an event-driven simulation, in which actions (‘‘events’’) trigger further events that happen at a later
time, which in turn trigger more events, and so on.
9525 Our computational model of a circuit will be composed of objects that correspond to the elementary
9526 components from which the circuit is constructed. There are wires, which carry digital signals. A
9527 digital signal may at any moment have only one of two possible values, 0 and 1. There are also various
9528 types of digital function boxes, which connect wires carrying input signals to other output wires. Such
9529 boxes produce output signals computed from their input signals. The output signal is delayed by a time
9530 that depends on the type of the function box. For example, an inverter is a primitive function box that
9531 inverts its input. If the input signal to an inverter changes to 0, then one inverter-delay later the inverter
9532 will change its output signal to 1. If the input signal to an inverter changes to 1, then one inverter-delay
9533 later the inverter will change its output signal to 0. We draw an inverter symbolically as in figure 3.24.
9534 An and-gate, also shown in figure 3.24, is a primitive function box with two inputs and one output. It
9535 drives its output signal to a value that is the logical and of the inputs. That is, if both of its input
9536
signals become 1, then one and-gate-delay time later the and-gate will force its output signal to be 1;
9538 otherwise the output will be 0. An or-gate is a similar two-input primitive function box that drives its
9539 output signal to a value that is the logical or of the inputs. That is, the output will become 1 if at least
9540 one of the input signals is 1; otherwise the output will become 0.
9541
9542 Figure 3.24: Primitive functions in the digital logic simulator.
9544 We can connect primitive functions together to construct more complex functions. To accomplish this
9545 we wire the outputs of some function boxes to the inputs of other function boxes. For example, the
9546 half-adder circuit shown in figure 3.25 consists of an or-gate, two and-gates, and an inverter. It takes
9547 two input signals, A and B, and has two output signals, S and C. S will become 1 whenever precisely
9548 one of A and B is 1, and C will become 1 whenever A and B are both 1. We can see from the figure
9549 that, because of the delays involved, the outputs may be generated at different times. Many of the
9550 difficulties in the design of digital circuits arise from this fact.
9551
9552 Figure 3.25: A half-adder circuit.
9554 We will now build a program for modeling the digital logic circuits we wish to study. The program
9555 will construct computational objects modeling the wires, which will ‘‘hold’’ the signals. Function
9556 boxes will be modeled by procedures that enforce the correct relationships among the signals.
9557 One basic element of our simulation will be a procedure make-wire, which constructs wires. For
9558 example, we can construct six wires as follows:
(define a (make-wire))
(define b (make-wire))
(define c (make-wire))
(define d (make-wire))
(define e (make-wire))
(define s (make-wire))

9580 We attach a function box to a set of wires by calling a procedure that constructs that kind of box. The
9581 arguments to the constructor procedure are the wires to be attached to the box. For example, given that
9582 we can construct and-gates, or-gates, and inverters, we can wire together the half-adder shown in
9583 figure 3.25:
9584
(or-gate a b d)
ok
(and-gate a b c)
ok
(inverter c e)
ok
(and-gate d e s)
ok
9593 Better yet, we can explicitly name this operation by defining a procedure half-adder that
9594 constructs this circuit, given the four external wires to be attached to the half-adder:
(define (half-adder a b s c)
  (let ((d (make-wire)) (e (make-wire)))
    (or-gate a b d)
    (and-gate a b c)
    (inverter c e)
    (and-gate d e s)
    'ok))
9602 The advantage of making this definition is that we can use half-adder itself as a building block in
9603 creating more complex circuits. Figure 3.26, for example, shows a full-adder composed of two
9604 half-adders and an or-gate. 26 We can construct a full-adder as follows:
(define (full-adder a b c-in sum c-out)
  (let ((s (make-wire))
        (c1 (make-wire))
        (c2 (make-wire)))
    (half-adder b c-in s c1)
    (half-adder a s sum c2)
    (or-gate c1 c2 c-out)
    'ok))
9613 Having defined full-adder as a procedure, we can now use it as a building block for creating still
9614 more complex circuits. (For example, see exercise 3.30.)
9615
9616 Figure 3.26: A full-adder circuit.
9618 In essence, our simulator provides us with the tools to construct a language of circuits. If we adopt the
9619 general perspective on languages with which we approached the study of Lisp in section 1.1, we can
9620 say that the primitive function boxes form the primitive elements of the language, that wiring boxes
9621 together provides a means of combination, and that specifying wiring patterns as procedures serves as
9622
a means of abstraction.
9624
9625 Primitive function boxes
9626 The primitive function boxes implement the ‘‘forces’’ by which a change in the signal on one wire
9627 influences the signals on other wires. To build function boxes, we use the following operations on
9628 wires:
9629 (get-signal <wire>)
9630 returns the current value of the signal on the wire.
9631 (set-signal! <wire> <new value>)
9632 changes the value of the signal on the wire to the new value.
9633 (add-action! <wire> <procedure of no arguments>)
9634 asserts that the designated procedure should be run whenever the signal on the wire changes
9635 value. Such procedures are the vehicles by which changes in the signal value on the wire are
9636 communicated to other wires.
9637 In addition, we will make use of a procedure after-delay that takes a time delay and a procedure
9638 to be run and executes the given procedure after the given delay.
9639 Using these procedures, we can define the primitive digital logic functions. To connect an input to an
9640 output through an inverter, we use add-action! to associate with the input wire a procedure that
9641 will be run whenever the signal on the input wire changes value. The procedure computes the
9642 logical-not of the input signal, and then, after one inverter-delay, sets the output signal to
9643 be this new value:
(define (inverter input output)
  (define (invert-input)
    (let ((new-value (logical-not (get-signal input))))
      (after-delay inverter-delay
                   (lambda ()
                     (set-signal! output new-value)))))
  (add-action! input invert-input)
  'ok)

(define (logical-not s)
  (cond ((= s 0) 1)
        ((= s 1) 0)
        (else (error "Invalid signal" s))))
9656 An and-gate is a little more complex. The action procedure must be run if either of the inputs to the
9657 gate changes. It computes the logical-and (using a procedure analogous to logical-not) of
9658 the values of the signals on the input wires and sets up a change to the new value to occur on the
9659 output wire after one and-gate-delay.
(define (and-gate a1 a2 output)
  (define (and-action-procedure)
    (let ((new-value
           (logical-and (get-signal a1) (get-signal a2))))
      (after-delay and-gate-delay
                   (lambda ()
                     (set-signal! output new-value)))))
  (add-action! a1 and-action-procedure)
  (add-action! a2 and-action-procedure)
  'ok)
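The text does not spell out logical-and; a minimal sketch consistent with logical-not above
(our assumption, not the book's own definition) is:

(define (logical-and s1 s2)
  (cond ((and (= s1 1) (= s2 1)) 1)   ; both inputs asserted
        ((or (= s1 0) (= s2 0)) 0)    ; at least one input is 0
        (else (error "Invalid signal" (list s1 s2)))))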
9671 Exercise 3.28. Define an or-gate as a primitive function box. Your or-gate constructor should be
9672 similar to and-gate.
9673 Exercise 3.29. Another way to construct an or-gate is as a compound digital logic device, built from
9674 and-gates and inverters. Define a procedure or-gate that accomplishes this. What is the delay time
9675 of the or-gate in terms of and-gate-delay and inverter-delay?
9676 Exercise 3.30. Figure 3.27 shows a ripple-carry adder formed by stringing together n full-adders.
This is the simplest form of parallel adder for adding two n-bit binary numbers. The inputs A1, A2,
A3, ..., An and B1, B2, B3, ..., Bn are the two binary numbers to be added (each Ak and Bk is a
0 or a 1). The circuit generates S1, S2, S3, ..., Sn, the n bits of the sum, and C, the carry from the
addition. Write a procedure ripple-carry-adder that generates this circuit. The procedure
should take as arguments three lists of n wires each -- the Ak, the Bk, and the Sk -- and also another
9682 wire C. The major drawback of the ripple-carry adder is the need to wait for the carry signals to
9683 propagate. What is the delay needed to obtain the complete output from an n-bit ripple-carry adder,
9684 expressed in terms of the delays for and-gates, or-gates, and inverters?
9685
9686 Figure 3.27: A ripple-carry adder for n-bit numbers.
9688
9689 Representing wires
9690 A wire in our simulation will be a computational object with two local state variables: a
9691 signal-value (initially taken to be 0) and a collection of action-procedures to be run when
9692 the signal changes value. We implement the wire, using message-passing style, as a collection of local
9693 procedures together with a dispatch procedure that selects the appropriate local operation, just as
9694 we did with the simple bank-account object in section 3.1.1:
(define (make-wire)
  (let ((signal-value 0) (action-procedures '()))
    (define (set-my-signal! new-value)
      (if (not (= signal-value new-value))
          (begin (set! signal-value new-value)
                 (call-each action-procedures))
          'done))
    (define (accept-action-procedure! proc)
      (set! action-procedures (cons proc action-procedures))
      (proc))
    (define (dispatch m)
      (cond ((eq? m 'get-signal) signal-value)
            ((eq? m 'set-signal!) set-my-signal!)
            ((eq? m 'add-action!) accept-action-procedure!)
            (else (error "Unknown operation -- WIRE" m))))
    dispatch))
9712 The local procedure set-my-signal! tests whether the new signal value changes the signal on the
9713 wire. If so, it runs each of the action procedures, using the following procedure call-each, which
9714 calls each of the items in a list of no-argument procedures:
(define (call-each procedures)
  (if (null? procedures)
      'done
      (begin
        ((car procedures))
        (call-each (cdr procedures)))))
9721 The local procedure accept-action-procedure! adds the given procedure to the list of
9722 procedures to be run, and then runs the new procedure once. (See exercise 3.31.)
9723 With the local dispatch procedure set up as specified, we can provide the following procedures to
9724 access the local operations on wires: 27
(define (get-signal wire)
  (wire 'get-signal))
(define (set-signal! wire new-value)
  ((wire 'set-signal!) new-value))
(define (add-action! wire action-procedure)
  ((wire 'add-action!) action-procedure))
9731 Wires, which have time-varying signals and may be incrementally attached to devices, are typical of
9732 mutable objects. We have modeled them as procedures with local state variables that are modified by
9733 assignment. When a new wire is created, a new set of state variables is allocated (by the let
9734 expression in make-wire) and a new dispatch procedure is constructed and returned, capturing
9735 the environment with the new state variables.
9736 The wires are shared among the various devices that have been connected to them. Thus, a change
9737 made by an interaction with one device will affect all the other devices attached to the wire. The wire
9738 communicates the change to its neighbors by calling the action procedures provided to it when the
9739 connections were established.
9740
9741 The agenda
9742 The only thing needed to complete the simulator is after-delay. The idea here is that we maintain
9743 a data structure, called an agenda, that contains a schedule of things to do. The following operations
9744 are defined for agendas:
9745
(make-agenda)
9747 returns a new empty agenda.
9748 (empty-agenda? <agenda>)
9749 is true if the specified agenda is empty.
9750 (first-agenda-item <agenda>)
9751 returns the first item on the agenda.
9752 (remove-first-agenda-item! <agenda>)
9753 modifies the agenda by removing the first item.
9754 (add-to-agenda! <time> <action> <agenda>)
9755 modifies the agenda by adding the given action procedure to be run at the specified time.
9756 (current-time <agenda>)
9757 returns the current simulation time.
9758 The particular agenda that we use is denoted by the-agenda. The procedure after-delay adds
9759 new elements to the-agenda:
(define (after-delay delay action)
  (add-to-agenda! (+ delay (current-time the-agenda))
                  action
                  the-agenda))
9764 The simulation is driven by the procedure propagate, which operates on the-agenda, executing
9765 each procedure on the agenda in sequence. In general, as the simulation runs, new items will be added
9766 to the agenda, and propagate will continue the simulation as long as there are items on the agenda:
(define (propagate)
  (if (empty-agenda? the-agenda)
      'done
      (let ((first-item (first-agenda-item the-agenda)))
        (first-item)
        (remove-first-agenda-item! the-agenda)
        (propagate))))
9774
9775 A sample simulation
9776 The following procedure, which places a ‘‘probe’’ on a wire, shows the simulator in action. The probe
9777 tells the wire that, whenever its signal changes value, it should print the new signal value, together
9778 with the current time and a name that identifies the wire:
(define (probe name wire)
  (add-action! wire
               (lambda ()
                 (newline)
                 (display name)
                 (display " ")
                 (display (current-time the-agenda))
                 (display " New-value = ")
                 (display (get-signal wire)))))
9789 We begin by initializing the agenda and specifying delays for the primitive function boxes:
(define the-agenda (make-agenda))
(define inverter-delay 2)
(define and-gate-delay 3)
(define or-gate-delay 5)

9800 Now we define four wires, placing probes on two of them:
(define input-1 (make-wire))
(define input-2 (make-wire))
(define sum (make-wire))
(define carry (make-wire))
(probe 'sum sum)
sum 0 New-value = 0
(probe 'carry carry)
carry 0 New-value = 0
9809 Next we connect the wires in a half-adder circuit (as in figure 3.25), set the signal on input-1 to 1,
9810 and run the simulation:
9811 (half-adder input-1 input-2 sum carry)
9812 ok
9813 (set-signal! input-1 1)
9814 done
9815 (propagate)
9816 sum 8 New-value = 1
9817 done
9818 The sum signal changes to 1 at time 8. We are now eight time units from the beginning of the
9819 simulation. At this point, we can set the signal on input-2 to 1 and allow the values to propagate:
9820 (set-signal! input-2 1)
9821 done
9822 (propagate)
9823 carry 11 New-value = 1
9824 sum 16 New-value = 0
9825 done
9826 The carry changes to 1 at time 11 and the sum changes to 0 at time 16.
9827 Exercise 3.31. The internal procedure accept-action-procedure! defined in make-wire
9828 specifies that when a new action procedure is added to a wire, the procedure is immediately run.
9829 Explain why this initialization is necessary. In particular, trace through the half-adder example in the
9830 paragraphs above and say how the system’s response would differ if we had defined
9831 accept-action-procedure! as
(define (accept-action-procedure! proc)
  (set! action-procedures (cons proc action-procedures)))
9834
Implementing the agenda
9836 Finally, we give details of the agenda data structure, which holds the procedures that are scheduled for
9837 future execution.
9838 The agenda is made up of time segments. Each time segment is a pair consisting of a number (the time)
9839 and a queue (see exercise 3.32) that holds the procedures that are scheduled to be run during that time
9840 segment.
(define (make-time-segment time queue)
  (cons time queue))
(define (segment-time s) (car s))
(define (segment-queue s) (cdr s))
9850
9851 We will operate on the time-segment queues using the queue operations described in section 3.3.2.
9852 The agenda itself is a one-dimensional table of time segments. It differs from the tables described in
9853 section 3.3.3 in that the segments will be sorted in order of increasing time. In addition, we store the
9854 current time (i.e., the time of the last action that was processed) at the head of the agenda. A newly
9855 constructed agenda has no time segments and has a current time of 0: 28
(define (make-agenda) (list 0))
(define (current-time agenda) (car agenda))
(define (set-current-time! agenda time)
  (set-car! agenda time))
(define (segments agenda) (cdr agenda))
(define (set-segments! agenda segments)
  (set-cdr! agenda segments))
(define (first-segment agenda) (car (segments agenda)))
(define (rest-segments agenda) (cdr (segments agenda)))
9865 An agenda is empty if it has no time segments:
(define (empty-agenda? agenda)
  (null? (segments agenda)))
9868 To add an action to an agenda, we first check if the agenda is empty. If so, we create a time segment
9869 for the action and install this in the agenda. Otherwise, we scan the agenda, examining the time of each
9870 segment. If we find a segment for our appointed time, we add the action to the associated queue. If we
9871 reach a time later than the one to which we are appointed, we insert a new time segment into the
9872 agenda just before it. If we reach the end of the agenda, we must create a new time segment at the end.
(define (add-to-agenda! time action agenda)
  (define (belongs-before? segments)
    (or (null? segments)
        (< time (segment-time (car segments)))))
  (define (make-new-time-segment time action)
    (let ((q (make-queue)))
      (insert-queue! q action)
      (make-time-segment time q)))
  (define (add-to-segments! segments)
    (if (= (segment-time (car segments)) time)
        (insert-queue! (segment-queue (car segments))
                       action)
        (let ((rest (cdr segments)))
          (if (belongs-before? rest)
              (set-cdr!
               segments
               (cons (make-new-time-segment time action)
                     (cdr segments)))
              (add-to-segments! rest)))))
  (let ((segments (segments agenda)))
    (if (belongs-before? segments)
        (set-segments!
         agenda
         (cons (make-new-time-segment time action)
               segments))
        (add-to-segments! segments))))
9900 The procedure that removes the first item from the agenda deletes the item at the front of the queue in
9901 the first time segment. If this deletion makes the time segment empty, we remove it from the list of
9902 segments: 29
(define (remove-first-agenda-item! agenda)
  (let ((q (segment-queue (first-segment agenda))))
    (delete-queue! q)
    (if (empty-queue? q)
        (set-segments! agenda (rest-segments agenda)))))
9908 The first agenda item is found at the head of the queue in the first time segment. Whenever we extract
9909 an item, we also update the current time: 30
(define (first-agenda-item agenda)
  (if (empty-agenda? agenda)
      (error "Agenda is empty -- FIRST-AGENDA-ITEM")
      (let ((first-seg (first-segment agenda)))
        (set-current-time! agenda (segment-time first-seg))
        (front-queue (segment-queue first-seg)))))
9916 Exercise 3.32. The procedures to be run during each time segment of the agenda are kept in a queue.
9917 Thus, the procedures for each segment are called in the order in which they were added to the agenda
9918 (first in, first out). Explain why this order must be used. In particular, trace the behavior of an and-gate
9919 whose inputs change from 0,1 to 1,0 in the same segment and say how the behavior would differ if we
9920 stored a segment’s procedures in an ordinary list, adding and removing procedures only at the front
9921 (last in, first out).
9922
9923 3.3.5 Propagation of Constraints
9924 Computer programs are traditionally organized as one-directional computations, which perform
9925 operations on prespecified arguments to produce desired outputs. On the other hand, we often model
9926 systems in terms of relations among quantities. For example, a mathematical model of a mechanical
9927 structure might include the information that the deflection d of a metal rod is related to the force F on
9928 the rod, the length L of the rod, the cross-sectional area A, and the elastic modulus E via the equation
dAE = FL^3

Such an equation is not one-directional. Given any four of the quantities, we can use it to compute the
9931 fifth. Yet translating the equation into a traditional computer language would force us to choose one of
9932 the quantities to be computed in terms of the other four. Thus, a procedure for computing the area A
9933 could not be used to compute the deflection d, even though the computations of A and d arise from the
9934 same equation. 31
9935 In this section, we sketch the design of a language that enables us to work in terms of relations
9936 themselves. The primitive elements of the language are primitive constraints, which state that certain
9937 relations hold between quantities. For example, (adder a b c) specifies that the quantities a, b,
9938 and c must be related by the equation a + b = c, (multiplier x y z) expresses the constraint xy
9939 = z, and (constant 3.14 x) says that the value of x must be 3.14.
9940 Our language provides a means of combining primitive constraints in order to express more complex
9941 relations. We combine constraints by constructing constraint networks, in which constraints are joined
9942 by connectors. A connector is an object that ‘‘holds’’ a value that may participate in one or more
9943 constraints. For example, we know that the relationship between Fahrenheit and Celsius temperatures
is

9C = 5(F - 32)

9946 Such a constraint can be thought of as a network consisting of primitive adder, multiplier, and constant
9947 constraints (figure 3.28). In the figure, we see on the left a multiplier box with three terminals, labeled
9948 m1, m2, and p. These connect the multiplier to the rest of the network as follows: The m1 terminal is
9949 linked to a connector C, which will hold the Celsius temperature. The m2 terminal is linked to a
9950 connector w, which is also linked to a constant box that holds 9. The p terminal, which the multiplier
9951 box constrains to be the product of m1 and m2, is linked to the p terminal of another multiplier box,
9952 whose m2 is connected to a constant 5 and whose m1 is connected to one of the terms in a sum.
9953
9954 Figure 3.28: The relation 9C = 5(F - 32) expressed as a constraint network.
9956 Computation by such a network proceeds as follows: When a connector is given a value (by the user or
9957 by a constraint box to which it is linked), it awakens all of its associated constraints (except for the
9958 constraint that just awakened it) to inform them that it has a value. Each awakened constraint box then
9959 polls its connectors to see if there is enough information to determine a value for a connector. If so, the
9960 box sets that connector, which then awakens all of its associated constraints, and so on. For instance,
9961 in conversion between Celsius and Fahrenheit, w, x, and y are immediately set by the constant boxes to
9962 9, 5, and 32, respectively. The connectors awaken the multipliers and the adder, which determine that
9963 there is not enough information to proceed. If the user (or some other part of the network) sets C to a
9964 value (say 25), the leftmost multiplier will be awakened, and it will set u to 25 · 9 = 225. Then u
9965 awakens the second multiplier, which sets v to 45, and v awakens the adder, which sets F to 77.
9966
Using the constraint system
9968 To use the constraint system to carry out the temperature computation outlined above, we first create
9969 two connectors, C and F, by calling the constructor make-connector, and link C and F in an
9970 appropriate network:
9971 (define C (make-connector))
9972 (define F (make-connector))
9973 (celsius-fahrenheit-converter C F)
9974 ok
9975 The procedure that creates the network is defined as follows:
(define (celsius-fahrenheit-converter c f)
  (let ((u (make-connector))
        (v (make-connector))
        (w (make-connector))
        (x (make-connector))
        (y (make-connector)))
    (multiplier c w u)
    (multiplier v x u)
    (adder v y f)
    (constant 9 w)
    (constant 5 x)
    (constant 32 y)
    'ok))
9989 This procedure creates the internal connectors u, v, w, x, and y, and links them as shown in
9990 figure 3.28 using the primitive constraint constructors adder, multiplier, and constant. Just
9991 as with the digital-circuit simulator of section 3.3.4, expressing these combinations of primitive
9992 elements in terms of procedures automatically provides our language with a means of abstraction for
9993 compound objects.
9994 To watch the network in action, we can place probes on the connectors C and F, using a probe
9995 procedure similar to the one we used to monitor wires in section 3.3.4. Placing a probe on a connector
9996 will cause a message to be printed whenever the connector is given a value:
9997 (probe "Celsius temp" C)
9998 (probe "Fahrenheit temp" F)
9999 Next we set the value of C to 25. (The third argument to set-value! tells C that this directive
10000 comes from the user.)
(set-value! C 25 'user)
10002 Probe: Celsius temp = 25
10003 Probe: Fahrenheit temp = 77
10004 done
10005 The probe on C awakens and reports the value. C also propagates its value through the network as
10006 described above. This sets F to 77, which is reported by the probe on F.
10007
Now we can try to set F to a new value, say 212:
(set-value! F 212 'user)
Error! Contradiction (77 212)
10011 The connector complains that it has sensed a contradiction: Its value is 77, and someone is trying to set
10012 it to 212. If we really want to reuse the network with new values, we can tell C to forget its old value:
(forget-value! C 'user)
10014 Probe: Celsius temp = ?
10015 Probe: Fahrenheit temp = ?
10016 done
10017 C finds that the user, who set its value originally, is now retracting that value, so C agrees to lose its
10018 value, as shown by the probe, and informs the rest of the network of this fact. This information
10019 eventually propagates to F, which now finds that it has no reason for continuing to believe that its own
10020 value is 77. Thus, F also gives up its value, as shown by the probe.
10021 Now that F has no value, we are free to set it to 212:
(set-value! F 212 'user)
10023 Probe: Fahrenheit temp = 212
10024 Probe: Celsius temp = 100
10025 done
10026 This new value, when propagated through the network, forces C to have a value of 100, and this is
10027 registered by the probe on C. Notice that the very same network is being used to compute C given F
10028 and to compute F given C. This nondirectionality of computation is the distinguishing feature of
10029 constraint-based systems.
10030
10031 Implementing the constraint system
10032 The constraint system is implemented via procedural objects with local state, in a manner very similar
10033 to the digital-circuit simulator of section 3.3.4. Although the primitive objects of the constraint system
10034 are somewhat more complex, the overall system is simpler, since there is no concern about agendas
10035 and logic delays.
10036 The basic operations on connectors are the following:
10037 (has-value? <connector>)
10038 tells whether the connector has a value.
10039 (get-value <connector>)
10040 returns the connector’s current value.
10041 (set-value! <connector> <new-value> <informant>)
10042 indicates that the informant is requesting the connector to set its value to the new value.
10043 (forget-value! <connector> <retractor>)
10044 tells the connector that the retractor is requesting it to forget its value.
10045
(connect <connector> <new-constraint>)
10047 tells the connector to participate in the new constraint.
10048 The connectors communicate with the constraints by means of the procedures
10049 inform-about-value, which tells the given constraint that the connector has a value, and
10050 inform-about-no-value, which tells the constraint that the connector has lost its value.
10051 Adder constructs an adder constraint among summand connectors a1 and a2 and a sum connector.
10052 An adder is implemented as a procedure with local state (the procedure me below):
(define (adder a1 a2 sum)
  (define (process-new-value)
    (cond ((and (has-value? a1) (has-value? a2))
           (set-value! sum
                       (+ (get-value a1) (get-value a2))
                       me))
          ((and (has-value? a1) (has-value? sum))
           (set-value! a2
                       (- (get-value sum) (get-value a1))
                       me))
          ((and (has-value? a2) (has-value? sum))
           (set-value! a1
                       (- (get-value sum) (get-value a2))
                       me))))
  (define (process-forget-value)
    (forget-value! sum me)
    (forget-value! a1 me)
    (forget-value! a2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value)
           (process-new-value))
          ((eq? request 'I-lost-my-value)
           (process-forget-value))
          (else
           (error "Unknown request -- ADDER" request))))
  (connect a1 me)
  (connect a2 me)
  (connect sum me)
  me)
10083 Adder connects the new adder to the designated connectors and returns it as its value. The procedure
10084 me, which represents the adder, acts as a dispatch to the local procedures. The following ‘‘syntax
10085 interfaces’’ (see footnote 27 in section 3.3.4) are used in conjunction with the dispatch:
(define (inform-about-value constraint)
  (constraint 'I-have-a-value))
(define (inform-about-no-value constraint)
  (constraint 'I-lost-my-value))
10090
The adder’s local procedure process-new-value is called when the adder is informed that one of
10092 its connectors has a value. The adder first checks to see if both a1 and a2 have values. If so, it tells
10093 sum to set its value to the sum of the two addends. The informant argument to set-value! is
10094 me, which is the adder object itself. If a1 and a2 do not both have values, then the adder checks to see
10095 if perhaps a1 and sum have values. If so, it sets a2 to the difference of these two. Finally, if a2 and
10096 sum have values, this gives the adder enough information to set a1. If the adder is told that one of its
10097 connectors has lost a value, it requests that all of its connectors now lose their values. (Only those
10098 values that were set by this adder are actually lost.) Then it runs process-new-value. The reason
10099 for this last step is that one or more connectors may still have a value (that is, a connector may have
10100 had a value that was not originally set by the adder), and these values may need to be propagated back
10101 through the adder.
10102 A multiplier is very similar to an adder. It will set its product to 0 if either of the factors is 0, even if
10103 the other factor is not known.
(define (multiplier m1 m2 product)
  (define (process-new-value)
    (cond ((or (and (has-value? m1) (= (get-value m1) 0))
               (and (has-value? m2) (= (get-value m2) 0)))
           (set-value! product 0 me))
          ((and (has-value? m1) (has-value? m2))
           (set-value! product
                       (* (get-value m1) (get-value m2))
                       me))
          ((and (has-value? product) (has-value? m1))
           (set-value! m2
                       (/ (get-value product) (get-value m1))
                       me))
          ((and (has-value? product) (has-value? m2))
           (set-value! m1
                       (/ (get-value product) (get-value m2))
                       me))))
  (define (process-forget-value)
    (forget-value! product me)
    (forget-value! m1 me)
    (forget-value! m2 me)
    (process-new-value))
  (define (me request)
    (cond ((eq? request 'I-have-a-value)
           (process-new-value))
          ((eq? request 'I-lost-my-value)
           (process-forget-value))
          (else
           (error "Unknown request -- MULTIPLIER" request))))
  (connect m1 me)
  (connect m2 me)
  (connect product me)
  me)
10137
A constant constructor simply sets the value of the designated connector. Any
10139 I-have-a-value or I-lost-my-value message sent to the constant box will produce an error.
(define (constant value connector)
  (define (me request)
    (error "Unknown request -- CONSTANT" request))
  (connect connector me)
  (set-value! connector value me)
  me)
10146 Finally, a probe prints a message about the setting or unsetting of the designated connector:
(define (probe name connector)
  (define (print-probe value)
    (newline)
    (display "Probe: ")
    (display name)
    (display " = ")
    (display value))
  (define (process-new-value)
    (print-probe (get-value connector)))
  (define (process-forget-value)
    (print-probe "?"))
  (define (me request)
    (cond ((eq? request 'I-have-a-value)
           (process-new-value))
          ((eq? request 'I-lost-my-value)
           (process-forget-value))
          (else
           (error "Unknown request -- PROBE" request))))
  (connect connector me)
  me)
10167
10168 Representing connectors
10169 A connector is represented as a procedural object with local state variables value, the current value
10170 of the connector; informant, the object that set the connector’s value; and constraints, a list of
10171 the constraints in which the connector participates.
(define (make-connector)
  (let ((value false) (informant false) (constraints '()))
    (define (set-my-value newval setter)
      (cond ((not (has-value? me))
             (set! value newval)
             (set! informant setter)
             (for-each-except setter
                              inform-about-value
                              constraints))
            ((not (= value newval))
             (error "Contradiction" (list value newval)))
            (else 'ignored)))
    (define (forget-my-value retractor)
      (if (eq? retractor informant)
          (begin (set! informant false)
                 (for-each-except retractor
                                  inform-about-no-value
                                  constraints))
          'ignored))
    (define (connect new-constraint)
      (if (not (memq new-constraint constraints))
          (set! constraints
                (cons new-constraint constraints)))
      (if (has-value? me)
          (inform-about-value new-constraint))
      'done)
    (define (me request)
      (cond ((eq? request 'has-value?)
             (if informant true false))
            ((eq? request 'value) value)
            ((eq? request 'set-value!) set-my-value)
            ((eq? request 'forget) forget-my-value)
            ((eq? request 'connect) connect)
            (else (error "Unknown operation -- CONNECTOR"
                         request))))
    me))
10209 The connector’s local procedure set-my-value is called when there is a request to set the
10210 connector’s value. If the connector does not currently have a value, it will set its value and remember
10211 as informant the constraint that requested the value to be set. 32 Then the connector will notify all
10212 of its participating constraints except the constraint that requested the value to be set. This is
10213 accomplished using the following iterator, which applies a designated procedure to all items in a list
10214 except a given one:
(define (for-each-except exception procedure list)
  (define (loop items)
    (cond ((null? items) 'done)
          ((eq? (car items) exception) (loop (cdr items)))
          (else (procedure (car items))
                (loop (cdr items)))))
  (loop list))
10222 If a connector is asked to forget its value, it runs the local procedure forget-my-value, which first
10223 checks to make sure that the request is coming from the same object that set the value originally. If so,
10224 the connector informs its associated constraints about the loss of the value.
10225 The local procedure connect adds the designated new constraint to the list of constraints if it is not
10226 already in that list. Then, if the connector has a value, it informs the new constraint of this fact.
10227 The connector’s procedure me serves as a dispatch to the other internal procedures and also represents
10228 the connector as an object. The following procedures provide a syntax interface for the dispatch:
10229
(define (has-value? connector)
  (connector 'has-value?))
(define (get-value connector)
  (connector 'value))
(define (set-value! connector new-value informant)
  ((connector 'set-value!) new-value informant))
(define (forget-value! connector retractor)
  ((connector 'forget) retractor))
(define (connect connector new-constraint)
  ((connector 'connect) new-constraint))
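As a small demonstration (an added sketch, not part of the original text), a lone connector can be
driven directly through this interface; the connector a is hypothetical, and the comments show the
expected responses:

(define a (make-connector))
(has-value? a)            ; false
(set-value! a 10 'user)   ; done
(get-value a)             ; 10
(forget-value! a 'user)   ; done -- the user set the value, so the user may retract it
(has-value? a)            ; false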
10240 Exercise 3.33. Using primitive multiplier, adder, and constant constraints, define a procedure
10241 averager that takes three connectors a, b, and c as inputs and establishes the constraint that the
10242 value of c is the average of the values of a and b.
10243 Exercise 3.34. Louis Reasoner wants to build a squarer, a constraint device with two terminals such
10244 that the value of connector b on the second terminal will always be the square of the value a on the
10245 first terminal. He proposes the following simple device made from a multiplier:
(define (squarer a b)
  (multiplier a a b))
10248 There is a serious flaw in this idea. Explain.
10249 Exercise 3.35. Ben Bitdiddle tells Louis that one way to avoid the trouble in exercise 3.34 is to define
10250 a squarer as a new primitive constraint. Fill in the missing portions in Ben’s outline for a procedure to
10251 implement such a constraint:
(define (squarer a b)
  (define (process-new-value)
    (if (has-value? b)
        (if (< (get-value b) 0)
            (error "square less than 0 -- SQUARER" (get-value b))
            <alternative1>)
        <alternative2>))
  (define (process-forget-value) <body1>)
  (define (me request) <body2>)
  <rest of definition>
  me)
10263 Exercise 3.36. Suppose we evaluate the following sequence of expressions in the global environment:
(define a (make-connector))
(define b (make-connector))
(set-value! a 10 'user)
10267 At some time during evaluation of the set-value!, the following expression from the connector’s
10268 local procedure is evaluated:
10269 (for-each-except setter inform-about-value constraints)
10270
Draw an environment diagram showing the environment in which the above expression is evaluated.
10272 Exercise 3.37. The celsius-fahrenheit-converter procedure is cumbersome when
10273 compared with a more expression-oriented style of definition, such as
(define (celsius-fahrenheit-converter x)
  (c+ (c* (c/ (cv 9) (cv 5))
          x)
      (cv 32)))
(define C (make-connector))
(define F (celsius-fahrenheit-converter C))
10280 Here c+, c*, etc. are the ‘‘constraint’’ versions of the arithmetic operations. For example, c+ takes
10281 two connectors as arguments and returns a connector that is related to these by an adder constraint:
10282 (define (c+ x y)
10283 (let ((z (make-connector)))
10284 (adder x y z)
10285 z))
10286 Define analogous procedures c-, c*, c/, and cv (constant value) that enable us to define compound
10287 constraints as in the converter example above. 33
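
As a hint at the shape these take, cv follows the same pattern as c+ but uses the primitive
constant constraint defined earlier in this section (a sketch, not the book's own answer to the
exercise):

(define (cv value)
  (let ((z (make-connector)))
    (constant value z)   ; the constant constraint fixes z's value permanently
    z))
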
16 Set-car! and set-cdr! return implementation-dependent values. Like set!, they should be
used only for their effect.
17 We see from this that mutation operations on lists can create ‘‘garbage’’ that is not part of any
accessible structure. We will see in section 5.3.2 that Lisp memory-management systems include a
garbage collector, which identifies and recycles the memory space used by unneeded pairs.
18 Get-new-pair is one of the operations that must be implemented as part of the memory
management required by a Lisp implementation. We will discuss this in section 5.3.1.
19 The two pairs are distinct because each call to cons returns a new pair. The symbols are shared; in
Scheme there is a unique symbol with any given name. Since Scheme provides no way to mutate a
symbol, this sharing is undetectable. Note also that the sharing is what enables us to compare symbols
using eq?, which simply checks equality of pointers.
20 The subtleties of dealing with sharing of mutable data objects reflect the underlying issues of
‘‘sameness’’ and ‘‘change’’ that were raised in section 3.1.3. We mentioned there that admitting
change to our language requires that a compound object must have an ‘‘identity’’ that is something
different from the pieces from which it is composed. In Lisp, we consider this ‘‘identity’’ to be the
quality that is tested by eq?, i.e., by equality of pointers. Since in most Lisp implementations a pointer
is essentially a memory address, we are ‘‘solving the problem’’ of defining the identity of objects by
stipulating that a data object ‘‘itself’’ is the information stored in some particular set of memory
locations in the computer. This suffices for simple Lisp programs, but is hardly a general way to
resolve the issue of ‘‘sameness’’ in computational models.
21 On the other hand, from the viewpoint of implementation, assignment requires us to modify the
environment, which is itself a mutable data structure. Thus, assignment and mutation are equipotent:
Each can be implemented in terms of the other.
10317
22 If the first item is the final item in the queue, the front pointer will be the empty list after the
deletion, which will mark the queue as empty; we needn’t worry about updating the rear pointer,
which will still point to the deleted item, because empty-queue? looks only at the front pointer.
23 Be careful not to make the interpreter try to print a structure that contains cycles. (See
exercise 3.13.)
24 Because assoc uses equal?, it can recognize keys that are symbols, numbers, or list structure.
25 Thus, the first backbone pair is the object that represents the table ‘‘itself’’; that is, a pointer to the
table is a pointer to this pair. This same backbone pair always starts the table. If we did not arrange
things in this way, insert! would have to return a new value for the start of the table when it added
a new record.
26 A full-adder is a basic circuit element used in adding two binary numbers. Here A and B are the
bits at corresponding positions in the two numbers to be added, and C_in is the carry bit from the
addition one place to the right. The circuit generates SUM, which is the sum bit in the corresponding
position, and C_out, which is the carry bit to be propagated to the left.
27 These procedures are simply syntactic sugar that allow us to use ordinary procedural syntax to
access the local procedures of objects. It is striking that we can interchange the role of ‘‘procedures’’
and ‘‘data’’ in such a simple way. For example, if we write (wire 'get-signal) we think of
wire as a procedure that is called with the message get-signal as input. Alternatively, writing
(get-signal wire) encourages us to think of wire as a data object that is the input to a
procedure get-signal. The truth of the matter is that, in a language in which we can deal with
procedures as objects, there is no fundamental difference between ‘‘procedures’’ and ‘‘data,’’ and we
can choose our syntactic sugar to allow us to program in whatever style we choose.
28 The agenda is a headed list, like the tables in section 3.3.3, but since the list is headed by the time,
we do not need an additional dummy header (such as the *table* symbol used with tables).
29 Observe that the if expression in this procedure has no <alternative> expression. Such a
‘‘one-armed if statement’’ is used to decide whether to do something, rather than to select between
two expressions. An if expression returns an unspecified value if the predicate is false and there is no
<alternative>.
30 In this way, the current time will always be the time of the action most recently processed. Storing
this time at the head of the agenda ensures that it will still be available even if the associated time
segment has been deleted.
31 Constraint propagation first appeared in the incredibly forward-looking SKETCHPAD system of
Ivan Sutherland (1963). A beautiful constraint-propagation system based on the Smalltalk language
was developed by Alan Borning (1977) at Xerox Palo Alto Research Center. Sussman, Stallman, and
Steele applied constraint propagation to electrical circuit analysis (Sussman and Stallman 1975;
Sussman and Steele 1980). TK!Solver (Konopasek and Jayaraman 1984) is an extensive modeling
environment based on constraints.
32 The setter might not be a constraint. In our temperature example, we used user as the
setter.
10367
33 The expression-oriented format is convenient because it avoids the need to name the intermediate
expressions in a computation. Our original formulation of the constraint language is cumbersome in
the same way that many languages are cumbersome when dealing with operations on compound data.
For example, if we wanted to compute the product (a + b) · (c + d), where the variables represent
vectors, we could work in ‘‘imperative style,’’ using procedures that set the values of designated
vector arguments but do not themselves return vectors as values:

(v-sum a b temp1)
(v-sum c d temp2)
(v-prod temp1 temp2 answer)

Alternatively, we could deal with expressions, using procedures that return vectors as values, and thus
avoid explicitly mentioning temp1 and temp2:

(define answer (v-prod (v-sum a b) (v-sum c d)))

Since Lisp allows us to return compound objects as values of procedures, we can transform our
imperative-style constraint language into an expression-oriented style as shown in this exercise. In
languages that are impoverished in handling compound objects, such as Algol, Basic, and Pascal
(unless one explicitly uses Pascal pointer variables), one is usually stuck with the imperative style
when manipulating compound objects. Given the advantage of the expression-oriented format, one
might ask if there is any reason to have implemented the system in imperative style, as we did in this
section. One reason is that the non-expression-oriented constraint language provides a handle on
constraint objects (e.g., the value of the adder procedure) as well as on connector objects. This is
useful if we wish to extend the system with new operations that communicate with constraints directly
rather than only indirectly via operations on connectors. Although it is easy to implement the
expression-oriented style in terms of the imperative implementation, it is very difficult to do the
converse.

10397 3.4 Concurrency: Time Is of the Essence
10398 We’ve seen the power of computational objects with local state as tools for modeling. Yet, as
10399 section 3.1.3 warned, this power extracts a price: the loss of referential transparency, giving rise to a
10400 thicket of questions about sameness and change, and the need to abandon the substitution model of
10401 evaluation in favor of the more intricate environment model.
10402 The central issue lurking beneath the complexity of state, sameness, and change is that by introducing
10403 assignment we are forced to admit time into our computational models. Before we introduced
10404 assignment, all our programs were timeless, in the sense that any expression that has a value always
10405 has the same value. In contrast, recall the example of modeling withdrawals from a bank account and
10406 returning the resulting balance, introduced at the beginning of section 3.1.1:
10407 (withdraw 25)
10408 75
10409 (withdraw 25)
10410 50
10411 Here successive evaluations of the same expression yield different values. This behavior arises from
10412 the fact that the execution of assignment statements (in this case, assignments to the variable
10413 balance) delineates moments in time when values change. The result of evaluating an expression
10414 depends not only on the expression itself, but also on whether the evaluation occurs before or after
10415 these moments. Building models in terms of computational objects with local state forces us to
10416 confront time as an essential concept in programming.
10417 We can go further in structuring computational models to match our perception of the physical world.
10418 Objects in the world do not change one at a time in sequence. Rather we perceive them as acting
10419 concurrently -- all at once. So it is often natural to model systems as collections of computational
10420 processes that execute concurrently. Just as we can make our programs modular by organizing models
10421 in terms of objects with separate local state, it is often appropriate to divide computational models into
10422 parts that evolve separately and concurrently. Even if the programs are to be executed on a sequential
10423 computer, the practice of writing programs as if they were to be executed concurrently forces the
10424 programmer to avoid inessential timing constraints and thus makes programs more modular.
10425 In addition to making programs more modular, concurrent computation can provide a speed advantage
10426 over sequential computation. Sequential computers execute only one operation at a time, so the
10427 amount of time it takes to perform a task is proportional to the total number of operations
10428 performed. 34 However, if it is possible to decompose a problem into pieces that are relatively
10429 independent and need to communicate only rarely, it may be possible to allocate pieces to separate
10430 computing processors, producing a speed advantage proportional to the number of processors
10431 available.
10432 Unfortunately, the complexities introduced by assignment become even more problematic in the
10433 presence of concurrency. The fact of concurrent execution, either because the world operates in
10434 parallel or because our computers do, entails additional complexity in our understanding of time.
10435
3.4.1 The Nature of Time in Concurrent Systems
10437 On the surface, time seems straightforward. It is an ordering imposed on events. 35 For any events A
10438 and B, either A occurs before B, A and B are simultaneous, or A occurs after B. For instance, returning
10439 to the bank account example, suppose that Peter withdraws $10 and Paul withdraws $25 from a joint
10440 account that initially contains $100, leaving $65 in the account. Depending on the order of the two
withdrawals, the sequence of balances in the account is either $100, $90, $65 or $100, $75, $65. In a
computer implementation of the banking system, this changing sequence of balances could
10443 be modeled by successive assignments to a variable balance.
10444 In complex situations, however, such a view can be problematic. Suppose that Peter and Paul, and
10445 other people besides, are accessing the same bank account through a network of banking machines
10446 distributed all over the world. The actual sequence of balances in the account will depend critically on
10447 the detailed timing of the accesses and the details of the communication among the machines.
10448 This indeterminacy in the order of events can pose serious problems in the design of concurrent
10449 systems. For instance, suppose that the withdrawals made by Peter and Paul are implemented as two
10450 separate processes sharing a common variable balance, each process specified by the procedure
10451 given in section 3.1.1:
(define (withdraw amount)
  (if (>= balance amount)
      (begin (set! balance (- balance amount))
             balance)
      "Insufficient funds"))
10457 If the two processes operate independently, then Peter might test the balance and attempt to withdraw
10458 a legitimate amount. However, Paul might withdraw some funds in between the time that Peter checks
10459 the balance and the time Peter completes the withdrawal, thus invalidating Peter’s test.
10460 Things can be worse still. Consider the expression
10461 (set! balance (- balance amount))
10462 executed as part of each withdrawal process. This consists of three steps: (1) accessing the value of the
10463 balance variable; (2) computing the new balance; (3) setting balance to this new value. If Peter
10464 and Paul’s withdrawals execute this statement concurrently, then the two withdrawals might interleave
10465 the order in which they access balance and set it to the new value.
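
Written out explicitly, each withdrawal process in effect performs the following three-step
procedure (our sketch of the decomposition, not code from the text); another process may run
between any two of the steps:

(define (withdraw-in-steps amount)
  (let ((old-balance balance))                  ; step 1: access balance
    (let ((new-balance (- old-balance amount))) ; step 2: compute the new balance
      (set! balance new-balance))))             ; step 3: set balance to the new value
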
10466 The timing diagram in figure 3.29 depicts an order of events where balance starts at 100, Peter
10467 withdraws 10, Paul withdraws 25, and yet the final value of balance is 75. As shown in the diagram,
10468 the reason for this anomaly is that Paul’s assignment of 75 to balance is made under the assumption
10469 that the value of balance to be decremented is 100. That assumption, however, became invalid when
10470 Peter changed balance to 90. This is a catastrophic failure for the banking system, because the total
10471 amount of money in the system is not conserved. Before the transactions, the total amount of money
10472 was $100. Afterwards, Peter has $10, Paul has $25, and the bank has $75. 36
10473 The general phenomenon illustrated here is that several processes may share a common state variable.
10474 What makes this complicated is that more than one process may be trying to manipulate the shared
10475 state at the same time. For the bank account example, during each transaction, each customer should
10476 be able to act as if the other customers did not exist. When a customer changes the balance in a way
that depends on the balance, he must be able to assume that, just before the moment of change, the
balance is still what he thought it was.
10480
10481 Correct behavior of concurrent programs
10482 The above example typifies the subtle bugs that can creep into concurrent programs. The root of this
10483 complexity lies in the assignments to variables that are shared among the different processes. We
10484 already know that we must be careful in writing programs that use set!, because the results of a
10485 computation depend on the order in which the assignments occur. 37 With concurrent processes we
10486 must be especially careful about assignments, because we may not be able to control the order of the
10487 assignments made by the different processes. If several such changes might be made concurrently (as
10488 with two depositors accessing a joint account) we need some way to ensure that our system behaves
10489 correctly. For example, in the case of withdrawals from a joint bank account, we must ensure that
10490 money is conserved. To make concurrent programs behave correctly, we may have to place some
10491 restrictions on concurrent execution.
10492
Figure 3.29: Timing diagram showing how interleaving the order of events in two banking
withdrawals can lead to an incorrect final balance.
10497 One possible restriction on concurrency would stipulate that no two operations that change any shared
10498 state variables can occur at the same time. This is an extremely stringent requirement. For distributed
10499 banking, it would require the system designer to ensure that only one transaction could proceed at a
10500 time. This would be both inefficient and overly conservative. Figure 3.30 shows Peter and Paul sharing
10501 a bank account, where Paul has a private account as well. The diagram illustrates two withdrawals
10502 from the shared account (one by Peter and one by Paul) and a deposit to Paul’s private account. 38 The
10503 two withdrawals from the shared account must not be concurrent (since both access and update the
10504 same account), and Paul’s deposit and withdrawal must not be concurrent (since both access and
10505 update the amount in Paul’s wallet). But there should be no problem permitting Paul’s deposit to his
10506 private account to proceed concurrently with Peter’s withdrawal from the shared account.
10507
Figure 3.30: Concurrent deposits and withdrawals from a joint account in Bank1 and a private
account in Bank2.
10512 A less stringent restriction on concurrency would ensure that a concurrent system produces the same
10513 result as if the processes had run sequentially in some order. There are two important aspects to this
10514 requirement. First, it does not require the processes to actually run sequentially, but only to produce
10515 results that are the same as if they had run sequentially. For the example in figure 3.30, the designer of
10516 the bank account system can safely allow Paul’s deposit and Peter’s withdrawal to happen
10517 concurrently, because the net result will be the same as if the two operations had happened
10518 sequentially. Second, there may be more than one possible ‘‘correct’’ result produced by a concurrent
10519 program, because we require only that the result be the same as for some sequential order. For
example, suppose that Peter and Paul’s joint account starts out with $100, and Peter deposits $40 while
Paul concurrently withdraws half the money in the account. If Peter’s deposit happens first, Paul
withdraws half of $140, leaving $70; if Paul’s withdrawal happens first, the deposit brings $50 up to
$90. Thus sequential execution could result in the account balance being either $70 or $90 (see
exercise 3.38). 39
10523 There are still weaker requirements for correct execution of concurrent programs. A program for
10524 simulating diffusion (say, the flow of heat in an object) might consist of a large number of processes,
10525 each one representing a small volume of space, that update their values concurrently. Each process
10526 repeatedly changes its value to the average of its own value and its neighbors’ values. This algorithm
10527 converges to the right answer independent of the order in which the operations are done; there is no
10528 need for any restrictions on concurrent use of the shared values.
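
A single cell’s update step might look like this (a sketch under the assumption that each cell’s
value is held in a one-element list so that it can be mutated):

(define (relax! cell left right)
  ;; replace the cell's value by the average of its own value
  ;; and its two neighbors' values; no ordering constraints are needed
  (set-car! cell (/ (+ (car cell) (car left) (car right)) 3)))
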
10529 Exercise 3.38. Suppose that Peter, Paul, and Mary share a joint bank account that initially contains
10530 $100. Concurrently, Peter deposits $10, Paul withdraws $20, and Mary withdraws half the money in
10531 the account, by executing the following commands:
10532
Peter:  (set! balance (+ balance 10))
Paul:   (set! balance (- balance 20))
Mary:   (set! balance (- balance (/ balance 2)))
10544
10545 a. List all the different possible values for balance after these three transactions have been
10546 completed, assuming that the banking system forces the three processes to run sequentially in some
10547 order.
10548 b. What are some other values that could be produced if the system allows the processes to be
10549 interleaved? Draw timing diagrams like the one in figure 3.29 to explain how these values can occur.
10550
10551 3.4.2 Mechanisms for Controlling Concurrency
10552 We’ve seen that the difficulty in dealing with concurrent processes is rooted in the need to consider the
10553 interleaving of the order of events in the different processes. For example, suppose we have two
10554 processes, one with three ordered events (a,b,c) and one with three ordered events (x,y,z). If the two
10555 processes run concurrently, with no constraints on how their execution is interleaved, then there are 20
10556 different possible orderings for the events that are consistent with the individual orderings for the two
10557 processes:
abcxyz  abxcyz  abxycz  abxyzc  axbcyz
axbycz  axbyzc  axybcz  axybzc  axyzbc
xabcyz  xabycz  xabyzc  xaybcz  xaybzc
xayzbc  xyabcz  xyabzc  xyazbc  xyzabc

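As a check on the count, the orderings can be generated recursively (our sketch, not the book’s):
each interleaving begins with the first remaining event of one process or the other.

(define (interleavings s1 s2)
  (cond ((null? s1) (list s2))
        ((null? s2) (list s1))
        (else (append
               (map (lambda (rest) (cons (car s1) rest))
                    (interleavings (cdr s1) s2))
               (map (lambda (rest) (cons (car s2) rest))
                    (interleavings s1 (cdr s2)))))))

(length (interleavings '(a b c) '(x y z)))
20
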
10559 As programmers designing this system, we would have to consider the effects of each of these 20
10560 orderings and check that each behavior is acceptable. Such an approach rapidly becomes unwieldy as
10561 the numbers of processes and events increase.
10562 A more practical approach to the design of concurrent systems is to devise general mechanisms that
10563 allow us to constrain the interleaving of concurrent processes so that we can be sure that the program
10564 behavior is correct. Many mechanisms have been developed for this purpose. In this section, we
10565 describe one of them, the serializer.
10566
10567 Serializing access to shared state
10568 Serialization implements the following idea: Processes will execute concurrently, but there will be
10569 certain collections of procedures that cannot be executed concurrently. More precisely, serialization
10570 creates distinguished sets of procedures such that only one execution of a procedure in each serialized
10571 set is permitted to happen at a time. If some procedure in the set is being executed, then a process that
10572 attempts to execute any procedure in the set will be forced to wait until the first execution has finished.
10573 We can use serialization to control access to shared variables. For example, if we want to update a
10574 shared variable based on the previous value of that variable, we put the access to the previous value of
10575 the variable and the assignment of the new value to the variable in the same procedure. We then ensure
10576 that no other procedure that assigns to the variable can run concurrently with this procedure by
10577 serializing all of these procedures with the same serializer. This guarantees that the value of the
10578 variable cannot be changed between an access and the corresponding assignment.
10579
Serializers in Scheme
10581 To make the above mechanism more concrete, suppose that we have extended Scheme to include a
10582 procedure called parallel-execute:
(parallel-execute <p1> <p2> ... <pk>)
10584 Each <p> must be a procedure of no arguments. Parallel-execute creates a separate process for
10585 each <p>, which applies <p> (to no arguments). These processes all run concurrently. 40
10586 As an example of how this is used, consider
(define x 10)
(parallel-execute (lambda () (set! x (* x x)))
                  (lambda () (set! x (+ x 1))))
This creates two concurrent processes -- P1, which sets x to x times x, and P2, which increments x.
After execution is complete, x will be left with one of five possible values, depending on the
interleaving of the events of P1 and P2:

101:  P1 sets x to 100 and then P2 increments x to 101.
121:  P2 increments x to 11 and then P1 sets x to x times x.
110:  P2 changes x from 10 to 11 between the two times that P1 accesses the value of x during
      the evaluation of (* x x).
11:   P2 accesses x, then P1 sets x to 100, then P2 sets x.
100:  P1 accesses x (twice), then P2 sets x to 11, then P1 sets x.

10614 We can constrain the concurrency by using serialized procedures, which are created by serializers.
10615 Serializers are constructed by make-serializer, whose implementation is given below. A
10616 serializer takes a procedure as argument and returns a serialized procedure that behaves like the
10617 original procedure. All calls to a given serializer return serialized procedures in the same set.
10618 Thus, in contrast to the example above, executing
(define x 10)
(define s (make-serializer))
(parallel-execute (s (lambda () (set! x (* x x))))
                  (s (lambda () (set! x (+ x 1)))))
can produce only two possible values for x, 101 or 121. The other possibilities are eliminated, because
the execution of P1 and P2 cannot be interleaved.
10625 Here is a version of the make-account procedure from section 3.1.1, where the deposits and
10626 withdrawals have been serialized:
10627
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((protected (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) (protected withdraw))
            ((eq? m 'deposit) (protected deposit))
            ((eq? m 'balance) balance)
            (else (error "Unknown request -- MAKE-ACCOUNT"
                         m))))
    dispatch))
10645 With this implementation, two processes cannot be withdrawing from or depositing into a single
10646 account concurrently. This eliminates the source of the error illustrated in figure 3.29, where Peter
10647 changes the account balance between the times when Paul accesses the balance to compute the new
10648 value and when Paul actually performs the assignment. On the other hand, each account has its own
10649 serializer, so that deposits and withdrawals for different accounts can proceed concurrently.
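
For example (a usage sketch, not from the text), the scenario of figure 3.29 now behaves correctly:

(define acc (make-account 100))
(parallel-execute (lambda () ((acc 'withdraw) 10))
                  (lambda () ((acc 'withdraw) 25)))

The two withdrawals can no longer interleave their internal steps, so the final balance is 65
whichever runs first.
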
10650 Exercise 3.39. Which of the five possibilities in the parallel execution shown above remain if we
10651 instead serialize execution as follows:
(define x 10)
(define s (make-serializer))
(parallel-execute (lambda () (set! x ((s (lambda () (* x x))))))
                  (s (lambda () (set! x (+ x 1)))))
10656 Exercise 3.40. Give all possible values of x that can result from executing
(define x 10)
(parallel-execute (lambda () (set! x (* x x)))
                  (lambda () (set! x (* x x x))))

Which of these possibilities remain if we instead use serialized procedures:

(define x 10)
(define s (make-serializer))
(parallel-execute (s (lambda () (set! x (* x x))))
                  (s (lambda () (set! x (* x x x)))))
10665 Exercise 3.41. Ben Bitdiddle worries that it would be better to implement the bank account as follows
10666 (where the commented line has been changed):
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((protected (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) (protected withdraw))
            ((eq? m 'deposit) (protected deposit))
            ((eq? m 'balance)
             ((protected (lambda () balance)))) ; serialized
            (else (error "Unknown request -- MAKE-ACCOUNT"
                         m))))
    dispatch))
10687 because allowing unserialized access to the bank balance can result in anomalous behavior. Do you
10688 agree? Is there any scenario that demonstrates Ben’s concern?
10689 Exercise 3.42. Ben Bitdiddle suggests that it’s a waste of time to create a new serialized procedure in
10690 response to every withdraw and deposit message. He says that make-account could be
10691 changed so that the calls to protected are done outside the dispatch procedure. That is, an
10692 account would return the same serialized procedure (which was created at the same time as the
10693 account) each time it is asked for a withdrawal procedure.
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((protected (make-serializer)))
    (let ((protected-withdraw (protected withdraw))
          (protected-deposit (protected deposit)))
      (define (dispatch m)
        (cond ((eq? m 'withdraw) protected-withdraw)
              ((eq? m 'deposit) protected-deposit)
              ((eq? m 'balance) balance)
              (else (error "Unknown request -- MAKE-ACCOUNT"
                           m))))
      dispatch)))
Is this a safe change to make? In particular, is there any difference in what concurrency is allowed by
these two versions of make-account?
10715
Complexity of using multiple shared resources
10717 Serializers provide a powerful abstraction that helps isolate the complexities of concurrent programs
10718 so that they can be dealt with carefully and (hopefully) correctly. However, while using serializers is
10719 relatively straightforward when there is only a single shared resource (such as a single bank account),
10720 concurrent programming can be treacherously difficult when there are multiple shared resources.
10721 To illustrate one of the difficulties that can arise, suppose we wish to swap the balances in two bank
10722 accounts. We access each account to find the balance, compute the difference between the balances,
10723 withdraw this difference from one account, and deposit it in the other account. We could implement
10724 this as follows: 41
(define (exchange account1 account2)
  (let ((difference (- (account1 'balance)
                       (account2 'balance))))
    ((account1 'withdraw) difference)
    ((account2 'deposit) difference)))
10730 This procedure works well when only a single process is trying to do the exchange. Suppose, however,
10731 that Peter and Paul both have access to accounts a1, a2, and a3, and that Peter exchanges a1 and a2
10732 while Paul concurrently exchanges a1 and a3. Even with account deposits and withdrawals serialized
10733 for individual accounts (as in the make-account procedure shown above in this section),
10734 exchange can still produce incorrect results. For example, Peter might compute the difference in the
10735 balances for a1 and a2, but then Paul might change the balance in a1 before Peter is able to complete
10736 the exchange. 42 For correct behavior, we must arrange for the exchange procedure to lock out any
10737 other concurrent accesses to the accounts during the entire time of the exchange.
10738 One way we can accomplish this is by using both accounts’ serializers to serialize the entire
10739 exchange procedure. To do this, we will arrange for access to an account’s serializer. Note that we
10740 are deliberately breaking the modularity of the bank-account object by exposing the serializer. The
10741 following version of make-account is identical to the original version given in section 3.1.1, except
10742 that a serializer is provided to protect the balance variable, and the serializer is exported via message
10743 passing:
(define (make-account-and-serializer balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((balance-serializer (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) withdraw)
            ((eq? m 'deposit) deposit)
            ((eq? m 'balance) balance)
            ((eq? m 'serializer) balance-serializer)
            (else (error "Unknown request -- MAKE-ACCOUNT"
                         m))))
    dispatch))
10763 We can use this to do serialized deposits and withdrawals. However, unlike our earlier serialized
10764 account, it is now the responsibility of each user of bank-account objects to explicitly manage the
10765 serialization, for example as follows: 43
(define (deposit account amount)
  (let ((s (account 'serializer))
        (d (account 'deposit)))
    ((s d) amount)))
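
An analogous withdraw would manage the serialization in the same way (our sketch, following the
pattern of deposit above):

(define (withdraw account amount)
  (let ((s (account 'serializer))
        (w (account 'withdraw)))
    ((s w) amount)))
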
10770 Exporting the serializer in this way gives us enough flexibility to implement a serialized exchange
10771 program. We simply serialize the original exchange procedure with the serializers for both accounts:
(define (serialized-exchange account1 account2)
  (let ((serializer1 (account1 'serializer))
        (serializer2 (account2 'serializer)))
    ((serializer1 (serializer2 exchange))
     account1
     account2)))
10778 Exercise 3.43. Suppose that the balances in three accounts start out as $10, $20, and $30, and that
10779 multiple processes run, exchanging the balances in the accounts. Argue that if the processes are run
10780 sequentially, after any number of concurrent exchanges, the account balances should be $10, $20, and
10781 $30 in some order. Draw a timing diagram like the one in figure 3.29 to show how this condition can
10782 be violated if the exchanges are implemented using the first version of the account-exchange program
10783 in this section. On the other hand, argue that even with this exchange program, the sum of the
10784 balances in the accounts will be preserved. Draw a timing diagram to show how even this condition
10785 would be violated if we did not serialize the transactions on individual accounts.
10786 Exercise 3.44. Consider the problem of transferring an amount from one account to another. Ben
10787 Bitdiddle claims that this can be accomplished with the following procedure, even if there are multiple
10788 people concurrently transferring money among multiple accounts, using any account mechanism that
10789 serializes deposit and withdrawal transactions, for example, the version of make-account in the
10790 text above.
(define (transfer from-account to-account amount)
  ((from-account 'withdraw) amount)
  ((to-account 'deposit) amount))
10794 Louis Reasoner claims that there is a problem here, and that we need to use a more sophisticated
10795 method, such as the one required for dealing with the exchange problem. Is Louis right? If not, what is
10796 the essential difference between the transfer problem and the exchange problem? (You should assume
10797 that the balance in from-account is at least amount.)
10798 Exercise 3.45. Louis Reasoner thinks our bank-account system is unnecessarily complex and
10799 error-prone now that deposits and withdrawals aren’t automatically serialized. He suggests that
10800 make-account-and-serializer should have exported the serializer (for use by such
10801 procedures as serialized-exchange) in addition to (rather than instead of) using it to serialize
10802 accounts and deposits as make-account did. He proposes to redefine accounts as follows:
10803
(define (make-account-and-serializer balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((balance-serializer (make-serializer)))
    (define (dispatch m)
      (cond ((eq? m 'withdraw) (balance-serializer withdraw))
            ((eq? m 'deposit) (balance-serializer deposit))
            ((eq? m 'balance) balance)
            ((eq? m 'serializer) balance-serializer)
            (else (error "Unknown request -- MAKE-ACCOUNT"
                         m))))
    dispatch))
10822 Then deposits are handled as with the original make-account:
(define (deposit account amount)
  ((account 'deposit) amount))
10825 Explain what is wrong with Louis’s reasoning. In particular, consider what happens when
10826 serialized-exchange is called.
10827
10828 Implementing serializers
10829 We implement serializers in terms of a more primitive synchronization mechanism called a mutex. A
10830 mutex is an object that supports two operations -- the mutex can be acquired, and the mutex can be
10831 released. Once a mutex has been acquired, no other acquire operations on that mutex may proceed
10832 until the mutex is released. 44 In our implementation, each serializer has an associated mutex. Given a
10833 procedure p, the serializer returns a procedure that acquires the mutex, runs p, and then releases the
10834 mutex. This ensures that only one of the procedures produced by the serializer can be running at once,
10835 which is precisely the serialization property that we need to guarantee.
(define (make-serializer)
  (let ((mutex (make-mutex)))
    (lambda (p)
      (define (serialized-p . args)
        (mutex 'acquire)
        (let ((val (apply p args)))
          (mutex 'release)
          val))
      serialized-p)))
10845 The mutex is a mutable object (here we’ll use a one-element list, which we’ll refer to as a cell) that can
10846 hold the value true or false. When the value is false, the mutex is available to be acquired. When the
10847 value is true, the mutex is unavailable, and any process that attempts to acquire the mutex must wait.
10848
Our mutex constructor make-mutex begins by initializing the cell contents to false. To acquire the
10850 mutex, we test the cell. If the mutex is available, we set the cell contents to true and proceed.
10851 Otherwise, we wait in a loop, attempting to acquire over and over again, until we find that the mutex is
10852 available. 45 To release the mutex, we set the cell contents to false.
(define (make-mutex)
  (let ((cell (list false)))
    (define (the-mutex m)
      (cond ((eq? m 'acquire)
             (if (test-and-set! cell)
                 (the-mutex 'acquire))) ; retry
            ((eq? m 'release) (clear! cell))))
    the-mutex))

(define (clear! cell)
  (set-car! cell false))
10863 Test-and-set! tests the cell and returns the result of the test. In addition, if the test was false,
10864 test-and-set! sets the cell contents to true before returning false. We can express this behavior
10865 as the following procedure:
(define (test-and-set! cell)
  (if (car cell)
      true
      (begin (set-car! cell true)
             false)))
10871 However, this implementation of test-and-set! does not suffice as it stands. There is a crucial
10872 subtlety here, which is the essential place where concurrency control enters the system: The
10873 test-and-set! operation must be performed atomically. That is, we must guarantee that, once a
10874 process has tested the cell and found it to be false, the cell contents will actually be set to true before
10875 any other process can test the cell. If we do not make this guarantee, then the mutex can fail in a way
10876 similar to the bank-account failure in figure 3.29. (See exercise 3.46.)
10877 The actual implementation of test-and-set! depends on the details of how our system runs
10878 concurrent processes. For example, we might be executing concurrent processes on a sequential
10879 processor using a time-slicing mechanism that cycles through the processes, permitting each process to
10880 run for a short time before interrupting it and moving on to the next process. In that case,
10881 test-and-set! can work by disabling time slicing during the testing and setting. 46 Alternatively,
10882 multiprocessing computers provide instructions that support atomic operations directly in hardware. 47
10883 Exercise 3.46. Suppose that we implement test-and-set! using an ordinary procedure as shown
10884 in the text, without attempting to make the operation atomic. Draw a timing diagram like the one in
10885 figure 3.29 to demonstrate how the mutex implementation can fail by allowing two processes to
10886 acquire the mutex at the same time.
10887 Exercise 3.47. A semaphore (of size n) is a generalization of a mutex. Like a mutex, a semaphore
10888 supports acquire and release operations, but it is more general in that up to n processes can acquire it
10889 concurrently. Additional processes that attempt to acquire the semaphore must wait for release
10890 operations. Give implementations of semaphores
10891
a. in terms of mutexes
10893 b. in terms of atomic test-and-set! operations.
10894
10895 Deadlock
10896 Now that we have seen how to implement serializers, we can see that account exchanging still has a
10897 problem, even with the serialized-exchange procedure above. Imagine that Peter attempts to
10898 exchange a1 with a2 while Paul concurrently attempts to exchange a2 with a1. Suppose that Peter’s
10899 process reaches the point where it has entered a serialized procedure protecting a1 and, just after that,
10900 Paul’s process enters a serialized procedure protecting a2. Now Peter cannot proceed (to enter a
10901 serialized procedure protecting a2) until Paul exits the serialized procedure protecting a2. Similarly,
10902 Paul cannot proceed until Peter exits the serialized procedure protecting a1. Each process is stalled
10903 forever, waiting for the other. This situation is called a deadlock. Deadlock is always a danger in
10904 systems that provide concurrent access to multiple shared resources.
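
In terms of the procedures above, the deadlock arises from a call pattern like this (a sketch of the
scenario, assuming a1 and a2 were made by make-account-and-serializer):

(parallel-execute
 (lambda () (serialized-exchange a1 a2))  ; Peter: acquires a1's serializer, then waits for a2's
 (lambda () (serialized-exchange a2 a1))) ; Paul: acquires a2's serializer, then waits for a1's
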
10905 One way to avoid the deadlock in this situation is to give each account a unique identification number
10906 and rewrite serialized-exchange so that a process will always attempt to enter a procedure
10907 protecting the lowest-numbered account first. Although this method works well for the exchange
10908 problem, there are other situations that require more sophisticated deadlock-avoidance techniques, or
10909 where deadlock cannot be avoided at all. (See exercises 3.48 and 3.49.) 48
Exercise 3.48. Explain in detail why the deadlock-avoidance method described above (i.e., the
accounts are numbered, and each process attempts to acquire the smaller-numbered account first)
10912 avoids deadlock in the exchange problem. Rewrite serialized-exchange to incorporate this
10913 idea. (You will also need to modify make-account so that each account is created with a number,
10914 which can be accessed by sending an appropriate message.)
10915 Exercise 3.49. Give a scenario where the deadlock-avoidance mechanism described above does not
10916 work. (Hint: In the exchange problem, each process knows in advance which accounts it will need to
10917 get access to. Consider a situation where a process must get access to some shared resources before it
10918 can know which additional shared resources it will require.)
10919
10920 Concurrency, time, and communication
10921 We’ve seen how programming concurrent systems requires controlling the ordering of events when
10922 different processes access shared state, and we’ve seen how to achieve this control through judicious
10923 use of serializers. But the problems of concurrency lie deeper than this, because, from a fundamental
10924 point of view, it’s not always clear what is meant by ‘‘shared state.’’
10925 Mechanisms such as test-and-set! require processes to examine a global shared flag at arbitrary
10926 times. This is problematic and inefficient to implement in modern high-speed processors, where due to
10927 optimization techniques such as pipelining and cached memory, the contents of memory may not be in
10928 a consistent state at every instant. In contemporary multiprocessing systems, therefore, the serializer
10929 paradigm is being supplanted by new approaches to concurrency control. 49
10930 The problematic aspects of shared state also arise in large, distributed systems. For instance, imagine a
10931 distributed banking system where individual branch banks maintain local values for bank balances and
10932 periodically compare these with values maintained by other branches. In such a system the value of
10933 ‘‘the account balance’’ would be undetermined, except right after synchronization. If Peter deposits
10934 money in an account he holds jointly with Paul, when should we say that the account balance has
10935 changed -- when the balance in the local branch changes, or not until after the synchronization? And if
Paul accesses the account from a different branch, what are the reasonable constraints to place on the
10938 banking system such that the behavior is ‘‘correct’’? The only thing that might matter for correctness
10939 is the behavior observed by Peter and Paul individually and the ‘‘state’’ of the account immediately
10940 after synchronization. Questions about the ‘‘real’’ account balance or the order of events between
10941 synchronizations may be irrelevant or meaningless. 50
10942 The basic phenomenon here is that synchronizing different processes, establishing shared state, or
10943 imposing an order on events requires communication among the processes. In essence, any notion of
10944 time in concurrency control must be intimately tied to communication. 51 It is intriguing that a similar
10945 connection between time and communication also arises in the Theory of Relativity, where the speed
10946 of light (the fastest signal that can be used to synchronize events) is a fundamental constant relating
10947 time and space. The complexities we encounter in dealing with time and state in our computational
10948 models may in fact mirror a fundamental complexity of the physical universe.
34 Most real processors actually execute a few operations at a time, following a strategy called
pipelining. Although this technique greatly improves the effective utilization of the hardware, it is used
only to speed up the execution of a sequential instruction stream, while retaining the behavior of the
sequential program.
35 To quote some graffiti seen on a Cambridge building wall: ‘‘Time is a device that was invented to
keep everything from happening at once.’’
36 An even worse failure for this system could occur if the two set! operations attempt to change the
balance simultaneously, in which case the actual data appearing in memory might end up being a
random combination of the information being written by the two processes. Most computers have
interlocks on the primitive memory-write operations, which protect against such simultaneous access.
Even this seemingly simple kind of protection, however, raises implementation challenges in the
design of multiprocessing computers, where elaborate cache-coherence protocols are required to
ensure that the various processors will maintain a consistent view of memory contents, despite the fact
that data may be replicated (‘‘cached’’) among the different processors to increase the speed of
memory access.
37 The factorial program in section 3.1.3 illustrates this for a single sequential process.
38 The columns show the contents of Peter’s wallet, the joint account (in Bank1), Paul’s wallet, and
Paul’s private account (in Bank2), before and after each withdrawal (W) and deposit (D). Peter
withdraws $10 from Bank1; Paul deposits $5 in Bank2, then withdraws $25 from Bank1.
39 A more formal way to express this idea is to say that concurrent programs are inherently
nondeterministic. That is, they are described not by single-valued functions, but by functions whose
results are sets of possible values. In section 4.3 we will study a language for expressing
nondeterministic computations.
40 Parallel-execute is not part of standard Scheme, but it can be implemented in MIT Scheme.
In our implementation, the new concurrent processes also run concurrently with the original Scheme
process. Also, in our implementation, the value returned by parallel-execute is a special
control object that can be used to halt the newly created processes.
41 We have simplified exchange by exploiting the fact that our deposit message accepts negative
amounts. (This is a serious bug in our banking system!)
42 If the account balances start out as $10, $20, and $30, then after any number of concurrent
exchanges, the balances should still be $10, $20, and $30 in some order. Serializing the deposits to
individual accounts is not sufficient to guarantee this. See exercise 3.43.
43 Exercise 3.45 investigates why deposits and withdrawals are no longer automatically serialized by
the account.
44 The term ‘‘mutex’’ is an abbreviation for mutual exclusion. The general problem of arranging a
mechanism that permits concurrent processes to safely share resources is called the mutual exclusion
problem. Our mutex is a simple variant of the semaphore mechanism (see exercise 3.47), which was
introduced in the ‘‘THE’’ Multiprogramming System developed at the Technological University of
Eindhoven and named for the university’s initials in Dutch (Dijkstra 1968a). The acquire and release
operations were originally called P and V, from the Dutch words passeren (to pass) and vrijgeven (to
release), in reference to the semaphores used on railroad systems. Dijkstra’s classic exposition (1968b)
was one of the first to clearly present the issues of concurrency control, and showed how to use
semaphores to handle a variety of concurrency problems.
45 In most time-shared operating systems, processes that are blocked by a mutex do not waste time
‘‘busy-waiting’’ as above. Instead, the system schedules another process to run while the first is
waiting, and the blocked process is awakened when the mutex becomes available.
46 In MIT Scheme for a single processor, which uses a time-slicing model, test-and-set! can be
implemented as follows:

(define (test-and-set! cell)
  (without-interrupts
   (lambda ()
     (if (car cell)
         true
         (begin (set-car! cell true)
                false)))))

Without-interrupts disables time-slicing interrupts while its procedure argument is being
executed.
47 There are many variants of such instructions -- including test-and-set, test-and-clear, swap,
compare-and-exchange, load-reserve, and store-conditional -- whose design must be carefully matched
to the machine’s processor-memory interface. One issue that arises here is to determine what happens
if two processes attempt to acquire the same resource at exactly the same time by using such an
instruction. This requires some mechanism for making a decision about which process gets control.
Such a mechanism is called an arbiter. Arbiters usually boil down to some sort of hardware device.
Unfortunately, it is possible to prove that one cannot physically construct a fair arbiter that works
100% of the time unless one allows the arbiter an arbitrarily long time to make its decision. The
fundamental phenomenon here was originally observed by the fourteenth-century French philosopher
Jean Buridan in his commentary on Aristotle’s De caelo. Buridan argued that a perfectly rational dog
placed between two equally attractive sources of food will starve to death, because it is incapable of
deciding which to go to first.
48 The general technique for avoiding deadlock by numbering the shared resources and acquiring
them in order is due to Havender (1968). Situations where deadlock cannot be avoided require
deadlock-recovery methods, which entail having processes ‘‘back out’’ of the deadlocked state and try
again. Deadlock-recovery mechanisms are widely used in database management systems, a topic that
is treated in detail in Gray and Reuter 1993.
49 One such alternative to serialization is called barrier synchronization. The programmer permits
concurrent processes to execute as they please, but establishes certain synchronization points
(‘‘barriers’’) through which no process can proceed until all the processes have reached the barrier.
Modern processors provide machine instructions that permit programmers to establish synchronization
points at places where consistency is required. The PowerPC(TM), for example, includes for this
purpose two instructions called SYNC and EIEIO (Enforced In-order Execution of Input/Output).
50 This may seem like a strange point of view, but there are systems that work this way. International
charges to credit-card accounts, for example, are normally cleared on a per-country basis, and the
charges made in different countries are periodically reconciled. Thus the account balance may be
different in different countries.
51 For distributed systems, this perspective was pursued by Lamport (1978), who showed how to use
communication to establish ‘‘global clocks’’ that can be used to establish orderings on events in
distributed systems.

11059 3.5 Streams
11060 We’ve gained a good understanding of assignment as a tool in modeling, as well as an appreciation of
11061 the complex problems that assignment raises. It is time to ask whether we could have gone about
11062 things in a different way, so as to avoid some of these problems. In this section, we explore an
11063 alternative approach to modeling state, based on data structures called streams. As we shall see,
11064 streams can mitigate some of the complexity of modeling state.
11065 Let’s step back and review where this complexity comes from. In an attempt to model real-world
11066 phenomena, we made some apparently reasonable decisions: We modeled real-world objects with
11067 local state by computational objects with local variables. We identified time variation in the real world
11068 with time variation in the computer. We implemented the time variation of the states of the model
11069 objects in the computer with assignments to the local variables of the model objects.
11070 Is there another approach? Can we avoid identifying time in the computer with time in the modeled
11071 world? Must we make the model change with time in order to model phenomena in a changing world?
11072 Think about the issue in terms of mathematical functions. We can describe the time-varying behavior
11073 of a quantity x as a function of time x(t). If we concentrate on x instant by instant, we think of it as a
11074 changing quantity. Yet if we concentrate on the entire time history of values, we do not emphasize
11075 change -- the function itself does not change. 52
11076 If time is measured in discrete steps, then we can model a time function as a (possibly infinite)
11077 sequence. In this section, we will see how to model change in terms of sequences that represent the
11078 time histories of the systems being modeled. To accomplish this, we introduce new data structures
11079 called streams. From an abstract point of view, a stream is simply a sequence. However, we will find
11080 that the straightforward implementation of streams as lists (as in section 2.2.1) doesn’t fully reveal the
11081 power of stream processing. As an alternative, we introduce the technique of delayed evaluation,
11082 which enables us to represent very large (even infinite) sequences as streams.
11083 Stream processing lets us model systems that have state without ever using assignment or mutable
11084 data. This has important implications, both theoretical and practical, because we can build models that
11085 avoid the drawbacks inherent in introducing assignment. On the other hand, the stream framework
11086 raises difficulties of its own, and the question of which modeling technique leads to more modular and
11087 more easily maintained systems remains open.
11088
11089 3.5.1 Streams Are Delayed Lists
11090 As we saw in section 2.2.3, sequences can serve as standard interfaces for combining program
11091 modules. We formulated powerful abstractions for manipulating sequences, such as map, filter,
11092 and accumulate, that capture a wide variety of operations in a manner that is both succinct and
11093 elegant.
11094 Unfortunately, if we represent sequences as lists, this elegance is bought at the price of severe
11095 inefficiency with respect to both the time and space required by our computations. When we represent
11096 manipulations on sequences as transformations of lists, our programs must construct and copy data
11097 structures (which may be huge) at every step of a process.
11098
To see why this is true, let us compare two programs for computing the sum of all the prime numbers
in an interval. The first program is written in standard iterative style: 53
(define (sum-primes a b)
  (define (iter count accum)
    (cond ((> count b) accum)
          ((prime? count) (iter (+ count 1) (+ count accum)))
          (else (iter (+ count 1) accum))))
  (iter a 0))
11107 The second program performs the same computation using the sequence operations of section 2.2.3:
(define (sum-primes a b)
  (accumulate +
              0
              (filter prime? (enumerate-interval a b))))
11112 In carrying out the computation, the first program needs to store only the sum being accumulated. In
11113 contrast, the filter in the second program cannot do any testing until enumerate-interval has
11114 constructed a complete list of the numbers in the interval. The filter generates another list, which in
11115 turn is passed to accumulate before being collapsed to form a sum. Such large intermediate storage
11116 is not needed by the first program, which we can think of as enumerating the interval incrementally,
11117 adding each prime to the sum as it is generated.
11118 The inefficiency in using lists becomes painfully apparent if we use the sequence paradigm to compute
11119 the second prime in the interval from 10,000 to 1,000,000 by evaluating the expression
(car (cdr (filter prime?
                  (enumerate-interval 10000 1000000))))
11122 This expression does find the second prime, but the computational overhead is outrageous. We
11123 construct a list of almost a million integers, filter this list by testing each element for primality, and
11124 then ignore almost all of the result. In a more traditional programming style, we would interleave the
11125 enumeration and the filtering, and stop when we reached the second prime.
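For comparison, here is a sketch (not from the text, using a hypothetical name) of such an interleaved
computation, written in the same iterative style as sum-primes and assuming the same prime?
predicate; it stops as soon as the second prime is found:
(define (second-prime a b)
  (define (scan count found)
    (cond ((> count b) false)
          ((and (prime? count) (= found 1)) count)
          ((prime? count) (scan (+ count 1) 1))
          (else (scan (+ count 1) found))))
  (scan a 0))
Only the running count and the number of primes seen so far are stored; no intermediate list is ever built.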
11126 Streams are a clever idea that allows one to use sequence manipulations without incurring the costs of
11127 manipulating sequences as lists. With streams we can achieve the best of both worlds: We can
11128 formulate programs elegantly as sequence manipulations, while attaining the efficiency of incremental
11129 computation. The basic idea is to arrange to construct a stream only partially, and to pass the partial
11130 construction to the program that consumes the stream. If the consumer attempts to access a part of the
11131 stream that has not yet been constructed, the stream will automatically construct just enough more of
11132 itself to produce the required part, thus preserving the illusion that the entire stream exists. In other
11133 words, although we will write programs as if we were processing complete sequences, we design our
11134 stream implementation to automatically and transparently interleave the construction of the stream
11135 with its use.
11136 On the surface, streams are just lists with different names for the procedures that manipulate them.
11137 There is a constructor, cons-stream, and two selectors, stream-car and stream-cdr, which
satisfy the constraints

(stream-car (cons-stream x y)) = x
(stream-cdr (cons-stream x y)) = y

There is a distinguishable object, the-empty-stream, which cannot be the result of any
11141 cons-stream operation, and which can be identified with the predicate stream-null?. 54 Thus
11142 we can make and use streams, in just the same way as we can make and use lists, to represent
11143 aggregate data arranged in a sequence. In particular, we can build stream analogs of the list operations
11144 from chapter 2, such as list-ref, map, and for-each: 55
(define (stream-ref s n)
  (if (= n 0)
      (stream-car s)
      (stream-ref (stream-cdr s) (- n 1))))
(define (stream-map proc s)
  (if (stream-null? s)
      the-empty-stream
      (cons-stream (proc (stream-car s))
                   (stream-map proc (stream-cdr s)))))
(define (stream-for-each proc s)
  (if (stream-null? s)
      'done
      (begin (proc (stream-car s))
             (stream-for-each proc (stream-cdr s)))))
11159 Stream-for-each is useful for viewing streams:
(define (display-stream s)
  (stream-for-each display-line s))
(define (display-line x)
  (newline)
  (display x))
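As a small illustration (not from the text), we can view a two-element stream built by hand:
(display-stream
 (cons-stream 1 (cons-stream 2 the-empty-stream)))
1
2
done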
11165 To make the stream implementation automatically and transparently interleave the construction of a
11166 stream with its use, we will arrange for the cdr of a stream to be evaluated when it is accessed by the
11167 stream-cdr procedure rather than when the stream is constructed by cons-stream. This
11168 implementation choice is reminiscent of our discussion of rational numbers in section 2.1.2, where we
11169 saw that we can choose to implement rational numbers so that the reduction of numerator and
11170 denominator to lowest terms is performed either at construction time or at selection time. The two
11171 rational-number implementations produce the same data abstraction, but the choice has an effect on
11172 efficiency. There is a similar relationship between streams and ordinary lists. As a data abstraction,
11173 streams are the same as lists. The difference is the time at which the elements are evaluated. With
11174 ordinary lists, both the car and the cdr are evaluated at construction time. With streams, the cdr is
11175 evaluated at selection time.
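A small sketch (not from the text) makes the difference concrete:
(define s (cons-stream 1 (/ 1 0)))   ; builds the pair; no division is attempted
(stream-car s)
1
Asking for (stream-cdr s) would only now evaluate (/ 1 0) and signal an error, whereas the ordinary
list expression (cons 1 (/ 1 0)) signals that error immediately, at construction time.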
11176 Our implementation of streams will be based on a special form called delay. Evaluating (delay
11177 <exp>) does not evaluate the expression <exp>, but rather returns a so-called delayed object, which
11178 we can think of as a ‘‘promise’’ to evaluate <exp> at some future time. As a companion to delay,
11179 there is a procedure called force that takes a delayed object as argument and performs the evaluation
11180 -- in effect, forcing the delay to fulfill its promise. We will see below how delay and force can
11181 be implemented, but first let us use these to construct streams.
11182
Cons-stream is a special form defined so that
11184 (cons-stream <a> <b>)
11185 is equivalent to
11186 (cons <a> (delay <b>))
11187 What this means is that we will construct streams using pairs. However, rather than placing the value
11188 of the rest of the stream into the cdr of the pair we will put there a promise to compute the rest if it is
11189 ever requested. Stream-car and stream-cdr can now be defined as procedures:
11190 (define (stream-car stream) (car stream))
11191 (define (stream-cdr stream) (force (cdr stream)))
11192 Stream-car selects the car of the pair; stream-cdr selects the cdr of the pair and evaluates the
11193 delayed expression found there to obtain the rest of the stream. 56
11194
11195 The stream implementation in action
11196 To see how this implementation behaves, let us analyze the ‘‘outrageous’’ prime computation we saw
11197 above, reformulated in terms of streams:
(stream-car
 (stream-cdr
  (stream-filter prime?
                 (stream-enumerate-interval 10000 1000000))))
11202 We will see that it does indeed work efficiently.
11203 We begin by calling stream-enumerate-interval with the arguments 10,000 and 1,000,000.
11204 Stream-enumerate-interval is the stream analog of enumerate-interval
11205 (section 2.2.3):
(define (stream-enumerate-interval low high)
  (if (> low high)
      the-empty-stream
      (cons-stream
       low
       (stream-enumerate-interval (+ low 1) high))))
11212 and thus the result returned by stream-enumerate-interval, formed by the cons-stream,
11213 is 57
(cons 10000
      (delay (stream-enumerate-interval 10001 1000000)))
11216 That is, stream-enumerate-interval returns a stream represented as a pair whose car is
11217 10,000 and whose cdr is a promise to enumerate more of the interval if so requested. This stream is
11218 now filtered for primes, using the stream analog of the filter procedure (section 2.2.3):
11219
(define (stream-filter pred stream)
  (cond ((stream-null? stream) the-empty-stream)
        ((pred (stream-car stream))
         (cons-stream (stream-car stream)
                      (stream-filter pred
                                     (stream-cdr stream))))
        (else (stream-filter pred (stream-cdr stream)))))
11227 Stream-filter tests the stream-car of the stream (the car of the pair, which is 10,000). Since
11228 this is not prime, stream-filter examines the stream-cdr of its input stream. The call to
11229 stream-cdr forces evaluation of the delayed stream-enumerate-interval, which now
11230 returns
(cons 10001
      (delay (stream-enumerate-interval 10002 1000000)))
11233 Stream-filter now looks at the stream-car of this stream, 10,001, sees that this is not prime
11234 either, forces another stream-cdr, and so on, until stream-enumerate-interval yields the
11235 prime 10,007, whereupon stream-filter, according to its definition, returns
(cons-stream (stream-car stream)
             (stream-filter pred (stream-cdr stream)))
11238 which in this case is
(cons 10007
      (delay
       (stream-filter
        prime?
        (cons 10008
              (delay
               (stream-enumerate-interval 10009
                                          1000000))))))
11247 This result is now passed to stream-cdr in our original expression. This forces the delayed
11248 stream-filter, which in turn keeps forcing the delayed stream-enumerate-interval
11249 until it finds the next prime, which is 10,009. Finally, the result passed to stream-car in our
11250 original expression is
(cons 10009
      (delay
       (stream-filter
        prime?
        (cons 10010
              (delay
               (stream-enumerate-interval 10011
                                          1000000))))))
11259 Stream-car returns 10,009, and the computation is complete. Only as many integers were tested for
11260 primality as were necessary to find the second prime, and the interval was enumerated only as far as
11261 was necessary to feed the prime filter.
11262
In general, we can think of delayed evaluation as ‘‘demand-driven’’ programming, whereby each stage
11264 in the stream process is activated only enough to satisfy the next stage. What we have done is to
11265 decouple the actual order of events in the computation from the apparent structure of our procedures.
11266 We write procedures as if the streams existed ‘‘all at once’’ when, in reality, the computation is
11267 performed incrementally, as in traditional programming styles.
11268
11269 Implementing delay and force
11270 Although delay and force may seem like mysterious operations, their implementation is really
11271 quite straightforward. Delay must package an expression so that it can be evaluated later on demand,
11272 and we can accomplish this simply by treating the expression as the body of a procedure. Delay can
11273 be a special form such that
11274 (delay <exp>)
11275 is syntactic sugar for
11276 (lambda () <exp>)
11277 Force simply calls the procedure (of no arguments) produced by delay, so we can implement
11278 force as a procedure:
(define (force delayed-object)
  (delayed-object))
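For instance (an illustration, not from the text), a delayed object is just such a procedure of no
arguments:
(define d (delay (+ 1 2)))   ; no addition is performed yet
(force d)
3
The addition happens only at the moment of the force.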
11281 This implementation suffices for delay and force to work as advertised, but there is an important
11282 optimization that we can include. In many applications, we end up forcing the same delayed object
11283 many times. This can lead to serious inefficiency in recursive programs involving streams. (See
11284 exercise 3.57.) The solution is to build delayed objects so that the first time they are forced, they store
11285 the value that is computed. Subsequent forcings will simply return the stored value without repeating
11286 the computation. In other words, we implement delay as a special-purpose memoized procedure
11287 similar to the one described in exercise 3.27. One way to accomplish this is to use the following
11288 procedure, which takes as argument a procedure (of no arguments) and returns a memoized version of
11289 the procedure. The first time the memoized procedure is run, it saves the computed result. On
11290 subsequent evaluations, it simply returns the result.
(define (memo-proc proc)
  (let ((already-run? false) (result false))
    (lambda ()
      (if (not already-run?)
          (begin (set! result (proc))
                 (set! already-run? true)
                 result)
          result))))
11299 Delay is then defined so that (delay <exp>) is equivalent to
11300 (memo-proc (lambda () <exp>))
11301 and force is as defined previously. 58
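A quick check (not from the text) shows the effect of the memoization; the body of the delayed
expression runs only once:
(define p (memo-proc (lambda () (display "computing ") 42)))
(p)
computing 42
(p)
42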
11302
Exercise 3.50. Complete the following definition, which generalizes stream-map to allow
procedures that take multiple arguments, analogous to map in section 2.2.3, footnote 12.
(define (stream-map proc . argstreams)
  (if (<??> (car argstreams))
      the-empty-stream
      (<??>
       (apply proc (map <??> argstreams))
       (apply stream-map
              (cons proc (map <??> argstreams))))))
11312 Exercise 3.51. In order to take a closer look at delayed evaluation, we will use the following
11313 procedure, which simply returns its argument after printing it:
(define (show x)
  (display-line x)
  x)
11317 What does the interpreter print in response to evaluating each expression in the following sequence? 59
11318 (define x (stream-map show (stream-enumerate-interval 0 10)))
11319 (stream-ref x 5)
11320 (stream-ref x 7)
11321 Exercise 3.52. Consider the sequence of expressions
(define sum 0)
(define (accum x)
  (set! sum (+ x sum))
  sum)
(define seq (stream-map accum (stream-enumerate-interval 1 20)))
(define y (stream-filter even? seq))
(define z (stream-filter (lambda (x) (= (remainder x 5) 0))
                         seq))
11338 (stream-ref y 7)
11339 (display-stream z)
11340 What is the value of sum after each of the above expressions is evaluated? What is the printed
11341 response to evaluating the stream-ref and display-stream expressions? Would these
11342 responses differ if we had implemented (delay <exp>) simply as (lambda () <exp>)
without using the optimization provided by memo-proc? Explain.
11344
11345 3.5.2 Infinite Streams
11346 We have seen how to support the illusion of manipulating streams as complete entities even though, in
11347 actuality, we compute only as much of the stream as we need to access. We can exploit this technique
11348 to represent sequences efficiently as streams, even if the sequences are very long. What is more
11349 striking, we can use streams to represent sequences that are infinitely long. For instance, consider the
11350 following definition of the stream of positive integers:
11351
(define (integers-starting-from n)
  (cons-stream n (integers-starting-from (+ n 1))))
(define integers (integers-starting-from 1))
11355 This makes sense because integers will be a pair whose car is 1 and whose cdr is a promise to
11356 produce the integers beginning with 2. This is an infinitely long stream, but in any given time we can
11357 examine only a finite portion of it. Thus, our programs will never know that the entire infinite stream
11358 is not there.
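For example (an illustration, not from the text):
(stream-ref integers 10)
11
Only the first eleven integers are ever generated in answering this question.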
11359 Using integers we can define other infinite streams, such as the stream of integers that are not
11360 divisible by 7:
(define (divisible? x y) (= (remainder x y) 0))
(define no-sevens
  (stream-filter (lambda (x) (not (divisible? x 7)))
                 integers))
11365 Then we can find integers not divisible by 7 simply by accessing elements of this stream:
11366 (stream-ref no-sevens 100)
11367 117
11368 In analogy with integers, we can define the infinite stream of Fibonacci numbers:
(define (fibgen a b)
  (cons-stream a (fibgen b (+ a b))))
(define fibs (fibgen 0 1))
11372 Fibs is a pair whose car is 0 and whose cdr is a promise to evaluate (fibgen 1 1). When we
11373 evaluate this delayed (fibgen 1 1), it will produce a pair whose car is 1 and whose cdr is a
11374 promise to evaluate (fibgen 1 2), and so on.
11375 For a look at a more exciting infinite stream, we can generalize the no-sevens example to construct
11376 the infinite stream of prime numbers, using a method known as the sieve of Eratosthenes. 60 We start
11377 with the integers beginning with 2, which is the first prime. To get the rest of the primes, we start by
11378 filtering the multiples of 2 from the rest of the integers. This leaves a stream beginning with 3, which
11379 is the next prime. Now we filter the multiples of 3 from the rest of this stream. This leaves a stream
11380 beginning with 5, which is the next prime, and so on. In other words, we construct the primes by a
11381 sieving process, described as follows: To sieve a stream S, form a stream whose first element is the
11382 first element of S and the rest of which is obtained by filtering all multiples of the first element of S
11383 out of the rest of S and sieving the result. This process is readily described in terms of stream
11384 operations:
(define (sieve stream)
  (cons-stream
   (stream-car stream)
   (sieve (stream-filter
           (lambda (x)
             (not (divisible? x (stream-car stream))))
           (stream-cdr stream)))))
(define primes (sieve (integers-starting-from 2)))
11393
Now to find a particular prime we need only ask for it:
11395 (stream-ref primes 50)
11396 233
11397 It is interesting to contemplate the signal-processing system set up by sieve, shown in the
11398 ‘‘Henderson diagram’’ in figure 3.31. 61 The input stream feeds into an ‘‘unconser’’ that separates
11399 the first element of the stream from the rest of the stream. The first element is used to construct a
11400 divisibility filter, through which the rest is passed, and the output of the filter is fed to another sieve
11401 box. Then the original first element is consed onto the output of the internal sieve to form the output
11402 stream. Thus, not only is the stream infinite, but the signal processor is also infinite, because the sieve
11403 contains a sieve within it.
11404
Figure 3.31: The prime sieve viewed as a signal-processing system.
11407
11408 Defining streams implicitly
11409 The integers and fibs streams above were defined by specifying ‘‘generating’’ procedures that
11410 explicitly compute the stream elements one by one. An alternative way to specify streams is to take
11411 advantage of delayed evaluation to define streams implicitly. For example, the following expression
11412 defines the stream ones to be an infinite stream of ones:
11413 (define ones (cons-stream 1 ones))
11414 This works much like the definition of a recursive procedure: ones is a pair whose car is 1 and
11415 whose cdr is a promise to evaluate ones. Evaluating the cdr gives us again a 1 and a promise to
11416 evaluate ones, and so on.
11417 We can do more interesting things by manipulating streams with operations such as add-streams,
11418 which produces the elementwise sum of two given streams: 62
(define (add-streams s1 s2)
  (stream-map + s1 s2))
11421 Now we can define the integers as follows:
11422 (define integers (cons-stream 1 (add-streams ones integers)))
11423
This defines integers to be a stream whose first element is 1 and the rest of which is the sum of
11425 ones and integers. Thus, the second element of integers is 1 plus the first element of
11426 integers, or 2; the third element of integers is 1 plus the second element of integers, or 3;
11427 and so on. This definition works because, at any point, enough of the integers stream has been
11428 generated so that we can feed it back into the definition to produce the next integer.
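A sketch of the first few forcings (an illustration, not from the text): integers starts as a pair whose
car is 1; taking its stream-cdr forces (add-streams ones integers), whose first element is
(+ 1 1) = 2; the element after that is 1 plus that 2, and so on.
(stream-ref integers 3)
4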
11429 We can define the Fibonacci numbers in the same style:
(define fibs
  (cons-stream 0
               (cons-stream 1
                            (add-streams (stream-cdr fibs)
                                         fibs))))
11435 This definition says that fibs is a stream beginning with 0 and 1, such that the rest of the stream can
11436 be generated by adding fibs to itself shifted by one place:

    1   1   2   3   5   8   13  21  ...  =  (stream-cdr fibs)
    0   1   1   2   3   5   8   13  ...  =  fibs
0   1   1   2   3   5   8   13  21  34  ...  =  fibs

11496 Scale-stream is another useful procedure in formulating such stream definitions. This multiplies
11497 each item in a stream by a given constant:
(define (scale-stream stream factor)
  (stream-map (lambda (x) (* x factor)) stream))
11500 For example,
11501 (define double (cons-stream 1 (scale-stream double 2)))
11502 produces the stream of powers of 2: 1, 2, 4, 8, 16, 32, ....
11503 An alternate definition of the stream of primes can be given by starting with the integers and filtering
11504 them by testing for primality. We will need the first prime, 2, to get started:
(define primes
  (cons-stream
   2
   (stream-filter prime? (integers-starting-from 3))))
This definition is not so straightforward as it appears, because we will test whether a number n is
prime by checking whether n is divisible by a prime (not by just any integer) less than or equal to √n:
(define (prime? n)
  (define (iter ps)
    (cond ((> (square (stream-car ps)) n) true)
          ((divisible? n (stream-car ps)) false)
          (else (iter (stream-cdr ps)))))
  (iter primes))
This is a recursive definition, since primes is defined in terms of the prime? predicate, which itself
uses the primes stream. The reason this procedure works is that, at any point, enough of the primes
stream has been generated to test the primality of the numbers we need to check next. That is, for
every n we test for primality, either n is not prime (in which case there is a prime already generated
that divides it) or n is prime (in which case there is a prime already generated -- i.e., a prime less than
n -- that is greater than √n). 63
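For example (an illustration, not from the text), finding the 26th prime forces the primes stream no
further than necessary:
(stream-ref primes 25)
101
Testing 101 consults only the primes up through 11, since 11² = 121 > 101.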
11526 Exercise 3.53. Without running the program, describe the elements of the stream defined by
11527 (define s (cons-stream 1 (add-streams s s)))
11528 Exercise 3.54. Define a procedure mul-streams, analogous to add-streams, that produces the
11529 elementwise product of its two input streams. Use this together with the stream of integers to
11530 complete the following definition of the stream whose nth element (counting from 0) is n + 1 factorial:
11531 (define factorials (cons-stream 1 (mul-streams <??> <??>)))
11532 Exercise 3.55. Define a procedure partial-sums that takes as argument a stream S and returns the
stream whose elements are S0, S0 + S1, S0 + S1 + S2, .... For example, (partial-sums
integers) should be the stream 1, 3, 6, 10, 15, ....
11535 Exercise 3.56. A famous problem, first raised by R. Hamming, is to enumerate, in ascending order
11536 with no repetitions, all positive integers with no prime factors other than 2, 3, or 5. One obvious way
11537 to do this is to simply test each integer in turn to see whether it has any factors other than 2, 3, and 5.
11538 But this is very inefficient, since, as the integers get larger, fewer and fewer of them fit the
11539 requirement. As an alternative, let us call the required stream of numbers S and notice the following
11540 facts about it.
11541 S begins with 1.
11542 The elements of (scale-stream S 2) are also elements of S.
The same is true for (scale-stream S 3) and (scale-stream S 5).
11544 These are all the elements of S.
11545 Now all we have to do is combine elements from these sources. For this we define a procedure merge
11546 that combines two ordered streams into one ordered result stream, eliminating repetitions:
(define (merge s1 s2)
  (cond ((stream-null? s1) s2)
        ((stream-null? s2) s1)
        (else
         (let ((s1car (stream-car s1))
               (s2car (stream-car s2)))
           (cond ((< s1car s2car)
                  (cons-stream s1car (merge (stream-cdr s1) s2)))
                 ((> s1car s2car)
                  (cons-stream s2car (merge s1 (stream-cdr s2))))
                 (else
                  (cons-stream s1car
                               (merge (stream-cdr s1)
                                      (stream-cdr s2)))))))))
11562 Then the required stream may be constructed with merge, as follows:
11563 (define S (cons-stream 1 (merge <??> <??>)))
11564 Fill in the missing expressions in the places marked <??> above.
11565 Exercise 3.57. How many additions are performed when we compute the nth Fibonacci number using
11566 the definition of fibs based on the add-streams procedure? Show that the number of additions
11567 would be exponentially greater if we had implemented (delay <exp>) simply as (lambda ()
11568 <exp>), without using the optimization provided by the memo-proc procedure described in
11569 section 3.5.1. 64
11570 Exercise 3.58. Give an interpretation of the stream computed by the following procedure:
(define (expand num den radix)
  (cons-stream
   (quotient (* num radix) den)
   (expand (remainder (* num radix) den) den radix)))
(Quotient is a primitive that returns the integer quotient of two integers.) What are the successive
elements produced by (expand 1 7 10)? What is produced by (expand 3 8 10)?
11577 Exercise 3.59. In section 2.5.3 we saw how to implement a polynomial arithmetic system
representing polynomials as lists of terms. In a similar way, we can work with power series, such as

e^x  = 1 + x + x²/2 + x³/(3·2) + x⁴/(4·3·2) + ···
cos x = 1 - x²/2 + x⁴/(4·3·2) - ···
sin x = x - x³/(3·2) + x⁵/(5·4·3·2) - ···

represented as infinite streams. We will represent the series a0 + a1x + a2x² + a3x³ + ··· as the
stream whose elements are the coefficients a0, a1, a2, a3, ....
a. The integral of the series a0 + a1x + a2x² + a3x³ + ··· is the series

c + a0x + (1/2)a1x² + (1/3)a2x³ + (1/4)a3x⁴ + ···

where c is any constant. Define a procedure integrate-series that takes as input a stream a0,
a1, a2, ... representing a power series and returns the stream a0, (1/2)a1, (1/3)a2, ... of
11586 coefficients of the non-constant terms of the integral of the series. (Since the result has no constant
11587 term, it doesn’t represent a power series; when we use integrate-series, we will cons on the
11588 appropriate constant.)
11589
b. The function e^x is its own derivative. This implies that e^x and the integral of e^x are the same
series, except for the constant term, which is e^0 = 1. Accordingly, we can generate the series for e^x as
(define exp-series
  (cons-stream 1 (integrate-series exp-series)))
11594 Show how to generate the series for sine and cosine, starting from the facts that the derivative of sine
11595 is cosine and the derivative of cosine is the negative of sine:
(define cosine-series
  (cons-stream 1 <??>))
(define sine-series
  (cons-stream 0 <??>))
11600 Exercise 3.60. With power series represented as streams of coefficients as in exercise 3.59, adding
11601 series is implemented by add-streams. Complete the definition of the following procedure for
11602 multiplying series:
(define (mul-series s1 s2)
  (cons-stream <??> (add-streams <??> <??>)))
You can test your procedure by verifying that sin²x + cos²x = 1, using the series from exercise 3.59.
11606 Exercise 3.61. Let S be a power series (exercise 3.59) whose constant term is 1. Suppose we want to
find the power series 1/S, that is, the series X such that S · X = 1. Write S = 1 + S_R where S_R is the part
of S after the constant term. Then we can solve for X as follows:

          S · X = 1
  (1 + S_R) · X = 1
    X + S_R · X = 1
              X = 1 - S_R · X

11610 In other words, X is the power series whose constant term is 1 and whose higher-order terms are given
11611 by the negative of S R times X. Use this idea to write a procedure invert-unit-series that
11612 computes 1/S for a power series S with constant term 1. You will need to use mul-series from
11613 exercise 3.60.
11614 Exercise 3.62. Use the results of exercises 3.60 and 3.61 to define a procedure div-series that
11615 divides two power series. Div-series should work for any two series, provided that the
11616 denominator series begins with a nonzero constant term. (If the denominator has a zero constant term,
11617 then div-series should signal an error.) Show how to use div-series together with the result
11618 of exercise 3.59 to generate the power series for tangent.
11619
11620 3.5.3 Exploiting the Stream Paradigm
11621 Streams with delayed evaluation can be a powerful modeling tool, providing many of the benefits of
11622 local state and assignment. Moreover, they avoid some of the theoretical tangles that accompany the
11623 introduction of assignment into a programming language.
11624
The stream approach can be illuminating because it allows us to build systems with different module
11626 boundaries than systems organized around assignment to state variables. For example, we can think of
11627 an entire time series (or signal) as a focus of interest, rather than the values of the state variables at
11628 individual moments. This makes it convenient to combine and compare components of state from
11629 different moments.
11630
11631 Formulating iterations as stream processes
11632 In section 1.2.1, we introduced iterative processes, which proceed by updating state variables. We
11633 know now that we can represent state as a ‘‘timeless’’ stream of values rather than as a set of variables
11634 to be updated. Let’s adopt this perspective in revisiting the square-root procedure from section 1.1.7.
11635 Recall that the idea is to generate a sequence of better and better guesses for the square root of x by
11636 applying over and over again the procedure that improves guesses:
(define (sqrt-improve guess x)
  (average guess (/ x guess)))
11639 In our original sqrt procedure, we made these guesses be the successive values of a state variable.
11640 Instead we can generate the infinite stream of guesses, starting with an initial guess of 1: 65
(define (sqrt-stream x)
  (define guesses
    (cons-stream 1.0
                 (stream-map (lambda (guess)
                               (sqrt-improve guess x))
                             guesses)))
  guesses)
11648 (display-stream (sqrt-stream 2))
11649 1.
11650 1.5
11651 1.4166666666666665
11652 1.4142156862745097
11653 1.4142135623746899
11654 ...
11655 We can generate more and more terms of the stream to get better and better guesses. If we like, we can
11656 write a procedure that keeps generating terms until the answer is good enough. (See exercise 3.64.)
Another iteration that we can treat in the same way is to generate an approximation to π, based upon
the alternating series that we saw in section 1.3.1:

π/4 = 1 - 1/3 + 1/5 - 1/7 + ···

11662 We first generate the stream of summands of the series (the reciprocals of the odd integers, with
11663 alternating signs). Then we take the stream of sums of more and more terms (using the
11664 partial-sums procedure of exercise 3.55) and scale the result by 4:
(define (pi-summands n)
  (cons-stream (/ 1.0 n)
               (stream-map - (pi-summands (+ n 2)))))
(define pi-stream
  (scale-stream (partial-sums (pi-summands 1)) 4))
11671 (display-stream pi-stream)
11672 4.
11673 2.666666666666667
11674 3.466666666666667
11675 2.8952380952380956
11676 3.3396825396825403
11677 2.9760461760461765
11678 3.2837384837384844
11679 3.017071817071818
11680 ...
This gives us a stream of better and better approximations to π, although the approximations converge
rather slowly. Eight terms of the sequence bound the value of π between 3.284 and 3.017.
11683 So far, our use of the stream of states approach is not much different from updating state variables. But
11684 streams give us an opportunity to do some interesting tricks. For example, we can transform a stream
11685 with a sequence accelerator that converts a sequence of approximations to a new sequence that
11686 converges to the same value as the original, only faster.
11687 One such accelerator, due to the eighteenth-century Swiss mathematician Leonhard Euler, works well
11688 with sequences that are partial sums of alternating series (series of terms with alternating signs). In
Euler’s technique, if S_n is the nth term of the original sum sequence, then the accelerated sequence has
terms

S_{n+1} - (S_{n+1} - S_n)² / (S_{n-1} - 2·S_n + S_{n+1})

11692 Thus, if the original sequence is represented as a stream of values, the transformed sequence is given
11693 by
(define (euler-transform s)
  (let ((s0 (stream-ref s 0))     ; S_{n-1}
        (s1 (stream-ref s 1))     ; S_n
        (s2 (stream-ref s 2)))    ; S_{n+1}
    (cons-stream (- s2 (/ (square (- s2 s1))
                          (+ s0 (* -2 s1) s2)))
                 (euler-transform (stream-cdr s)))))
We can demonstrate Euler acceleration with our sequence of approximations to π:
11705 (display-stream (euler-transform pi-stream))
11706 3.166666666666667
11707 3.1333333333333337
11708 3.1452380952380956
11709 3.13968253968254
11710 3.1427128427128435
11711 3.1408813408813416
11712 3.142071817071818
11713 3.1412548236077655
11714 ...
Even better, we can accelerate the accelerated sequence, and recursively accelerate that, and so on.
11719 Namely, we create a stream of streams (a structure we’ll call a tableau) in which each stream is the
11720 transform of the preceding one:
(define (make-tableau transform s)
  (cons-stream s
               (make-tableau transform
                             (transform s))))
The tableau has the form

s00  s01  s02  s03  s04  ...
     s10  s11  s12  s13  ...
          s20  s21  s22  ...
               ...

11727 Finally, we form a sequence by taking the first term in each row of the tableau:
(define (accelerated-sequence transform s)
  (stream-map stream-car
              (make-tableau transform s)))
We can demonstrate this kind of ‘‘super-acceleration’’ of the π sequence:
(display-stream (accelerated-sequence euler-transform
                                      pi-stream))
11737 4.
11738 3.166666666666667
11739 3.142105263157895
11740 3.141599357319005
11741 3.1415927140337785
11742 3.1415926539752927
11743 3.1415926535911765
11744 3.141592653589778
11745 ...
The result is impressive. Taking eight terms of the sequence yields the correct value of π to 14 decimal
places. If we had used only the original sequence, we would need to compute on the order of 10^13
terms (i.e., expanding the series far enough so that the individual terms are less than 10^-13) to get that
11749 much accuracy! We could have implemented these acceleration techniques without using streams. But
11750 the stream formulation is particularly elegant and convenient because the entire sequence of states is
11751 available to us as a data structure that can be manipulated with a uniform set of operations.
11752 Exercise 3.63. Louis Reasoner asks why the sqrt-stream procedure was not written in the
11753 following more straightforward way, without the local variable guesses:
(define (sqrt-stream x)
  (cons-stream 1.0
               (stream-map (lambda (guess)
                             (sqrt-improve guess x))
                           (sqrt-stream x))))
11759
Alyssa P. Hacker replies that this version of the procedure is considerably less efficient because it
11761 performs redundant computation. Explain Alyssa’s answer. Would the two versions still differ in
11762 efficiency if our implementation of delay used only (lambda () <exp>) without using the
11763 optimization provided by memo-proc (section 3.5.1)?
11764 Exercise 3.64. Write a procedure stream-limit that takes as arguments a stream and a number
11765 (the tolerance). It should examine the stream until it finds two successive elements that differ in
11766 absolute value by less than the tolerance, and return the second of the two elements. Using this, we
11767 could compute square roots up to a given tolerance by
(define (sqrt x tolerance)
  (stream-limit (sqrt-stream x) tolerance))
11770 Exercise 3.65. Use the series

ln 2 = 1 - 1/2 + 1/3 - 1/4 + ···

to compute three sequences of approximations to the natural logarithm of 2, in the same way we did
above for π. How rapidly do these sequences converge?
11774
11775 Infinite streams of pairs
11776 In section 2.2.3, we saw how the sequence paradigm handles traditional nested loops as processes
11777 defined on sequences of pairs. If we generalize this technique to infinite streams, then we can write
11778 programs that are not easily represented as loops, because the ‘‘looping’’ must range over an infinite
11779 set.
11780 For example, suppose we want to generalize the prime-sum-pairs procedure of section 2.2.3 to
produce the stream of pairs of all integers (i,j) with i ≤ j such that i + j is prime. If int-pairs is the
sequence of all pairs of integers (i,j) with i ≤ j, then our required stream is simply 66
(stream-filter (lambda (pair)
                 (prime? (+ (car pair) (cadr pair))))
               int-pairs)
11786 Our problem, then, is to produce the stream int-pairs. More generally, suppose we have two
streams S = (S_i) and T = (T_j), and imagine the infinite rectangular array

(S0,T0)  (S0,T1)  (S0,T2)  ...
(S1,T0)  (S1,T1)  (S1,T2)  ...
(S2,T0)  (S2,T1)  (S2,T2)  ...
...

11789 We wish to generate a stream that contains all the pairs in the array that lie on or above the diagonal,
11790 i.e., the pairs

(S0,T0)  (S0,T1)  (S0,T2)  ...
         (S1,T1)  (S1,T2)  ...
                  (S2,T2)  ...
                  ...

(If we take both S and T to be the stream of integers, then this will be our desired stream
11793 int-pairs.)
11794 Call the general stream of pairs (pairs S T), and consider it to be composed of three parts: the
pair (S0,T0), the rest of the pairs in the first row, and the remaining pairs: 67

(S0,T0) | (S0,T1)  (S0,T2)  ...
--------+---------------------
        | (S1,T1)  (S1,T2)  ...
        |          (S2,T2)  ...
        |          ...

11797 Observe that the third piece in this decomposition (pairs that are not in the first row) is (recursively)
11798 the pairs formed from (stream-cdr S) and (stream-cdr T). Also note that the second piece
11799 (the rest of the first row) is
(stream-map (lambda (x) (list (stream-car s) x))
            (stream-cdr t))
11802 Thus we can form our stream of pairs as follows:
(define (pairs s t)
  (cons-stream
   (list (stream-car s) (stream-car t))
   (<combine-in-some-way>
    (stream-map (lambda (x) (list (stream-car s) x))
                (stream-cdr t))
    (pairs (stream-cdr s) (stream-cdr t)))))
11810 In order to complete the procedure, we must choose some way to combine the two inner streams. One
11811 idea is to use the stream analog of the append procedure from section 2.2.1:
(define (stream-append s1 s2)
  (if (stream-null? s1)
      s2
      (cons-stream (stream-car s1)
                   (stream-append (stream-cdr s1) s2))))
11817 This is unsuitable for infinite streams, however, because it takes all the elements from the first stream
11818 before incorporating the second stream. In particular, if we try to generate all pairs of positive integers
11819 using
11820 (pairs integers integers)
11821 our stream of results will first try to run through all pairs with the first integer equal to 1, and hence
11822 will never produce pairs with any other value of the first integer.
11823 To handle infinite streams, we need to devise an order of combination that ensures that every element
11824 will eventually be reached if we let our program run long enough. An elegant way to accomplish this
11825 is with the following interleave procedure: 68
11826
(define (interleave s1 s2)
  (if (stream-null? s1)
      s2
      (cons-stream (stream-car s1)
                   (interleave s2 (stream-cdr s1)))))
11832 Since interleave takes elements alternately from the two streams, every element of the second
11833 stream will eventually find its way into the interleaved stream, even if the first stream is infinite.
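For instance (an illustration, not from the text), interleaving the integers with the no-sevens stream
of section 3.5.2 begins 1, 1, 2, 2, 3, 3, 4, 4, ..., drawing alternately from each:
(stream-ref (interleave integers no-sevens) 2)
2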
11834 We can thus generate the required stream of pairs as
(define (pairs s t)
  (cons-stream
   (list (stream-car s) (stream-car t))
   (interleave
    (stream-map (lambda (x) (list (stream-car s) x))
                (stream-cdr t))
    (pairs (stream-cdr s) (stream-cdr t)))))
11842 Exercise 3.66. Examine the stream (pairs integers integers). Can you make any general
11843 comments about the order in which the pairs are placed into the stream? For example, about how many
11844 pairs precede the pair (1,100)? the pair (99,100)? the pair (100,100)? (If you can make precise
11845 mathematical statements here, all the better. But feel free to give more qualitative answers if you find
11846 yourself getting bogged down.)
11847 Exercise 3.67. Modify the pairs procedure so that (pairs integers integers) will
produce the stream of all pairs of integers (i,j) (without the condition i ≤ j). Hint: You will need to mix
11849 in an additional stream.
11850 Exercise 3.68. Louis Reasoner thinks that building a stream of pairs from three parts is unnecessarily
complicated. Instead of separating the pair (S0,T0) from the rest of the pairs in the first row, he
11852 proposes to work with the whole first row, as follows:
(define (pairs s t)
  (interleave
   (stream-map (lambda (x) (list (stream-car s) x))
               t)
   (pairs (stream-cdr s) (stream-cdr t))))
11858 Does this work? Consider what happens if we evaluate (pairs integers integers) using
11859 Louis’s definition of pairs.
Exercise 3.69. Write a procedure triples that takes three infinite streams, S, T, and U, and
produces the stream of triples (S_i, T_j, U_k) such that i ≤ j ≤ k. Use triples to generate the stream of
all Pythagorean triples of positive integers, i.e., the triples (i,j,k) such that i ≤ j and i² + j² = k².
11863 Exercise 3.70. It would be nice to be able to generate streams in which the pairs appear in some
11864 useful order, rather than in the order that results from an ad hoc interleaving process. We can use a
11865 technique similar to the merge procedure of exercise 3.56, if we define a way to say that one pair of
11866 integers is ‘‘less than’’ another. One way to do this is to define a ‘‘weighting function’’ W(i,j) and
stipulate that (i1,j1) is less than (i2,j2) if W(i1,j1) < W(i2,j2). Write a procedure
11868 merge-weighted that is like merge, except that merge-weighted takes an additional
11869
argument weight, which is a procedure that computes the weight of a pair, and is used to determine
11871 the order in which elements should appear in the resulting merged stream. 69 Using this, generalize
11872 pairs to a procedure weighted-pairs that takes two streams, together with a procedure that
11873 computes a weighting function, and generates the stream of pairs, ordered according to weight. Use
11874 your procedure to generate
a. the stream of all pairs of positive integers (i,j) with i ≤ j ordered according to the sum i + j
b. the stream of all pairs of positive integers (i,j) with i ≤ j, where neither i nor j is divisible by 2, 3, or
5, and the pairs are ordered according to the sum 2i + 3j + 5ij.
11878 Exercise 3.71. Numbers that can be expressed as the sum of two cubes in more than one way are
11879 sometimes called Ramanujan numbers, in honor of the mathematician Srinivasa Ramanujan. 70
11880 Ordered streams of pairs provide an elegant solution to the problem of computing these numbers. To
11881 find a number that can be written as the sum of two cubes in two different ways, we need only
generate the stream of pairs of integers (i,j) weighted according to the sum i³ + j³ (see exercise 3.70),
11883 then search the stream for two consecutive pairs with the same weight. Write a procedure to generate
11884 the Ramanujan numbers. The first such number is 1,729. What are the next five?
11885 Exercise 3.72. In a similar way to exercise 3.71 generate a stream of all numbers that can be written
11886 as the sum of two squares in three different ways (showing how they can be so written).
11887
11888 Streams as signals
11889 We began our discussion of streams by describing them as computational analogs of the ‘‘signals’’ in
11890 signal-processing systems. In fact, we can use streams to model signal-processing systems in a very
11891 direct way, representing the values of a signal at successive time intervals as consecutive elements of a
stream. For instance, we can implement an integrator or summer that, for an input stream x = (x_i), an
initial value C, and a small increment dt, accumulates the sum

S_i = C + Σ_{j=1}^{i} x_j · dt

and returns the stream of values S = (S_i). The following integral procedure is reminiscent of the
11896 ‘‘implicit style’’ definition of the stream of integers (section 3.5.2):
(define (integral integrand initial-value dt)
  (define int
    (cons-stream initial-value
                 (add-streams (scale-stream integrand dt)
                              int)))
  int)
11903
Figure 3.32: The integral procedure viewed as a signal-processing system.
11906 Figure 3.32 is a picture of a signal-processing system that corresponds to the integral procedure.
11907 The input stream is scaled by dt and passed through an adder, whose output is passed back through the
11908 same adder. The self-reference in the definition of int is reflected in the figure by the feedback loop
11909 that connects the output of the adder to one of the inputs.
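As an illustration (not from the text), integrating the constant stream ones from section 3.5.2, with
initial value 0 and dt = 0.1, yields better and better approximations to the ramp S(t) = t:
(stream-ref (integral ones 0 0.1) 10)
.9999999999999999
(The answer is 1 up to roundoff: ten increments of .1 added to the initial 0.)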
11910 Exercise 3.73.
11911
v = v0 + (1/C) ∫₀ᵗ i dt + R·i

Figure 3.33: An RC circuit and the associated signal-flow diagram.
11922 We can model electrical circuits using streams to represent the values of currents or voltages at a
11923 sequence of times. For instance, suppose we have an RC circuit consisting of a resistor of resistance R
11924 and a capacitor of capacitance C in series. The voltage response v of the circuit to an injected current i
11925 is determined by the formula in figure 3.33, whose structure is shown by the accompanying
11926 signal-flow diagram.
11927 Write a procedure RC that models this circuit. RC should take as inputs the values of R, C, and dt and
11928 should return a procedure that takes as inputs a stream representing the current i and an initial value for
11929 the capacitor voltage v 0 and produces as output the stream of voltages v. For example, you should be
11930 able to use RC to model an RC circuit with R = 5 ohms, C = 1 farad, and a 0.5-second time step by
11931 evaluating (define RC1 (RC 5 1 0.5)). This defines RC1 as a procedure that takes a stream
11932
representing the time sequence of currents and an initial capacitor voltage and produces the output
11934 stream of voltages.
11935 Exercise 3.74. Alyssa P. Hacker is designing a system to process signals coming from physical
11936 sensors. One important feature she wishes to produce is a signal that describes the zero crossings of
11937 the input signal. That is, the resulting signal should be + 1 whenever the input signal changes from
11938 negative to positive, - 1 whenever the input signal changes from positive to negative, and 0 otherwise.
11939 (Assume that the sign of a 0 input is positive.) For example, a typical input signal with its associated
11940 zero-crossing signal would be
... 1   2   1.5   1   0.5   -0.1   -2   -3   -2   -0.5   0.2   3   4 ...
... 0   0   0     0   0     -1      0    0    0    0      1    0   0 ...

11991 In Alyssa’s system, the signal from the sensor is represented as a stream sense-data and the stream
11992 zero-crossings is the corresponding stream of zero crossings. Alyssa first writes a procedure
11993 sign-change-detector that takes two values as arguments and compares the signs of the values
11994 to produce an appropriate 0, 1, or - 1. She then constructs her zero-crossing stream as follows:
(define (make-zero-crossings input-stream last-value)
  (cons-stream
   (sign-change-detector (stream-car input-stream) last-value)
   (make-zero-crossings (stream-cdr input-stream)
                        (stream-car input-stream))))
(define zero-crossings (make-zero-crossings sense-data 0))
12001 Alyssa’s boss, Eva Lu Ator, walks by and suggests that this program is approximately equivalent to
12002 the following one, which uses the generalized version of stream-map from exercise 3.50:
(define zero-crossings
  (stream-map sign-change-detector sense-data <expression>))
12005 Complete the program by supplying the indicated <expression>.
12006 Exercise 3.75. Unfortunately, Alyssa’s zero-crossing detector in exercise 3.74 proves to be
12007 insufficient, because the noisy signal from the sensor leads to spurious zero crossings. Lem E.
12008 Tweakit, a hardware specialist, suggests that Alyssa smooth the signal to filter out the noise before
12009 extracting the zero crossings. Alyssa takes his advice and decides to extract the zero crossings from the
12010 signal constructed by averaging each value of the sense data with the previous value. She explains the
12011 problem to her assistant, Louis Reasoner, who attempts to implement the idea, altering Alyssa’s
12012 program as follows:
(define (make-zero-crossings input-stream last-value)
  (let ((avpt (/ (+ (stream-car input-stream) last-value) 2)))
    (cons-stream (sign-change-detector avpt last-value)
                 (make-zero-crossings (stream-cdr input-stream)
                                      avpt))))
12018 This does not correctly implement Alyssa’s plan. Find the bug that Louis has installed and fix it
12019 without changing the structure of the program. (Hint: You will need to increase the number of
12020 arguments to make-zero-crossings.)
12021
Exercise 3.76. Eva Lu Ator has a criticism of Louis’s approach in exercise 3.75. The program he
12023 wrote is not modular, because it intermixes the operation of smoothing with the zero-crossing
12024 extraction. For example, the extractor should not have to be changed if Alyssa finds a better way to
12025 condition her input signal. Help Louis by writing a procedure smooth that takes a stream as input and
12026 produces a stream in which each element is the average of two successive input stream elements. Then
12027 use smooth as a component to implement the zero-crossing detector in a more modular style.
12028
12029 3.5.4 Streams and Delayed Evaluation
12030 The integral procedure at the end of the preceding section shows how we can use streams to model
12031 signal-processing systems that contain feedback loops. The feedback loop for the adder shown in
12032 figure 3.32 is modeled by the fact that integral’s internal stream int is defined in terms of itself:
(define int
  (cons-stream initial-value
               (add-streams (scale-stream integrand dt)
                            int)))
12037 The interpreter’s ability to deal with such an implicit definition depends on the delay that is
12038 incorporated into cons-stream. Without this delay, the interpreter could not construct int
12039 before evaluating both arguments to cons-stream, which would require that int already be
12040 defined. In general, delay is crucial for using streams to model signal-processing systems that
12041 contain loops. Without delay, our models would have to be formulated so that the inputs to any
12042 signal-processing component would be fully evaluated before the output could be produced. This
12043 would outlaw loops.
12044 Unfortunately, stream models of systems with loops may require uses of delay beyond the ‘‘hidden’’
12045 delay supplied by cons-stream. For instance, figure 3.34 shows a signal-processing system for
12046 solving the differential equation dy/dt = f(y) where f is a given function. The figure shows a mapping
12047 component, which applies f to its input signal, linked in a feedback loop to an integrator in a manner
12048 very similar to that of the analog computer circuits that are actually used to solve such equations.
12049
Figure 3.34: An ‘‘analog computer circuit’’ that solves the equation dy/dt = f(y).
Assuming we are given an initial value y0 for y, we could try to model this system using the procedure
(define (solve f y0 dt)
  (define y (integral dy y0 dt))
  (define dy (stream-map f y))
  y)
12057
This procedure does not work, because in the first line of solve the call to integral requires that
12059 the input dy be defined, which does not happen until the second line of solve.
12060 On the other hand, the intent of our definition does make sense, because we can, in principle, begin to
12061 generate the y stream without knowing dy. Indeed, integral and many other stream operations
12062 have properties similar to those of cons-stream, in that we can generate part of the answer given
12063 only partial information about the arguments. For integral, the first element of the output stream is
12064 the specified initial-value. Thus, we can generate the first element of the output stream without
12065 evaluating the integrand dy. Once we know the first element of y, the stream-map in the second
12066 line of solve can begin working to generate the first element of dy, which will produce the next
12067 element of y, and so on.
12068 To take advantage of this idea, we will redefine integral to expect the integrand stream to be a
12069 delayed argument. Integral will force the integrand to be evaluated only when it is required to
12070 generate more than the first element of the output stream:
(define (integral delayed-integrand initial-value dt)
  (define int
    (cons-stream initial-value
                 (let ((integrand (force delayed-integrand)))
                   (add-streams (scale-stream integrand dt)
                                int))))
  int)
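An illustrative call (not from the text): the integrand is now passed explicitly delayed, as in
(stream-ref (integral (delay ones) 0 0.1) 10)
.9999999999999999
which computes the same running sum as the undelayed version of the procedure.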
12078 Now we can implement our solve procedure by delaying the evaluation of dy in the definition of
12079 y: 71
(define (solve f y0 dt)
  (define y (integral (delay dy) y0 dt))
  (define dy (stream-map f y))
  y)
12084 In general, every caller of integral must now delay the integrand argument. We can demonstrate
that the solve procedure works by approximating e ≈ 2.718 by computing the value at y = 1 of the
12086 solution to the differential equation dy/dt = y with initial condition y(0) = 1:
12087 (stream-ref (solve (lambda (y) y) 1 0.001) 1000)
12088 2.716924
12089 Exercise 3.77. The integral procedure used above was analogous to the ‘‘implicit’’ definition of
12090 the infinite stream of integers in section 3.5.2. Alternatively, we can give a definition of integral
12091 that is more like integers-starting-from (also in section 3.5.2):
(define (integral integrand initial-value dt)
  (cons-stream initial-value
               (if (stream-null? integrand)
                   the-empty-stream
                   (integral (stream-cdr integrand)
                             (+ (* dt (stream-car integrand))
                                initial-value)
                             dt))))
12100
When used in systems with loops, this procedure has the same problem as does our original version of
12102 integral. Modify the procedure so that it expects the integrand as a delayed argument and
12103 hence can be used in the solve procedure shown above.
12104 Exercise 3.78.
12105
Figure 3.35: Signal-flow diagram for the solution to a second-order linear differential equation.
12108 Consider the problem of designing a signal-processing system to study the homogeneous second-order
linear differential equation

d²y/dt² - a·dy/dt - b·y = 0

The output stream, modeling y, is generated by a network that contains a loop. This is because the
value of d²y/dt² depends upon the values of y and dy/dt and both of these are determined by
integrating d²y/dt². The diagram we would like to encode is shown in figure 3.35. Write a procedure
solve-2nd that takes as arguments the constants a, b, and dt and the initial values y0 and dy0 for y
and dy/dt and generates the stream of successive values of y.
Exercise 3.79. Generalize the solve-2nd procedure of exercise 3.78 so that it can be used to solve
general second-order differential equations d²y/dt² = f(dy/dt, y).
12118 Exercise 3.80. A series RLC circuit consists of a resistor, a capacitor, and an inductor connected in
12119 series, as shown in figure 3.36. If R, L, and C are the resistance, inductance, and capacitance, then the
relations between voltage (v) and current (i) for the three components are described by the equations

v_R = i_R · R
v_L = L · di_L/dt
i_C = C · dv_C/dt

and the circuit connections dictate the relations

i_R = i_L = -i_C
v_C = v_L + v_R

Combining these equations shows that the state of the circuit (summarized by v_C, the voltage across
the capacitor, and i_L, the current in the inductor) is described by the pair of differential equations

dv_C/dt = -i_L / C
di_L/dt = (1/L)·v_C - (R/L)·i_L

12127 The signal-flow diagram representing this system of differential equations is shown in figure 3.37.
12128
Figure 3.36: A series RLC circuit.
12131
Figure 3.37: A signal-flow diagram for the solution to a series RLC circuit.
12134
Write a procedure RLC that takes as arguments the parameters R, L, and C of the circuit and the time
increment dt. In a manner similar to that of the RC procedure of exercise 3.73, RLC should produce a
procedure that takes the initial values of the state variables, v_C0 and i_L0, and produces a pair (using
cons) of the streams of states v_C and i_L. Using RLC, generate the pair of streams that models the
behavior of a series RLC circuit with R = 1 ohm, C = 0.2 farad, L = 1 henry, dt = 0.1 second, and
initial values i_L0 = 0 amps and v_C0 = 10 volts.
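A sketch of one possible RLC, built in the same style as solve-2nd above (the delayed-argument integral is assumed):

(define (RLC R L C dt)
  (lambda (vC0 iL0)
    (define vC (integral (delay dvC) vC0 dt))
    (define iL (integral (delay diL) iL0 dt))
    (define dvC (scale-stream iL (/ -1 C)))
    (define diL (add-streams (scale-stream vC (/ 1 L))
                             (scale-stream iL (/ (- R) L))))
    (cons vC iL)))

(define RLC1 ((RLC 1 1 0.2 0.1) 10 0)) ; R = 1, L = 1, C = 0.2, dt = 0.1; vC0 = 10, iL0 = 0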
12141
12142 Normal-order evaluation
12143 The examples in this section illustrate how the explicit use of delay and force provides great
12144 programming flexibility, but the same examples also show how this can make our programs more
12145 complex. Our new integral procedure, for instance, gives us the power to model systems with
12146 loops, but we must now remember that integral should be called with a delayed integrand, and
12147 every procedure that uses integral must be aware of this. In effect, we have created two classes of
12148 procedures: ordinary procedures and procedures that take delayed arguments. In general, creating
12149 separate classes of procedures forces us to create separate classes of higher-order procedures as
12150 well. 72
12151 One way to avoid the need for two different classes of procedures is to make all procedures take
12152 delayed arguments. We could adopt a model of evaluation in which all arguments to procedures are
12153 automatically delayed and arguments are forced only when they are actually needed (for example,
12154 when they are required by a primitive operation). This would transform our language to use
12155 normal-order evaluation, which we first described when we introduced the substitution model for
12156 evaluation in section 1.1.5. Converting to normal-order evaluation provides a uniform and elegant way
12157 to simplify the use of delayed evaluation, and this would be a natural strategy to adopt if we were
12158 concerned only with stream processing. In section 4.2, after we have studied the evaluator, we will see
12159 how to transform our language in just this way. Unfortunately, including delays in procedure calls
12160 wreaks havoc with our ability to design programs that depend on the order of events, such as programs
12161 that use assignment, mutate data, or perform input or output. Even the single delay in
12162 cons-stream can cause great confusion, as illustrated by exercises 3.51 and 3.52. As far as anyone
12163 knows, mutability and delayed evaluation do not mix well in programming languages, and devising
12164 ways to deal with both of these at once is an active area of research.
12165
12166 3.5.5 Modularity of Functional Programs and Modularity of Objects
12167 As we saw in section 3.1.2, one of the major benefits of introducing assignment is that we can increase
12168 the modularity of our systems by encapsulating, or ‘‘hiding,’’ parts of the state of a large system
12169 within local variables. Stream models can provide an equivalent modularity without the use of
assignment. As an illustration, we can reimplement the Monte Carlo estimation of π, which we
12171 examined in section 3.1.2, from a stream-processing point of view.
12172 The key modularity issue was that we wished to hide the internal state of a random-number generator
12173 from programs that used random numbers. We began with a procedure rand-update, whose
12174 successive values furnished our supply of random numbers, and used this to produce a random-number
12175 generator:
12176 (define rand
12177 (let ((x random-init))
12178 (lambda ()
12179 (set! x (rand-update x))
12180
x)))
12182 In the stream formulation there is no random-number generator per se, just a stream of random
12183 numbers produced by successive calls to rand-update:
12184 (define random-numbers
12185 (cons-stream random-init
12186 (stream-map rand-update random-numbers)))
12187 We use this to construct the stream of outcomes of the Cesàro experiment performed on consecutive
12188 pairs in the random-numbers stream:
12189 (define cesaro-stream
12190 (map-successive-pairs (lambda (r1 r2) (= (gcd r1 r2) 1))
12191 random-numbers))
12192 (define (map-successive-pairs f s)
12193 (cons-stream
12194 (f (stream-car s) (stream-car (stream-cdr s)))
12195 (map-successive-pairs f (stream-cdr (stream-cdr s)))))
12196 The cesaro-stream is now fed to a monte-carlo procedure, which produces a stream of
estimates of probabilities. The results are then converted into a stream of estimates of π. This version
of the program doesn’t need a parameter telling how many trials to perform. Better estimates of π (from
performing more experiments) are obtained by looking farther into the pi stream:
12200 (define (monte-carlo experiment-stream passed failed)
12201 (define (next passed failed)
12202 (cons-stream
12203 (/ passed (+ passed failed))
12204 (monte-carlo
12205 (stream-cdr experiment-stream) passed failed)))
12206 (if (stream-car experiment-stream)
12207 (next (+ passed 1) failed)
12208 (next passed (+ failed 1))))
12209 (define pi
12210 (stream-map (lambda (p) (sqrt (/ 6 p)))
12211 (monte-carlo cesaro-stream 0 0)))
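For example, an estimate based on roughly ten thousand experiments can be read off directly (a usage sketch):

(stream-ref pi 10000)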
12212 There is considerable modularity in this approach, because we still can formulate a general
12213 monte-carlo procedure that can deal with arbitrary experiments. Yet there is no assignment or
12214 local state.
12215 Exercise 3.81. Exercise 3.6 discussed generalizing the random-number generator to allow one to reset
12216 the random-number sequence so as to produce repeatable sequences of ‘‘random’’ numbers. Produce a
12217 stream formulation of this same generator that operates on an input stream of requests to generate a
12218 new random number or to reset the sequence to a specified value and that produces the desired
12219 stream of random numbers. Don’t use assignment in your solution.
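One possible shape for such a stream (a sketch only: it assumes the generalized stream-map of exercise 3.50, and it represents a generate request as the symbol generate and a reset request as a list (reset <value>) -- these request representations are illustrative, not prescribed by the exercise):

(define (random-numbers-from requests)
  (define numbers
    (cons-stream
     random-init
     (stream-map (lambda (request number)
                   (if (eq? request 'generate)
                       (rand-update number)
                       (cadr request))) ; (reset <value>)
                 requests
                 numbers)))
  numbers)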
12220 Exercise 3.82. Redo exercise 3.5 on Monte Carlo integration in terms of streams. The stream version
12221 of estimate-integral will not have an argument telling how many trials to perform. Instead, it
12222 will produce a stream of estimates based on successively more trials.
12223
A functional-programming view of time
12225 Let us now return to the issues of objects and state that were raised at the beginning of this chapter and
12226 examine them in a new light. We introduced assignment and mutable objects to provide a mechanism
12227 for modular construction of programs that model systems with state. We constructed computational
12228 objects with local state variables and used assignment to modify these variables. We modeled the
12229 temporal behavior of the objects in the world by the temporal behavior of the corresponding
12230 computational objects.
12231 Now we have seen that streams provide an alternative way to model objects with local state. We can
12232 model a changing quantity, such as the local state of some object, using a stream that represents the
12233 time history of successive states. In essence, we represent time explicitly, using streams, so that we
12234 decouple time in our simulated world from the sequence of events that take place during evaluation.
12235 Indeed, because of the presence of delay there may be little relation between simulated time in the
12236 model and the order of events during the evaluation.
12237 In order to contrast these two approaches to modeling, let us reconsider the implementation of a
12238 ‘‘withdrawal processor’’ that monitors the balance in a bank account. In section 3.1.3 we implemented
12239 a simplified version of such a processor:
12240 (define (make-simplified-withdraw balance)
12241 (lambda (amount)
12242 (set! balance (- balance amount))
12243 balance))
12244 Calls to make-simplified-withdraw produce computational objects, each with a local state
12245 variable balance that is decremented by successive calls to the object. The object takes an amount
12246 as an argument and returns the new balance. We can imagine the user of a bank account typing a
12247 sequence of inputs to such an object and observing the sequence of returned values shown on a display
12248 screen.
12249 Alternatively, we can model a withdrawal processor as a procedure that takes as input a balance and a
12250 stream of amounts to withdraw and produces the stream of successive balances in the account:
12251 (define (stream-withdraw balance amount-stream)
12252 (cons-stream
12253 balance
12254 (stream-withdraw (- balance (stream-car amount-stream))
12255 (stream-cdr amount-stream))))
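For instance, with a constant stream of withdrawals (a small illustrative sketch; tens is hypothetical):

(define tens (cons-stream 10 tens))
(stream-ref (stream-withdraw 100 tens) 3) ; 70 -- the balance after three withdrawals of 10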
12256 Stream-withdraw implements a well-defined mathematical function whose output is fully
12257 determined by its input. Suppose, however, that the input amount-stream is the stream of
12258 successive values typed by the user and that the resulting stream of balances is displayed. Then, from
12259 the perspective of the user who is typing values and watching results, the stream process has the same
12260 behavior as the object created by make-simplified-withdraw. However, with the stream
12261 version, there is no assignment, no local state variable, and consequently none of the theoretical
12262 difficulties that we encountered in section 3.1.3. Yet the system has state!
12263 This is really remarkable. Even though stream-withdraw implements a well-defined
12264 mathematical function whose behavior does not change, the user’s perception here is one of interacting
12265 with a system that has a changing state. One way to resolve this paradox is to realize that it is the
12266 user’s temporal existence that imposes state on the system. If the user could step back from the
12267
interaction and think in terms of streams of balances rather than individual transactions, the system
12269 would appear stateless. 73
12270 From the point of view of one part of a complex process, the other parts appear to change with time.
12271 They have hidden time-varying local state. If we wish to write programs that model this kind of natural
12272 decomposition in our world (as we see it from our viewpoint as a part of that world) with structures in
12273 our computer, we make computational objects that are not functional -- they must change with time.
12274 We model state with local state variables, and we model the changes of state with assignments to those
12275 variables. By doing this we make the time of execution of a computation model time in the world that
12276 we are part of, and thus we get ‘‘objects’’ in our computer.
12277 Modeling with objects is powerful and intuitive, largely because this matches the perception of
12278 interacting with a world of which we are part. However, as we’ve seen repeatedly throughout this
12279 chapter, these models raise thorny problems of constraining the order of events and of synchronizing
12280 multiple processes. The possibility of avoiding these problems has stimulated the development of
12281 functional programming languages, which do not include any provision for assignment or mutable
12282 data. In such a language, all procedures implement well-defined mathematical functions of their
12283 arguments, whose behavior does not change. The functional approach is extremely attractive for
12284 dealing with concurrent systems. 74
12285 On the other hand, if we look closely, we can see time-related problems creeping into functional
12286 models as well. One particularly troublesome area arises when we wish to design interactive systems,
12287 especially ones that model interactions between independent entities. For instance, consider once more
the implementation of a banking system that permits joint bank accounts. In a conventional system using
12289 assignment and objects, we would model the fact that Peter and Paul share an account by having both
12290 Peter and Paul send their transaction requests to the same bank-account object, as we saw in
12291 section 3.1.3. From the stream point of view, where there are no ‘‘objects’’ per se, we have already
12292 indicated that a bank account can be modeled as a process that operates on a stream of transaction
12293 requests to produce a stream of responses. Accordingly, we could model the fact that Peter and Paul
12294 have a joint bank account by merging Peter’s stream of transaction requests with Paul’s stream of
12295 requests and feeding the result to the bank-account stream process, as shown in figure 3.38.
12296
Figure 3.38: A joint bank account, modeled by merging two streams of transaction requests.
12299 The trouble with this formulation is in the notion of merge. It will not do to merge the two streams by
12300 simply taking alternately one request from Peter and one request from Paul. Suppose Paul accesses the
12301 account only very rarely. We could hardly force Peter to wait for Paul to access the account before he
12302 could issue a second transaction. However such a merge is implemented, it must interleave the two
12303 transaction streams in some way that is constrained by ‘‘real time’’ as perceived by Peter and Paul, in
12304 the sense that, if Peter and Paul meet, they can agree that certain transactions were processed before
12305 the meeting, and other transactions were processed after the meeting. 75 This is precisely the same
12306 constraint that we had to deal with in section 3.4.1, where we found the need to introduce explicit
12307 synchronization to ensure a ‘‘correct’’ order of events in concurrent processing of objects with state.
12308 Thus, in an attempt to support the functional style, the need to merge inputs from different agents
12309
reintroduces the same problems that the functional style was meant to eliminate.
12311 We began this chapter with the goal of building computational models whose structure matches our
12312 perception of the real world we are trying to model. We can model the world as a collection of
12313 separate, time-bound, interacting objects with state, or we can model the world as a single, timeless,
12314 stateless unity. Each view has powerful advantages, but neither view alone is completely satisfactory.
12315 A grand unification has yet to emerge. 76
12316 52 Physicists sometimes adopt this view by introducing the ‘‘world lines’’ of particles as a device for
12317
12318 reasoning about motion. We’ve also already mentioned (section 2.2.3) that this is the natural way to
12319 think about signal-processing systems. We will explore applications of streams to signal processing in
12320 section 3.5.3.
12321 53 Assume that we have a predicate prime? (e.g., as in section 1.2.6) that tests for primality.
12322 54 In the MIT implementation, the-empty-stream is the same as the empty list ’(), and
12323
12324 stream-null? is the same as null?.
12325 55 This should bother you. The fact that we are defining such similar procedures for streams and lists
12326
12327 indicates that we are missing some underlying abstraction. Unfortunately, in order to exploit this
12328 abstraction, we will need to exert finer control over the process of evaluation than we can at present.
12329 We will discuss this point further at the end of section 3.5.4. In section 4.2, we’ll develop a framework
12330 that unifies lists and streams.
12331 56 Although stream-car and stream-cdr can be defined as procedures, cons-stream must
12332
12333 be a special form. If cons-stream were a procedure, then, according to our model of evaluation,
12334 evaluating (cons-stream <a> <b>) would automatically cause <b> to be evaluated, which is
12335 precisely what we do not want to happen. For the same reason, delay must be a special form, though
12336 force can be an ordinary procedure.
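(Recall from section 3.5.1 that (cons-stream <a> <b>) amounts to (cons <a> (delay <b>)); a procedural cons-stream would defeat the delay.)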
12337 57 The numbers shown here do not really appear in the delayed expression. What actually appears is
12338
12339 the original expression, in an environment in which the variables are bound to the appropriate
12340 numbers. For example, (+ low 1) with low bound to 10,000 actually appears where 10001 is
12341 shown.
12342 58 There are many possible implementations of streams other than the one described in this section.
12343
12344 Delayed evaluation, which is the key to making streams practical, was inherent in Algol 60’s
12345 call-by-name parameter-passing method. The use of this mechanism to implement streams was first
12346 described by Landin (1965). Delayed evaluation for streams was introduced into Lisp by Friedman and
12347 Wise (1976). In their implementation, cons always delays evaluating its arguments, so that lists
12348 automatically behave as streams. The memoizing optimization is also known as call-by-need. The
12349 Algol community would refer to our original delayed objects as call-by-name thunks and to the
12350 optimized versions as call-by-need thunks.
12351 59 Exercises such as 3.51 and 3.52 are valuable for testing our understanding of how delay works.
12352
On the other hand, intermixing delayed evaluation with printing -- and, even worse, with assignment -- is extremely confusing, and instructors of courses on computer languages have traditionally tormented
12354 their students with examination questions such as the ones in this section. Needless to say, writing
12355 programs that depend on such subtleties is odious programming style. Part of the power of stream
12356 processing is that it lets us ignore the order in which events actually happen in our programs.
12357 Unfortunately, this is precisely what we cannot afford to do in the presence of assignment, which
12358
forces us to be concerned with time and change.
12360 60 Eratosthenes, a third-century B.C. Alexandrian Greek philosopher, is famous for giving the first
12361
12362 accurate estimate of the circumference of the Earth, which he computed by observing shadows cast at
12363 noon on the day of the summer solstice. Eratosthenes’s sieve method, although ancient, has formed the
12364 basis for special-purpose hardware ‘‘sieves’’ that, until recently, were the most powerful tools in
12365 existence for locating large primes. Since the 70s, however, these methods have been superseded by
12366 outgrowths of the probabilistic techniques discussed in section 1.2.6.
12367 61 We have named these figures after Peter Henderson, who was the first person to show us diagrams
12368
12369 of this sort as a way of thinking about stream processing. Each solid line represents a stream of values
12370 being transmitted. The dashed line from the car to the cons and the filter indicates that this is a
12371 single value rather than a stream.
12372 62 This uses the generalized version of stream-map from exercise 3.50.
63 This last point is very subtle and relies on the fact that p_(n+1) < p_n². (Here, p_k denotes the kth
prime.) Estimates such as these are very difficult to establish. The ancient proof by Euclid that there
are an infinite number of primes shows that p_(n+1) < p_1 p_2 ··· p_n + 1, and no substantially better
result was proved until 1851, when the Russian mathematician P. L. Chebyshev established that
p_(n+1) < 2p_n for all n. This result, originally conjectured in 1845, is known as Bertrand’s
hypothesis. A proof can be found in section 22.3 of Hardy and Wright 1960.
12382 64 This exercise shows how call-by-need is closely related to ordinary memoization as described in
12383
12384 exercise 3.27. In that exercise, we used assignment to explicitly construct a local table. Our
12385 call-by-need stream optimization effectively constructs such a table automatically, storing values in
12386 the previously forced parts of the stream.
12387 65 We can’t use let to bind the local variable guesses, because the value of guesses depends on
12388
12389 guesses itself. Exercise 3.63 addresses why we want a local variable here.
12390 66 As in section 2.2.3, we represent a pair of integers as a list rather than a Lisp pair.
12391 67 See exercise 3.68 for some insight into why we chose this decomposition.
12392 68 The precise statement of the required property on the order of combination is as follows: There
12393
12394 should be a function f of two arguments such that the pair corresponding to element i of the first
12395 stream and element j of the second stream will appear as element number f(i,j) of the output stream.
12396 The trick of using interleave to accomplish this was shown to us by David Turner, who employed
12397 it in the language KRC (Turner 1981).
12398 69 We will require that the weighting function be such that the weight of a pair increases as we move
12399
12400 out along a row or down along a column of the array of pairs.
12401 70 To quote from G. H. Hardy’s obituary of Ramanujan (Hardy 1921): ‘‘It was Mr. Littlewood (I
12402
12403 believe) who remarked that ‘every positive integer was one of his friends.’ I remember once going to
12404 see him when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the
12405 number seemed to me a rather dull one, and that I hoped it was not an unfavorable omen. ‘No,’ he
12406 replied, ‘it is a very interesting number; it is the smallest number expressible as the sum of two cubes
12407 in two different ways.’ ’’ The trick of using weighted pairs to generate the Ramanujan numbers was
12408 shown to us by Charles Leiserson.
12409
71 This procedure is not guaranteed to work in all Scheme implementations, although for any
12411
12412 implementation there is a simple variation that will work. The problem has to do with subtle
12413 differences in the ways that Scheme implementations handle internal definitions. (See section 4.1.6.)
12414 72 This is a small reflection, in Lisp, of the difficulties that conventional strongly typed languages
12415
12416 such as Pascal have in coping with higher-order procedures. In such languages, the programmer must
12417 specify the data types of the arguments and the result of each procedure: number, logical value,
12418 sequence, and so on. Consequently, we could not express an abstraction such as ‘‘map a given
12419 procedure proc over all the elements in a sequence’’ by a single higher-order procedure such as
12420 stream-map. Rather, we would need a different mapping procedure for each different combination
12421 of argument and result data types that might be specified for a proc. Maintaining a practical notion of
12422 ‘‘data type’’ in the presence of higher-order procedures raises many difficult issues. One way of
12423 dealing with this problem is illustrated by the language ML (Gordon, Milner, and Wadsworth 1979),
12424 whose ‘‘polymorphic data types’’ include templates for higher-order transformations between data
12425 types. Moreover, data types for most procedures in ML are never explicitly declared by the
12426 programmer. Instead, ML includes a type-inferencing mechanism that uses information in the
12427 environment to deduce the data types for newly defined procedures.
12428 73 Similarly in physics, when we observe a moving particle, we say that the position (state) of the
12429
12430 particle is changing. However, from the perspective of the particle’s world line in space-time there is
12431 no change involved.
12432 74 John Backus, the inventor of Fortran, gave high visibility to functional programming when he was
12433
12434 awarded the ACM Turing award in 1978. His acceptance speech (Backus 1978) strongly advocated the
12435 functional approach. A good overview of functional programming is given in Henderson 1980 and in
12436 Darlington, Henderson, and Turner 1982.
12437 75 Observe that, for any two streams, there is in general more than one acceptable order of
12438
12439 interleaving. Thus, technically, ‘‘merge’’ is a relation rather than a function -- the answer is not a
12440 deterministic function of the inputs. We already mentioned (footnote 39) that nondeterminism is
12441 essential when dealing with concurrency. The merge relation illustrates the same essential
12442 nondeterminism, from the functional perspective. In section 4.3, we will look at nondeterminism from
12443 yet another point of view.
12444 76 The object model approximates the world by dividing it into separate pieces. The functional model
12445
12446 does not modularize along object boundaries. The object model is useful when the unshared state of
12447 the ‘‘objects’’ is much larger than the state that they share. An example of a place where the object
12448 viewpoint fails is quantum mechanics, where thinking of things as individual particles leads to
12449 paradoxes and confusions. Unifying the object view with the functional view may have little to do
12450 with programming, but rather with fundamental epistemological issues.
12455 Chapter 4
12456 Metalinguistic Abstraction
12457 ... It’s in words that the magic is -- Abracadabra, Open
12458 Sesame, and the rest -- but the magic words in one story
12459 aren’t magical in the next. The real magic is to understand
12460 which words work, and when, and for what; the trick is to
12461 learn the trick.
12462 ... And those words are made from the letters of our
12463 alphabet: a couple-dozen squiggles we can draw with the
12464 pen. This is the key! And the treasure, too, if we can only
12465 get our hands on it! It’s as if -- as if the key to the treasure
12466 is the treasure!
12467 John Barth, Chimera
12468 In our study of program design, we have seen that expert programmers control the complexity of their
12469 designs with the same general techniques used by designers of all complex systems. They combine
12470 primitive elements to form compound objects, they abstract compound objects to form higher-level
12471 building blocks, and they preserve modularity by adopting appropriate large-scale views of system
12472 structure. In illustrating these techniques, we have used Lisp as a language for describing processes
12473 and for constructing computational data objects and processes to model complex phenomena in the
12474 real world. However, as we confront increasingly complex problems, we will find that Lisp, or indeed
12475 any fixed programming language, is not sufficient for our needs. We must constantly turn to new
12476 languages in order to express our ideas more effectively. Establishing new languages is a powerful
12477 strategy for controlling complexity in engineering design; we can often enhance our ability to deal
12478 with a complex problem by adopting a new language that enables us to describe (and hence to think
12479 about) the problem in a different way, using primitives, means of combination, and means of
12480 abstraction that are particularly well suited to the problem at hand. 1
12481 Programming is endowed with a multitude of languages. There are physical languages, such as the
12482 machine languages for particular computers. These languages are concerned with the representation of
12483 data and control in terms of individual bits of storage and primitive machine instructions. The
12484 machine-language programmer is concerned with using the given hardware to erect systems and
12485 utilities for the efficient implementation of resource-limited computations. High-level languages,
12486 erected on a machine-language substrate, hide concerns about the representation of data as collections
12487 of bits and the representation of programs as sequences of primitive instructions. These languages
12488 have means of combination and abstraction, such as procedure definition, that are appropriate to the
12489 larger-scale organization of systems.
12490 Metalinguistic abstraction -- establishing new languages -- plays an important role in all branches of
12491 engineering design. It is particularly important to computer programming, because in programming not
12492 only can we formulate new languages but we can also implement these languages by constructing
12493 evaluators. An evaluator (or interpreter) for a programming language is a procedure that, when
12494 applied to an expression of the language, performs the actions required to evaluate that expression.
12495
It is no exaggeration to regard this as the most fundamental idea in programming:
12497 The evaluator, which determines the meaning of expressions in a programming language, is just
12498 another program.
12499 To appreciate this point is to change our images of ourselves as programmers. We come to see
12500 ourselves as designers of languages, rather than only users of languages designed by others.
12501 In fact, we can regard almost any program as the evaluator for some language. For instance, the
12502 polynomial manipulation system of section 2.5.3 embodies the rules of polynomial arithmetic and
12503 implements them in terms of operations on list-structured data. If we augment this system with
12504 procedures to read and print polynomial expressions, we have the core of a special-purpose language
12505 for dealing with problems in symbolic mathematics. The digital-logic simulator of section 3.3.4 and
12506 the constraint propagator of section 3.3.5 are legitimate languages in their own right, each with its own
12507 primitives, means of combination, and means of abstraction. Seen from this perspective, the
12508 technology for coping with large-scale computer systems merges with the technology for building new
12509 computer languages, and computer science itself becomes no more (and no less) than the discipline of
12510 constructing appropriate descriptive languages.
12511 We now embark on a tour of the technology by which languages are established in terms of other
12512 languages. In this chapter we shall use Lisp as a base, implementing evaluators as Lisp procedures.
12513 Lisp is particularly well suited to this task, because of its ability to represent and manipulate symbolic
12514 expressions. We will take the first step in understanding how languages are implemented by building
12515 an evaluator for Lisp itself. The language implemented by our evaluator will be a subset of the Scheme
12516 dialect of Lisp that we use in this book. Although the evaluator described in this chapter is written for
12517 a particular dialect of Lisp, it contains the essential structure of an evaluator for any
12518 expression-oriented language designed for writing programs for a sequential machine. (In fact, most
12519 language processors contain, deep within them, a little ‘‘Lisp’’ evaluator.) The evaluator has been
12520 simplified for the purposes of illustration and discussion, and some features have been left out that
12521 would be important to include in a production-quality Lisp system. Nevertheless, this simple evaluator
12522 is adequate to execute most of the programs in this book. 2
12523 An important advantage of making the evaluator accessible as a Lisp program is that we can
12524 implement alternative evaluation rules by describing these as modifications to the evaluator program.
12525 One place where we can use this power to good effect is to gain extra control over the ways in which
12526 computational models embody the notion of time, which was so central to the discussion in chapter 3.
12527 There, we mitigated some of the complexities of state and assignment by using streams to decouple the
12528 representation of time in the world from time in the computer. Our stream programs, however, were
12529 sometimes cumbersome, because they were constrained by the applicative-order evaluation of
12530 Scheme. In section 4.2, we’ll change the underlying language to provide for a more elegant approach,
12531 by modifying the evaluator to provide for normal-order evaluation.
12532 Section 4.3 implements a more ambitious linguistic change, whereby expressions have many values,
12533 rather than just a single value. In this language of nondeterministic computing, it is natural to express
12534 processes that generate all possible values for expressions and then search for those values that satisfy
12535 certain constraints. In terms of models of computation and time, this is like having time branch into a
12536 set of ‘‘possible futures’’ and then searching for appropriate time lines. With our nondeterministic
12537 evaluator, keeping track of multiple values and performing searches are handled automatically by the
12538 underlying mechanism of the language.
12539
In section 4.4 we implement a logic-programming language in which knowledge is expressed in terms
12541 of relations, rather than in terms of computations with inputs and outputs. Even though this makes the
12542 language drastically different from Lisp, or indeed from any conventional language, we will see that
12543 the logic-programming evaluator shares the essential structure of the Lisp evaluator.
12544 1 The same idea is pervasive throughout all of engineering. For example, electrical engineers use
12545
12546 many different languages for describing circuits. Two of these are the language of electrical networks
12547 and the language of electrical systems. The network language emphasizes the physical modeling of
12548 devices in terms of discrete electrical elements. The primitive objects of the network language are
12549 primitive electrical components such as resistors, capacitors, inductors, and transistors, which are
12550 characterized in terms of physical variables called voltage and current. When describing circuits in the
12551 network language, the engineer is concerned with the physical characteristics of a design. In contrast,
12552 the primitive objects of the system language are signal-processing modules such as filters and
12553 amplifiers. Only the functional behavior of the modules is relevant, and signals are manipulated
12554 without concern for their physical realization as voltages and currents. The system language is erected
12555 on the network language, in the sense that the elements of signal-processing systems are constructed
12556 from electrical networks. Here, however, the concerns are with the large-scale organization of
12557 electrical devices to solve a given application problem; the physical feasibility of the parts is assumed.
12558 This layered collection of languages is another example of the stratified design technique illustrated by
12559 the picture language of section 2.2.4.
12560 2 The most important features that our evaluator leaves out are mechanisms for handling errors and
12561
12562 supporting debugging. For a more extensive discussion of evaluators, see Friedman, Wand, and
12563 Haynes 1992, which gives an exposition of programming languages that proceeds via a sequence of
12564 evaluators written in Scheme.
12569 4.1 The Metacircular Evaluator
12570 Our evaluator for Lisp will be implemented as a Lisp program. It may seem circular to think about
12571 evaluating Lisp programs using an evaluator that is itself implemented in Lisp. However, evaluation is
12572 a process, so it is appropriate to describe the evaluation process using Lisp, which, after all, is our tool
12573 for describing processes. 3 An evaluator that is written in the same language that it evaluates is said to
12574 be metacircular.
12575 The metacircular evaluator is essentially a Scheme formulation of the environment model of
12576 evaluation described in section 3.2. Recall that the model has two basic parts:
12577 1. To evaluate a combination (a compound expression other than a special form), evaluate the
12578 subexpressions and then apply the value of the operator subexpression to the values of the
12579 operand subexpressions.
12580 2. To apply a compound procedure to a set of arguments, evaluate the body of the procedure in a
12581 new environment. To construct this environment, extend the environment part of the procedure
12582 object by a frame in which the formal parameters of the procedure are bound to the arguments to
12583 which the procedure is applied.
12584 These two rules describe the essence of the evaluation process, a basic cycle in which expressions to
12585 be evaluated in environments are reduced to procedures to be applied to arguments, which in turn are
12586 reduced to new expressions to be evaluated in new environments, and so on, until we get down to
12587 symbols, whose values are looked up in the environment, and to primitive procedures, which are
12588 applied directly (see figure 4.1). 4 This evaluation cycle will be embodied by the interplay between the
12589 two critical procedures in the evaluator, eval and apply, which are described in section 4.1.1 (see
12590 figure 4.1).
12591 The implementation of the evaluator will depend upon procedures that define the syntax of the
12592 expressions to be evaluated. We will use data abstraction to make the evaluator independent of the
12593 representation of the language. For example, rather than committing to a choice that an assignment is
to be represented by a list beginning with the symbol set!, we use an abstract predicate
12595 assignment? to test for an assignment, and we use abstract selectors assignment-variable
12596 and assignment-value to access the parts of an assignment. Implementation of expressions will
12597 be described in detail in section 4.1.2. There are also operations, described in section 4.1.3, that
12598 specify the representation of procedures and environments. For example, make-procedure
12599 constructs compound procedures, lookup-variable-value accesses the values of variables, and
12600 apply-primitive-procedure applies a primitive procedure to a given list of arguments.
12601
12602 4.1.1 The Core of the Evaluator
12603
Figure 4.1: The eval-apply cycle exposes the essence of a computer language.
12606 The evaluation process can be described as the interplay between two procedures: eval and apply.
12607
12608 Eval
12609 Eval takes as arguments an expression and an environment. It classifies the expression and directs its
12610 evaluation. Eval is structured as a case analysis of the syntactic type of the expression to be
12611 evaluated. In order to keep the procedure general, we express the determination of the type of an
12612 expression abstractly, making no commitment to any particular representation for the various types of
12613 expressions. Each type of expression has a predicate that tests for it and an abstract means for selecting
12614 its parts. This abstract syntax makes it easy to see how we can change the syntax of the language by
12615 using the same evaluator, but with a different collection of syntax procedures.
12616
12617 Primitive expressions
12618 For self-evaluating expressions, such as numbers, eval returns the expression itself.
12619 Eval must look up variables in the environment to find their values.
12620
12621 Special forms
12622 For quoted expressions, eval returns the expression that was quoted.
12623 An assignment to (or a definition of) a variable must recursively call eval to compute the new
12624 value to be associated with the variable. The environment must be modified to change (or create)
12625 the binding of the variable.
12626 An if expression requires special processing of its parts, so as to evaluate the consequent if the
12627 predicate is true, and otherwise to evaluate the alternative.
12628 A lambda expression must be transformed into an applicable procedure by packaging together
12629 the parameters and body specified by the lambda expression with the environment of the
12630 evaluation.
12631 A begin expression requires evaluating its sequence of expressions in the order in which they
12632 appear.
12633
12634 \fA case analysis (cond) is transformed into a nest of if expressions and then evaluated.
12635
12636 Combinations
12637 For a procedure application, eval must recursively evaluate the operator part and the operands
12638 of the combination. The resulting procedure and arguments are passed to apply, which handles
12639 the actual procedure application.
12640 Here is the definition of eval:
12641 (define (eval exp env)
12642 (cond ((self-evaluating? exp) exp)
12643 ((variable? exp) (lookup-variable-value exp env))
12644 ((quoted? exp) (text-of-quotation exp))
12645 ((assignment? exp) (eval-assignment exp env))
12646 ((definition? exp) (eval-definition exp env))
12647 ((if? exp) (eval-if exp env))
12648 ((lambda? exp)
12649 (make-procedure (lambda-parameters exp)
12650 (lambda-body exp)
12651 env))
12652 ((begin? exp)
12653 (eval-sequence (begin-actions exp) env))
12654 ((cond? exp) (eval (cond->if exp) env))
12655 ((application? exp)
12656 (apply (eval (operator exp) env)
12657 (list-of-values (operands exp) env)))
12658 (else
12659 (error "Unknown expression type -- EVAL" exp))))
12660 For clarity, eval has been implemented as a case analysis using cond. The disadvantage of this is
12661 that our procedure handles only a few distinguishable types of expressions, and no new ones can be
12662 defined without editing the definition of eval. In most Lisp implementations, dispatching on the type
12663 of an expression is done in a data-directed style. This allows a user to add new types of expressions
12664 that eval can distinguish, without modifying the definition of eval itself. (See exercise 4.3.)
12665
12666 Apply
12667 Apply takes two arguments, a procedure and a list of arguments to which the procedure should be
12668 applied. Apply classifies procedures into two kinds: It calls apply-primitive-procedure to
12669 apply primitives; it applies compound procedures by sequentially evaluating the expressions that make
12670 up the body of the procedure. The environment for the evaluation of the body of a compound
12671 procedure is constructed by extending the base environment carried by the procedure to include a
12672 frame that binds the parameters of the procedure to the arguments to which the procedure is to be
12673 applied. Here is the definition of apply:
12674 (define (apply procedure arguments)
12675 (cond ((primitive-procedure? procedure)
12676 (apply-primitive-procedure procedure arguments))
12677 ((compound-procedure? procedure)
12678 (eval-sequence
12679
(procedure-body procedure)
12681 (extend-environment
12682 (procedure-parameters procedure)
12683 arguments
12684 (procedure-environment procedure))))
12685 (else
12686 (error
12687 "Unknown procedure type -- APPLY" procedure))))
12688
12689 Procedure arguments
12690 When eval processes a procedure application, it uses list-of-values to produce the list of
12691 arguments to which the procedure is to be applied. List-of-values takes as an argument the
12692 operands of the combination. It evaluates each operand and returns a list of the corresponding values: 5
12693 (define (list-of-values exps env)
12694 (if (no-operands? exps)
12695 ’()
12696 (cons (eval (first-operand exps) env)
12697 (list-of-values (rest-operands exps) env))))
12698
12699 Conditionals
12700 Eval-if evaluates the predicate part of an if expression in the given environment. If the result is
12701 true, eval-if evaluates the consequent, otherwise it evaluates the alternative:
12702 (define (eval-if exp env)
12703 (if (true? (eval (if-predicate exp) env))
12704 (eval (if-consequent exp) env)
12705 (eval (if-alternative exp) env)))
12706 The use of true? in eval-if highlights the issue of the connection between an implemented
12707 language and an implementation language. The if-predicate is evaluated in the language being
12708 implemented and thus yields a value in that language. The interpreter predicate true? translates that
12709 value into a value that can be tested by the if in the implementation language: The metacircular
12710 representation of truth might not be the same as that of the underlying Scheme. 6
12711
12712 Sequences
12713 Eval-sequence is used by apply to evaluate the sequence of expressions in a procedure body and
12714 by eval to evaluate the sequence of expressions in a begin expression. It takes as arguments a
12715 sequence of expressions and an environment, and evaluates the expressions in the order in which they
12716 occur. The value returned is the value of the final expression.
12717 (define (eval-sequence exps env)
12718 (cond ((last-exp? exps) (eval (first-exp exps) env))
12719 (else (eval (first-exp exps) env)
12720 (eval-sequence (rest-exps exps) env))))
12721
Assignments and definitions
12723 The following procedure handles assignments to variables. It calls eval to find the value to be
12724 assigned and transmits the variable and the resulting value to set-variable-value! to be
12725 installed in the designated environment.
12726 (define (eval-assignment exp env)
12727 (set-variable-value! (assignment-variable exp)
12728 (eval (assignment-value exp) env)
12729 env)
12730 ’ok)
12731 Definitions of variables are handled in a similar manner. 7
12732 (define (eval-definition exp env)
12733 (define-variable! (definition-variable exp)
12734 (eval (definition-value exp) env)
12735 env)
12736 ’ok)
12737 We have chosen here to return the symbol ok as the value of an assignment or a definition. 8
12738 Exercise 4.1. Notice that we cannot tell whether the metacircular evaluator evaluates operands from
12739 left to right or from right to left. Its evaluation order is inherited from the underlying Lisp: If the
12740 arguments to cons in list-of-values are evaluated from left to right, then list-of-values
12741 will evaluate operands from left to right; and if the arguments to cons are evaluated from right to left,
12742 then list-of-values will evaluate operands from right to left.
12743 Write a version of list-of-values that evaluates operands from left to right regardless of the
12744 order of evaluation in the underlying Lisp. Also write a version of list-of-values that evaluates
12745 operands from right to left.
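A sketch of the left-to-right version (the name is illustrative): binding the first operand's value with let forces it to be computed before the recursive call, whatever the underlying Lisp does with arguments to cons:

(define (list-of-values-lr exps env)
  (if (no-operands? exps)
      '()
      (let ((first-value (eval (first-operand exps) env)))
        (cons first-value
              (list-of-values-lr (rest-operands exps) env)))))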
12746
12747 4.1.2 Representing Expressions
12748 The evaluator is reminiscent of the symbolic differentiation program discussed in section 2.3.2. Both
12749 programs operate on symbolic expressions. In both programs, the result of operating on a compound
12750 expression is determined by operating recursively on the pieces of the expression and combining the
12751 results in a way that depends on the type of the expression. In both programs we used data abstraction
12752 to decouple the general rules of operation from the details of how expressions are represented. In the
12753 differentiation program this meant that the same differentiation procedure could deal with algebraic
12754 expressions in prefix form, in infix form, or in some other form. For the evaluator, this means that the
12755 syntax of the language being evaluated is determined solely by the procedures that classify and extract
12756 pieces of expressions.
12757 Here is the specification of the syntax of our language:
• The only self-evaluating items are numbers and strings:
12759 (define (self-evaluating? exp)
12760 (cond ((number? exp) true)
12761 ((string? exp) true)
12762 (else false)))
12763
• Variables are represented by symbols:
12765 (define (variable? exp) (symbol? exp))
• Quotations have the form (quote <text-of-quotation>): 9
12767 (define (quoted? exp)
12768 (tagged-list? exp ’quote))
12769 (define (text-of-quotation exp) (cadr exp))
12770 Quoted? is defined in terms of the procedure tagged-list?, which identifies lists beginning with
12771 a designated symbol:
12772 (define (tagged-list? exp tag)
12773 (if (pair? exp)
12774 (eq? (car exp) tag)
12775 false))
• Assignments have the form (set! <var> <value>):
12777 (define (assignment? exp)
12778 (tagged-list? exp ’set!))
12779 (define (assignment-variable exp) (cadr exp))
12780 (define (assignment-value exp) (caddr exp))
• Definitions have the form
(define <var> <value>)
or the form
(define (<var> <parameter_1> ... <parameter_n>)
<body>)
The latter form (standard procedure definition) is syntactic sugar for
(define <var>
(lambda (<parameter_1> ... <parameter_n>)
<body>))
12790 The corresponding syntax procedures are the following:
12791 (define (definition? exp)
12792 (tagged-list? exp ’define))
12793 (define (definition-variable exp)
12794 (if (symbol? (cadr exp))
12795 (cadr exp)
12796 (caadr exp)))
12797 (define (definition-value exp)
12798 (if (symbol? (cadr exp))
12799 (caddr exp)
(make-lambda (cdadr exp) ; formal parameters
(cddr exp)))) ; body
12803
• Lambda expressions are lists that begin with the symbol lambda:
12805 (define (lambda? exp) (tagged-list? exp ’lambda))
12806 (define (lambda-parameters exp) (cadr exp))
12807 (define (lambda-body exp) (cddr exp))
12808 We also provide a constructor for lambda expressions, which is used by definition-value,
12809 above:
12810 (define (make-lambda parameters body)
12811 (cons ’lambda (cons parameters body)))
• Conditionals begin with if and have a predicate, a consequent, and an (optional) alternative. If the
12813 expression has no alternative part, we provide false as the alternative. 10
12814 (define (if? exp) (tagged-list? exp ’if))
12815 (define (if-predicate exp) (cadr exp))
12816 (define (if-consequent exp) (caddr exp))
12817 (define (if-alternative exp)
12818 (if (not (null? (cdddr exp)))
12819 (cadddr exp)
12820 ’false))
12821 We also provide a constructor for if expressions, to be used by cond->if to transform cond
12822 expressions into if expressions:
12823 (define (make-if predicate consequent alternative)
12824 (list ’if predicate consequent alternative))
• Begin packages a sequence of expressions into a single expression. We include syntax operations
12826 on begin expressions to extract the actual sequence from the begin expression, as well as selectors
12827 that return the first expression and the rest of the expressions in the sequence. 11
(define (begin? exp) (tagged-list? exp ’begin))
(define (begin-actions exp) (cdr exp))
(define (last-exp? seq) (null? (cdr seq)))
(define (first-exp seq) (car seq))
(define (rest-exps seq) (cdr seq))
12839
12840 We also include a constructor sequence->exp (for use by cond->if) that transforms a sequence
12841 into a single expression, using begin if necessary:
12842 (define (sequence->exp seq)
12843 (cond ((null? seq) seq)
12844 ((last-exp? seq) (first-exp seq))
12845 (else (make-begin seq))))
12846 (define (make-begin seq) (cons ’begin seq))
• A procedure application is any compound expression that is not one of the above expression types.
12848 The car of the expression is the operator, and the cdr is the list of operands:
12849
(define (application? exp) (pair? exp))
(define (operator exp) (car exp))
(define (operands exp) (cdr exp))
(define (no-operands? ops) (null? ops))
(define (first-operand ops) (car ops))
(define (rest-operands ops) (cdr ops))
12863
12864 Derived expressions
12865 Some special forms in our language can be defined in terms of expressions involving other special
12866 forms, rather than being implemented directly. One example is cond, which can be implemented as a
12867 nest of if expressions. For example, we can reduce the problem of evaluating the expression
12868 (cond ((> x 0) x)
12869 ((= x 0) (display ’zero) 0)
12870 (else (- x)))
12871 to the problem of evaluating the following expression involving if and begin expressions:
12872 (if (> x 0)
12873 x
12874 (if (= x 0)
12875 (begin (display ’zero)
12876 0)
12877 (- x)))
12878 Implementing the evaluation of cond in this way simplifies the evaluator because it reduces the
12879 number of special forms for which the evaluation process must be explicitly specified.
12880 We include syntax procedures that extract the parts of a cond expression, and a procedure
12881 cond->if that transforms cond expressions into if expressions. A case analysis begins with cond
12882 and has a list of predicate-action clauses. A clause is an else clause if its predicate is the symbol
12883 else. 12
12884 (define (cond? exp) (tagged-list? exp ’cond))
12885 (define (cond-clauses exp) (cdr exp))
12886 (define (cond-else-clause? clause)
12887 (eq? (cond-predicate clause) ’else))
12888 (define (cond-predicate clause) (car clause))
12889 (define (cond-actions clause) (cdr clause))
12890 (define (cond->if exp)
12891 (expand-clauses (cond-clauses exp)))
12892 (define (expand-clauses clauses)
12893 (if (null? clauses)
’false ; no else clause
12896 (let ((first (car clauses))
12897 (rest (cdr clauses)))
12898 (if (cond-else-clause? first)
12899 (if (null? rest)
12900 (sequence->exp (cond-actions first))
12901 (error "ELSE clause isn’t last -- COND->IF"
12902 clauses))
12903
(make-if (cond-predicate first)
12905 (sequence->exp (cond-actions first))
12906 (expand-clauses rest))))))
12907 Expressions (such as cond) that we choose to implement as syntactic transformations are called
12908 derived expressions. Let expressions are also derived expressions (see exercise 4.6). 13
12909 Exercise 4.2. Louis Reasoner plans to reorder the cond clauses in eval so that the clause for
12910 procedure applications appears before the clause for assignments. He argues that this will make the
12911 interpreter more efficient: Since programs usually contain more applications than assignments,
12912 definitions, and so on, his modified eval will usually check fewer clauses than the original eval
12913 before identifying the type of an expression.
12914 a. What is wrong with Louis’s plan? (Hint: What will Louis’s evaluator do with the expression
12915 (define x 3)?)
12916 b. Louis is upset that his plan didn’t work. He is willing to go to any lengths to make his evaluator
12917 recognize procedure applications before it checks for most other kinds of expressions. Help him by
12918 changing the syntax of the evaluated language so that procedure applications start with call. For
12919 example, instead of (factorial 3) we will now have to write (call factorial 3) and
12920 instead of (+ 1 2) we will have to write (call + 1 2).
12921 Exercise 4.3. Rewrite eval so that the dispatch is done in data-directed style. Compare this with the
12922 data-directed differentiation procedure of exercise 2.73. (You may use the car of a compound
expression as the type of the expression, as is appropriate for the syntax implemented in this section.)
12924 Exercise 4.4. Recall the definitions of the special forms and and or from chapter 1:
12925 and: The expressions are evaluated from left to right. If any expression evaluates to false, false is
12926 returned; any remaining expressions are not evaluated. If all the expressions evaluate to true
12927 values, the value of the last expression is returned. If there are no expressions then true is
12928 returned.
12929 or: The expressions are evaluated from left to right. If any expression evaluates to a true value,
12930 that value is returned; any remaining expressions are not evaluated. If all expressions evaluate to
12931 false, or if there are no expressions, then false is returned.
12932 Install and and or as new special forms for the evaluator by defining appropriate syntax procedures
12933 and evaluation procedures eval-and and eval-or. Alternatively, show how to implement and
12934 and or as derived expressions.
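A sketch of the special-form route for and (or is symmetric; both could instead be derived expressions that expand into nested ifs). The selector names are illustrative, and eval would need a clause such as ((and? exp) (eval-and (and-clauses exp) env)):

(define (and? exp) (tagged-list? exp 'and))
(define (and-clauses exp) (cdr exp))
(define (eval-and clauses env)
  (cond ((null? clauses) true)
        ((last-exp? clauses) (eval (first-exp clauses) env))
        ((true? (eval (first-exp clauses) env))
         (eval-and (rest-exps clauses) env))
        (else false)))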
12935 Exercise 4.5. Scheme allows an additional syntax for cond clauses, (<test> =>
12936 <recipient>). If <test> evaluates to a true value, then <recipient> is evaluated. Its value must be a
12937 procedure of one argument; this procedure is then invoked on the value of the <test>, and the result is
12938 returned as the value of the cond expression. For example
(cond ((assoc 'b '((a 1) (b 2))) => cadr)
      (else false))
12941 returns 2. Modify the handling of cond so that it supports this extended syntax.
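A sketch of one approach (ours): add syntax procedures that recognize the arrow form, and expand such a clause inside expand-clauses. The names cond-arrow-clause? and cond-recipient are our own:

(define (cond-arrow-clause? clause)
  (and (pair? (cdr clause)) (eq? (cadr clause) '=>)))
(define (cond-recipient clause) (caddr clause))

An arrow clause can then expand as

(make-if (cond-predicate first)
         (list (cond-recipient first) (cond-predicate first))
         (expand-clauses rest))

Note that this naive expansion evaluates the predicate twice; a complete solution would bind its value once, for example with a generated let.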
12942
Exercise 4.6. Let expressions are derived expressions, because
(let ((<var_1> <exp_1>) ... (<var_n> <exp_n>))
  <body>)
is equivalent to
((lambda (<var_1> ... <var_n>)
   <body>)
 <exp_1>
 ...
 <exp_n>)
12951 Implement a syntactic transformation let->combination that reduces evaluating let expressions
12952 to evaluating combinations of the type shown above, and add the appropriate clause to eval to handle
12953 let expressions.
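A minimal sketch of the transformation (ours), operating directly on the list structure of the expression:

(define (let->combination exp)
  (let ((bindings (cadr exp))    ; each binding is (<var> <exp>)
        (body (cddr exp)))
    (cons (cons 'lambda (cons (map car bindings) body))
          (map cadr bindings))))

The added eval clause would then be ((let? exp) (eval (let->combination exp) env)), with let? defined via tagged-list? just as for the other special forms.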
12954 Exercise 4.7. Let* is similar to let, except that the bindings of the let variables are performed
12955 sequentially from left to right, and each binding is made in an environment in which all of the
12956 preceding bindings are visible. For example
(let* ((x 3)
       (y (+ x 2))
       (z (+ x y 5)))
  (* x z))
12961 returns 39. Explain how a let* expression can be rewritten as a set of nested let expressions, and
12962 write a procedure let*->nested-lets that performs this transformation. If we have already
12963 implemented let (exercise 4.6) and we want to extend the evaluator to handle let*, is it sufficient
12964 to add a clause to eval whose action is
12965 (eval (let*->nested-lets exp) env)
12966 or must we explicitly expand let* in terms of non-derived expressions?
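One way to sketch the transformation (ours) peels off one binding at a time, leaving a residual let* for the remaining bindings:

(define (let*->nested-lets exp)
  (let ((bindings (cadr exp))
        (body (cddr exp)))
    (if (or (null? bindings) (null? (cdr bindings)))
        (cons 'let (cons bindings body))
        (list 'let
              (list (car bindings))
              (let*->nested-lets
               (cons 'let* (cons (cdr bindings) body)))))))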
12967 Exercise 4.8. ‘‘Named let’’ is a variant of let that has the form
12968 (let <var> <bindings> <body>)
12969 The <bindings> and <body> are just as in ordinary let, except that <var> is bound within <body> to
12970 a procedure whose body is <body> and whose parameters are the variables in the <bindings>. Thus,
12971 one can repeatedly execute the <body> by invoking the procedure named <var>. For example, the
12972 iterative Fibonacci procedure (section 1.2.2) can be rewritten using named let as follows:
(define (fib n)
  (let fib-iter ((a 1)
                 (b 0)
                 (count n))
    (if (= count 0)
        b
        (fib-iter (+ a b) a (- count 1)))))
12980
Modify let->combination of exercise 4.6 to also support named let.
12982 Exercise 4.9. Many languages support a variety of iteration constructs, such as do, for, while, and
12983 until. In Scheme, iterative processes can be expressed in terms of ordinary procedure calls, so
12984 special iteration constructs provide no essential gain in computational power. On the other hand, such
12985 constructs are often convenient. Design some iteration constructs, give examples of their use, and
12986 show how to implement them as derived expressions.
12987 Exercise 4.10. By using data abstraction, we were able to write an eval procedure that is
12988 independent of the particular syntax of the language to be evaluated. To illustrate this, design and
12989 implement a new syntax for Scheme by modifying the procedures in this section, without changing
12990 eval or apply.
12991
12992 4.1.3 Evaluator Data Structures
12993 In addition to defining the external syntax of expressions, the evaluator implementation must also
12994 define the data structures that the evaluator manipulates internally, as part of the execution of a
12995 program, such as the representation of procedures and environments and the representation of true and
12996 false.
12997
12998 Testing of predicates
12999 For conditionals, we accept anything to be true that is not the explicit false object.
(define (true? x)
  (not (eq? x false)))
(define (false? x)
  (eq? x false))
13004
13005 Representing procedures
13006 To handle primitives, we assume that we have available the following procedures:
13007 (apply-primitive-procedure <proc> <args>)
13008 applies the given primitive procedure to the argument values in the list <args> and returns the
13009 result of the application.
13010 (primitive-procedure? <proc>)
13011 tests whether <proc> is a primitive procedure.
13012 These mechanisms for handling primitives are further described in section 4.1.4.
13013 Compound procedures are constructed from parameters, procedure bodies, and environments using the
13014 constructor make-procedure:
(define (make-procedure parameters body env)
  (list 'procedure parameters body env))
(define (compound-procedure? p)
  (tagged-list? p 'procedure))
(define (procedure-parameters p) (cadr p))
(define (procedure-body p) (caddr p))
(define (procedure-environment p) (cadddr p))
13022
Operations on Environments
13024 The evaluator needs operations for manipulating environments. As explained in section 3.2, an
13025 environment is a sequence of frames, where each frame is a table of bindings that associate variables
13026 with their corresponding values. We use the following operations for manipulating environments:
13027 (lookup-variable-value <var> <env>)
13028 returns the value that is bound to the symbol <var> in the environment <env>, or signals an error
13029 if the variable is unbound.
13030 (extend-environment <variables> <values> <base-env>)
13031 returns a new environment, consisting of a new frame in which the symbols in the list
13032 <variables> are bound to the corresponding elements in the list <values>, where the enclosing
13033 environment is the environment <base-env>.
13034 (define-variable! <var> <value> <env>)
13035 adds to the first frame in the environment <env> a new binding that associates the variable <var>
13036 with the value <value>.
13037 (set-variable-value! <var> <value> <env>)
13038 changes the binding of the variable <var> in the environment <env> so that the variable is now
13039 bound to the value <value>, or signals an error if the variable is unbound.
13040 To implement these operations we represent an environment as a list of frames. The enclosing
13041 environment of an environment is the cdr of the list. The empty environment is simply the empty list.
(define (enclosing-environment env) (cdr env))
(define (first-frame env) (car env))
(define the-empty-environment '())
13045 Each frame of an environment is represented as a pair of lists: a list of the variables bound in that
13046 frame and a list of the associated values. 14
(define (make-frame variables values)
  (cons variables values))
(define (frame-variables frame) (car frame))
(define (frame-values frame) (cdr frame))
(define (add-binding-to-frame! var val frame)
  (set-car! frame (cons var (car frame)))
  (set-cdr! frame (cons val (cdr frame))))
13054 To extend an environment by a new frame that associates variables with values, we make a frame
13055 consisting of the list of variables and the list of values, and we adjoin this to the environment. We
13056 signal an error if the number of variables does not match the number of values.
(define (extend-environment vars vals base-env)
  (if (= (length vars) (length vals))
      (cons (make-frame vars vals) base-env)
      (if (< (length vars) (length vals))
          (error "Too many arguments supplied" vars vals)
          (error "Too few arguments supplied" vars vals))))
13063
To look up a variable in an environment, we scan the list of variables in the first frame. If we find the
13065 desired variable, we return the corresponding element in the list of values. If we do not find the
13066 variable in the current frame, we search the enclosing environment, and so on. If we reach the empty
13067 environment, we signal an ‘‘unbound variable’’ error.
(define (lookup-variable-value var env)
  (define (env-loop env)
    (define (scan vars vals)
      (cond ((null? vars)
             (env-loop (enclosing-environment env)))
            ((eq? var (car vars))
             (car vals))
            (else (scan (cdr vars) (cdr vals)))))
    (if (eq? env the-empty-environment)
        (error "Unbound variable" var)
        (let ((frame (first-frame env)))
          (scan (frame-variables frame)
                (frame-values frame)))))
  (env-loop env))
13082 To set a variable to a new value in a specified environment, we scan for the variable, just as in
13083 lookup-variable-value, and change the corresponding value when we find it.
(define (set-variable-value! var val env)
  (define (env-loop env)
    (define (scan vars vals)
      (cond ((null? vars)
             (env-loop (enclosing-environment env)))
            ((eq? var (car vars))
             (set-car! vals val))
            (else (scan (cdr vars) (cdr vals)))))
    (if (eq? env the-empty-environment)
        (error "Unbound variable -- SET!" var)
        (let ((frame (first-frame env)))
          (scan (frame-variables frame)
                (frame-values frame)))))
  (env-loop env))
13098 To define a variable, we search the first frame for a binding for the variable, and change the binding if
13099 it exists (just as in set-variable-value!). If no such binding exists, we adjoin one to the first
13100 frame.
(define (define-variable! var val env)
  (let ((frame (first-frame env)))
    (define (scan vars vals)
      (cond ((null? vars)
             (add-binding-to-frame! var val frame))
            ((eq? var (car vars))
             (set-car! vals val))
            (else (scan (cdr vars) (cdr vals)))))
    (scan (frame-variables frame)
          (frame-values frame))))
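A short interaction (our own illustration) shows how these operations compose; the commented values assume the representation just defined:

(define e
  (extend-environment '(x y) '(1 2) the-empty-environment))
(lookup-variable-value 'y e)          ; 2
(define-variable! 'z 3 e)             ; adds z to the first frame
(set-variable-value! 'x 10 e)
(lookup-variable-value 'x e)          ; 10
(lookup-variable-value 'w e)          ; error: Unbound variable w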
13112 The method described here is only one of many plausible ways to represent environments. Since we
13113 used data abstraction to isolate the rest of the evaluator from the detailed choice of representation, we
13114 could change the environment representation if we wanted to. (See exercise 4.11.) In a
13115 production-quality Lisp system, the speed of the evaluator’s environment operations -- especially that
13116 of variable lookup -- has a major impact on the performance of the system. The representation
13117 described here, although conceptually simple, is not efficient and would not ordinarily be used in a
13118 production system. 15
13119 Exercise 4.11. Instead of representing a frame as a pair of lists, we can represent a frame as a list of
13120 bindings, where each binding is a name-value pair. Rewrite the environment operations to use this
13121 alternative representation.
13122 Exercise 4.12. The procedures set-variable-value!, define-variable!, and
13123 lookup-variable-value can be expressed in terms of more abstract procedures for traversing
13124 the environment structure. Define abstractions that capture the common patterns and redefine the three
13125 procedures in terms of these abstractions.
13126 Exercise 4.13. Scheme allows us to create new bindings for variables by means of define, but
13127 provides no way to get rid of bindings. Implement for the evaluator a special form make-unbound!
13128 that removes the binding of a given symbol from the environment in which the make-unbound!
13129 expression is evaluated. This problem is not completely specified. For example, should we remove
13130 only the binding in the first frame of the environment? Complete the specification and justify any
13131 choices you make.
13132
13133 4.1.4 Running the Evaluator as a Program
13134 Given the evaluator, we have in our hands a description (expressed in Lisp) of the process by which
13135 Lisp expressions are evaluated. One advantage of expressing the evaluator as a program is that we can
13136 run the program. This gives us, running within Lisp, a working model of how Lisp itself evaluates
13137 expressions. This can serve as a framework for experimenting with evaluation rules, as we shall do
13138 later in this chapter.
13139 Our evaluator program reduces expressions ultimately to the application of primitive procedures.
13140 Therefore, all that we need to run the evaluator is to create a mechanism that calls on the underlying
13141 Lisp system to model the application of primitive procedures.
13142 There must be a binding for each primitive procedure name, so that when eval evaluates the operator
13143 of an application of a primitive, it will find an object to pass to apply. We thus set up a global
13144 environment that associates unique objects with the names of the primitive procedures that can appear
13145 in the expressions we will be evaluating. The global environment also includes bindings for the
13146 symbols true and false, so that they can be used as variables in expressions to be evaluated.
(define (setup-environment)
  (let ((initial-env
         (extend-environment (primitive-procedure-names)
                             (primitive-procedure-objects)
                             the-empty-environment)))
    (define-variable! 'true true initial-env)
    (define-variable! 'false false initial-env)
    initial-env))
(define the-global-environment (setup-environment))
13157 It does not matter how we represent the primitive procedure objects, so long as apply can identify
13158 and apply them by using the procedures primitive-procedure? and
13159 apply-primitive-procedure. We have chosen to represent a primitive procedure as a list
13160 beginning with the symbol primitive and containing a procedure in the underlying Lisp that
13161 implements that primitive.
(define (primitive-procedure? proc)
  (tagged-list? proc 'primitive))
(define (primitive-implementation proc) (cadr proc))
13165 Setup-environment will get the primitive names and implementation procedures from a list: 16
(define primitive-procedures
  (list (list 'car car)
        (list 'cdr cdr)
        (list 'cons cons)
        (list 'null? null?)
        <more primitives>
        ))
(define (primitive-procedure-names)
  (map car
       primitive-procedures))
(define (primitive-procedure-objects)
  (map (lambda (proc) (list 'primitive (cadr proc)))
       primitive-procedures))
13179 To apply a primitive procedure, we simply apply the implementation procedure to the arguments,
13180 using the underlying Lisp system: 17
(define (apply-primitive-procedure proc args)
  (apply-in-underlying-scheme
   (primitive-implementation proc) args))
13184 For convenience in running the metacircular evaluator, we provide a driver loop that models the
13185 read-eval-print loop of the underlying Lisp system. It prints a prompt, reads an input expression,
13186 evaluates this expression in the global environment, and prints the result. We precede each printed
13187 result by an output prompt so as to distinguish the value of the expression from other output that may
13188 be printed. 18
(define input-prompt ";;; M-Eval input:")
(define output-prompt ";;; M-Eval value:")
(define (driver-loop)
  (prompt-for-input input-prompt)
  (let ((input (read)))
    (let ((output (eval input the-global-environment)))
      (announce-output output-prompt)
      (user-print output)))
  (driver-loop))
(define (prompt-for-input string)
  (newline) (newline) (display string) (newline))
(define (announce-output string)
  (newline) (display string) (newline))
13203 We use a special printing procedure, user-print, to avoid printing the environment part of a
13204 compound procedure, which may be a very long list (or may even contain cycles).
(define (user-print object)
  (if (compound-procedure? object)
      (display (list 'compound-procedure
                     (procedure-parameters object)
                     (procedure-body object)
                     '<procedure-env>))
      (display object)))
13212 Now all we need to do to run the evaluator is to initialize the global environment and start the driver
13213 loop. Here is a sample interaction:
(define the-global-environment (setup-environment))
(driver-loop)
;;; M-Eval input:
(define (append x y)
  (if (null? x)
      y
      (cons (car x)
            (append (cdr x) y))))
;;; M-Eval value:
ok
;;; M-Eval input:
(append '(a b c) '(d e f))
;;; M-Eval value:
(a b c d e f)
13228 Exercise 4.14. Eva Lu Ator and Louis Reasoner are each experimenting with the metacircular
13229 evaluator. Eva types in the definition of map, and runs some test programs that use it. They work fine.
13230 Louis, in contrast, has installed the system version of map as a primitive for the metacircular
13231 evaluator. When he tries it, things go terribly wrong. Explain why Louis’s map fails even though
13232 Eva’s works.
13233
13234 4.1.5 Data as Programs
13235 In thinking about a Lisp program that evaluates Lisp expressions, an analogy might be helpful. One
13236 operational view of the meaning of a program is that a program is a description of an abstract (perhaps
13237 infinitely large) machine. For example, consider the familiar program to compute factorials:
(define (factorial n)
  (if (= n 1)
      1
      (* (factorial (- n 1)) n)))
13242
We may regard this program as the description of a machine containing parts that decrement, multiply,
13244 and test for equality, together with a two-position switch and another factorial machine. (The factorial
13245 machine is infinite because it contains another factorial machine within it.) Figure 4.2 is a flow
13246 diagram for the factorial machine, showing how the parts are wired together.
13247
Figure 4.2: The factorial program, viewed as an abstract machine.
13250 In a similar way, we can regard the evaluator as a very special machine that takes as input a
13251 description of a machine. Given this input, the evaluator configures itself to emulate the machine
13252 described. For example, if we feed our evaluator the definition of factorial, as shown in
13253 figure 4.3, the evaluator will be able to compute factorials.
13254
Figure 4.3: The evaluator emulating a factorial machine.
13257 From this perspective, our evaluator is seen to be a universal machine. It mimics other machines when
13258 these are described as Lisp programs. 19 This is striking. Try to imagine an analogous evaluator for
13259 electrical circuits. This would be a circuit that takes as input a signal encoding the plans for some other
13260 circuit, such as a filter. Given this input, the circuit evaluator would then behave like a filter with the
13261 same description. Such a universal electrical circuit is almost unimaginably complex. It is remarkable
13262 that the program evaluator is a rather simple program. 20
13263
Another striking aspect of the evaluator is that it acts as a bridge between the data objects that are
13265 manipulated by our programming language and the programming language itself. Imagine that the
13266 evaluator program (implemented in Lisp) is running, and that a user is typing expressions to the
13267 evaluator and observing the results. From the perspective of the user, an input expression such as (*
13268 x x) is an expression in the programming language, which the evaluator should execute. From the
13269 perspective of the evaluator, however, the expression is simply a list (in this case, a list of three
13270 symbols: *, x, and x) that is to be manipulated according to a well-defined set of rules.
13271 That the user’s programs are the evaluator’s data need not be a source of confusion. In fact, it is
13272 sometimes convenient to ignore this distinction, and to give the user the ability to explicitly evaluate a
13273 data object as a Lisp expression, by making eval available for use in programs. Many Lisp dialects
13274 provide a primitive eval procedure that takes as arguments an expression and an environment and
13275 evaluates the expression relative to the environment. 21 Thus,
(eval '(* 5 5) user-initial-environment)
13277 and
(eval (cons '* (list 5 5)) user-initial-environment)
13279 will both return 25. 22
13280 Exercise 4.15. Given a one-argument procedure p and an object a, p is said to ‘‘halt’’ on a if
13281 evaluating the expression (p a) returns a value (as opposed to terminating with an error message or
13282 running forever). Show that it is impossible to write a procedure halts? that correctly determines
13283 whether p halts on a for any procedure p and object a. Use the following reasoning: If you had such a
13284 procedure halts?, you could implement the following program:
(define (run-forever) (run-forever))
(define (try p)
  (if (halts? p p)
      (run-forever)
      'halted))
13290 Now consider evaluating the expression (try try) and show that any possible outcome (either
13291 halting or running forever) violates the intended behavior of halts?. 23
13292
13293 4.1.6 Internal Definitions
13294 Our environment model of evaluation and our metacircular evaluator execute definitions in sequence,
13295 extending the environment frame one definition at a time. This is particularly convenient for
13296 interactive program development, in which the programmer needs to freely mix the application of
13297 procedures with the definition of new procedures. However, if we think carefully about the internal
13298 definitions used to implement block structure (introduced in section 1.1.8), we will find that
13299 name-by-name extension of the environment may not be the best way to define local variables.
13300 Consider a procedure with internal definitions, such as
(define (f x)
  (define (even? n)
    (if (= n 0)
        true
        (odd? (- n 1))))
  (define (odd? n)
    (if (= n 0)
        false
        (even? (- n 1))))
  <rest of body of f>)
13312 Our intention here is that the name odd? in the body of the procedure even? should refer to the
13313 procedure odd? that is defined after even?. The scope of the name odd? is the entire body of f, not
13314 just the portion of the body of f starting at the point where the define for odd? occurs. Indeed,
13315 when we consider that odd? is itself defined in terms of even? -- so that even? and odd? are
13316 mutually recursive procedures -- we see that the only satisfactory interpretation of the two defines is
13317 to regard them as if the names even? and odd? were being added to the environment
13318 simultaneously. More generally, in block structure, the scope of a local name is the entire procedure
13319 body in which the define is evaluated.
13320 As it happens, our interpreter will evaluate calls to f correctly, but for an ‘‘accidental’’ reason: Since
13321 the definitions of the internal procedures come first, no calls to these procedures will be evaluated until
13322 all of them have been defined. Hence, odd? will have been defined by the time even? is executed. In
13323 fact, our sequential evaluation mechanism will give the same result as a mechanism that directly
13324 implements simultaneous definition for any procedure in which the internal definitions come first in a
13325 body and evaluation of the value expressions for the defined variables doesn’t actually use any of the
13326 defined variables. (For an example of a procedure that doesn’t obey these restrictions, so that
13327 sequential definition isn’t equivalent to simultaneous definition, see exercise 4.19.) 24
13328 There is, however, a simple way to treat definitions so that internally defined names have truly
13329 simultaneous scope -- just create all local variables that will be in the current environment before
13330 evaluating any of the value expressions. One way to do this is by a syntax transformation on lambda
13331 expressions. Before evaluating the body of a lambda expression, we ‘‘scan out’’ and eliminate all the
13332 internal definitions in the body. The internally defined variables will be created with a let and then
13333 set to their values by assignment. For example, the procedure
(lambda <vars>
  (define u <e1>)
  (define v <e2>)
  <e3>)
would be transformed into
(lambda <vars>
  (let ((u '*unassigned*)
        (v '*unassigned*))
    (set! u <e1>)
    (set! v <e2>)
    <e3>))
13345 where *unassigned* is a special symbol that causes looking up a variable to signal an error if an
13346 attempt is made to use the value of the not-yet-assigned variable.
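Concretely (our own instance of the transformation), a procedure such as

(define (f x)
  (define a 1)
  (define b (+ a x))
  (+ a b))

behaves as if it had been written

(define (f x)
  (let ((a '*unassigned*)
        (b '*unassigned*))
    (set! a 1)
    (set! b (+ a x))
    (+ a b)))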
13347
An alternative strategy for scanning out internal definitions is shown in exercise 4.18. Unlike the
13349 transformation shown above, this enforces the restriction that the defined variables’ values can be
13350 evaluated without using any of the variables’ values. 25
13351 Exercise 4.16. In this exercise we implement the method just described for interpreting internal
13352 definitions. We assume that the evaluator supports let (see exercise 4.6).
13353 a. Change lookup-variable-value (section 4.1.3) to signal an error if the value it finds is the
13354 symbol *unassigned*.
13355 b. Write a procedure scan-out-defines that takes a procedure body and returns an equivalent
13356 one that has no internal definitions, by making the transformation described above.
13357 c. Install scan-out-defines in the interpreter, either in make-procedure or in
13358 procedure-body (see section 4.1.3). Which place is better? Why?
13359 Exercise 4.17. Draw diagrams of the environment in effect when evaluating the expression <e3> in
13360 the procedure in the text, comparing how this will be structured when definitions are interpreted
13361 sequentially with how it will be structured if definitions are scanned out as described. Why is there an
13362 extra frame in the transformed program? Explain why this difference in environment structure can
13363 never make a difference in the behavior of a correct program. Design a way to make the interpreter
13364 implement the ‘‘simultaneous’’ scope rule for internal definitions without constructing the extra frame.
13365 Exercise 4.18. Consider an alternative strategy for scanning out definitions that translates the example
13366 in the text to
(lambda <vars>
  (let ((u '*unassigned*)
        (v '*unassigned*))
    (let ((a <e1>)
          (b <e2>))
      (set! u a)
      (set! v b))
    <e3>))
13375 Here a and b are meant to represent new variable names, created by the interpreter, that do not appear
13376 in the user’s program. Consider the solve procedure from section 3.5.4:
(define (solve f y0 dt)
  (define y (integral (delay dy) y0 dt))
  (define dy (stream-map f y))
  y)
13381 Will this procedure work if internal definitions are scanned out as shown in this exercise? What if they
13382 are scanned out as shown in the text? Explain.
13383 Exercise 4.19. Ben Bitdiddle, Alyssa P. Hacker, and Eva Lu Ator are arguing about the desired result
13384 of evaluating the expression
(let ((a 1))
  (define (f x)
    (define b (+ a x))
    (define a 5)
    (+ a b))
  (f 10))
13392 Ben asserts that the result should be obtained using the sequential rule for define: b is defined to be
13393 11, then a is defined to be 5, so the result is 16. Alyssa objects that mutual recursion requires the
13394 simultaneous scope rule for internal procedure definitions, and that it is unreasonable to treat
13395 procedure names differently from other names. Thus, she argues for the mechanism implemented in
13396 exercise 4.16. This would lead to a being unassigned at the time that the value for b is to be computed.
13397 Hence, in Alyssa’s view the procedure should produce an error. Eva has a third opinion. She says that
13398 if the definitions of a and b are truly meant to be simultaneous, then the value 5 for a should be used
13399 in evaluating b. Hence, in Eva’s view a should be 5, b should be 15, and the result should be 20.
13400 Which (if any) of these viewpoints do you support? Can you devise a way to implement internal
13401 definitions so that they behave as Eva prefers? 26
13402 Exercise 4.20. Because internal definitions look sequential but are actually simultaneous, some
13403 people prefer to avoid them entirely, and use the special form letrec instead. Letrec looks like
13404 let, so it is not surprising that the variables it binds are bound simultaneously and have the same
13405 scope as each other. The sample procedure f above can be written without internal definitions, but
13406 with exactly the same meaning, as
(define (f x)
  (letrec ((even?
            (lambda (n)
              (if (= n 0)
                  true
                  (odd? (- n 1)))))
           (odd?
            (lambda (n)
              (if (= n 0)
                  false
                  (even? (- n 1))))))
    <rest of body of f>))
Letrec expressions, which have the form
(letrec ((<var_1> <exp_1>) ... (<var_n> <exp_n>))
  <body>)
are a variation on let in which the expressions <exp_k> that provide the initial values for the variables
<var_k> are evaluated in an environment that includes all the letrec bindings. This permits recursion
in the bindings, such as the mutual recursion of even? and odd? in the example above, or the
evaluation of 10 factorial with
(letrec ((fact
          (lambda (n)
            (if (= n 1)
                1
                (* n (fact (- n 1)))))))
  (fact 10))
13432
a. Implement letrec as a derived expression, by transforming a letrec expression into a let
13434 expression as shown in the text above or in exercise 4.18. That is, the letrec variables should be
13435 created with a let and then be assigned their values with set!.
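A minimal sketch of such a transformation for part a (ours, written with raw list operations rather than dedicated syntax procedures):

(define (letrec->let exp)
  (let ((bindings (cadr exp))
        (body (cddr exp)))
    (cons 'let
          (cons (map (lambda (b) (list (car b) ''*unassigned*))
                     bindings)
                (append (map (lambda (b)
                               (list 'set! (car b) (cadr b)))
                             bindings)
                        body)))))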
13436 b. Louis Reasoner is confused by all this fuss about internal definitions. The way he sees it, if you
13437 don’t like to use define inside a procedure, you can just use let. Illustrate what is loose about his
13438 reasoning by drawing an environment diagram that shows the environment in which the <rest of body
13439 of f> is evaluated during evaluation of the expression (f 5), with f defined as in this exercise. Draw
13440 an environment diagram for the same evaluation, but with let in place of letrec in the definition
13441 of f.
13442 Exercise 4.21. Amazingly, Louis’s intuition in exercise 4.20 is correct. It is indeed possible to specify
13443 recursive procedures without using letrec (or even define), although the method for
13444 accomplishing this is much more subtle than Louis imagined. The following expression computes 10
13445 factorial by applying a recursive factorial procedure: 27
((lambda (n)
   ((lambda (fact)
      (fact fact n))
    (lambda (ft k)
      (if (= k 1)
          1
          (* k (ft ft (- k 1)))))))
 10)
13454 a. Check (by evaluating the expression) that this really does compute factorials. Devise an analogous
13455 expression for computing Fibonacci numbers.
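One analogous expression for the Fibonacci numbers (ours; other arrangements work equally well) passes the procedure to itself in the same way:

((lambda (n)
   ((lambda (fib)
      (fib fib n))
    (lambda (f k)
      (if (< k 2)
          k
          (+ (f f (- k 1)) (f f (- k 2)))))))
 10)

which evaluates to 55.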
13456 b. Consider the following procedure, which includes mutually recursive internal definitions:
(define (f x)
  (define (even? n)
    (if (= n 0)
        true
        (odd? (- n 1))))
  (define (odd? n)
    (if (= n 0)
        false
        (even? (- n 1))))
  (even? x))
13467 Fill in the missing expressions to complete an alternative definition of f, which uses neither internal
13468 definitions nor letrec:
(define (f x)
  ((lambda (even? odd?)
     (even? even? odd? x))
   (lambda (ev? od? n)
     (if (= n 0) true (od? <??> <??> <??>)))
   (lambda (ev? od? n)
     (if (= n 0) false (ev? <??> <??> <??>)))))
13476
4.1.7 Separating Syntactic Analysis from Execution
13478 The evaluator implemented above is simple, but it is very inefficient, because the syntactic analysis of
13479 expressions is interleaved with their execution. Thus if a program is executed many times, its syntax is
13480 analyzed many times. Consider, for example, evaluating (factorial 4) using the following
13481 definition of factorial:
(define (factorial n)
  (if (= n 1)
      1
      (* (factorial (- n 1)) n)))
13486 Each time factorial is called, the evaluator must determine that the body is an if expression and
13487 extract the predicate. Only then can it evaluate the predicate and dispatch on its value. Each time it
13488 evaluates the expression (* (factorial (- n 1)) n), or the subexpressions (factorial
13489 (- n 1)) and (- n 1), the evaluator must perform the case analysis in eval to determine that
13490 the expression is an application, and must extract its operator and operands. This analysis is expensive.
13491 Performing it repeatedly is wasteful.
13492 We can transform the evaluator to be significantly more efficient by arranging things so that syntactic
13493 analysis is performed only once. 28 We split eval, which takes an expression and an environment,
13494 into two parts. The procedure analyze takes only the expression. It performs the syntactic analysis
13495 and returns a new procedure, the execution procedure, that encapsulates the work to be done in
13496 executing the analyzed expression. The execution procedure takes an environment as its argument and
13497 completes the evaluation. This saves work because analyze will be called only once on an
13498 expression, while the execution procedure may be called many times.
13499 With the separation into analysis and execution, eval now becomes
(define (eval exp env)
  ((analyze exp) env))
13502 The result of calling analyze is the execution procedure to be applied to the environment. The
13503 analyze procedure is the same case analysis as performed by the original eval of section 4.1.1,
13504 except that the procedures to which we dispatch perform only analysis, not full evaluation:
(define (analyze exp)
  (cond ((self-evaluating? exp)
         (analyze-self-evaluating exp))
        ((quoted? exp) (analyze-quoted exp))
        ((variable? exp) (analyze-variable exp))
        ((assignment? exp) (analyze-assignment exp))
        ((definition? exp) (analyze-definition exp))
        ((if? exp) (analyze-if exp))
        ((lambda? exp) (analyze-lambda exp))
        ((begin? exp) (analyze-sequence (begin-actions exp)))
        ((cond? exp) (analyze (cond->if exp)))
        ((application? exp) (analyze-application exp))
        (else
         (error "Unknown expression type -- ANALYZE" exp))))
13519
Here is the simplest syntactic analysis procedure, which handles self-evaluating expressions. It returns
13521 an execution procedure that ignores its environment argument and just returns the expression:
(define (analyze-self-evaluating exp)
  (lambda (env) exp))
13524 For a quoted expression, we can gain a little efficiency by extracting the text of the quotation only
13525 once, in the analysis phase, rather than in the execution phase.
(define (analyze-quoted exp)
  (let ((qval (text-of-quotation exp)))
    (lambda (env) qval)))
13529 Looking up a variable value must still be done in the execution phase, since this depends upon
13530 knowing the environment. 29
(define (analyze-variable exp)
  (lambda (env) (lookup-variable-value exp env)))
13533 Analyze-assignment also must defer actually setting the variable until the execution, when the
13534 environment has been supplied. However, the fact that the assignment-value expression can be
13535 analyzed (recursively) during analysis is a major gain in efficiency, because the
13536 assignment-value expression will now be analyzed only once. The same holds true for
13537 definitions.
(define (analyze-assignment exp)
  (let ((var (assignment-variable exp))
        (vproc (analyze (assignment-value exp))))
    (lambda (env)
      (set-variable-value! var (vproc env) env)
      'ok)))
(define (analyze-definition exp)
  (let ((var (definition-variable exp))
        (vproc (analyze (definition-value exp))))
    (lambda (env)
      (define-variable! var (vproc env) env)
      'ok)))
13550 For if expressions, we extract and analyze the predicate, consequent, and alternative at analysis time.
(define (analyze-if exp)
  (let ((pproc (analyze (if-predicate exp)))
        (cproc (analyze (if-consequent exp)))
        (aproc (analyze (if-alternative exp))))
    (lambda (env)
      (if (true? (pproc env))
          (cproc env)
          (aproc env)))))
13559 Analyzing a lambda expression also achieves a major gain in efficiency: We analyze the lambda
13560 body only once, even though procedures resulting from evaluation of the lambda may be applied
13561 many times.
13562
(define (analyze-lambda exp)
  (let ((vars (lambda-parameters exp))
        (bproc (analyze-sequence (lambda-body exp))))
    (lambda (env) (make-procedure vars bproc env))))
13567 Analysis of a sequence of expressions (as in a begin or the body of a lambda expression) is more
13568 involved. 30 Each expression in the sequence is analyzed, yielding an execution procedure. These
13569 execution procedures are combined to produce an execution procedure that takes an environment as
13570 argument and sequentially calls each individual execution procedure with the environment as
13571 argument.
(define (analyze-sequence exps)
  (define (sequentially proc1 proc2)
    (lambda (env) (proc1 env) (proc2 env)))
  (define (loop first-proc rest-procs)
    (if (null? rest-procs)
        first-proc
        (loop (sequentially first-proc (car rest-procs))
              (cdr rest-procs))))
  (let ((procs (map analyze exps)))
    (if (null? procs)
        (error "Empty sequence -- ANALYZE"))
    (loop (car procs) (cdr procs))))
13584 To analyze an application, we analyze the operator and operands and construct an execution procedure
13585 that calls the operator execution procedure (to obtain the actual procedure to be applied) and the
13586 operand execution procedures (to obtain the actual arguments). We then pass these to
13587 execute-application, which is the analog of apply in section 4.1.1.
13588 Execute-application differs from apply in that the procedure body for a compound
13589 procedure has already been analyzed, so there is no need to do further analysis. Instead, we just call
13590 the execution procedure for the body on the extended environment.
(define (analyze-application exp)
  (let ((fproc (analyze (operator exp)))
        (aprocs (map analyze (operands exp))))
    (lambda (env)
      (execute-application (fproc env)
                           (map (lambda (aproc) (aproc env))
                                aprocs)))))
(define (execute-application proc args)
  (cond ((primitive-procedure? proc)
         (apply-primitive-procedure proc args))
        ((compound-procedure? proc)
         ((procedure-body proc)
          (extend-environment (procedure-parameters proc)
                              args
                              (procedure-environment proc))))
        (else
         (error
          "Unknown procedure type -- EXECUTE-APPLICATION"
          proc))))
13610
Our new evaluator uses the same data structures, syntax procedures, and run-time support procedures
13612 as in sections 4.1.2, 4.1.3, and 4.1.4.
13613 Exercise 4.22. Extend the evaluator in this section to support the special form let. (See
13614 exercise 4.6.)
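Since let is a derived expression, one plausible approach (assuming the let? and let->combination procedures of exercise 4.6) is a single added dispatch clause in analyze, with no new execution procedure at all:

((let? exp) (analyze (let->combination exp)))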
13615 Exercise 4.23. Alyssa P. Hacker doesn’t understand why analyze-sequence needs to be so
13616 complicated. All the other analysis procedures are straightforward transformations of the
13617 corresponding evaluation procedures (or eval clauses) in section 4.1.1. She expected
13618 analyze-sequence to look like this:
(define (analyze-sequence exps)
  (define (execute-sequence procs env)
    (cond ((null? (cdr procs)) ((car procs) env))
          (else ((car procs) env)
                (execute-sequence (cdr procs) env))))
  (let ((procs (map analyze exps)))
    (if (null? procs)
        (error "Empty sequence -- ANALYZE"))
    (lambda (env) (execute-sequence procs env))))
13628 Eva Lu Ator explains to Alyssa that the version in the text does more of the work of evaluating a
13629 sequence at analysis time. Alyssa’s sequence-execution procedure, rather than having the calls to the
13630 individual execution procedures built in, loops through the procedures in order to call them: In effect,
13631 although the individual expressions in the sequence have been analyzed, the sequence itself has not
13632 been.
13633 Compare the two versions of analyze-sequence. For example, consider the common case
13634 (typical of procedure bodies) where the sequence has just one expression. What work will the
13635 execution procedure produced by Alyssa’s program do? What about the execution procedure produced
13636 by the program in the text above? How do the two versions compare for a sequence with two
13637 expressions?
13638 Exercise 4.24. Design and carry out some experiments to compare the speed of the original
13639 metacircular evaluator with the version in this section. Use your results to estimate the fraction of time
13640 that is spent in analysis versus execution for various procedures.
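A possible harness (ours) assumes the runtime primitive of section 1.2.6 and a factorial already defined in the-global-environment; timing analyze and the resulting execution procedure separately isolates the two costs:

(define (time-thunk thunk)
  (let ((start (runtime)))
    (thunk)
    (- (runtime) start)))
(define test-exp '(factorial 20))
(time-thunk (lambda () (analyze test-exp)))     ; analysis cost
(let ((proc (analyze test-exp)))
  (time-thunk
   (lambda () (proc the-global-environment))))  ; execution cost

Repeating each timing in a loop and averaging gives more stable estimates.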
13641 3 Even so, there will remain important aspects of the evaluation process that are not elucidated by our
13642
13643 evaluator. The most important of these are the detailed mechanisms by which procedures call other
13644 procedures and return values to their callers. We will address these issues in chapter 5, where we take
13645 a closer look at the evaluation process by implementing the evaluator as a simple register machine.
13646 4 If we grant ourselves the ability to apply primitives, then what remains for us to implement in the
13647
13648 evaluator? The job of the evaluator is not to specify the primitives of the language, but rather to
13649 provide the connective tissue -- the means of combination and the means of abstraction -- that binds a
13650 collection of primitives to form a language. Specifically:
13651 The evaluator enables us to deal with nested expressions. For example, although simply applying
13652 primitives would suffice for evaluating the expression (+ 1 6), it is not adequate for handling
13653 (+ 1 (* 2 3)). As far as the primitive procedure + is concerned, its arguments must be
numbers, and it would choke if we passed it the expression (* 2 3) as an argument. One
important role of the evaluator is to choreograph procedure composition so that (* 2 3) is
13657 reduced to 6 before being passed as an argument to +.
13658 The evaluator allows us to use variables. For example, the primitive procedure for addition has no
13659 way to deal with expressions such as (+ x 1). We need an evaluator to keep track of variables
13660 and obtain their values before invoking the primitive procedures.
13661 The evaluator allows us to define compound procedures. This involves keeping track of
13662 procedure definitions, knowing how to use these definitions in evaluating expressions, and providing a
13663 mechanism that enables procedures to accept arguments.
13664 The evaluator provides the special forms, which must be evaluated differently from procedure
13665 calls.
13666 5 We could have simplified the application? clause in eval by using map (and stipulating that
13667
13668 operands returns a list) rather than writing an explicit list-of-values procedure. We chose
13669 not to use map here to emphasize the fact that the evaluator can be implemented without any use of
13670 higher-order procedures (and thus could be written in a language that doesn’t have higher-order
13671 procedures), even though the language that it supports will include higher-order procedures.
13672 6 In this case, the language being implemented and the implementation language are the same.
13673
13674 Contemplation of the meaning of true? here yields expansion of consciousness without the abuse of
13675 substance.
13676 7 This implementation of define ignores a subtle issue in the handling of internal definitions,
13677
13678 although it works correctly in most cases. We will see what the problem is and how to solve it in
13679 section 4.1.6.
13680 8 As we said when we introduced define and set!, these values are implementation-dependent in
13681
13682 Scheme -- that is, the implementor can choose what value to return.
13683 9 As mentioned in section 2.3.1, the evaluator sees a quoted expression as a list beginning with
13684
quote, even if the expression is typed with the quotation mark. For example, the expression 'a
13686 would be seen by the evaluator as (quote a). See exercise 2.55.
13687 10 The value of an if expression when the predicate is false and there is no alternative is unspecified
13688
13689 in Scheme; we have chosen here to make it false. We will support the use of the variables true and
13690 false in expressions to be evaluated by binding them in the global environment. See section 4.1.4.
13691 11 These selectors for a list of expressions -- and the corresponding ones for a list of operands -- are
13692
13693 not intended as a data abstraction. They are introduced as mnemonic names for the basic list
13694 operations in order to make it easier to understand the explicit-control evaluator in section 5.4.
13695 12 The value of a cond expression when all the predicates are false and there is no else clause is
13696
13697 unspecified in Scheme; we have chosen here to make it false.
13698 13 Practical Lisp systems provide a mechanism that allows a user to add new derived expressions and
13699
13700 specify their implementation as syntactic transformations without modifying the evaluator. Such a
13701 user-defined transformation is called a macro. Although it is easy to add an elementary mechanism for
13702 defining macros, the resulting language has subtle name-conflict problems. There has been much
13703 research on mechanisms for macro definition that do not cause these difficulties. See, for example,
13704 Kohlbecker 1986, Clinger and Rees 1991, and Hanson 1991.
13705
14 Frames are not really a data abstraction in the following code: Set-variable-value! and
13707
13708 define-variable! use set-car! to directly modify the values in a frame. The purpose of the
13709 frame procedures is to make the environment-manipulation procedures easy to read.
13710 15 The drawback of this representation (as well as the variant in exercise 4.11) is that the evaluator
13711
13712 may have to search through many frames in order to find the binding for a given variable. (Such an
13713 approach is referred to as deep binding.) One way to avoid this inefficiency is to make use of a
13714 strategy called lexical addressing, which will be discussed in section 5.5.6.
13715 16 Any procedure defined in the underlying Lisp can be used as a primitive for the metacircular
13716
13717 evaluator. The name of a primitive installed in the evaluator need not be the same as the name of its
13718 implementation in the underlying Lisp; the names are the same here because the metacircular evaluator
implements Scheme itself. Thus, for example, we could put (list 'first car) or (list
'square (lambda (x) (* x x))) in the list of primitive-procedures.
13721 17 Apply-in-underlying-scheme is the apply procedure we have used in earlier chapters.
13722
13723 The metacircular evaluator’s apply procedure (section 4.1.1) models the working of this primitive.
13724 Having two different things called apply leads to a technical problem in running the metacircular
13725 evaluator, because defining the metacircular evaluator’s apply will mask the definition of the
13726 primitive. One way around this is to rename the metacircular apply to avoid conflict with the name
13727 of the primitive procedure. We have assumed instead that we have saved a reference to the underlying
13728 apply by doing
13729 (define apply-in-underlying-scheme apply)
13730 before defining the metacircular apply. This allows us to access the original version of apply under
13731 a different name.
13732 18 The primitive procedure read waits for input from the user, and returns the next complete
13733
13734 expression that is typed. For example, if the user types (+ 23 x), read returns a three-element list
containing the symbol +, the number 23, and the symbol x. If the user types 'x, read returns a
13736 two-element list containing the symbol quote and the symbol x.
13737 19 The fact that the machines are described in Lisp is inessential. If we give our evaluator a Lisp
13738
13739 program that behaves as an evaluator for some other language, say C, the Lisp evaluator will emulate
13740 the C evaluator, which in turn can emulate any machine described as a C program. Similarly, writing a
13741 Lisp evaluator in C produces a C program that can execute any Lisp program. The deep idea here is
13742 that any evaluator can emulate any other. Thus, the notion of ‘‘what can in principle be computed’’
13743 (ignoring practicalities of time and memory required) is independent of the language or the computer,
13744 and instead reflects an underlying notion of computability. This was first demonstrated in a clear way
13745 by Alan M. Turing (1912-1954), whose 1936 paper laid the foundations for theoretical computer
13746 science. In the paper, Turing presented a simple computational model -- now known as a Turing
13747 machine -- and argued that any ‘‘effective process’’ can be formulated as a program for such a
13748 machine. (This argument is known as the Church-Turing thesis.) Turing then implemented a universal
13749 machine, i.e., a Turing machine that behaves as an evaluator for Turing-machine programs. He used
13750 this framework to demonstrate that there are well-posed problems that cannot be computed by Turing
13751 machines (see exercise 4.15), and so by implication cannot be formulated as ‘‘effective processes.’’
13752 Turing went on to make fundamental contributions to practical computer science as well. For example,
13753 he invented the idea of structuring programs using general-purpose subroutines. See Hodges 1983 for
13754 a biography of Turing.
13755
20 Some people find it counterintuitive that an evaluator, which is implemented by a relatively simple
13757
13758 procedure, can emulate programs that are more complex than the evaluator itself. The existence of a
13759 universal evaluator machine is a deep and wonderful property of computation. Recursion theory, a
13760 branch of mathematical logic, is concerned with logical limits of computation. Douglas Hofstadter’s
13761 beautiful book Gödel, Escher, Bach (1979) explores some of these ideas.
13762 21 Warning: This eval primitive is not identical to the eval procedure we implemented in
13763
13764 section 4.1.1, because it uses actual Scheme environments rather than the sample environment
13765 structures we built in section 4.1.3. These actual environments cannot be manipulated by the user as
13766 ordinary lists; they must be accessed via eval or other special operations. Similarly, the apply
13767 primitive we saw earlier is not identical to the metacircular apply, because it uses actual Scheme
13768 procedures rather than the procedure objects we constructed in sections 4.1.3 and 4.1.4.
13769 22 The MIT implementation of Scheme includes eval, as well as a symbol
13770
13771 user-initial-environment that is bound to the initial environment in which the user’s input
13772 expressions are evaluated.
13773 23 Although we stipulated that halts? is given a procedure object, notice that this reasoning still
13774
13775 applies even if halts? can gain access to the procedure’s text and its environment. This is Turing’s
13776 celebrated Halting Theorem, which gave the first clear example of a non-computable problem, i.e., a
13777 well-posed task that cannot be carried out as a computational procedure.
13778 24 Wanting programs to not depend on this evaluation mechanism is the reason for the ‘‘management
13779
13780 is not responsible’’ remark in footnote 28 of chapter 1. By insisting that internal definitions come first
13781 and do not use each other while the definitions are being evaluated, the IEEE standard for Scheme
13782 leaves implementors some choice in the mechanism used to evaluate these definitions. The choice of
13783 one evaluation rule rather than another here may seem like a small issue, affecting only the
13784 interpretation of ‘‘badly formed’’ programs. However, we will see in section 5.5.6 that moving to a
13785 model of simultaneous scoping for internal definitions avoids some nasty difficulties that would
13786 otherwise arise in implementing a compiler.
13787 25 The IEEE standard for Scheme allows for different implementation strategies by specifying that it
13788
13789 is up to the programmer to obey this restriction, not up to the implementation to enforce it. Some
13790 Scheme implementations, including MIT Scheme, use the transformation shown above. Thus, some
13791 programs that don’t obey this restriction will in fact run in such implementations.
13792 26 The MIT implementors of Scheme support Alyssa on the following grounds: Eva is in principle
13793
13794 correct -- the definitions should be regarded as simultaneous. But it seems difficult to implement a
13795 general, efficient mechanism that does what Eva requires. In the absence of such a mechanism, it is
13796 better to generate an error in the difficult cases of simultaneous definitions (Alyssa’s notion) than to
13797 produce an incorrect answer (as Ben would have it).
13798 27 This example illustrates a programming trick for formulating recursive procedures without using
13799
13800 define. The most general trick of this sort is the Y operator, which can be used to give a ‘‘pure
λ-calculus’’ implementation of recursion. (See Stoy 1977 for details on the lambda calculus, and
13802 Gabriel 1988 for an exposition of the Y operator in Scheme.)
13803 28 This technique is an integral part of the compilation process, which we shall discuss in chapter 5.
13804
13805 Jonathan Rees wrote a Scheme interpreter like this in about 1982 for the T project (Rees and Adams
13806 1982). Marc Feeley (1986) (see also Feeley and Lapalme 1987) independently invented this technique
13807 in his master’s thesis.
13808
29 There is, however, an important part of the variable search that can be done as part of the syntactic
13810
13811 analysis. As we will show in section 5.5.6, one can determine the position in the environment structure
13812 where the value of the variable will be found, thus obviating the need to scan the environment for the
13813 entry that matches the variable.
13814 30 See exercise 4.23 for some insight into the processing of sequences.
13815
13820 4.2 Variations on a Scheme -- Lazy Evaluation
13821 Now that we have an evaluator expressed as a Lisp program, we can experiment with alternative
13822 choices in language design simply by modifying the evaluator. Indeed, new languages are often
13823 invented by first writing an evaluator that embeds the new language within an existing high-level
13824 language. For example, if we wish to discuss some aspect of a proposed modification to Lisp with
13825 another member of the Lisp community, we can supply an evaluator that embodies the change. The
13826 recipient can then experiment with the new evaluator and send back comments as further
13827 modifications. Not only does the high-level implementation base make it easier to test and debug the
13828 evaluator; in addition, the embedding enables the designer to snarf 31 features from the underlying
13829 language, just as our embedded Lisp evaluator uses primitives and control structure from the
13830 underlying Lisp. Only later (if ever) need the designer go to the trouble of building a complete
13831 implementation in a low-level language or in hardware. In this section and the next we explore some
13832 variations on Scheme that provide significant additional expressive power.
13833
13834 4.2.1 Normal Order and Applicative Order
13835 In section 1.1, where we began our discussion of models of evaluation, we noted that Scheme is an
13836 applicative-order language, namely, that all the arguments to Scheme procedures are evaluated when
13837 the procedure is applied. In contrast, normal-order languages delay evaluation of procedure arguments
13838 until the actual argument values are needed. Delaying evaluation of procedure arguments until the last
13839 possible moment (e.g., until they are required by a primitive operation) is called lazy evaluation. 32
13840 Consider the procedure
(define (try a b)
  (if (= a 0) 1 b))
13843 Evaluating (try 0 (/ 1 0)) generates an error in Scheme. With lazy evaluation, there would be
13844 no error. Evaluating the expression would return 1, because the argument (/ 1 0) would never be
13845 evaluated.
13846 An example that exploits lazy evaluation is the definition of a procedure unless
(define (unless condition usual-value exceptional-value)
  (if condition exceptional-value usual-value))
13849 that can be used in expressions such as
(unless (= b 0)
  (/ a b)
  (begin (display "exception: returning 0")
         0))
13854 This won’t work in an applicative-order language because both the usual value and the exceptional
13855 value will be evaluated before unless is called (compare exercise 1.6). An advantage of lazy
13856 evaluation is that some procedures, such as unless, can do useful computation even if evaluation of
13857 some of their arguments would produce errors or would not terminate.
13858
If the body of a procedure is entered before an argument has been evaluated we say that the procedure
13860 is non-strict in that argument. If the argument is evaluated before the body of the procedure is entered
13861 we say that the procedure is strict in that argument. 33 In a purely applicative-order language, all
13862 procedures are strict in each argument. In a purely normal-order language, all compound procedures
13863 are non-strict in each argument, and primitive procedures may be either strict or non-strict. There are
13864 also languages (see exercise 4.31) that give programmers detailed control over the strictness of the
13865 procedures they define.
13866 A striking example of a procedure that can usefully be made non-strict is cons (or, in general, almost
13867 any constructor for data structures). One can do useful computation, combining elements to form data
13868 structures and operating on the resulting data structures, even if the values of the elements are not
13869 known. It makes perfect sense, for instance, to compute the length of a list without knowing the values
13870 of the individual elements in the list. We will exploit this idea in section 4.2.3 to implement the
13871 streams of chapter 3 as lists formed of non-strict cons pairs.
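For instance, anticipating the non-strict cons of section 4.2.3 (where pairs are represented as
procedures) and the lazy evaluator of section 4.2.2, one could compute the length of a list whose
elements would signal errors if they were ever evaluated. A minimal sketch; the definition of
length is the standard recursive one, and the example list is purely illustrative:

(define (length items)
  (if (null? items)      ; examines only the pair structure,
      0                  ; never the elements themselves
      (+ 1 (length (cdr items)))))

(length (cons (/ 1 0) (cons 2 (cons 3 '()))))

Under lazy evaluation this would return 3, even though evaluating the first element (/ 1 0)
would signal an error.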
13872 Exercise 4.25. Suppose that (in ordinary applicative-order Scheme) we define unless as shown
13873 above and then define factorial in terms of unless as
13874 (define (factorial n)
13875 (unless (= n 1)
13876 (* n (factorial (- n 1)))
13877 1))
13878 What happens if we attempt to evaluate (factorial 5)? Will our definitions work in a
13879 normal-order language?
13880 Exercise 4.26. Ben Bitdiddle and Alyssa P. Hacker disagree over the importance of lazy evaluation
13881 for implementing things such as unless. Ben points out that it’s possible to implement unless in
13882 applicative order as a special form. Alyssa counters that, if one did that, unless would be merely
13883 syntax, not a procedure that could be used in conjunction with higher-order procedures. Fill in the
13884 details on both sides of the argument. Show how to implement unless as a derived expression (like
13885 cond or let), and give an example of a situation where it might be useful to have unless available
13886 as a procedure, rather than as a special form.
13887
13888 4.2.2 An Interpreter with Lazy Evaluation
13889 In this section we will implement a normal-order language that is the same as Scheme except that
13890 compound procedures are non-strict in each argument. Primitive procedures will still be strict. It is not
13891 difficult to modify the evaluator of section 4.1.1 so that the language it interprets behaves this way.
13892 Almost all the required changes center around procedure application.
13893 The basic idea is that, when applying a procedure, the interpreter must determine which arguments are
13894 to be evaluated and which are to be delayed. The delayed arguments are not evaluated; instead, they
13895 are transformed into objects called thunks. 34 The thunk must contain the information required to
13896 produce the value of the argument when it is needed, as if it had been evaluated at the time of the
13897 application. Thus, the thunk must contain the argument expression and the environment in which the
13898 procedure application is being evaluated.
13899 The process of evaluating the expression in a thunk is called forcing. 35 In general, a thunk will be
13900 forced only when its value is needed: when it is passed to a primitive procedure that will use the value
13901 of the thunk; when it is the value of a predicate of a conditional; and when it is the value of an operator
that is about to be applied as a procedure. One design choice we have available is whether or not to
13904 memoize thunks, as we did with delayed objects in section 3.5.1. With memoization, the first time a
13905 thunk is forced, it stores the value that is computed. Subsequent forcings simply return the stored value
13906 without repeating the computation. We’ll make our interpreter memoize, because this is more efficient
13907 for many applications. There are tricky considerations here, however. 36
13908
13909 Modifying the evaluator
13910 The main difference between the lazy evaluator and the one in section 4.1 is in the handling of
13911 procedure applications in eval and apply.
13912 The application? clause of eval becomes
13913 ((application? exp)
13914 (apply (actual-value (operator exp) env)
13915 (operands exp)
13916 env))
13917 This is almost the same as the application? clause of eval in section 4.1.1. For lazy evaluation,
13918 however, we call apply with the operand expressions, rather than the arguments produced by
13919 evaluating them. Since we will need the environment to construct thunks if the arguments are to be
13920 delayed, we must pass this as well. We still evaluate the operator, because apply needs the actual
13921 procedure to be applied in order to dispatch on its type (primitive versus compound) and apply it.
13922 Whenever we need the actual value of an expression, we use
13923 (define (actual-value exp env)
13924 (force-it (eval exp env)))
13925 instead of just eval, so that if the expression’s value is a thunk, it will be forced.
13926 Our new version of apply is also almost the same as the version in section 4.1.1. The difference is
13927 that eval has passed in unevaluated operand expressions: For primitive procedures (which are strict),
13928 we evaluate all the arguments before applying the primitive; for compound procedures (which are
13929 non-strict) we delay all the arguments before applying the procedure.
13930 (define (apply procedure arguments env)
13931 (cond ((primitive-procedure? procedure)
13932 (apply-primitive-procedure
13933 procedure
13934 (list-of-arg-values arguments env))) ; changed
13935 ((compound-procedure? procedure)
13936 (eval-sequence
13937 (procedure-body procedure)
13938 (extend-environment
13939 (procedure-parameters procedure)
13940 (list-of-delayed-args arguments env) ; changed
13941 (procedure-environment procedure))))
13942 (else
13943 (error
13944 "Unknown procedure type -- APPLY" procedure))))
13945
The procedures that process the arguments are just like list-of-values from section 4.1.1,
13947 except that list-of-delayed-args delays the arguments instead of evaluating them, and
13948 list-of-arg-values uses actual-value instead of eval:
13949 (define (list-of-arg-values exps env)
13950 (if (no-operands? exps)
13951 ’()
13952 (cons (actual-value (first-operand exps) env)
13953 (list-of-arg-values (rest-operands exps)
13954 env))))
13955 (define (list-of-delayed-args exps env)
13956 (if (no-operands? exps)
13957 ’()
13958 (cons (delay-it (first-operand exps) env)
13959 (list-of-delayed-args (rest-operands exps)
13960 env))))
13961 The other place we must change the evaluator is in the handling of if, where we must use
13962 actual-value instead of eval to get the value of the predicate expression before testing whether
13963 it is true or false:
13964 (define (eval-if exp env)
13965 (if (true? (actual-value (if-predicate exp) env))
13966 (eval (if-consequent exp) env)
13967 (eval (if-alternative exp) env)))
13968 Finally, we must change the driver-loop procedure (section 4.1.4) to use actual-value
13969 instead of eval, so that if a delayed value is propagated back to the read-eval-print loop, it will be
13970 forced before being printed. We also change the prompts to indicate that this is the lazy evaluator:
13971 (define input-prompt ";;; L-Eval input:")
13972 (define output-prompt ";;; L-Eval value:")
13973 (define (driver-loop)
13974 (prompt-for-input input-prompt)
13975 (let ((input (read)))
13976 (let ((output
13977 (actual-value input the-global-environment)))
13978 (announce-output output-prompt)
13979 (user-print output)))
13980 (driver-loop))
13981 With these changes made, we can start the evaluator and test it. The successful evaluation of the try
13982 expression discussed in section 4.2.1 indicates that the interpreter is performing lazy evaluation:
13983 (define the-global-environment (setup-environment))
13984 (driver-loop)
13985 ;;; L-Eval input:
13986 (define (try a b)
13987 (if (= a 0) 1 b))
13988 ;;; L-Eval value:
13989 ok
13990 ;;; L-Eval input:
(try 0 (/ 1 0))
13993 ;;; L-Eval value:
13994 1
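Similarly, the unless example of section 4.2.1 should now succeed. One would expect an
interaction like the following (a sketch, not a recorded session):

;;; L-Eval input:
(define (unless condition usual-value exceptional-value)
  (if condition exceptional-value usual-value))
;;; L-Eval value:
ok
;;; L-Eval input:
(unless (= 0 0) (/ 1 0) 0)
;;; L-Eval value:
0

The usual value (/ 1 0) is delayed and, because the condition is true, never forced.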
13995
13996 Representing thunks
13997 Our evaluator must arrange to create thunks when procedures are applied to arguments and to force
13998 these thunks later. A thunk must package an expression together with the environment, so that the
13999 argument can be produced later. To force the thunk, we simply extract the expression and environment
14000 from the thunk and evaluate the expression in the environment. We use actual-value rather than
14001 eval so that in case the value of the expression is itself a thunk, we will force that, and so on, until we
14002 reach something that is not a thunk:
14003 (define (force-it obj)
14004 (if (thunk? obj)
14005 (actual-value (thunk-exp obj) (thunk-env obj))
14006 obj))
14007 One easy way to package an expression with an environment is to make a list containing the
14008 expression and the environment. Thus, we create a thunk as follows:
14009 (define (delay-it exp env)
14010 (list ’thunk exp env))
14011 (define (thunk? obj)
14012 (tagged-list? obj ’thunk))
14013 (define (thunk-exp thunk) (cadr thunk))
14014 (define (thunk-env thunk) (caddr thunk))
14015 Actually, what we want for our interpreter is not quite this, but rather thunks that have been memoized.
14016 When a thunk is forced, we will turn it into an evaluated thunk by replacing the stored expression with
14017 its value and changing the thunk tag so that it can be recognized as already evaluated. 37
14018 (define (evaluated-thunk? obj)
14019 (tagged-list? obj ’evaluated-thunk))
14020 (define (thunk-value evaluated-thunk) (cadr evaluated-thunk))
14021 (define (force-it obj)
14022 (cond ((thunk? obj)
14023 (let ((result (actual-value
14024 (thunk-exp obj)
14025 (thunk-env obj))))
14026 (set-car! obj ’evaluated-thunk)
14027 (set-car! (cdr obj) result) ; replace exp with its value
(set-cdr! (cdr obj) ’())     ; forget unneeded env
14030 result))
14031 ((evaluated-thunk? obj)
14032 (thunk-value obj))
14033 (else obj)))
Notice that the same delay-it procedure works both with and without memoization: the choice
between the two behaviors is made entirely within force-it, which dispatches on the thunk’s tag.
14035
Exercise 4.27. Suppose we type in the following definitions to the lazy evaluator:
14037 (define count 0)
14038 (define (id x)
14039 (set! count (+ count 1))
14040 x)
14041 Give the missing values in the following sequence of interactions, and explain your answers. 38
14042 (define w (id (id 10)))
14043 ;;; L-Eval input:
14044 count
14045 ;;; L-Eval value:
14046 <response>
14047 ;;; L-Eval input:
14048 w
14049 ;;; L-Eval value:
14050 <response>
14051 ;;; L-Eval input:
14052 count
14053 ;;; L-Eval value:
14054 <response>
14055 Exercise 4.28. Eval uses actual-value rather than eval to evaluate the operator before passing
14056 it to apply, in order to force the value of the operator. Give an example that demonstrates the need
14057 for this forcing.
14058 Exercise 4.29. Exhibit a program that you would expect to run much more slowly without
14059 memoization than with memoization. Also, consider the following interaction, where the id procedure
14060 is defined as in exercise 4.27 and count starts at 0:
14061 (define (square x)
14062 (* x x))
14063 ;;; L-Eval input:
14064 (square (id 10))
14065 ;;; L-Eval value:
14066 <response>
14067 ;;; L-Eval input:
14068 count
14069 ;;; L-Eval value:
14070 <response>
14071 Give the responses both when the evaluator memoizes and when it does not.
14072 Exercise 4.30. Cy D. Fect, a reformed C programmer, is worried that some side effects may never
14073 take place, because the lazy evaluator doesn’t force the expressions in a sequence. Since the value of
14074 an expression in a sequence other than the last one is not used (the expression is there only for its
14075 effect, such as assigning to a variable or printing), there can be no subsequent use of this value (e.g., as
14076 an argument to a primitive procedure) that will cause it to be forced. Cy thus thinks that when
14077 evaluating sequences, we must force all expressions in the sequence except the final one. He proposes
14078 to modify eval-sequence from section 4.1.1 to use actual-value rather than eval:
14079
(define (eval-sequence exps env)
14081 (cond ((last-exp? exps) (eval (first-exp exps) env))
14082 (else (actual-value (first-exp exps) env)
14083 (eval-sequence (rest-exps exps) env))))
14084 a. Ben Bitdiddle thinks Cy is wrong. He shows Cy the for-each procedure described in
14085 exercise 2.23, which gives an important example of a sequence with side effects:
14086 (define (for-each proc items)
14087 (if (null? items)
14088 ’done
14089 (begin (proc (car items))
14090 (for-each proc (cdr items)))))
14091 He claims that the evaluator in the text (with the original eval-sequence) handles this correctly:
14092 ;;; L-Eval input:
14093 (for-each (lambda (x) (newline) (display x))
14094 (list 57 321 88))
14095 57
14096 321
14097 88
14098 ;;; L-Eval value:
14099 done
14100 Explain why Ben is right about the behavior of for-each.
14101 b. Cy agrees that Ben is right about the for-each example, but says that that’s not the kind of
14102 program he was thinking about when he proposed his change to eval-sequence. He defines the
14103 following two procedures in the lazy evaluator:
14104 (define (p1 x)
14105 (set! x (cons x ’(2)))
14106 x)
14107 (define (p2 x)
14108 (define (p e)
14109 e
14110 x)
14111 (p (set! x (cons x ’(2)))))
14112 What are the values of (p1 1) and (p2 1) with the original eval-sequence? What would the
14113 values be with Cy’s proposed change to eval-sequence?
14114 c. Cy also points out that changing eval-sequence as he proposes does not affect the behavior of
14115 the example in part a. Explain why this is true.
14116 d. How do you think sequences ought to be treated in the lazy evaluator? Do you like Cy’s approach,
14117 the approach in the text, or some other approach?
14118 Exercise 4.31. The approach taken in this section is somewhat unpleasant, because it makes an
14119 incompatible change to Scheme. It might be nicer to implement lazy evaluation as an
14120 upward-compatible extension, that is, so that ordinary Scheme programs will work as before. We can
do this by extending the syntax of procedure declarations to let the user control whether or not
14123 arguments are to be delayed. While we’re at it, we may as well also give the user the choice between
14124 delaying with and without memoization. For example, the definition
14125 (define (f a (b lazy) c (d lazy-memo))
14126 ...)
14127 would define f to be a procedure of four arguments, where the first and third arguments are evaluated
14128 when the procedure is called, the second argument is delayed, and the fourth argument is both delayed
14129 and memoized. Thus, ordinary procedure definitions will produce the same behavior as ordinary
14130 Scheme, while adding the lazy-memo declaration to each parameter of every compound procedure
14131 will produce the behavior of the lazy evaluator defined in this section. Design and implement the
14132 changes required to produce such an extension to Scheme. You will have to implement new syntax
14133 procedures to handle the new syntax for define. You must also arrange for eval or apply to
14134 determine when arguments are to be delayed, and to force or delay arguments accordingly, and you
14135 must arrange for forcing to memoize or not, as appropriate.
14136
14137 4.2.3 Streams as Lazy Lists
14138 In section 3.5.1, we showed how to implement streams as delayed lists. We introduced special forms
14139 delay and cons-stream, which allowed us to construct a ‘‘promise’’ to compute the cdr of a
14140 stream, without actually fulfilling that promise until later. We could use this general technique of
14141 introducing special forms whenever we need more control over the evaluation process, but this is
14142 awkward. For one thing, a special form is not a first-class object like a procedure, so we cannot use it
14143 together with higher-order procedures. 39 Additionally, we were forced to create streams as a new kind
14144 of data object similar but not identical to lists, and this required us to reimplement many ordinary list
14145 operations (map, append, and so on) for use with streams.
14146 With lazy evaluation, streams and lists can be identical, so there is no need for special forms or for
14147 separate list and stream operations. All we need to do is to arrange matters so that cons is non-strict.
14148 One way to accomplish this is to extend the lazy evaluator to allow for non-strict primitives, and to
14149 implement cons as one of these. An easier way is to recall (section 2.1.3) that there is no fundamental
14150 need to implement cons as a primitive at all. Instead, we can represent pairs as procedures: 40
14151 (define (cons x y)
14152 (lambda (m) (m x y)))
14153 (define (car z)
14154 (z (lambda (p q) p)))
14155 (define (cdr z)
14156 (z (lambda (p q) q)))
14157 In terms of these basic operations, the standard definitions of the list operations will work with infinite
14158 lists (streams) as well as finite ones, and the stream operations can be implemented as list operations.
14159 Here are some examples:
14160 (define (list-ref items n)
14161 (if (= n 0)
14162 (car items)
14163 (list-ref (cdr items) (- n 1))))
14164 (define (map proc items)
14165 (if (null? items)
’()
14168 (cons (proc (car items))
14169 (map proc (cdr items)))))
14170 (define (scale-list items factor)
14171 (map (lambda (x) (* x factor))
14172 items))
14173 (define (add-lists list1 list2)
14174 (cond ((null? list1) list2)
14175 ((null? list2) list1)
14176 (else (cons (+ (car list1) (car list2))
14177 (add-lists (cdr list1) (cdr list2))))))
14178 (define ones (cons 1 ones))
14179 (define integers (cons 1 (add-lists ones integers)))
14180 ;;; L-Eval input:
14181 (list-ref integers 17)
14182 ;;; L-Eval value:
14183 18
14184 Note that these lazy lists are even lazier than the streams of chapter 3: The car of the list, as well as
14185 the cdr, is delayed. 41 In fact, even accessing the car or cdr of a lazy pair need not force the value
14186 of a list element. The value will be forced only when it is really needed -- e.g., for use as the argument
14187 of a primitive, or to be printed as an answer.
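For instance, assuming the definitions above have been typed at the lazy evaluator’s driver loop,
one would expect an interaction like the following (a sketch, not a recorded session):

;;; L-Eval input:
(define lst (cons (/ 1 0) (cons 2 '())))
;;; L-Eval value:
ok
;;; L-Eval input:
(list-ref lst 1)
;;; L-Eval value:
2

The expression (/ 1 0) for the first element is never forced: list-ref walks down the cdr
chain, and only the second element is ever passed to a primitive and printed.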
14188 Lazy pairs also help with the problem that arose with streams in section 3.5.4, where we found that
14189 formulating stream models of systems with loops may require us to sprinkle our programs with
14190 explicit delay operations, beyond the ones supplied by cons-stream. With lazy evaluation, all
14191 arguments to procedures are delayed uniformly. For instance, we can implement procedures to
14192 integrate lists and solve differential equations as we originally intended in section 3.5.4:
14193 (define (integral integrand initial-value dt)
14194 (define int
14195 (cons initial-value
14196 (add-lists (scale-list integrand dt)
14197 int)))
14198 int)
14199 (define (solve f y0 dt)
14200 (define y (integral dy y0 dt))
14201 (define dy (map f y))
14202 y)
14203 ;;; L-Eval input:
14204 (list-ref (solve (lambda (x) x) 1 0.001) 1000)
14205 ;;; L-Eval value:
14206 2.716924
14207 Exercise 4.32. Give some examples that illustrate the difference between the streams of chapter 3 and
14208 the ‘‘lazier’’ lazy lists described in this section. How can you take advantage of this extra laziness?
14209 Exercise 4.33. Ben Bitdiddle tests the lazy list implementation given above by evaluating the
14210 expression
14211
(car ’(a b c))
14213 To his surprise, this produces an error. After some thought, he realizes that the ‘‘lists’’ obtained by
14214 reading in quoted expressions are different from the lists manipulated by the new definitions of cons,
14215 car, and cdr. Modify the evaluator’s treatment of quoted expressions so that quoted lists typed at the
14216 driver loop will produce true lazy lists.
14217 Exercise 4.34. Modify the driver loop for the evaluator so that lazy pairs and lists will print in some
14218 reasonable way. (What are you going to do about infinite lists?) You may also need to modify the
14219 representation of lazy pairs so that the evaluator can identify them in order to print them.
14220 31 Snarf: ‘‘To grab, especially a large document or file for the purpose of using it either with or
14222 without the owner’s permission.’’ Snarf down: ‘‘To snarf, sometimes with the connotation of
14223 absorbing, processing, or understanding.’’ (These definitions were snarfed from Steele et al. 1983. See
14224 also Raymond 1993.)
14225 32 The difference between the ‘‘lazy’’ terminology and the ‘‘normal-order’’ terminology is somewhat
14227 fuzzy. Generally, ‘‘lazy’’ refers to the mechanisms of particular evaluators, while ‘‘normal-order’’
14228 refers to the semantics of languages, independent of any particular evaluation strategy. But this is not a
14229 hard-and-fast distinction, and the two terminologies are often used interchangeably.
14230 33 The ‘‘strict’’ versus ‘‘non-strict’’ terminology means essentially the same thing as
14232 ‘‘applicative-order’’ versus ‘‘normal-order,’’ except that it refers to individual procedures and
14233 arguments rather than to the language as a whole. At a conference on programming languages you
14234 might hear someone say, ‘‘The normal-order language Hassle has certain strict primitives. Other
14235 procedures take their arguments by lazy evaluation.’’
14236 34 The word thunk was invented by an informal working group that was discussing the
14238 implementation of call-by-name in Algol 60. They observed that most of the analysis of (‘‘thinking
14239 about’’) the expression could be done at compile time; thus, at run time, the expression would already
14240 have been ‘‘thunk’’ about (Ingerman et al. 1960).
14241 35 This is analogous to the use of force on the delayed objects that were introduced in chapter 3 to
14243 represent streams. The critical difference between what we are doing here and what we did in
14244 chapter 3 is that we are building delaying and forcing into the evaluator, and thus making this uniform
14245 and automatic throughout the language.
14246 36 Lazy evaluation combined with memoization is sometimes referred to as call-by-need argument
14248 passing, in contrast to call-by-name argument passing. (Call-by-name, introduced in Algol 60, is
14249 similar to non-memoized lazy evaluation.) As language designers, we can build our evaluator to
14250 memoize, not to memoize, or leave this an option for programmers (exercise 4.31). As you might
14251 expect from chapter 3, these choices raise issues that become both subtle and confusing in the presence
14252 of assignments. (See exercises 4.27 and 4.29.) An excellent article by Clinger (1982) attempts to
14253 clarify the multiple dimensions of confusion that arise here.
14254 37 Notice that we also erase the env from the thunk once the expression’s value has been computed.
14256 This makes no difference in the values returned by the interpreter. It does help save space, however,
14257 because removing the reference from the thunk to the env once it is no longer needed allows this
14258 structure to be garbage-collected and its space recycled, as we will discuss in section 5.3.
14259
Similarly, we could have allowed unneeded environments in the memoized delayed objects of
14261 section 3.5.1 to be garbage-collected, by having memo-proc do something like (set! proc
14262 ’()) to discard the procedure proc (which includes the environment in which the delay was
14263 evaluated) after storing its value.
14264 38 This exercise demonstrates that the interaction between lazy evaluation and side effects can be very
14266 confusing. This is just what you might expect from the discussion in chapter 3.
14267 39 This is precisely the issue with the unless procedure, as in exercise 4.26.
14268 40 This is the procedural representation described in exercise 2.4. Essentially any procedural
14270 representation (e.g., a message-passing implementation) would do as well. Notice that we can install
14271 these definitions in the lazy evaluator simply by typing them at the driver loop. If we had originally
14272 included cons, car, and cdr as primitives in the global environment, they will be redefined. (Also
14273 see exercises 4.33 and 4.34.)
14274 41 This permits us to create delayed versions of more general kinds of list structures, not just
14276 sequences. Hughes 1990 discusses some applications of ‘‘lazy trees.’’
14281 4.3 Variations on a Scheme -- Nondeterministic Computing
14282 In this section, we extend the Scheme evaluator to support a programming paradigm called
14283 nondeterministic computing by building into the evaluator a facility to support automatic search. This
14284 is a much more profound change to the language than the introduction of lazy evaluation in
14285 section 4.2.
14286 Nondeterministic computing, like stream processing, is useful for ‘‘generate and test’’ applications.
14287 Consider the task of starting with two lists of positive integers and finding a pair of integers -- one
14288 from the first list and one from the second list -- whose sum is prime. We saw how to handle this with
14289 finite sequence operations in section 2.2.3 and with infinite streams in section 3.5.3. Our approach was
14290 to generate the sequence of all possible pairs and filter these to select the pairs whose sum is prime.
14291 Whether we actually generate the entire sequence of pairs first as in chapter 2, or interleave the
14292 generating and filtering as in chapter 3, is immaterial to the essential image of how the computation is
14293 organized.
14294 The nondeterministic approach evokes a different image. Imagine simply that we choose (in some
14295 way) a number from the first list and a number from the second list and require (using some
mechanism) that their sum be prime. This is expressed by the following procedure:
14297 (define (prime-sum-pair list1 list2)
14298 (let ((a (an-element-of list1))
14299 (b (an-element-of list2)))
14300 (require (prime? (+ a b)))
14301 (list a b)))
14302 It might seem as if this procedure merely restates the problem, rather than specifying a way to solve it.
14303 Nevertheless, this is a legitimate nondeterministic program. 42
14304 The key idea here is that expressions in a nondeterministic language can have more than one possible
14305 value. For instance, an-element-of might return any element of the given list. Our
14306 nondeterministic program evaluator will work by automatically choosing a possible value and keeping
14307 track of the choice. If a subsequent requirement is not met, the evaluator will try a different choice,
14308 and it will keep trying new choices until the evaluation succeeds, or until we run out of choices. Just as
14309 the lazy evaluator freed the programmer from the details of how values are delayed and forced, the
14310 nondeterministic program evaluator will free the programmer from the details of how choices are
14311 made.
14312 It is instructive to contrast the different images of time evoked by nondeterministic evaluation and
14313 stream processing. Stream processing uses lazy evaluation to decouple the time when the stream of
14314 possible answers is assembled from the time when the actual stream elements are produced. The
14315 evaluator supports the illusion that all the possible answers are laid out before us in a timeless
14316 sequence. With nondeterministic evaluation, an expression represents the exploration of a set of
14317 possible worlds, each determined by a set of choices. Some of the possible worlds lead to dead ends,
14318 while others have useful values. The nondeterministic program evaluator supports the illusion that
14319 time branches, and that our programs have different possible execution histories. When we reach a
14320 dead end, we can revisit a previous choice point and proceed along a different branch.
14321
The nondeterministic program evaluator implemented below is called the amb evaluator because it is
14323 based on a new special form called amb. We can type the above definition of prime-sum-pair at
14324 the amb evaluator driver loop (along with definitions of prime?, an-element-of, and require)
14325 and run the procedure as follows:
14326 ;;; Amb-Eval input:
14327 (prime-sum-pair ’(1 3 5 8) ’(20 35 110))
14328 ;;; Starting a new problem
14329 ;;; Amb-Eval value:
14330 (3 20)
14331 The value returned was obtained after the evaluator repeatedly chose elements from each of the lists,
14332 until a successful choice was made.
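The auxiliary procedures are ordinary Scheme: require and an-element-of are defined in
section 4.3.1, and prime? can be any primality test, for example the smallest-divisor test in the
style of section 1.2. A minimal sketch, assuming n is at least 2:

(define (prime? n)
  ;; n is prime exactly when its smallest divisor (starting from 2) is n itself
  (define (divides? a b) (= (remainder b a) 0))
  (define (find-divisor n test-divisor)
    (cond ((> (* test-divisor test-divisor) n) n)
          ((divides? test-divisor n) test-divisor)
          (else (find-divisor n (+ test-divisor 1)))))
  (= n (find-divisor n 2)))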
14333 Section 4.3.1 introduces amb and explains how it supports nondeterminism through the evaluator’s
14334 automatic search mechanism. Section 4.3.2 presents examples of nondeterministic programs, and
14335 section 4.3.3 gives the details of how to implement the amb evaluator by modifying the ordinary
14336 Scheme evaluator.
14337
14338 4.3.1 Amb and Search
14339 To extend Scheme to support nondeterminism, we introduce a new special form called amb. 43 The
expression (amb <e1> <e2> ... <en>) returns the value of one of the n expressions <ei>
14341 ‘‘ambiguously.’’ For example, the expression
14342 (list (amb 1 2 3) (amb ’a ’b))
14343 can have six possible values:
(1 a)   (1 b)   (2 a)   (2 b)   (3 a)   (3 b)
14356 Amb with a single choice produces an ordinary (single) value.
14357 Amb with no choices -- the expression (amb) -- is an expression with no acceptable values.
14358 Operationally, we can think of (amb) as an expression that when evaluated causes the computation to
14359 ‘‘fail’’: The computation aborts and no value is produced. Using this idea, we can express the
14360 requirement that a particular predicate expression p must be true as follows:
14361 (define (require p)
14362 (if (not p) (amb)))
14363 With amb and require, we can implement the an-element-of procedure used above:
14364 (define (an-element-of items)
14365 (require (not (null? items)))
14366 (amb (car items) (an-element-of (cdr items))))
14367 An-element-of fails if the list is empty. Otherwise it ambiguously returns either the first element
14368 of the list or an element chosen from the rest of the list.
14369
We can also express infinite ranges of choices. The following procedure potentially returns any integer
14371 greater than or equal to some given n:
14372 (define (an-integer-starting-from n)
14373 (amb n (an-integer-starting-from (+ n 1))))
14374 This is like the stream procedure integers-starting-from described in section 3.5.2, but with
14375 an important difference: The stream procedure returns an object that represents the sequence of all
14376 integers beginning with n, whereas the amb procedure returns a single integer. 44
14377 Abstractly, we can imagine that evaluating an amb expression causes time to split into branches,
14378 where the computation continues on each branch with one of the possible values of the expression. We
14379 say that amb represents a nondeterministic choice point. If we had a machine with a sufficient number
14380 of processors that could be dynamically allocated, we could implement the search in a straightforward
14381 way. Execution would proceed as in a sequential machine, until an amb expression is encountered. At
14382 this point, more processors would be allocated and initialized to continue all of the parallel executions
14383 implied by the choice. Each processor would proceed sequentially as if it were the only choice, until it
14384 either terminates by encountering a failure, or it further subdivides, or it finishes. 45
14385 On the other hand, if we have a machine that can execute only one process (or a few concurrent
14386 processes), we must consider the alternatives sequentially. One could imagine modifying an evaluator
14387 to pick at random a branch to follow whenever it encounters a choice point. Random choice, however,
14388 can easily lead to failing values. We might try running the evaluator over and over, making random
14389 choices and hoping to find a non-failing value, but it is better to systematically search all possible
14390 execution paths. The amb evaluator that we will develop and work with in this section implements a
14391 systematic search as follows: When the evaluator encounters an application of amb, it initially selects
14392 the first alternative. This selection may itself lead to a further choice. The evaluator will always
14393 initially choose the first alternative at each choice point. If a choice results in a failure, then the
14394 evaluator automagically 46 backtracks to the most recent choice point and tries the next alternative. If
14395 it runs out of alternatives at any choice point, the evaluator will back up to the previous choice point
14396 and resume from there. This process leads to a search strategy known as depth-first search or
14397 chronological backtracking. 47
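Applied to the six-valued expression shown earlier, this strategy delivers the values in
left-to-right, depth-first order. Using the try-again request described below, one would expect an
interaction like this (a sketch, not a recorded session):

;;; Amb-Eval input:
(list (amb 1 2 3) (amb 'a 'b))
;;; Starting a new problem
;;; Amb-Eval value:
(1 a)
;;; Amb-Eval input:
try-again
;;; Amb-Eval value:
(1 b)
;;; Amb-Eval input:
try-again
;;; Amb-Eval value:
(2 a)

When the second amb runs out of choices, the evaluator backs up to the first amb, advances it to
its next alternative, and starts the second amb afresh.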
14398
14399 Driver loop
14400 The driver loop for the amb evaluator has some unusual properties. It reads an expression and prints
14401 the value of the first non-failing execution, as in the prime-sum-pair example shown above. If we
14402 want to see the value of the next successful execution, we can ask the interpreter to backtrack and
14403 attempt to generate a second non-failing execution. This is signaled by typing the symbol
14404 try-again. If any expression except try-again is given, the interpreter will start a new problem,
14405 discarding the unexplored alternatives in the previous problem. Here is a sample interaction:
14406 ;;; Amb-Eval input:
14407 (prime-sum-pair ’(1 3 5 8) ’(20 35 110))
14408 ;;; Starting a new problem
14409 ;;; Amb-Eval value:
14410 (3 20)
14411 ;;; Amb-Eval input:
14412 try-again
14413 ;;; Amb-Eval value:
14414 (3 110)
;;; Amb-Eval input:
14417 try-again
14418 ;;; Amb-Eval value:
14419 (8 35)
14420 ;;; Amb-Eval input:
14421 try-again
14422 ;;; There are no more values of
14423 (prime-sum-pair (quote (1 3 5 8)) (quote (20 35 110)))
14424 ;;; Amb-Eval input:
14425 (prime-sum-pair ’(19 27 30) ’(11 36 58))
14426 ;;; Starting a new problem
14427 ;;; Amb-Eval value:
14428 (30 11)
14429 Exercise 4.35. Write a procedure an-integer-between that returns an integer between two
14430 given bounds. This can be used to implement a procedure that finds Pythagorean triples, i.e., triples of
integers (i,j,k) between the given bounds such that i ≤ j and i² + j² = k², as follows:
14432 (define (a-pythagorean-triple-between low high)
14433 (let ((i (an-integer-between low high)))
14434 (let ((j (an-integer-between i high)))
14435 (let ((k (an-integer-between j high)))
14436 (require (= (+ (* i i) (* j j)) (* k k)))
14437 (list i j k)))))
14438 Exercise 4.36. Exercise 3.69 discussed how to generate the stream of all Pythagorean triples, with no
14439 upper bound on the size of the integers to be searched. Explain why simply replacing
14440 an-integer-between by an-integer-starting-from in the procedure in exercise 4.35 is
14441 not an adequate way to generate arbitrary Pythagorean triples. Write a procedure that actually will
14442 accomplish this. (That is, write a procedure for which repeatedly typing try-again would in
14443 principle eventually generate all Pythagorean triples.)
14444 Exercise 4.37. Ben Bitdiddle claims that the following method for generating Pythagorean triples is
14445 more efficient than the one in exercise 4.35. Is he correct? (Hint: Consider the number of possibilities
14446 that must be explored.)
14447 (define (a-pythagorean-triple-between low high)
14448 (let ((i (an-integer-between low high))
14449 (hsq (* high high)))
14450 (let ((j (an-integer-between i high)))
14451 (let ((ksq (+ (* i i) (* j j))))
14452 (require (>= hsq ksq))
14453 (let ((k (sqrt ksq)))
14454 (require (integer? k))
14455 (list i j k))))))
14456
14457 4.3.2 Examples of Nondeterministic Programs
14458 Section 4.3.3 describes the implementation of the amb evaluator. First, however, we give some
14459 examples of how it can be used. The advantage of nondeterministic programming is that we can
14460 suppress the details of how search is carried out, thereby expressing our programs at a higher level of
abstraction.
14463
14464 Logic Puzzles
14465 The following puzzle (taken from Dinesman 1968) is typical of a large class of simple logic puzzles:
14466 Baker, Cooper, Fletcher, Miller, and Smith live on different floors of an apartment house that
14467 contains only five floors. Baker does not live on the top floor. Cooper does not live on the bottom
14468 floor. Fletcher does not live on either the top or the bottom floor. Miller lives on a higher floor
14469 than does Cooper. Smith does not live on a floor adjacent to Fletcher’s. Fletcher does not live on
14470 a floor adjacent to Cooper’s. Where does everyone live?
14471 We can determine who lives on each floor in a straightforward way by enumerating all the possibilities
14472 and imposing the given restrictions: 48
14473 (define (multiple-dwelling)
14474 (let ((baker (amb 1 2 3 4 5))
14475 (cooper (amb 1 2 3 4 5))
14476 (fletcher (amb 1 2 3 4 5))
14477 (miller (amb 1 2 3 4 5))
14478 (smith (amb 1 2 3 4 5)))
14479 (require
14480 (distinct? (list baker cooper fletcher miller smith)))
14481 (require (not (= baker 5)))
14482 (require (not (= cooper 1)))
14483 (require (not (= fletcher 5)))
14484 (require (not (= fletcher 1)))
14485 (require (> miller cooper))
14486 (require (not (= (abs (- smith fletcher)) 1)))
14487 (require (not (= (abs (- fletcher cooper)) 1)))
14488 (list (list ’baker baker)
14489 (list ’cooper cooper)
14490 (list ’fletcher fletcher)
14491 (list ’miller miller)
14492 (list ’smith smith))))
14493 Evaluating the expression (multiple-dwelling) produces the result
14494 ((baker 3) (cooper 2) (fletcher 4) (miller 5) (smith 1))
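Multiple-dwelling relies on a distinct? predicate that tests whether the elements of a list are
pairwise distinct. This is ordinary Scheme; a minimal sketch (true and false as in the
evaluator’s underlying language):

(define (distinct? items)
  (cond ((null? items) true)
        ((null? (cdr items)) true)
        ((member (car items) (cdr items)) false)
        (else (distinct? (cdr items)))))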
14495 Although this simple procedure works, it is very slow. Exercises 4.39 and 4.40 discuss some possible
14496 improvements.
14497 Exercise 4.38. Modify the multiple-dwelling procedure to omit the requirement that Smith and
14498 Fletcher do not live on adjacent floors. How many solutions are there to this modified puzzle?
14499 Exercise 4.39. Does the order of the restrictions in the multiple-dwelling procedure affect the answer?
14500 Does it affect the time to find an answer? If you think it matters, demonstrate a faster program
14501 obtained from the given one by reordering the restrictions. If you think it does not matter, argue your
14502 case.
14503
Exercise 4.40. In the multiple dwelling problem, how many sets of assignments are there of people to
14505 floors, both before and after the requirement that floor assignments be distinct? It is very inefficient to
14506 generate all possible assignments of people to floors and then leave it to backtracking to eliminate
14507 them. For example, most of the restrictions depend on only one or two of the person-floor variables,
14508 and can thus be imposed before floors have been selected for all the people. Write and demonstrate a
14509 much more efficient nondeterministic procedure that solves this problem based upon generating only
14510 those possibilities that are not already ruled out by previous restrictions. (Hint: This will require a nest
14511 of let expressions.)
14512 Exercise 4.41. Write an ordinary Scheme program to solve the multiple dwelling puzzle.
14513 Exercise 4.42. Solve the following ‘‘Liars’’ puzzle (from Phillips 1934):
14514 Five schoolgirls sat for an examination. Their parents -- so they thought -- showed an undue
14515 degree of interest in the result. They therefore agreed that, in writing home about the examination,
14516 each girl should make one true statement and one untrue one. The following are the relevant
14517 passages from their letters:
14518 Betty: ‘‘Kitty was second in the examination. I was only third.’’
14519 Ethel: ‘‘You’ll be glad to hear that I was on top. Joan was second.’’
14520 Joan: ‘‘I was third, and poor old Ethel was bottom.’’
14521 Kitty: ‘‘I came out second. Mary was only fourth.’’
14522 Mary: ‘‘I was fourth. Top place was taken by Betty.’’
14523 What in fact was the order in which the five girls were placed?
14524 Exercise 4.43. Use the amb evaluator to solve the following puzzle: 49
14525 Mary Ann Moore’s father has a yacht and so has each of his four friends: Colonel Downing, Mr.
14526 Hall, Sir Barnacle Hood, and Dr. Parker. Each of the five also has one daughter and each has
14527 named his yacht after a daughter of one of the others. Sir Barnacle’s yacht is the Gabrielle, Mr.
14528 Moore owns the Lorna; Mr. Hall the Rosalind. The Melissa, owned by Colonel Downing, is
14529 named after Sir Barnacle’s daughter. Gabrielle’s father owns the yacht that is named after Dr.
14530 Parker’s daughter. Who is Lorna’s father?
14531 Try to write the program so that it runs efficiently (see exercise 4.40). Also determine how many
14532 solutions there are if we are not told that Mary Ann’s last name is Moore.
14533 Exercise 4.44. Exercise 2.42 described the ‘‘eight-queens puzzle’’ of placing queens on a chessboard
14534 so that no two attack each other. Write a nondeterministic program to solve this puzzle.
14535
14536 Parsing natural language
14537 Programs designed to accept natural language as input usually start by attempting to parse the input,
14538 that is, to match the input against some grammatical structure. For example, we might try to recognize
14539 simple sentences consisting of an article followed by a noun followed by a verb, such as ‘‘The cat
14540 eats.’’ To accomplish such an analysis, we must be able to identify the parts of speech of individual
14541 words. We could start with some lists that classify various words: 50
14542 (define nouns ’(noun student professor cat class))
14543 (define verbs ’(verb studies lectures eats sleeps))
14544 (define articles ’(article the a))
14545
We also need a grammar, that is, a set of rules describing how grammatical elements are composed
14547 from simpler elements. A very simple grammar might stipulate that a sentence always consists of two
14548 pieces -- a noun phrase followed by a verb -- and that a noun phrase consists of an article followed by
14549 a noun. With this grammar, the sentence ‘‘The cat eats’’ is parsed as follows:
14550 (sentence (noun-phrase (article the) (noun cat))
14551 (verb eats))
14552 We can generate such a parse with a simple program that has separate procedures for each of the
14553 grammatical rules. To parse a sentence, we identify its two constituent pieces and return a list of these
14554 two elements, tagged with the symbol sentence:
14555 (define (parse-sentence)
14556 (list ’sentence
14557 (parse-noun-phrase)
14558 (parse-word verbs)))
14559 A noun phrase, similarly, is parsed by finding an article followed by a noun:
14560 (define (parse-noun-phrase)
14561 (list ’noun-phrase
14562 (parse-word articles)
14563 (parse-word nouns)))
14564 At the lowest level, parsing boils down to repeatedly checking that the next unparsed word is a
14565 member of the list of words for the required part of speech. To implement this, we maintain a global
14566 variable *unparsed*, which is the input that has not yet been parsed. Each time we check a word,
14567 we require that *unparsed* must be non-empty and that it should begin with a word from the
14568 designated list. If so, we remove that word from *unparsed* and return the word together with its
14569 part of speech (which is found at the head of the list): 51
14570 (define (parse-word word-list)
14571 (require (not (null? *unparsed*)))
14572 (require (memq (car *unparsed*) (cdr word-list)))
14573 (let ((found-word (car *unparsed*)))
14574 (set! *unparsed* (cdr *unparsed*))
14575 (list (car word-list) found-word)))
14576 To start the parsing, all we need to do is set *unparsed* to be the entire input, try to parse a
14577 sentence, and check that nothing is left over:
14578 (define *unparsed* ’())
14579 (define (parse input)
14580 (set! *unparsed* input)
14581 (let ((sent (parse-sentence)))
14582 (require (null? *unparsed*))
14583 sent))
14584 We can now try the parser and verify that it works for our simple test sentence:
14585
;;; Amb-Eval input:
14587 (parse ’(the cat eats))
14588 ;;; Starting a new problem
14589 ;;; Amb-Eval value:
14590 (sentence (noun-phrase (article the) (noun cat)) (verb eats))
14591 The amb evaluator is useful here because it is convenient to express the parsing constraints with the
14592 aid of require. Automatic search and backtracking really pay off, however, when we consider more
14593 complex grammars where there are choices for how the units can be decomposed.
14594 Let’s add to our grammar a list of prepositions:
14595 (define prepositions ’(prep for to in by with))
14596 and define a prepositional phrase (e.g., ‘‘for the cat’’) to be a preposition followed by a noun phrase:
14597 (define (parse-prepositional-phrase)
14598 (list ’prep-phrase
14599 (parse-word prepositions)
14600 (parse-noun-phrase)))
14601 Now we can define a sentence to be a noun phrase followed by a verb phrase, where a verb phrase can
14602 be either a verb or a verb phrase extended by a prepositional phrase: 52
14603 (define (parse-sentence)
14604 (list ’sentence
14605 (parse-noun-phrase)
14606 (parse-verb-phrase)))
14607 (define (parse-verb-phrase)
14608 (define (maybe-extend verb-phrase)
14609 (amb verb-phrase
14610 (maybe-extend (list ’verb-phrase
14611 verb-phrase
14612 (parse-prepositional-phrase)))))
14613 (maybe-extend (parse-word verbs)))
14614 While we’re at it, we can also elaborate the definition of noun phrases to permit such things as ‘‘a cat
14615 in the class.’’ What we used to call a noun phrase, we’ll now call a simple noun phrase, and a noun
14616 phrase will now be either a simple noun phrase or a noun phrase extended by a prepositional phrase:
14617 (define (parse-simple-noun-phrase)
14618 (list ’simple-noun-phrase
14619 (parse-word articles)
14620 (parse-word nouns)))
14621 (define (parse-noun-phrase)
14622 (define (maybe-extend noun-phrase)
14623 (amb noun-phrase
14624 (maybe-extend (list ’noun-phrase
14625 noun-phrase
14626 (parse-prepositional-phrase)))))
14627 (maybe-extend (parse-simple-noun-phrase)))
14628
Our new grammar lets us parse more complex sentences. For example
14630 (parse ’(the student with the cat sleeps in the class))
14631 produces
14632 (sentence
14633 (noun-phrase
14634 (simple-noun-phrase (article the) (noun student))
14635 (prep-phrase (prep with)
14636 (simple-noun-phrase
14637 (article the) (noun cat))))
14638 (verb-phrase
14639 (verb sleeps)
14640 (prep-phrase (prep in)
14641 (simple-noun-phrase
14642 (article the) (noun class)))))
14643 Observe that a given input may have more than one legal parse. In the sentence ‘‘The professor
14644 lectures to the student with the cat,’’ it may be that the professor is lecturing with the cat, or that the
14645 student has the cat. Our nondeterministic program finds both possibilities:
14646 (parse ’(the professor lectures to the student with the cat))
14647 produces
14648 (sentence
14649 (simple-noun-phrase (article the) (noun professor))
14650 (verb-phrase
14651 (verb-phrase
14652 (verb lectures)
14653 (prep-phrase (prep to)
14654 (simple-noun-phrase
14655 (article the) (noun student))))
14656 (prep-phrase (prep with)
14657 (simple-noun-phrase
14658 (article the) (noun cat)))))
14659 Asking the evaluator to try again yields
14660 (sentence
14661 (simple-noun-phrase (article the) (noun professor))
14662 (verb-phrase
14663 (verb lectures)
14664 (prep-phrase (prep to)
14665 (noun-phrase
14666 (simple-noun-phrase
14667 (article the) (noun student))
14668 (prep-phrase (prep with)
14669 (simple-noun-phrase
14670 (article the) (noun cat)))))))
14671
Exercise 4.45. With the grammar given above, the following sentence can be parsed in five different
14673 ways: ‘‘The professor lectures to the student in the class with the cat.’’ Give the five parses and
14674 explain the differences in shades of meaning among them.
14675 Exercise 4.46. The evaluators in sections 4.1 and 4.2 do not determine what order operands are
14676 evaluated in. We will see that the amb evaluator evaluates them from left to right. Explain why our
14677 parsing program wouldn’t work if the operands were evaluated in some other order.
14678 Exercise 4.47. Louis Reasoner suggests that, since a verb phrase is either a verb or a verb phrase
14679 followed by a prepositional phrase, it would be much more straightforward to define the procedure
14680 parse-verb-phrase as follows (and similarly for noun phrases):
14681 (define (parse-verb-phrase)
14682 (amb (parse-word verbs)
14683 (list ’verb-phrase
14684 (parse-verb-phrase)
14685 (parse-prepositional-phrase))))
14686 Does this work? Does the program’s behavior change if we interchange the order of expressions in the
14687 amb?
14688 Exercise 4.48. Extend the grammar given above to handle more complex sentences. For example, you
14689 could extend noun phrases and verb phrases to include adjectives and adverbs, or you could handle
14690 compound sentences. 53
14691 Exercise 4.49. Alyssa P. Hacker is more interested in generating interesting sentences than in parsing
14692 them. She reasons that by simply changing the procedure parse-word so that it ignores the ‘‘input
14693 sentence’’ and instead always succeeds and generates an appropriate word, we can use the programs
14694 we had built for parsing to do generation instead. Implement Alyssa’s idea, and show the first
14695 half-dozen or so sentences generated. 54
14696
14697 4.3.3 Implementing the Amb Evaluator
14698 The evaluation of an ordinary Scheme expression may return a value, may never terminate, or may
14699 signal an error. In nondeterministic Scheme the evaluation of an expression may in addition result in
14700 the discovery of a dead end, in which case evaluation must backtrack to a previous choice point. The
14701 interpretation of nondeterministic Scheme is complicated by this extra case.
14702 We will construct the amb evaluator for nondeterministic Scheme by modifying the analyzing
14703 evaluator of section 4.1.7. 55 As in the analyzing evaluator, evaluation of an expression is
14704 accomplished by calling an execution procedure produced by analysis of that expression. The
14705 difference between the interpretation of ordinary Scheme and the interpretation of nondeterministic
14706 Scheme will be entirely in the execution procedures.
14707
14708 Execution procedures and continuations
14709 Recall that the execution procedures for the ordinary evaluator take one argument: the environment of
14710 execution. In contrast, the execution procedures in the amb evaluator take three arguments: the
14711 environment, and two procedures called continuation procedures. The evaluation of an expression will
14712 finish by calling one of these two continuations: If the evaluation results in a value, the success
14713 continuation is called with that value; if the evaluation results in the discovery of a dead end, the
14714 failure continuation is called. Constructing and calling appropriate continuations is the mechanism by
which the nondeterministic evaluator implements backtracking.
14717 It is the job of the success continuation to receive a value and proceed with the computation. Along
14718 with that value, the success continuation is passed another failure continuation, which is to be called
14719 subsequently if the use of that value leads to a dead end.
14720 It is the job of the failure continuation to try another branch of the nondeterministic process. The
14721 essence of the nondeterministic language is in the fact that expressions may represent choices among
14722 alternatives. The evaluation of such an expression must proceed with one of the indicated alternative
14723 choices, even though it is not known in advance which choices will lead to acceptable results. To deal
14724 with this, the evaluator picks one of the alternatives and passes this value to the success continuation.
14725 Together with this value, the evaluator constructs and passes along a failure continuation that can be
14726 called later to choose a different alternative.
14727 A failure is triggered during evaluation (that is, a failure continuation is called) when a user program
14728 explicitly rejects the current line of attack (for example, a call to require may result in execution of
14729 (amb), an expression that always fails -- see section 4.3.1). The failure continuation in hand at that
14730 point will cause the most recent choice point to choose another alternative. If there are no more
14731 alternatives to be considered at that choice point, a failure at an earlier choice point is triggered, and so
14732 on. Failure continuations are also invoked by the driver loop in response to a try-again request, to
14733 find another value of the expression.
14734 In addition, if a side-effect operation (such as assignment to a variable) occurs on a branch of the
14735 process resulting from a choice, it may be necessary, when the process finds a dead end, to undo the
14736 side effect before making a new choice. This is accomplished by having the side-effect operation
14737 produce a failure continuation that undoes the side effect and propagates the failure.
14738 In summary, failure continuations are constructed by
14739 amb expressions -- to provide a mechanism to make alternative choices if the current choice
14740 made by the amb expression leads to a dead end;
14741 the top-level driver -- to provide a mechanism to report failure when the choices are exhausted;
14742 assignments -- to intercept failures and undo assignments during backtracking.
14743 Failures are initiated only when a dead end is encountered. This occurs
14744 if the user program executes (amb);
14745 if the user types try-again at the top-level driver.
14746 Failure continuations are also called during processing of a failure:
14747 When the failure continuation created by an assignment finishes undoing a side effect, it calls the
14748 failure continuation it intercepted, in order to propagate the failure back to the choice point that
14749 led to this assignment or to the top level.
14750 When the failure continuation for an amb runs out of choices, it calls the failure continuation that
14751 was originally given to the amb, in order to propagate the failure back to the previous choice
14752 point or to the top level.
14753
Structure of the evaluator
14755 The syntax- and data-representation procedures for the amb evaluator, and also the basic analyze
14756 procedure, are identical to those in the evaluator of section 4.1.7, except for the fact that we need
14757 additional syntax procedures to recognize the amb special form: 56
14758 (define (amb? exp) (tagged-list? exp ’amb))
14759 (define (amb-choices exp) (cdr exp))
14760 We must also add to the dispatch in analyze a clause that will recognize this special form and
14761 generate an appropriate execution procedure:
14762 ((amb? exp) (analyze-amb exp))
14763 The top-level procedure ambeval (similar to the version of eval given in section 4.1.7) analyzes the
14764 given expression and applies the resulting execution procedure to the given environment, together with
14765 two given continuations:
14766 (define (ambeval exp env succeed fail)
14767 ((analyze exp) env succeed fail))
14768 A success continuation is a procedure of two arguments: the value just obtained and another failure
14769 continuation to be used if that value leads to a subsequent failure. A failure continuation is a procedure
14770 of no arguments. So the general form of an execution procedure is
(lambda (env succeed fail)
  ;; succeed is (lambda (value fail) ...)
  ;; fail is (lambda () ...)
  ...)
14775 For example, executing
(ambeval <exp>
         the-global-environment
         (lambda (value fail) value)
         (lambda () 'failed))
14780 will attempt to evaluate the given expression and will return either the expression’s value (if the
14781 evaluation succeeds) or the symbol failed (if the evaluation fails). The call to ambeval in the
14782 driver loop shown below uses much more complicated continuation procedures, which continue the
14783 loop and support the try-again request.
14784 Most of the complexity of the amb evaluator results from the mechanics of passing the continuations
14785 around as the execution procedures call each other. In going through the following code, you should
14786 compare each of the execution procedures with the corresponding procedure for the ordinary evaluator
14787 given in section 4.1.7.
14788
14789 Simple expressions
14790 The execution procedures for the simplest kinds of expressions are essentially the same as those for the
14791 ordinary evaluator, except for the need to manage the continuations. The execution procedures simply
succeed with the value of the expression, passing along the failure continuation that was passed to
them.
(define (analyze-self-evaluating exp)
  (lambda (env succeed fail)
    (succeed exp fail)))
(define (analyze-quoted exp)
  (let ((qval (text-of-quotation exp)))
    (lambda (env succeed fail)
      (succeed qval fail))))
(define (analyze-variable exp)
  (lambda (env succeed fail)
    (succeed (lookup-variable-value exp env)
             fail)))
(define (analyze-lambda exp)
  (let ((vars (lambda-parameters exp))
        (bproc (analyze-sequence (lambda-body exp))))
    (lambda (env succeed fail)
      (succeed (make-procedure vars bproc env)
               fail))))
14812 Notice that looking up a variable always ‘‘succeeds.’’ If lookup-variable-value fails to find
14813 the variable, it signals an error, as usual. Such a ‘‘failure’’ indicates a program bug -- a reference to an
14814 unbound variable; it is not an indication that we should try another nondeterministic choice instead of
14815 the one that is currently being tried.
14816
14817 Conditionals and sequences
Conditionals are also handled in much the same way as in the ordinary evaluator. The execution procedure
14819 generated by analyze-if invokes the predicate execution procedure pproc with a success
14820 continuation that checks whether the predicate value is true and goes on to execute either the
14821 consequent or the alternative. If the execution of pproc fails, the original failure continuation for the
14822 if expression is called.
(define (analyze-if exp)
  (let ((pproc (analyze (if-predicate exp)))
        (cproc (analyze (if-consequent exp)))
        (aproc (analyze (if-alternative exp))))
    (lambda (env succeed fail)
      (pproc env
             ;; success continuation for evaluating the predicate
             ;; to obtain pred-value
             (lambda (pred-value fail2)
               (if (true? pred-value)
                   (cproc env succeed fail2)
                   (aproc env succeed fail2)))
             ;; failure continuation for evaluating the predicate
             fail))))
14837 Sequences are also handled in the same way as in the previous evaluator, except for the machinations
14838 in the subprocedure sequentially that are required for passing the continuations. Namely, to
14839 sequentially execute a and then b, we call a with a success continuation that calls b.
14840
(define (analyze-sequence exps)
  (define (sequentially a b)
    (lambda (env succeed fail)
      (a env
         ;; success continuation for calling a
         (lambda (a-value fail2)
           (b env succeed fail2))
         ;; failure continuation for calling a
         fail)))
  (define (loop first-proc rest-procs)
    (if (null? rest-procs)
        first-proc
        (loop (sequentially first-proc (car rest-procs))
              (cdr rest-procs))))
  (let ((procs (map analyze exps)))
    (if (null? procs)
        (error "Empty sequence -- ANALYZE"))
    (loop (car procs) (cdr procs))))
14859
14860 Definitions and assignments
14861 Definitions are another case where we must go to some trouble to manage the continuations, because it
14862 is necessary to evaluate the definition-value expression before actually defining the new variable. To
14863 accomplish this, the definition-value execution procedure vproc is called with the environment, a
14864 success continuation, and the failure continuation. If the execution of vproc succeeds, obtaining a
14865 value val for the defined variable, the variable is defined and the success is propagated:
(define (analyze-definition exp)
  (let ((var (definition-variable exp))
        (vproc (analyze (definition-value exp))))
    (lambda (env succeed fail)
      (vproc env
             (lambda (val fail2)
               (define-variable! var val env)
               (succeed 'ok fail2))
             fail))))
14875 Assignments are more interesting. This is the first place where we really use the continuations, rather
14876 than just passing them around. The execution procedure for assignments starts out like the one for
14877 definitions. It first attempts to obtain the new value to be assigned to the variable. If this evaluation of
14878 vproc fails, the assignment fails.
14879 If vproc succeeds, however, and we go on to make the assignment, we must consider the possibility
14880 that this branch of the computation might later fail, which will require us to backtrack out of the
14881 assignment. Thus, we must arrange to undo the assignment as part of the backtracking process. 57
14882 This is accomplished by giving vproc a success continuation (marked with the comment ‘‘*1*’’
14883 below) that saves the old value of the variable before assigning the new value to the variable and
14884 proceeding from the assignment. The failure continuation that is passed along with the value of the
14885 assignment (marked with the comment ‘‘*2*’’ below) restores the old value of the variable before
14886 continuing the failure. That is, a successful assignment provides a failure continuation that will
intercept a subsequent failure; whatever failure would otherwise have called fail2 calls this
14889 procedure instead, to undo the assignment before actually calling fail2.
(define (analyze-assignment exp)
  (let ((var (assignment-variable exp))
        (vproc (analyze (assignment-value exp))))
    (lambda (env succeed fail)
      (vproc env
             (lambda (val fail2)        ; *1*
               (let ((old-value
                      (lookup-variable-value var env)))
                 (set-variable-value! var val env)
                 (succeed 'ok
                          (lambda ()    ; *2*
                            (set-variable-value! var
                                                 old-value
                                                 env)
                            (fail2)))))
             fail))))
14908
14909 Procedure applications
14910 The execution procedure for applications contains no new ideas except for the technical complexity of
14911 managing the continuations. This complexity arises in analyze-application, due to the need to
14912 keep track of the success and failure continuations as we evaluate the operands. We use a procedure
14913 get-args to evaluate the list of operands, rather than a simple map as in the ordinary evaluator.
(define (analyze-application exp)
  (let ((fproc (analyze (operator exp)))
        (aprocs (map analyze (operands exp))))
    (lambda (env succeed fail)
      (fproc env
             (lambda (proc fail2)
               (get-args aprocs
                         env
                         (lambda (args fail3)
                           (execute-application
                            proc args succeed fail3))
                         fail2))
             fail))))
14927 In get-args, notice how cdring down the list of aproc execution procedures and consing up the
14928 resulting list of args is accomplished by calling each aproc in the list with a success continuation
14929 that recursively calls get-args. Each of these recursive calls to get-args has a success
14930 continuation whose value is the cons of the newly obtained argument onto the list of accumulated
14931 arguments:
(define (get-args aprocs env succeed fail)
  (if (null? aprocs)
      (succeed '() fail)
      ((car aprocs) env
                    ;; success continuation for this aproc
                    (lambda (arg fail2)
                      (get-args (cdr aprocs)
                                env
                                ;; success continuation for recursive
                                ;; call to get-args
                                (lambda (args fail3)
                                  (succeed (cons arg args)
                                           fail3))
                                fail2))
                    fail)))
14948 The actual procedure application, which is performed by execute-application, is accomplished
14949 in the same way as for the ordinary evaluator, except for the need to manage the continuations.
(define (execute-application proc args succeed fail)
  (cond ((primitive-procedure? proc)
         (succeed (apply-primitive-procedure proc args)
                  fail))
        ((compound-procedure? proc)
         ((procedure-body proc)
          (extend-environment (procedure-parameters proc)
                              args
                              (procedure-environment proc))
          succeed
          fail))
        (else
         (error
          "Unknown procedure type -- EXECUTE-APPLICATION"
          proc))))
14965
14966 Evaluating amb expressions
14967 The amb special form is the key element in the nondeterministic language. Here we see the essence of
14968 the interpretation process and the reason for keeping track of the continuations. The execution
14969 procedure for amb defines a loop try-next that cycles through the execution procedures for all the
14970 possible values of the amb expression. Each execution procedure is called with a failure continuation
14971 that will try the next one. When there are no more alternatives to try, the entire amb expression fails.
(define (analyze-amb exp)
  (let ((cprocs (map analyze (amb-choices exp))))
    (lambda (env succeed fail)
      (define (try-next choices)
        (if (null? choices)
            (fail)
            ((car choices) env
                           succeed
                           (lambda ()
                             (try-next (cdr choices))))))
      (try-next cprocs))))
14983
Driver loop
14985 The driver loop for the amb evaluator is complex, due to the mechanism that permits the user to try
14986 again in evaluating an expression. The driver uses a procedure called internal-loop, which takes
14987 as argument a procedure try-again. The intent is that calling try-again should go on to the next
14988 untried alternative in the nondeterministic evaluation. Internal-loop either calls try-again in
14989 response to the user typing try-again at the driver loop, or else starts a new evaluation by calling
14990 ambeval.
14991 The failure continuation for this call to ambeval informs the user that there are no more values and
14992 re-invokes the driver loop.
14993 The success continuation for the call to ambeval is more subtle. We print the obtained value and then
14994 invoke the internal loop again with a try-again procedure that will be able to try the next
14995 alternative. This next-alternative procedure is the second argument that was passed to the
14996 success continuation. Ordinarily, we think of this second argument as a failure continuation to be used
14997 if the current evaluation branch later fails. In this case, however, we have completed a successful
14998 evaluation, so we can invoke the ‘‘failure’’ alternative branch in order to search for additional
14999 successful evaluations.
(define input-prompt ";;; Amb-Eval input:")
(define output-prompt ";;; Amb-Eval value:")
(define (driver-loop)
  (define (internal-loop try-again)
    (prompt-for-input input-prompt)
    (let ((input (read)))
      (if (eq? input 'try-again)
          (try-again)
          (begin
            (newline)
            (display ";;; Starting a new problem ")
            (ambeval input
                     the-global-environment
                     ;; ambeval success
                     (lambda (val next-alternative)
                       (announce-output output-prompt)
                       (user-print val)
                       (internal-loop next-alternative))
                     ;; ambeval failure
                     (lambda ()
                       (announce-output
                        ";;; There are no more values of")
                       (user-print input)
                       (driver-loop)))))))
  (internal-loop
   (lambda ()
     (newline)
     (display ";;; There is no current problem")
     (driver-loop))))
15029
The initial call to internal-loop uses a try-again procedure that complains that there is no
15031 current problem and restarts the driver loop. This is the behavior that will happen if the user types
15032 try-again when there is no evaluation in progress.
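
For concreteness, here is what a session with this driver loop looks like, assuming the
prime-sum-pair procedure of section 4.3.1 has been defined:
;;; Amb-Eval input:
(prime-sum-pair '(1 3 5 8) '(20 35 110))
;;; Starting a new problem
;;; Amb-Eval value:
(3 20)
;;; Amb-Eval input:
try-again
;;; Amb-Eval value:
(3 110)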
15033 Exercise 4.50. Implement a new special form ramb that is like amb except that it searches
15034 alternatives in a random order, rather than from left to right. Show how this can help with Alyssa’s
15035 problem in exercise 4.49.
15036 Exercise 4.51. Implement a new kind of assignment called permanent-set! that is not undone
15037 upon failure. For example, we can choose two distinct elements from a list and count the number of
15038 trials required to make a successful choice as follows:
(define count 0)
(let ((x (an-element-of '(a b c)))
      (y (an-element-of '(a b c))))
  (permanent-set! count (+ count 1))
  (require (not (eq? x y)))
  (list x y count))
15045 ;;; Starting a new problem
15046 ;;; Amb-Eval value:
15047 (a b 2)
15048 ;;; Amb-Eval input:
15049 try-again
15050 ;;; Amb-Eval value:
15051 (a c 3)
What values would have been displayed if we had used set! here rather than permanent-set!?
15053 Exercise 4.52. Implement a new construct called if-fail that permits the user to catch the failure
15054 of an expression. If-fail takes two expressions. It evaluates the first expression as usual and
15055 returns as usual if the evaluation succeeds. If the evaluation fails, however, the value of the second
15056 expression is returned, as in the following example:
;;; Amb-Eval input:
(if-fail (let ((x (an-element-of '(1 3 5))))
           (require (even? x))
           x)
         'all-odd)
;;; Starting a new problem
;;; Amb-Eval value:
all-odd
;;; Amb-Eval input:
(if-fail (let ((x (an-element-of '(1 3 5 8))))
           (require (even? x))
           x)
         'all-odd)
;;; Starting a new problem
;;; Amb-Eval value:
8
15073
Exercise 4.53. With permanent-set! as described in exercise 4.51 and if-fail as in
exercise 4.52, what will be the result of evaluating
(let ((pairs '()))
  (if-fail (let ((p (prime-sum-pair '(1 3 5 8) '(20 35 110))))
             (permanent-set! pairs (cons p pairs))
             (amb))
           pairs))
15081 Exercise 4.54. If we had not realized that require could be implemented as an ordinary procedure
15082 that uses amb, to be defined by the user as part of a nondeterministic program, we would have had to
15083 implement it as a special form. This would require syntax procedures
(define (require? exp) (tagged-list? exp 'require))
(define (require-predicate exp) (cadr exp))
15086 and a new clause in the dispatch in analyze
15087 ((require? exp) (analyze-require exp))
as well as the procedure analyze-require that handles require expressions. Complete the
15089 following definition of analyze-require.
(define (analyze-require exp)
  (let ((pproc (analyze (require-predicate exp))))
    (lambda (env succeed fail)
      (pproc env
             (lambda (pred-value fail2)
               (if <??>
                   <??>
                   (succeed 'ok fail2)))
             fail))))
42 We assume that we have previously defined a procedure prime? that tests whether numbers are
prime. Even with prime? defined, the prime-sum-pair procedure may look suspiciously like the
unhelpful ‘‘pseudo-Lisp’’ attempt to define the square-root function, which we described at the
beginning of section 1.1.7. In fact, a square-root procedure along those lines can actually be
formulated as a nondeterministic program. By incorporating a search mechanism into the evaluator,
we are eroding the distinction between purely declarative descriptions and imperative specifications of
how to compute answers. We’ll go even farther in this direction in section 4.4.

43 The idea of amb for nondeterministic programming was first described in 1961 by John McCarthy
(see McCarthy 1967).

44 In actuality, the distinction between nondeterministically returning a single choice and returning all
choices depends somewhat on our point of view. From the perspective of the code that uses the value,
the nondeterministic choice returns a single value. From the perspective of the programmer designing
the code, the nondeterministic choice potentially returns all possible values, and the computation
branches so that each value is investigated separately.

45 One might object that this is a hopelessly inefficient mechanism. It might require millions of
processors to solve some easily stated problem this way, and most of the time most of those processors
would be idle. This objection should be taken in the context of history. Memory used to be considered
just such an expensive commodity. In 1964 a megabyte of RAM cost about $400,000. Now every
personal computer has many megabytes of RAM, and most of the time most of that RAM is unused. It
is hard to underestimate the cost of mass-produced electronics.

46 Automagically: ‘‘Automatically, but in a way which, for some reason (typically because it is too
complicated, or too ugly, or perhaps even too trivial), the speaker doesn’t feel like explaining.’’ (Steele
1983, Raymond 1993)

47 The integration of automatic search strategies into programming languages has had a long and
checkered history. The first suggestions that nondeterministic algorithms might be elegantly encoded
in a programming language with search and automatic backtracking came from Robert Floyd (1967).
Carl Hewitt (1969) invented a programming language called Planner that explicitly supported
automatic chronological backtracking, providing for a built-in depth-first search strategy. Sussman,
Winograd, and Charniak (1971) implemented a subset of this language, called MicroPlanner, which
was used to support work in problem solving and robot planning. Similar ideas, arising from logic and
theorem proving, led to the genesis in Edinburgh and Marseille of the elegant language Prolog (which
we will discuss in section 4.4). After sufficient frustration with automatic search, McDermott and
Sussman (1972) developed a language called Conniver, which included mechanisms for placing the
search strategy under programmer control. This proved unwieldy, however, and Sussman and Stallman
(1975) found a more tractable approach while investigating methods of symbolic analysis for electrical
circuits. They developed a non-chronological backtracking scheme that was based on tracing out the
logical dependencies connecting facts, a technique that has come to be known as dependency-directed
backtracking. Although their method was complex, it produced reasonably efficient programs because
it did little redundant search. Doyle (1979) and McAllester (1978, 1980) generalized and clarified the
methods of Stallman and Sussman, developing a new paradigm for formulating search that is now
called truth maintenance. Modern problem-solving systems all use some form of truth-maintenance
system as a substrate. See Forbus and deKleer 1993 for a discussion of elegant ways to build
truth-maintenance systems and applications using truth maintenance. Zabih, McAllester, and Chapman
1987 describes a nondeterministic extension to Scheme that is based on amb; it is similar to the
interpreter described in this section, but more sophisticated, because it uses dependency-directed
backtracking rather than chronological backtracking. Winston 1992 gives an introduction to both kinds
of backtracking.

48 Our program uses the following procedure to determine if the elements of a list are distinct:
(define (distinct? items)
  (cond ((null? items) true)
        ((null? (cdr items)) true)
        ((member (car items) (cdr items)) false)
        (else (distinct? (cdr items)))))
Member is like memq except that it uses equal? instead of eq? to test for equality.

49 This is taken from a booklet called ‘‘Problematical Recreations,’’ published in the 1960s by Litton
Industries, where it is attributed to the Kansas State Engineer.

50 Here we use the convention that the first element of each list designates the part of speech for the
rest of the words in the list.

51 Notice that parse-word uses set! to modify the unparsed input list. For this to work, our amb
evaluator must undo the effects of set! operations when it backtracks.

52 Observe that this definition is recursive -- a verb may be followed by any number of prepositional
phrases.

53 This kind of grammar can become arbitrarily complex, but it is only a toy as far as real language
understanding is concerned. Real natural-language understanding by computer requires an elaborate
mixture of syntactic analysis and interpretation of meaning. On the other hand, even toy parsers can be
useful in supporting flexible command languages for programs such as information-retrieval systems.
Winston 1992 discusses computational approaches to real language understanding and also the
applications of simple grammars to command languages.

54 Although Alyssa’s idea works just fine (and is surprisingly simple), the sentences that it generates
are a bit boring -- they don’t sample the possible sentences of this language in a very interesting way.
In fact, the grammar is highly recursive in many places, and Alyssa’s technique ‘‘falls into’’ one of
these recursions and gets stuck. See exercise 4.50 for a way to deal with this.

55 We chose to implement the lazy evaluator in section 4.2 as a modification of the ordinary
metacircular evaluator of section 4.1.1. In contrast, we will base the amb evaluator on the analyzing
evaluator of section 4.1.7, because the execution procedures in that evaluator provide a convenient
framework for implementing backtracking.

56 We assume that the evaluator supports let (see exercise 4.22), which we have used in our
nondeterministic programs.

57 We didn’t worry about undoing definitions, since we can assume that internal definitions are
scanned out (section 4.1.6).
15201 4.4 Logic Programming
15202 In chapter 1 we stressed that computer science deals with imperative (how to) knowledge, whereas
15203 mathematics deals with declarative (what is) knowledge. Indeed, programming languages require that
15204 the programmer express knowledge in a form that indicates the step-by-step methods for solving
15205 particular problems. On the other hand, high-level languages provide, as part of the language
15206 implementation, a substantial amount of methodological knowledge that frees the user from concern
15207 with numerous details of how a specified computation will progress.
15208 Most programming languages, including Lisp, are organized around computing the values of
15209 mathematical functions. Expression-oriented languages (such as Lisp, Fortran, and Algol) capitalize on
15210 the ‘‘pun’’ that an expression that describes the value of a function may also be interpreted as a means
15211 of computing that value. Because of this, most programming languages are strongly biased toward
15212 unidirectional computations (computations with well-defined inputs and outputs). There are, however,
15213 radically different programming languages that relax this bias. We saw one such example in
15214 section 3.3.5, where the objects of computation were arithmetic constraints. In a constraint system the
15215 direction and the order of computation are not so well specified; in carrying out a computation the
15216 system must therefore provide more detailed ‘‘how to’’ knowledge than would be the case with an
15217 ordinary arithmetic computation. This does not mean, however, that the user is released altogether
15218 from the responsibility of providing imperative knowledge. There are many constraint networks that
15219 implement the same set of constraints, and the user must choose from the set of mathematically
15220 equivalent networks a suitable network to specify a particular computation.
15221 The nondeterministic program evaluator of section 4.3 also moves away from the view that
15222 programming is about constructing algorithms for computing unidirectional functions. In a
15223 nondeterministic language, expressions can have more than one value, and, as a result, the
15224 computation is dealing with relations rather than with single-valued functions. Logic programming
15225 extends this idea by combining a relational vision of programming with a powerful kind of symbolic
15226 pattern matching called unification. 58
15227 This approach, when it works, can be a very powerful way to write programs. Part of the power comes
15228 from the fact that a single ‘‘what is’’ fact can be used to solve a number of different problems that
15229 would have different ‘‘how to’’ components. As an example, consider the append operation, which
15230 takes two lists as arguments and combines their elements to form a single list. In a procedural language
15231 such as Lisp, we could define append in terms of the basic list constructor cons, as we did in
15232 section 2.2.1:
(define (append x y)
  (if (null? x)
      y
      (cons (car x) (append (cdr x) y))))
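For instance, evaluating (append '(a b) '(c d)) produces the list (a b c d).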
15237 This procedure can be regarded as a translation into Lisp of the following two rules, the first of which
15238 covers the case where the first list is empty and the second of which handles the case of a nonempty
15239 list, which is a cons of two parts:
15240 For any list y, the empty list and y append to form y.
15241
For any u, v, y, and z, (cons u v) and y append to form (cons u z) if v and y
15243 append to form z. 59
15244 Using the append procedure, we can answer questions such as
15245 Find the append of (a b) and (c d).
15246 But the same two rules are also sufficient for answering the following sorts of questions, which the
15247 procedure can’t answer:
15248 Find a list y that appends with (a b) to produce (a b c d).
15249 Find all x and y that append to form (a b c d).
15250 In a logic programming language, the programmer writes an append ‘‘procedure’’ by stating the two
15251 rules about append given above. ‘‘How to’’ knowledge is provided automatically by the interpreter
15252 to allow this single pair of rules to be used to answer all three types of questions about append. 60
15253 Contemporary logic programming languages (including the one we implement here) have substantial
15254 deficiencies, in that their general ‘‘how to’’ methods can lead them into spurious infinite loops or other
15255 undesirable behavior. Logic programming is an active field of research in computer science. 61
15256 Earlier in this chapter we explored the technology of implementing interpreters and described the
15257 elements that are essential to an interpreter for a Lisp-like language (indeed, to an interpreter for any
15258 conventional language). Now we will apply these ideas to discuss an interpreter for a logic
15259 programming language. We call this language the query language, because it is very useful for
15260 retrieving information from data bases by formulating queries, or questions, expressed in the language.
15261 Even though the query language is very different from Lisp, we will find it convenient to describe the
15262 language in terms of the same general framework we have been using all along: as a collection of
15263 primitive elements, together with means of combination that enable us to combine simple elements to
15264 create more complex elements and means of abstraction that enable us to regard complex elements as
15265 single conceptual units. An interpreter for a logic programming language is considerably more
15266 complex than an interpreter for a language like Lisp. Nevertheless, we will see that our query-language
15267 interpreter contains many of the same elements found in the interpreter of section 4.1. In particular,
15268 there will be an ‘‘eval’’ part that classifies expressions according to type and an ‘‘apply’’ part that
15269 implements the language’s abstraction mechanism (procedures in the case of Lisp, and rules in the
15270 case of logic programming). Also, a central role is played in the implementation by a frame data
15271 structure, which determines the correspondence between symbols and their associated values. One
15272 additional interesting aspect of our query-language implementation is that we make substantial use of
15273 streams, which were introduced in chapter 3.
15274
15275 4.4.1 Deductive Information Retrieval
15276 Logic programming excels in providing interfaces to data bases for information retrieval. The query
15277 language we shall implement in this chapter is designed to be used in this way.
15278 In order to illustrate what the query system does, we will show how it can be used to manage the data
15279 base of personnel records for Microshaft, a thriving high-technology company in the Boston area. The
15280 language provides pattern-directed access to personnel information and can also take advantage of
15281 general rules in order to make logical deductions.
15282
A sample data base
15284 The personnel data base for Microshaft contains assertions about company personnel. Here is the
15285 information about Ben Bitdiddle, the resident computer wizard:
15286 (address (Bitdiddle Ben) (Slumerville (Ridge Road) 10))
15287 (job (Bitdiddle Ben) (computer wizard))
15288 (salary (Bitdiddle Ben) 60000)
15289 Each assertion is a list (in this case a triple) whose elements can themselves be lists.
15290 As resident wizard, Ben is in charge of the company’s computer division, and he supervises two
15291 programmers and one technician. Here is the information about them:
15292 (address (Hacker Alyssa P) (Cambridge (Mass Ave) 78))
15293 (job (Hacker Alyssa P) (computer programmer))
15294 (salary (Hacker Alyssa P) 40000)
15295 (supervisor (Hacker Alyssa P) (Bitdiddle Ben))
15296 (address (Fect Cy D) (Cambridge (Ames Street) 3))
15297 (job (Fect Cy D) (computer programmer))
15298 (salary (Fect Cy D) 35000)
15299 (supervisor (Fect Cy D) (Bitdiddle Ben))
15300 (address (Tweakit Lem E) (Boston (Bay State Road) 22))
15301 (job (Tweakit Lem E) (computer technician))
15302 (salary (Tweakit Lem E) 25000)
15303 (supervisor (Tweakit Lem E) (Bitdiddle Ben))
15304 There is also a programmer trainee, who is supervised by Alyssa:
15305 (address (Reasoner Louis) (Slumerville (Pine Tree Road) 80))
15306 (job (Reasoner Louis) (computer programmer trainee))
15307 (salary (Reasoner Louis) 30000)
15308 (supervisor (Reasoner Louis) (Hacker Alyssa P))
15309 All of these people are in the computer division, as indicated by the word computer as the first item
15310 in their job descriptions.
15311 Ben is a high-level employee. His supervisor is the company’s big wheel himself:
15312 (supervisor (Bitdiddle Ben) (Warbucks Oliver))
15313 (address (Warbucks Oliver) (Swellesley (Top Heap Road)))
15314 (job (Warbucks Oliver) (administration big wheel))
15315 (salary (Warbucks Oliver) 150000)
15316 Besides the computer division supervised by Ben, the company has an accounting division, consisting
15317 of a chief accountant and his assistant:
15318 (address (Scrooge Eben) (Weston (Shady Lane) 10))
15319 (job (Scrooge Eben) (accounting chief accountant))
15320 (salary (Scrooge Eben) 75000)
15321 (supervisor (Scrooge Eben) (Warbucks Oliver))
15322 (address (Cratchet Robert) (Allston (N Harvard Street) 16))
(job (Cratchet Robert) (accounting scrivener))
15325 (salary (Cratchet Robert) 18000)
15326 (supervisor (Cratchet Robert) (Scrooge Eben))
15327 There is also a secretary for the big wheel:
15328 (address (Aull DeWitt) (Slumerville (Onion Square) 5))
15329 (job (Aull DeWitt) (administration secretary))
15330 (salary (Aull DeWitt) 25000)
15331 (supervisor (Aull DeWitt) (Warbucks Oliver))
15332 The data base also contains assertions about which kinds of jobs can be done by people holding other
15333 kinds of jobs. For instance, a computer wizard can do the jobs of both a computer programmer and a
15334 computer technician:
15335 (can-do-job (computer wizard) (computer programmer))
15336 (can-do-job (computer wizard) (computer technician))
15337 A computer programmer could fill in for a trainee:
(can-do-job (computer programmer)
            (computer programmer trainee))
Also, as is well known,
(can-do-job (administration secretary)
            (administration big wheel))
15343
15344 Simple queries
15345 The query language allows users to retrieve information from the data base by posing queries in
15346 response to the system’s prompt. For example, to find all computer programmers one can say
15347 ;;; Query input:
15348 (job ?x (computer programmer))
15349 The system will respond with the following items:
15350 ;;; Query results:
15351 (job (Hacker Alyssa P) (computer programmer))
15352 (job (Fect Cy D) (computer programmer))
15353 The input query specifies that we are looking for entries in the data base that match a certain pattern.
15354 In this example, the pattern specifies entries consisting of three items, of which the first is the literal
15355 symbol job, the second can be anything, and the third is the literal list (computer
15356 programmer). The ‘‘anything’’ that can be the second item in the matching list is specified by a
15357 pattern variable, ?x. The general form of a pattern variable is a symbol, taken to be the name of the
15358 variable, preceded by a question mark. We will see below why it is useful to specify names for pattern
15359 variables rather than just putting ? into patterns to represent ‘‘anything.’’ The system responds to a
15360 simple query by showing all entries in the data base that match the specified pattern.
15361
A pattern can have more than one variable. For example, the query
15363 (address ?x ?y)
15364 will list all the employees’ addresses.
15365 A pattern can have no variables, in which case the query simply determines whether that pattern is an
15366 entry in the data base. If so, there will be one match; if not, there will be no matches.
15367 The same pattern variable can appear more than once in a query, specifying that the same ‘‘anything’’
15368 must appear in each position. This is why variables have names. For example,
15369 (supervisor ?x ?x)
15370 finds all people who supervise themselves (though there are no such assertions in our sample data
15371 base).
15372 The query
15373 (job ?x (computer ?type))
15374 matches all job entries whose third item is a two-element list whose first item is computer:
(job (Bitdiddle Ben) (computer wizard))
(job (Hacker Alyssa P) (computer programmer))
(job (Fect Cy D) (computer programmer))
(job (Tweakit Lem E) (computer technician))
15384
15385 This same pattern does not match
15386 (job (Reasoner Louis) (computer programmer trainee))
15387 because the third item in the entry is a list of three elements, and the pattern’s third item specifies that
15388 there should be two elements. If we wanted to change the pattern so that the third item could be any
15389 list beginning with computer, we could specify 62
15390 (job ?x (computer . ?type))
15391 For example,
15392 (computer . ?type)
15393 matches the data
15394 (computer programmer trainee)
15395 with ?type as the list (programmer trainee). It also matches the data
15396 (computer programmer)
15397 with ?type as the list (programmer), and matches the data
15398
(computer)
15400 with ?type as the empty list ().
15401 We can describe the query language’s processing of simple queries as follows:
15402 The system finds all assignments to variables in the query pattern that satisfy the pattern -- that is,
15403 all sets of values for the variables such that if the pattern variables are instantiated with (replaced
15404 by) the values, the result is in the data base.
15405 The system responds to the query by listing all instantiations of the query pattern with the
15406 variable assignments that satisfy it.
15407 Note that if the pattern has no variables, the query reduces to a determination of whether that pattern is
15408 in the data base. If so, the empty assignment, which assigns no values to variables, satisfies that pattern
15409 for that data base.
15410 Exercise 4.55. Give simple queries that retrieve the following information from the data base:
15411 a. all people supervised by Ben Bitdiddle;
15412 b. the names and jobs of all people in the accounting division;
15413 c. the names and addresses of all people who live in Slumerville.
15414
15415 Compound queries
15416 Simple queries form the primitive operations of the query language. In order to form compound
15417 operations, the query language provides means of combination. One thing that makes the query
15418 language a logic programming language is that the means of combination mirror the means of
15419 combination used in forming logical expressions: and, or, and not. (Here and, or, and not are not
15420 the Lisp primitives, but rather operations built into the query language.)
15421 We can use and as follows to find the addresses of all the computer programmers:
15422 (and (job ?person (computer programmer))
15423 (address ?person ?where))
15424 The resulting output is
15425 (and (job (Hacker Alyssa P) (computer programmer))
15426 (address (Hacker Alyssa P) (Cambridge (Mass Ave) 78)))
15427 (and (job (Fect Cy D) (computer programmer))
15428 (address (Fect Cy D) (Cambridge (Ames Street) 3)))
15429 In general,
(and <query1> <query2> ... <queryn>)
is satisfied by all sets of values for the pattern variables that simultaneously satisfy <query1> ...
<queryn>.
15433
As for simple queries, the system processes a compound query by finding all assignments to the
15435 pattern variables that satisfy the query, then displaying instantiations of the query with those values.
15436 Another means of constructing compound queries is through or. For example,
15437 (or (supervisor ?x (Bitdiddle Ben))
15438 (supervisor ?x (Hacker Alyssa P)))
15439 will find all employees supervised by Ben Bitdiddle or Alyssa P. Hacker:
(or (supervisor (Hacker Alyssa P) (Bitdiddle Ben))
    (supervisor (Hacker Alyssa P) (Hacker Alyssa P)))
(or (supervisor (Fect Cy D) (Bitdiddle Ben))
    (supervisor (Fect Cy D) (Hacker Alyssa P)))
(or (supervisor (Tweakit Lem E) (Bitdiddle Ben))
    (supervisor (Tweakit Lem E) (Hacker Alyssa P)))
(or (supervisor (Reasoner Louis) (Bitdiddle Ben))
    (supervisor (Reasoner Louis) (Hacker Alyssa P)))
15457
15458 In general,
(or <query1> <query2> ... <queryn>)
is satisfied by all sets of values for the pattern variables that satisfy at least one of <query1> ...
<queryn>.
15462 Compound queries can also be formed with not. For example,
15463 (and (supervisor ?x (Bitdiddle Ben))
15464 (not (job ?x (computer programmer))))
15465 finds all people supervised by Ben Bitdiddle who are not computer programmers. In general,
(not <query1>)
is satisfied by all assignments to the pattern variables that do not satisfy <query1>. 63
15468 The final combining form is called lisp-value. When lisp-value is the first element of a
15469 pattern, it specifies that the next element is a Lisp predicate to be applied to the rest of the
15470 (instantiated) elements as arguments. In general,
(lisp-value <predicate> <arg1> ... <argn>)
will be satisfied by assignments to the pattern variables for which the <predicate> applied to the
instantiated <arg1> ... <argn> is true. For example, to find all people whose salary is greater than
15474 $30,000 we could write 64
(and (salary ?person ?amount)
     (lisp-value > ?amount 30000))
15477
Exercise 4.56. Formulate compound queries that retrieve the following information:
15479 a. the names of all people who are supervised by Ben Bitdiddle, together with their addresses;
15480 b. all people whose salary is less than Ben Bitdiddle’s, together with their salary and Ben Bitdiddle’s
15481 salary;
15482 c. all people who are supervised by someone who is not in the computer division, together with the
15483 supervisor’s name and job.
15484
15485 Rules
15486 In addition to primitive queries and compound queries, the query language provides means for
15487 abstracting queries. These are given by rules. The rule
(rule (lives-near ?person-1 ?person-2)
      (and (address ?person-1 (?town . ?rest-1))
           (address ?person-2 (?town . ?rest-2))
           (not (same ?person-1 ?person-2))))
15492 specifies that two people live near each other if they live in the same town. The final not clause
15493 prevents the rule from saying that all people live near themselves. The same relation is defined by a
15494 very simple rule: 65
15495 (rule (same ?x ?x))
15496 The following rule declares that a person is a ‘‘wheel’’ in an organization if he supervises someone
15497 who is in turn a supervisor:
(rule (wheel ?person)
      (and (supervisor ?middle-manager ?person)
           (supervisor ?x ?middle-manager)))
15501 The general form of a rule is
15502 (rule <conclusion> <body>)
15503 where <conclusion> is a pattern and <body> is any query. 66 We can think of a rule as representing a
15504 large (even infinite) set of assertions, namely all instantiations of the rule conclusion with variable
15505 assignments that satisfy the rule body. When we described simple queries (patterns), we said that an
15506 assignment to variables satisfies a pattern if the instantiated pattern is in the data base. But the pattern
15507 needn’t be explicitly in the data base as an assertion. It can be an implicit assertion implied by a rule.
15508 For example, the query
15509 (lives-near ?x (Bitdiddle Ben))
15510 results in
15511 (lives-near (Reasoner Louis) (Bitdiddle Ben))
15512 (lives-near (Aull DeWitt) (Bitdiddle Ben))
15513
To find all computer programmers who live near Ben Bitdiddle, we can ask
(and (job ?x (computer programmer))
     (lives-near ?x (Bitdiddle Ben)))
15517 As in the case of compound procedures, rules can be used as parts of other rules (as we saw with the
15518 lives-near rule above) or even be defined recursively. For instance, the rule
(rule (outranked-by ?staff-person ?boss)
      (or (supervisor ?staff-person ?boss)
          (and (supervisor ?staff-person ?middle-manager)
               (outranked-by ?middle-manager ?boss))))
15523 says that a staff person is outranked by a boss in the organization if the boss is the person’s supervisor
15524 or (recursively) if the person’s supervisor is outranked by the boss.
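For example, with the sample data base above, the query
(outranked-by (Cratchet Robert) ?who)
produces (in some order)
(outranked-by (Cratchet Robert) (Scrooge Eben))
(outranked-by (Cratchet Robert) (Warbucks Oliver))
since Cratchet’s supervisor is Scrooge, whose supervisor is in turn Warbucks.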
15525 Exercise 4.57. Define a rule that says that person 1 can replace person 2 if either person 1 does the
15526 same job as person 2 or someone who does person 1’s job can also do person 2’s job, and if person 1
15527 and person 2 are not the same person. Using your rule, give queries that find the following:
15528 a. all people who can replace Cy D. Fect;
15529 b. all people who can replace someone who is being paid more than they are, together with the two
15530 salaries.
15531 Exercise 4.58. Define a rule that says that a person is a ‘‘big shot’’ in a division if the person works in
15532 the division but does not have a supervisor who works in the division.
15533 Exercise 4.59. Ben Bitdiddle has missed one meeting too many. Fearing that his habit of forgetting
15534 meetings could cost him his job, Ben decides to do something about it. He adds all the weekly
15535 meetings of the firm to the Microshaft data base by asserting the following:
(meeting accounting (Monday 9am))
(meeting administration (Monday 10am))
(meeting computer (Wednesday 3pm))
(meeting administration (Friday 1pm))
15545
15546 Each of the above assertions is for a meeting of an entire division. Ben also adds an entry for the
15547 company-wide meeting that spans all the divisions. All of the company’s employees attend this
15548 meeting.
15549 (meeting whole-company (Wednesday 4pm))
15550 a. On Friday morning, Ben wants to query the data base for all the meetings that occur that day. What
15551 query should he use?
15552 b. Alyssa P. Hacker is unimpressed. She thinks it would be much more useful to be able to ask for her
15553 meetings by specifying her name. So she designs a rule that says that a person’s meetings include all
15554 whole-company meetings plus all meetings of that person’s division. Fill in the body of Alyssa’s
15555 rule.
15556
(rule (meeting-time ?person ?day-and-time)
      <rule-body>)
15559 c. Alyssa arrives at work on Wednesday morning and wonders what meetings she has to attend that
15560 day. Having defined the above rule, what query should she make to find this out?
15561 Exercise 4.60. By giving the query
15562 (lives-near ?person (Hacker Alyssa P))
15563 Alyssa P. Hacker is able to find people who live near her, with whom she can ride to work. On the
15564 other hand, when she tries to find all pairs of people who live near each other by querying
15565 (lives-near ?person-1 ?person-2)
15566 she notices that each pair of people who live near each other is listed twice; for example,
15567 (lives-near (Hacker Alyssa P) (Fect Cy D))
15568 (lives-near (Fect Cy D) (Hacker Alyssa P))
15569 Why does this happen? Is there a way to find a list of people who live near each other, in which each
15570 pair appears only once? Explain.
15571
15572 Logic as programs
15573 We can regard a rule as a kind of logical implication: If an assignment of values to pattern variables
15574 satisfies the body, then it satisfies the conclusion. Consequently, we can regard the query language as
15575 having the ability to perform logical deductions based upon the rules. As an example, consider the
15576 append operation described at the beginning of section 4.4. As we said, append can be
15577 characterized by the following two rules:
15578 For any list y, the empty list and y append to form y.
15579 For any u, v, y, and z, (cons u v) and y append to form (cons u z) if v and y
15580 append to form z.
15581 To express this in our query language, we define two rules for a relation
15582 (append-to-form x y z)
15583 which we can interpret to mean ‘‘x and y append to form z’’:
(rule (append-to-form () ?y ?y))
(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
15587 The first rule has no body, which means that the conclusion holds for any value of ?y. Note how the
15588 second rule makes use of dotted-tail notation to name the car and cdr of a list.
15589 Given these two rules, we can formulate queries that compute the append of two lists:
15590
;;; Query input:
15592 (append-to-form (a b) (c d) ?z)
15593 ;;; Query results:
15594 (append-to-form (a b) (c d) (a b c d))
15595 What is more striking, we can use the same rules to ask the question ‘‘Which list, when appended to
15596 (a b), yields (a b c d)?’’ This is done as follows:
15597 ;;; Query input:
15598 (append-to-form (a b) ?y (a b c d))
15599 ;;; Query results:
15600 (append-to-form (a b) (c d) (a b c d))
15601 We can also ask for all pairs of lists that append to form (a b c d):
15602 ;;; Query input:
15603 (append-to-form ?x ?y (a b c d))
15604 ;;; Query results:
15605 (append-to-form () (a b c d) (a b c d))
15606 (append-to-form (a) (b c d) (a b c d))
15607 (append-to-form (a b) (c d) (a b c d))
15608 (append-to-form (a b c) (d) (a b c d))
15609 (append-to-form (a b c d) () (a b c d))
15610 The query system may seem to exhibit quite a bit of intelligence in using the rules to deduce the
15611 answers to the queries above. Actually, as we will see in the next section, the system is following a
15612 well-determined algorithm in unraveling the rules. Unfortunately, although the system works
15613 impressively in the append case, the general methods may break down in more complex cases, as we
15614 will see in section 4.4.3.
15615 Exercise 4.61. The following rules implement a next-to relation that finds adjacent elements of a
15616 list:
(rule (?x next-to ?y in (?x ?y . ?u)))
(rule (?x next-to ?y in (?v . ?z))
      (?x next-to ?y in ?z))
15620 What will the response be to the following queries?
15621 (?x next-to ?y in (1 (2 3) 4))
15622 (?x next-to 1 in (2 1 3 1))
15623 Exercise 4.62. Define rules to implement the last-pair operation of exercise 2.17, which returns a
15624 list containing the last element of a nonempty list. Check your rules on queries such as (last-pair
15625 (3) ?x), (last-pair (1 2 3) ?x), and (last-pair (2 ?x) (3)). Do your rules
work correctly on queries such as (last-pair ?x (3))?
15627 Exercise 4.63. The following data base (see Genesis 4) traces the genealogy of the descendants of
15628 Ada back to Adam, by way of Cain:
15629
(son Adam Cain)
15631 (son Cain Enoch)
15632 (son Enoch Irad)
15633 (son Irad Mehujael)
15634 (son Mehujael Methushael)
15635 (son Methushael Lamech)
15636 (wife Lamech Ada)
15637 (son Ada Jabal)
15638 (son Ada Jubal)
15639 Formulate rules such as ‘‘If S is the son of F, and F is the son of G, then S is the grandson of G’’ and
15640 ‘‘If W is the wife of M, and S is the son of W, then S is the son of M’’ (which was supposedly more
15641 true in biblical times than today) that will enable the query system to find the grandson of Cain; the
15642 sons of Lamech; the grandsons of Methushael. (See exercise 4.69 for some rules to deduce more
15643 complicated relationships.)
15644
15645 4.4.2 How the Query System Works
15646 In section 4.4.4 we will present an implementation of the query interpreter as a collection of
15647 procedures. In this section we give an overview that explains the general structure of the system
15648 independent of low-level implementation details. After describing the implementation of the
15649 interpreter, we will be in a position to understand some of its limitations and some of the subtle ways
15650 in which the query language’s logical operations differ from the operations of mathematical logic.
15651 It should be apparent that the query evaluator must perform some kind of search in order to match
15652 queries against facts and rules in the data base. One way to do this would be to implement the query
15653 system as a nondeterministic program, using the amb evaluator of section 4.3 (see exercise 4.78).
15654 Another possibility is to manage the search with the aid of streams. Our implementation follows this
15655 second approach.
15656 The query system is organized around two central operations called pattern matching and unification.
15657 We first describe pattern matching and explain how this operation, together with the organization of
15658 information in terms of streams of frames, enables us to implement both simple and compound
15659 queries. We next discuss unification, a generalization of pattern matching needed to implement rules.
15660 Finally, we show how the entire query interpreter fits together through a procedure that classifies
15661 expressions in a manner analogous to the way eval classifies expressions for the interpreter described
15662 in section 4.1.
15663
15664 Pattern matching
15665 A pattern matcher is a program that tests whether some datum fits a specified pattern. For example,
15666 the data list ((a b) c (a b)) matches the pattern (?x c ?x) with the pattern variable ?x
15667 bound to (a b). The same data list matches the pattern (?x ?y ?z) with ?x and ?z both bound to
15668 (a b) and ?y bound to c. It also matches the pattern ((?x ?y) c (?x ?y)) with ?x bound to
15669 a and ?y bound to b. However, it does not match the pattern (?x a ?y), since that pattern specifies
15670 a list whose second element is the symbol a.
15671 The pattern matcher used by the query system takes as inputs a pattern, a datum, and a frame that
15672 specifies bindings for various pattern variables. It checks whether the datum matches the pattern in a
15673 way that is consistent with the bindings already in the frame. If so, it returns the given frame
15674 augmented by any bindings that may have been determined by the match. Otherwise, it indicates that
the match has failed.
15677 For example, using the pattern (?x ?y ?x) to match (a b a) given an empty frame will return a
15678 frame specifying that ?x is bound to a and ?y is bound to b. Trying the match with the same pattern,
15679 the same datum, and a frame specifying that ?y is bound to a will fail. Trying the match with the same
15680 pattern, the same datum, and a frame in which ?y is bound to b and ?x is unbound will return the
15681 given frame augmented by a binding of ?x to a.
15682 The pattern matcher is all the mechanism that is needed to process simple queries that don’t involve
15683 rules. For instance, to process the query
15684 (job ?x (computer programmer))
15685 we scan through all assertions in the data base and select those that match the pattern with respect to
15686 an initially empty frame. For each match we find, we use the frame returned by the match to instantiate
15687 the pattern with a value for ?x.
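The matcher itself is a short recursion on the structure of the pattern. The following sketch is
essentially the implementation of section 4.4.4, where the helpers var?, binding-in-frame,
binding-value, and extend (for testing pattern variables and manipulating frames) are defined:
(define (pattern-match pat dat frame)
  (cond ((eq? frame 'failed) 'failed)
        ((equal? pat dat) frame)          ; identical parts match trivially
        ((var? pat) (extend-if-consistent pat dat frame))
        ((and (pair? pat) (pair? dat))    ; match the car, then the cdr
         (pattern-match (cdr pat)
                        (cdr dat)
                        (pattern-match (car pat)
                                       (car dat)
                                       frame)))
        (else 'failed)))
(define (extend-if-consistent var dat frame)
  (let ((binding (binding-in-frame var frame)))
    (if binding
        (pattern-match (binding-value binding) dat frame)
        (extend frame var dat))))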
15688
15689 Streams of frames
15690 The testing of patterns against frames is organized through the use of streams. Given a single frame,
15691 the matching process runs through the data-base entries one by one. For each data-base entry, the
15692 matcher generates either a special symbol indicating that the match has failed or an extension to the
15693 frame. The results for all the data-base entries are collected into a stream, which is passed through a
15694 filter to weed out the failures. The result is a stream of all the frames that extend the given frame via a
15695 match to some assertion in the data base. 67
15696 In our system, a query takes an input stream of frames and performs the above matching operation for
15697 every frame in the stream, as indicated in figure 4.4. That is, for each frame in the input stream, the
15698 query generates a new stream consisting of all extensions to that frame by matches to assertions in the
15699 data base. All these streams are then combined to form one huge stream, which contains all possible
15700 extensions of every frame in the input stream. This stream is the output of the query.
15701
Figure 4.4: A query processes a stream of frames.
15704 To answer a simple query, we use the query with an input stream consisting of a single empty frame.
15705 The resulting output stream contains all extensions to the empty frame (that is, all answers to our
15706 query). This stream of frames is then used to generate a stream of copies of the original query pattern
15707 with the variables instantiated by the values in each frame, and this is the stream that is finally printed.
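In the implementation of section 4.4.4 this matching step is written with the stream operations of
chapter 3. Roughly (fetch-assertions, stream-flatmap, and singleton-stream are
defined there):
(define (find-assertions pattern frame)
  (stream-flatmap (lambda (datum)
                    (check-an-assertion datum pattern frame))
                  (fetch-assertions pattern frame)))
(define (check-an-assertion assertion query-pat query-frame)
  (let ((match-result
         (pattern-match query-pat assertion query-frame)))
    (if (eq? match-result 'failed)
        the-empty-stream                   ; a failed match contributes nothing
        (singleton-stream match-result)))) ; a success extends the frame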
15708
Compound queries
15710 The real elegance of the stream-of-frames implementation is evident when we deal with compound
15711 queries. The processing of compound queries makes use of the ability of our matcher to demand that a
15712 match be consistent with a specified frame. For example, to handle the and of two queries, such as
15713 (and (can-do-job ?x (computer programmer trainee))
15714 (job ?person ?x))
15715 (informally, ‘‘Find all people who can do the job of a computer programmer trainee’’), we first find all
15716 entries that match the pattern
15717 (can-do-job ?x (computer programmer trainee))
15718 This produces a stream of frames, each of which contains a binding for ?x. Then for each frame in the
15719 stream we find all entries that match
15720 (job ?person ?x)
15721 in a way that is consistent with the given binding for ?x. Each such match will produce a frame
15722 containing bindings for ?x and ?person. The and of two queries can be viewed as a series
15723 combination of the two component queries, as shown in figure 4.5. The frames that pass through the
15724 first query filter are filtered and further extended by the second query.
15725
Figure 4.5: The and combination of two queries is produced by operating on the stream of frames in series.
15730 Figure 4.6 shows the analogous method for computing the or of two queries as a parallel combination
15731 of the two component queries. The input stream of frames is extended separately by each query. The
15732 two resulting streams are then merged to produce the final output stream.
15733
Figure 4.6: The or combination of two queries is produced by operating on the stream of frames in parallel and merging the results.
15738 Even from this high-level description, it is apparent that the processing of compound queries can be
15739 slow. For example, since a query may produce more than one output frame for each input frame, and
15740 each query in an and gets its input frames from the previous query, an and query could, in the worst
15741 case, have to perform a number of matches that is exponential in the number of queries (see
15742 exercise 4.76). 68 Though systems for handling only simple queries are quite practical, dealing with
15743 complex queries is extremely difficult. 69
15744 From the stream-of-frames viewpoint, the not of some query acts as a filter that removes all frames
15745 for which the query can be satisfied. For instance, given the pattern
15746 (not (job ?x (computer programmer)))
15747 we attempt, for each frame in the input stream, to produce extension frames that satisfy (job ?x
15748 (computer programmer)). We remove from the input stream all frames for which such
15749 extensions exist. The result is a stream consisting of only those frames in which the binding for ?x
15750 does not satisfy (job ?x (computer programmer)). For example, in processing the query
15751 (and (supervisor ?x ?y)
15752 (not (job ?x (computer programmer))))
15753 the first clause will generate frames with bindings for ?x and ?y. The not clause will then filter these
15754 by removing all frames in which the binding for ?x satisfies the restriction that ?x is a computer
15755 programmer. 70
15756 The lisp-value special form is implemented as a similar filter on frame streams. We use each
15757 frame in the stream to instantiate any variables in the pattern, then apply the Lisp predicate. We
15758 remove from the input stream all frames for which the predicate fails.
15759
Unification
15761 In order to handle rules in the query language, we must be able to find the rules whose conclusions
15762 match a given query pattern. Rule conclusions are like assertions except that they can contain
15763 variables, so we will need a generalization of pattern matching -- called unification -- in which both
15764 the ‘‘pattern’’ and the ‘‘datum’’ may contain variables.
15765 A unifier takes two patterns, each containing constants and variables, and determines whether it is
15766 possible to assign values to the variables that will make the two patterns equal. If so, it returns a frame
15767 containing these bindings. For example, unifying (?x a ?y) and (?y ?z a) will specify a frame
15768 in which ?x, ?y, and ?z must all be bound to a. On the other hand, unifying (?x ?y a) and (?x
15769 b ?y) will fail, because there is no value for ?y that can make the two patterns equal. (For the
15770 second elements of the patterns to be equal, ?y would have to be b; however, for the third elements to
15771 be equal, ?y would have to be a.) The unifier used in the query system, like the pattern matcher, takes
15772 a frame as input and performs unifications that are consistent with this frame.
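As a sketch, these two examples can be expressed as calls to the unify-match procedure of section 4.4.4.4, again assuming the internal representations of sections 4.4.4.7 and 4.4.4.8:
(unify-match '((? x) a (? y)) '((? y) (? z) a) '())
; => (((? y) . a) ((? z) . a) ((? x) ? y))
;    ?y = a, ?z = a, and ?x = ?y, so all three denote a
(unify-match '((? x) (? y) a) '((? x) b (? y)) '())
; => failed   ; ?y would have to be both b and a
Note that ?x ends up bound to ?y rather than directly to a; instantiation chases such chains of bindings, as instantiate does in section 4.4.4.1.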
15773 The unification algorithm is the most technically difficult part of the query system. With complex
15774 patterns, performing unification may seem to require deduction. To unify (?x ?x) and ((a ?y c)
15775 (a b ?z)), for example, the algorithm must infer that ?x should be (a b c), ?y should be b, and
15776 ?z should be c. We may think of this process as solving a set of equations among the pattern
15777 components. In general, these are simultaneous equations, which may require substantial manipulation
15778 to solve. 71 For example, unifying (?x ?x) and ((a ?y c) (a b ?z)) may be thought of as
15779 specifying the simultaneous equations
?x = (a ?y c)
?x = (a b ?z)

These equations imply that

(a ?y c) = (a b ?z)

which in turn implies that

a = a, ?y = b, c = ?z,

and hence that

?x = (a b c)

15818 In a successful pattern match, all pattern variables become bound, and the values to which they are
15819 bound contain only constants. This is also true of all the examples of unification we have seen so far.
15820 In general, however, a successful unification may not completely determine the variable values; some
15821 variables may remain unbound and others may be bound to values that contain variables.
15822 Consider the unification of (?x a) and ((b ?y) ?z). We can deduce that ?x = (b ?y) and a
15823 = ?z, but we cannot further solve for ?x or ?y. The unification doesn’t fail, since it is certainly
15824 possible to make the two patterns equal by assigning values to ?x and ?y. Since this match in no way
15825 restricts the values ?y can take on, no binding for ?y is put into the result frame. The match does,
15826 however, restrict the value of ?x. Whatever value ?y has, ?x must be (b ?y). A binding of ?x to
15827 the pattern (b ?y) is thus put into the frame. If a value for ?y is later determined and added to the
15828 frame (by a pattern match or unification that is required to be consistent with this frame), the
15829 previously bound ?x will refer to this value. 72
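This situation can be sketched in code, using unify-match (section 4.4.4.4) together with extend (section 4.4.4.8) and instantiate (section 4.4.4.1); the later binding of ?y to c is introduced purely for illustration:
(define partial
  (unify-match '((? x) a) '((b (? y)) (? z)) '()))
; partial = (((? z) . a) ((? x) b (? y)))  ; ?x = (b ?y), ?z = a, ?y unbound
(instantiate '(? x) (extend '(? y) 'c partial) (lambda (v f) v))
; => (b c)   ; once ?y acquires a value, the stored ?x refers to it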
15830
Applying rules
15832 Unification is the key to the component of the query system that makes inferences from rules. To see
15833 how this is accomplished, consider processing a query that involves applying a rule, such as
15834 (lives-near ?x (Hacker Alyssa P))
15835 To process this query, we first use the ordinary pattern-match procedure described above to see if there
15836 are any assertions in the data base that match this pattern. (There will not be any in this case, since our
15837 data base includes no direct assertions about who lives near whom.) The next step is to attempt to
15838 unify the query pattern with the conclusion of each rule. We find that the pattern unifies with the
15839 conclusion of the rule
15840 (rule (lives-near ?person-1 ?person-2)
15841 (and (address ?person-1 (?town . ?rest-1))
15842 (address ?person-2 (?town . ?rest-2))
15843 (not (same ?person-1 ?person-2))))
15844 resulting in a frame specifying that ?person-2 is bound to (Hacker Alyssa P) and that ?x
15845 should be bound to (have the same value as) ?person-1. Now, relative to this frame, we evaluate
15846 the compound query given by the body of the rule. Successful matches will extend this frame by
15847 providing a binding for ?person-1, and consequently a value for ?x, which we can use to
15848 instantiate the original query pattern.
15849 In general, the query evaluator uses the following method to apply a rule when trying to establish a
15850 query pattern in a frame that specifies bindings for some of the pattern variables:
15851 Unify the query with the conclusion of the rule to form, if successful, an extension of the original
15852 frame.
15853 Relative to the extended frame, evaluate the query formed by the body of the rule.
15854 Notice how similar this is to the method for applying a procedure in the eval/apply evaluator for
15855 Lisp:
15856 Bind the procedure’s parameters to its arguments to form a frame that extends the original
15857 procedure environment.
15858 Relative to the extended environment, evaluate the expression formed by the body of the
15859 procedure.
15860 The similarity between the two evaluators should come as no surprise. Just as procedure definitions are
15861 the means of abstraction in Lisp, rule definitions are the means of abstraction in the query language. In
15862 each case, we unwind the abstraction by creating appropriate bindings and evaluating the rule or
15863 procedure body relative to these.
15864
15865 Simple queries
15866 We saw earlier in this section how to evaluate simple queries in the absence of rules. Now that we
15867 have seen how to apply rules, we can describe how to evaluate simple queries by using both rules and
15868 assertions.
15869
Given the query pattern and a stream of frames, we produce, for each frame in the input stream, two
15871 streams:
15872 a stream of extended frames obtained by matching the pattern against all assertions in the data
15873 base (using the pattern matcher), and
15874 a stream of extended frames obtained by applying all possible rules (using the unifier). 73
15875 Appending these two streams produces a stream that consists of all the ways that the given pattern can
15876 be satisfied consistent with the original frame. These streams (one for each frame in the input stream)
15877 are now all combined to form one large stream, which therefore consists of all the ways that any of the
15878 frames in the original input stream can be extended to produce a match with the given pattern.
15879
15880 The query evaluator and the driver loop
15881 Despite the complexity of the underlying matching operations, the system is organized much like an
15882 evaluator for any language. The procedure that coordinates the matching operations is called qeval,
15883 and it plays a role analogous to that of the eval procedure for Lisp. Qeval takes as inputs a query
15884 and a stream of frames. Its output is a stream of frames, corresponding to successful matches to the
15885 query pattern, that extend some frame in the input stream, as indicated in figure 4.4. Like eval,
15886 qeval classifies the different types of expressions (queries) and dispatches to an appropriate
15887 procedure for each. There is a procedure for each special form (and, or, not, and lisp-value)
15888 and one for simple queries.
15889 The driver loop, which is analogous to the driver-loop procedure for the other evaluators in this
15890 chapter, reads queries from the terminal. For each query, it calls qeval with the query and a stream
15891 that consists of a single empty frame. This will produce the stream of all possible matches (all possible
15892 extensions to the empty frame). For each frame in the resulting stream, it instantiates the original query
15893 using the values of the variables found in the frame. This stream of instantiated queries is then
15894 printed. 74
15895 The driver also checks for the special command assert!, which signals that the input is not a query
15896 but rather an assertion or rule to be added to the data base. For instance,
15897 (assert! (job (Bitdiddle Ben) (computer wizard)))
15898 (assert! (rule (wheel ?person)
15899 (and (supervisor ?middle-manager ?person)
15900 (supervisor ?x ?middle-manager))))
15901
15902 4.4.3 Is Logic Programming Mathematical Logic?
15903 The means of combination used in the query language may at first seem identical to the operations
15904 and, or, and not of mathematical logic, and the application of query-language rules is in fact
15905 accomplished through a legitimate method of inference. 75 This identification of the query language
15906 with mathematical logic is not really valid, though, because the query language provides a control
15907 structure that interprets the logical statements procedurally. We can often take advantage of this
15908 control structure. For example, to find all of the supervisors of programmers we could formulate a
15909 query in either of two logically equivalent forms:
15910
(and (job ?x (computer programmer))
15912 (supervisor ?x ?y))
15913 or
15914 (and (supervisor ?x ?y)
15915 (job ?x (computer programmer)))
15916 If a company has many more supervisors than programmers (the usual case), it is better to use the first
15917 form rather than the second because the data base must be scanned for each intermediate result (frame)
15918 produced by the first clause of the and.
15919 The aim of logic programming is to provide the programmer with techniques for decomposing a
15920 computational problem into two separate problems: ‘‘what’’ is to be computed, and ‘‘how’’ this
15921 should be computed. This is accomplished by selecting a subset of the statements of mathematical
15922 logic that is powerful enough to be able to describe anything one might want to compute, yet weak
15923 enough to have a controllable procedural interpretation. The intention here is that, on the one hand, a
15924 program specified in a logic programming language should be an effective program that can be carried
15925 out by a computer. Control (‘‘how’’ to compute) is effected by using the order of evaluation of the
15926 language. We should be able to arrange the order of clauses and the order of subgoals within each
15927 clause so that the computation is done in an order deemed to be effective and efficient. At the same
15928 time, we should be able to view the result of the computation (‘‘what’’ to compute) as a simple
15929 consequence of the laws of logic.
15930 Our query language can be regarded as just such a procedurally interpretable subset of mathematical
15931 logic. An assertion represents a simple fact (an atomic proposition). A rule represents the implication
15932 that the rule conclusion holds for those cases where the rule body holds. A rule has a natural
15933 procedural interpretation: To establish the conclusion of the rule, establish the body of the rule. Rules,
15934 therefore, specify computations. However, because rules can also be regarded as statements of
15935 mathematical logic, we can justify any ‘‘inference’’ accomplished by a logic program by asserting that
15936 the same result could be obtained by working entirely within mathematical logic. 76
15937
15938 Infinite loops
15939 A consequence of the procedural interpretation of logic programs is that it is possible to construct
15940 hopelessly inefficient programs for solving certain problems. An extreme case of inefficiency occurs
15941 when the system falls into infinite loops in making deductions. As a simple example, suppose we are
15942 setting up a data base of famous marriages, including
15943 (assert! (married Minnie Mickey))
15944 If we now ask
15945 (married Mickey ?who)
15946 we will get no response, because the system doesn’t know that if A is married to B, then B is married
15947 to A. So we assert the rule
15948 (assert! (rule (married ?x ?y)
15949 (married ?y ?x)))
15950
and again query
15952 (married Mickey ?who)
15953 Unfortunately, this will drive the system into an infinite loop, as follows:
15954 The system finds that the married rule is applicable; that is, the rule conclusion (married
15955 ?x ?y) successfully unifies with the query pattern (married Mickey ?who) to produce a
15956 frame in which ?x is bound to Mickey and ?y is bound to ?who. So the interpreter proceeds to
15957 evaluate the rule body (married ?y ?x) in this frame -- in effect, to process the query
15958 (married ?who Mickey).
15959 One answer appears directly as an assertion in the data base: (married Minnie Mickey).
15960 The married rule is also applicable, so the interpreter again evaluates the rule body, which this
15961 time is equivalent to (married Mickey ?who).
15962 The system is now in an infinite loop. Indeed, whether the system will find the simple answer
15963 (married Minnie Mickey) before it goes into the loop depends on implementation details
15964 concerning the order in which the system checks the items in the data base. This is a very simple
15965 example of the kinds of loops that can occur. Collections of interrelated rules can lead to loops that are
15966 much harder to anticipate, and the appearance of a loop can depend on the order of clauses in an and
15967 (see exercise 4.64) or on low-level details concerning the order in which the system processes
15968 queries. 77
15969
15970 Problems with not
15971 Another quirk in the query system concerns not. Given the data base of section 4.4.1, consider the
15972 following two queries:
15973 (and (supervisor ?x ?y)
15974 (not (job ?x (computer programmer))))
15975 (and (not (job ?x (computer programmer)))
15976 (supervisor ?x ?y))
15977 These two queries do not produce the same result. The first query begins by finding all entries in the
15978 data base that match (supervisor ?x ?y), and then filters the resulting frames by removing the
15979 ones in which the value of ?x satisfies (job ?x (computer programmer)). The second
15980 query begins by filtering the incoming frames to remove those that can satisfy (job ?x
15981 (computer programmer)). Since the only incoming frame is empty, it checks the data base to
15982 see if there are any patterns that satisfy (job ?x (computer programmer)). Since there
15983 generally are entries of this form, the not clause filters out the empty frame and returns an empty
15984 stream of frames. Consequently, the entire compound query returns an empty stream.
15985 The trouble is that our implementation of not really is meant to serve as a filter on values for the
15986 variables. If a not clause is processed with a frame in which some of the variables remain unbound
15987 (as does ?x in the example above), the system will produce unexpected results. Similar problems
15988 occur with the use of lisp-value -- the Lisp predicate can’t work if some of its arguments are
15989 unbound. See exercise 4.77.
15990
There is also a much more serious way in which the not of the query language differs from the not
15992 of mathematical logic. In logic, we interpret the statement ‘‘not P’’ to mean that P is not true. In the
15993 query system, however, ‘‘not P’’ means that P is not deducible from the knowledge in the data base.
15994 For example, given the personnel data base of section 4.4.1, the system would happily deduce all sorts
15995 of not statements, such as that Ben Bitdiddle is not a baseball fan, that it is not raining outside, and
15996 that 2 + 2 is not 4. 78 In other words, the not of logic programming languages reflects the so-called
15997 closed world assumption that all relevant information has been included in the data base. 79
15998 Exercise 4.64. Louis Reasoner mistakenly deletes the outranked-by rule (section 4.4.1) from the
15999 data base. When he realizes this, he quickly reinstalls it. Unfortunately, he makes a slight change in the
16000 rule, and types it in as
16001 (rule (outranked-by ?staff-person ?boss)
16002 (or (supervisor ?staff-person ?boss)
16003 (and (outranked-by ?middle-manager ?boss)
16004 (supervisor ?staff-person ?middle-manager))))
16005 Just after Louis types this information into the system, DeWitt Aull comes by to find out who outranks
16006 Ben Bitdiddle. He issues the query
16007 (outranked-by (Bitdiddle Ben) ?who)
16008 After answering, the system goes into an infinite loop. Explain why.
16009 Exercise 4.65. Cy D. Fect, looking forward to the day when he will rise in the organization, gives a
16010 query to find all the wheels (using the wheel rule of section 4.4.1):
16011 (wheel ?who)
16012 To his surprise, the system responds
16013 ;;; Query results:
16014 (wheel (Warbucks Oliver))
16015 (wheel (Bitdiddle Ben))
16016 (wheel (Warbucks Oliver))
16017 (wheel (Warbucks Oliver))
16018 (wheel (Warbucks Oliver))
16019 Why is Oliver Warbucks listed four times?
16020 Exercise 4.66. Ben has been generalizing the query system to provide statistics about the company.
16021 For example, to find the total salaries of all the computer programmers one will be able to say
16022 (sum ?amount
16023 (and (job ?x (computer programmer))
16024 (salary ?x ?amount)))
16025 In general, Ben’s new system allows expressions of the form
16026 (accumulation-function <variable>
16027 <query pattern>)
16028
where accumulation-function can be things like sum, average, or maximum. Ben reasons
16030 that it should be a cinch to implement this. He will simply feed the query pattern to qeval. This will
16031 produce a stream of frames. He will then pass this stream through a mapping function that extracts the
16032 value of the designated variable from each frame in the stream and feed the resulting stream of values
16033 to the accumulation function. Just as Ben completes the implementation and is about to try it out, Cy
16034 walks by, still puzzling over the wheel query result in exercise 4.65. When Cy shows Ben the
16035 system’s response, Ben groans, ‘‘Oh, no, my simple accumulation scheme won’t work!’’
16036 What has Ben just realized? Outline a method he can use to salvage the situation.
16037 Exercise 4.67. Devise a way to install a loop detector in the query system so as to avoid the kinds of
16038 simple loops illustrated in the text and in exercise 4.64. The general idea is that the system should
16039 maintain some sort of history of its current chain of deductions and should not begin processing a
16040 query that it is already working on. Describe what kind of information (patterns and frames) is
16041 included in this history, and how the check should be made. (After you study the details of the
16042 query-system implementation in section 4.4.4, you may want to modify the system to include your
16043 loop detector.)
16044 Exercise 4.68. Define rules to implement the reverse operation of exercise 2.18, which returns a
16045 list containing the same elements as a given list in reverse order. (Hint: Use append-to-form.)
16046 Can your rules answer both (reverse (1 2 3) ?x) and (reverse ?x (1 2 3)) ?
16047 Exercise 4.69. Beginning with the data base and the rules you formulated in exercise 4.63, devise a
16048 rule for adding ‘‘greats’’ to a grandson relationship. This should enable the system to deduce that Irad
16049 is the great-grandson of Adam, or that Jabal and Jubal are the great-great-great-great-great-grandsons
16050 of Adam. (Hint: Represent the fact about Irad, for example, as ((great grandson) Adam
16051 Irad). Write rules that determine if a list ends in the word grandson. Use this to express a rule that
16052 allows one to derive the relationship ((great . ?rel) ?x ?y), where ?rel is a list ending in
16053 grandson.) Check your rules on queries such as ((great grandson) ?g ?ggs) and
16054 (?relationship Adam Irad).
16055
16056 4.4.4 Implementing the Query System
16057 Section 4.4.2 described how the query system works. Now we fill in the details by presenting a
16058 complete implementation of the system.
16059
16060 4.4.4.1 The Driver Loop and Instantiation
16061 The driver loop for the query system repeatedly reads input expressions. If the expression is a rule or
16062 assertion to be added to the data base, then the information is added. Otherwise the expression is
16063 assumed to be a query. The driver passes this query to the evaluator qeval together with an initial
16064 frame stream consisting of a single empty frame. The result of the evaluation is a stream of frames
16065 generated by satisfying the query with variable values found in the data base. These frames are used to
16066 form a new stream consisting of copies of the original query in which the variables are instantiated
16067 with values supplied by the stream of frames, and this final stream is printed at the terminal:
16068 (define input-prompt ";;; Query input:")
16069 (define output-prompt ";;; Query results:")
16070 (define (query-driver-loop)
16071 (prompt-for-input input-prompt)
16072 (let ((q (query-syntax-process (read))))
(cond ((assertion-to-be-added? q)
16075 (add-rule-or-assertion! (add-assertion-body q))
16076 (newline)
16077 (display "Assertion added to data base.")
16078 (query-driver-loop))
16079 (else
16080 (newline)
16081 (display output-prompt)
16082 (display-stream
16083 (stream-map
16084 (lambda (frame)
16085 (instantiate q
16086 frame
16087 (lambda (v f)
16088 (contract-question-mark v))))
(qeval q (singleton-stream '()))))
16090 (query-driver-loop)))))
16091 Here, as in the other evaluators in this chapter, we use an abstract syntax for the expressions of the
16092 query language. The implementation of the expression syntax, including the predicate
16093 assertion-to-be-added? and the selector add-assertion-body, is given in
16094 section 4.4.4.7. Add-rule-or-assertion! is defined in section 4.4.4.5.
16095 Before doing any processing on an input expression, the driver loop transforms it syntactically into a
16096 form that makes the processing more efficient. This involves changing the representation of pattern
16097 variables. When the query is instantiated, any variables that remain unbound are transformed back to
16098 the input representation before being printed. These transformations are performed by the two
16099 procedures query-syntax-process and contract-question-mark (section 4.4.4.7).
16100 To instantiate an expression, we copy it, replacing any variables in the expression by their values in a
16101 given frame. The values are themselves instantiated, since they could contain variables (for example, if
16102 ?x in exp is bound to ?y as the result of unification and ?y is in turn bound to 5). The action to take
16103 if a variable cannot be instantiated is given by a procedural argument to instantiate.
16104 (define (instantiate exp frame unbound-var-handler)
16105 (define (copy exp)
16106 (cond ((var? exp)
16107 (let ((binding (binding-in-frame exp frame)))
16108 (if binding
16109 (copy (binding-value binding))
16110 (unbound-var-handler exp frame))))
16111 ((pair? exp)
16112 (cons (copy (car exp)) (copy (cdr exp))))
16113 (else exp)))
16114 (copy exp))
16115 The procedures that manipulate bindings are defined in section 4.4.4.8.
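For example, here is a minimal sketch in which ?x is bound to ?y, ?y is bound to 5, and ?z is left unbound; the frame is written directly in the representation of section 4.4.4.8, and contract-question-mark is used as the unbound-variable handler, as in the driver loop:
(instantiate '((? x) (? y) (? z))
             '(((? x) ? y) ((? y) . 5))
             (lambda (v f) (contract-question-mark v)))
; => (5 5 ?z)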
16116
4.4.4.2 The Evaluator
16118 The qeval procedure, called by the query-driver-loop, is the basic evaluator of the query
16119 system. It takes as inputs a query and a stream of frames, and it returns a stream of extended frames. It
16120 identifies special forms by a data-directed dispatch using get and put, just as we did in
16121 implementing generic operations in chapter 2. Any query that is not identified as a special form is
16122 assumed to be a simple query, to be processed by simple-query.
16123 (define (qeval query frame-stream)
(let ((qproc (get (type query) 'qeval)))
16125 (if qproc
16126 (qproc (contents query) frame-stream)
16127 (simple-query query frame-stream))))
16128 Type and contents, defined in section 4.4.4.7, implement the abstract syntax of the special forms.
16129
16130 Simple queries
16131 The simple-query procedure handles simple queries. It takes as arguments a simple query (a
16132 pattern) together with a stream of frames, and it returns the stream formed by extending each frame by
16133 all data-base matches of the query.
16134 (define (simple-query query-pattern frame-stream)
16135 (stream-flatmap
16136 (lambda (frame)
16137 (stream-append-delayed
16138 (find-assertions query-pattern frame)
16139 (delay (apply-rules query-pattern frame))))
16140 frame-stream))
16141 For each frame in the input stream, we use find-assertions (section 4.4.4.3) to match the pattern
16142 against all assertions in the data base, producing a stream of extended frames, and we use
16143 apply-rules (section 4.4.4.4) to apply all possible rules, producing another stream of extended
16144 frames. These two streams are combined (using stream-append-delayed, section 4.4.4.6) to
16145 make a stream of all the ways that the given pattern can be satisfied consistent with the original frame
16146 (see exercise 4.71). The streams for the individual input frames are combined using
16147 stream-flatmap (section 4.4.4.6) to form one large stream of all the ways that any of the frames
16148 in the original input stream can be extended to produce a match with the given pattern.
16149
16150 Compound queries
16151 And queries are handled as illustrated in figure 4.5 by the conjoin procedure. Conjoin takes as
16152 inputs the conjuncts and the frame stream and returns the stream of extended frames. First, conjoin
16153 processes the stream of frames to find the stream of all possible frame extensions that satisfy the first
16154 query in the conjunction. Then, using this as the new frame stream, it recursively applies conjoin to
16155 the rest of the queries.
16156 (define (conjoin conjuncts frame-stream)
16157 (if (empty-conjunction? conjuncts)
16158 frame-stream
16159 (conjoin (rest-conjuncts conjuncts)
(qeval (first-conjunct conjuncts)
       frame-stream))))
16163 The expression
(put 'and 'qeval conjoin)
16165 sets up qeval to dispatch to conjoin when an and form is encountered.
16166 Or queries are handled similarly, as shown in figure 4.6. The output streams for the various disjuncts
16167 of the or are computed separately and merged using the interleave-delayed procedure from
16168 section 4.4.4.6. (See exercises 4.71 and 4.72.)
16169 (define (disjoin disjuncts frame-stream)
16170 (if (empty-disjunction? disjuncts)
16171 the-empty-stream
16172 (interleave-delayed
16173 (qeval (first-disjunct disjuncts) frame-stream)
16174 (delay (disjoin (rest-disjuncts disjuncts)
16175 frame-stream)))))
(put 'or 'qeval disjoin)
16177 The predicates and selectors for the syntax of conjuncts and disjuncts are given in section 4.4.4.7.
16178
16179 Filters
16180 Not is handled by the method outlined in section 4.4.2. We attempt to extend each frame in the input
16181 stream to satisfy the query being negated, and we include a given frame in the output stream only if it
16182 cannot be extended.
16183 (define (negate operands frame-stream)
16184 (stream-flatmap
16185 (lambda (frame)
16186 (if (stream-null? (qeval (negated-query operands)
16187 (singleton-stream frame)))
16188 (singleton-stream frame)
16189 the-empty-stream))
16190 frame-stream))
(put 'not 'qeval negate)
16192 Lisp-value is a filter similar to not. Each frame in the stream is used to instantiate the variables in
16193 the pattern, the indicated predicate is applied, and the frames for which the predicate returns false are
16194 filtered out of the input stream. An error results if there are unbound pattern variables.
16195 (define (lisp-value call frame-stream)
16196 (stream-flatmap
16197 (lambda (frame)
16198 (if (execute
16199 (instantiate
16200 call
16201 frame
16202 (lambda (v f)
16203 (error "Unknown pat var -- LISP-VALUE" v))))
(singleton-stream frame)
16206 the-empty-stream))
16207 frame-stream))
(put 'lisp-value 'qeval lisp-value)
16209 Execute, which applies the predicate to the arguments, must eval the predicate expression to get
16210 the procedure to apply. However, it must not evaluate the arguments, since they are already the actual
16211 arguments, not expressions whose evaluation (in Lisp) will produce the arguments. Note that
16212 execute is implemented using eval and apply from the underlying Lisp system.
16213 (define (execute exp)
16214 (apply (eval (predicate exp) user-initial-environment)
16215 (args exp)))
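For instance, instantiating the lisp-value pattern (lisp-value > ?amount 30000) of section 4.4.1 in a frame where ?amount is bound to 40000 produces the call (> 40000 30000), which execute then evaluates (a sketch):
(execute '(> 40000 30000))
; => true   ; eval yields the > procedure, which apply calls on (40000 30000)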
16216 The always-true special form provides for a query that is always satisfied. It ignores its contents
16217 (normally empty) and simply passes through all the frames in the input stream. Always-true is
16218 used by the rule-body selector (section 4.4.4.7) to provide bodies for rules that were defined
16219 without bodies (that is, rules whose conclusions are always satisfied).
16220 (define (always-true ignore frame-stream) frame-stream)
(put 'always-true 'qeval always-true)
16222 The selectors that define the syntax of not and lisp-value are given in section 4.4.4.7.
16223
16224 4.4.4.3 Finding Assertions by Pattern Matching
16225 Find-assertions, called by simple-query (section 4.4.4.2), takes as input a pattern and a
16226 frame. It returns a stream of frames, each extending the given one by a data-base match of the given
16227 pattern. It uses fetch-assertions (section 4.4.4.5) to get a stream of all the assertions in the data
16228 base that should be checked for a match against the pattern and the frame. The reason for
16229 fetch-assertions here is that we can often apply simple tests that will eliminate many of the
16230 entries in the data base from the pool of candidates for a successful match. The system would still
16231 work if we eliminated fetch-assertions and simply checked a stream of all assertions in the
16232 data base, but the computation would be less efficient because we would need to make many more
16233 calls to the matcher.
16234 (define (find-assertions pattern frame)
16235 (stream-flatmap (lambda (datum)
16236 (check-an-assertion datum pattern frame))
16237 (fetch-assertions pattern frame)))
16238 Check-an-assertion takes as arguments a pattern, a data object (assertion), and a frame and
16239 returns either a one-element stream containing the extended frame or the-empty-stream if the
16240 match fails.
16241 (define (check-an-assertion assertion query-pat query-frame)
16242 (let ((match-result
16243 (pattern-match query-pat assertion query-frame)))
(if (eq? match-result 'failed)
16245 the-empty-stream
16246 (singleton-stream match-result))))
16247
The basic pattern matcher returns either the symbol failed or an extension of the given frame. The
16249 basic idea of the matcher is to check the pattern against the data, element by element, accumulating
16250 bindings for the pattern variables. If the pattern and the data object are the same, the match succeeds
16251 and we return the frame of bindings accumulated so far. Otherwise, if the pattern is a variable we
16252 extend the current frame by binding the variable to the data, so long as this is consistent with the
16253 bindings already in the frame. If the pattern and the data are both pairs, we (recursively) match the
16254 car of the pattern against the car of the data to produce a frame; in this frame we then match the
16255 cdr of the pattern against the cdr of the data. If none of these cases are applicable, the match fails
16256 and we return the symbol failed.
16257 (define (pattern-match pat dat frame)
(cond ((eq? frame 'failed) 'failed)
16259 ((equal? pat dat) frame)
16260 ((var? pat) (extend-if-consistent pat dat frame))
16261 ((and (pair? pat) (pair? dat))
16262 (pattern-match (cdr pat)
16263 (cdr dat)
16264 (pattern-match (car pat)
16265 (car dat)
16266 frame)))
(else 'failed)))
16268 Here is the procedure that extends a frame by adding a new binding, if this is consistent with the
16269 bindings already in the frame:
16270 (define (extend-if-consistent var dat frame)
16271 (let ((binding (binding-in-frame var frame)))
16272 (if binding
16273 (pattern-match (binding-value binding) dat frame)
16274 (extend var dat frame))))
16275 If there is no binding for the variable in the frame, we simply add the binding of the variable to the
16276 data. Otherwise we match, in the frame, the data against the value of the variable in the frame. If the
16277 stored value contains only constants, as it must if it was stored during pattern matching by
16278 extend-if-consistent, then the match simply tests whether the stored and new values are the
16279 same. If so, it returns the unmodified frame; if not, it returns a failure indication. The stored value
16280 may, however, contain pattern variables if it was stored during unification (see section 4.4.4.4). The
16281 recursive match of the stored pattern against the new data will add or check bindings for the variables
16282 in this pattern. For example, suppose we have a frame in which ?x is bound to (f ?y) and ?y is
16283 unbound, and we wish to augment this frame by a binding of ?x to (f b). We look up ?x and find
16284 that it is bound to (f ?y). This leads us to match (f ?y) against the proposed new value (f b) in
16285 the same frame. Eventually this match extends the frame by adding a binding of ?y to b. ?X remains
16286 bound to (f ?y). We never modify a stored binding and we never store more than one binding for a
16287 given variable.
16288 The procedures used by extend-if-consistent to manipulate bindings are defined in
16289 section 4.4.4.8.
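The example just described, expressed as a call (a sketch, with the frame written directly in the representation of section 4.4.4.8):
(extend-if-consistent '(? x) '(f b) '(((? x) f (? y))))
; => (((? y) . b) ((? x) f (? y)))
;    the match adds ?y = b, while ?x stays bound to (f ?y)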
16290
Patterns with dotted tails
16292 If a pattern contains a dot followed by a pattern variable, the pattern variable matches the rest of the
16293 data list (rather than the next element of the data list), just as one would expect with the dotted-tail
16294 notation described in exercise 2.20. Although the pattern matcher we have just implemented doesn’t
16295 look for dots, it does behave as we want. This is because the Lisp read primitive, which is used by
16296 query-driver-loop to read the query and represent it as a list structure, treats dots in a special
16297 way.
16298 When read sees a dot, instead of making the next item be the next element of a list (the car of a
16299 cons whose cdr will be the rest of the list) it makes the next item be the cdr of the list structure. For
16300 example, the list structure produced by read for the pattern (computer ?type) could be
constructed by evaluating the expression (cons 'computer (cons '?type '())), and that
for (computer . ?type) could be constructed by evaluating the expression (cons
'computer '?type).
16304 Thus, as pattern-match recursively compares cars and cdrs of a data list and a pattern that had
16305 a dot, it eventually matches the variable after the dot (which is a cdr of the pattern) against a sublist
16306 of the data list, binding the variable to that list. For example, matching the pattern (computer .
16307 ?type) against (computer programmer trainee) will match ?type against the list
16308 (programmer trainee).
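As a sketch, after query-syntax-process (section 4.4.4.7) the pattern (computer . ?type) is the pair (cons 'computer '(? type)), and the match binds the dotted-tail variable to the rest of the data list:
(pattern-match (cons 'computer '(? type))
               '(computer programmer trainee)
               '())
; => (((? type) programmer trainee))   ; ?type bound to (programmer trainee)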
16309
16310 4.4.4.4 Rules and Unification
16311 Apply-rules is the rule analog of find-assertions (section 4.4.4.3). It takes as input a
16312 pattern and a frame, and it forms a stream of extension frames by applying rules from the data base.
16313 Stream-flatmap maps apply-a-rule down the stream of possibly applicable rules (selected
16314 by fetch-rules, section 4.4.4.5) and combines the resulting streams of frames.
16315 (define (apply-rules pattern frame)
16316 (stream-flatmap (lambda (rule)
16317 (apply-a-rule rule pattern frame))
16318 (fetch-rules pattern frame)))
16319 Apply-a-rule applies rules using the method outlined in section 4.4.2. It first augments its
16320 argument frame by unifying the rule conclusion with the pattern in the given frame. If this succeeds, it
16321 evaluates the rule body in this new frame.
16322 Before any of this happens, however, the program renames all the variables in the rule with unique
16323 new names. The reason for this is to prevent the variables for different rule applications from
16324 becoming confused with each other. For instance, if two rules both use a variable named ?x, then each
16325 one may add a binding for ?x to the frame when it is applied. These two ?x’s have nothing to do with
16326 each other, and we should not be fooled into thinking that the two bindings must be consistent. Rather
16327 than rename variables, we could devise a more clever environment structure; however, the renaming
16328 approach we have chosen here is the most straightforward, even if not the most efficient. (See
16329 exercise 4.79.) Here is the apply-a-rule procedure:
16330 (define (apply-a-rule rule query-pattern query-frame)
16331 (let ((clean-rule (rename-variables-in rule)))
16332 (let ((unify-result
16333 (unify-match query-pattern
(conclusion clean-rule)
              query-frame)))
(if (eq? unify-result 'failed)
16338 the-empty-stream
16339 (qeval (rule-body clean-rule)
16340 (singleton-stream unify-result))))))
16341 The selectors rule-body and conclusion that extract parts of a rule are defined in
16342 section 4.4.4.7.
16343 We generate unique variable names by associating a unique identifier (such as a number) with each
16344 rule application and combining this identifier with the original variable names. For example, if the
16345 rule-application identifier is 7, we might change each ?x in the rule to ?x-7 and each ?y in the rule
16346 to ?y-7. (Make-new-variable and new-rule-application-id are included with the
16347 syntax procedures in section 4.4.4.7.)
16348 (define (rename-variables-in rule)
16349 (let ((rule-application-id (new-rule-application-id)))
16350 (define (tree-walk exp)
16351 (cond ((var? exp)
16352 (make-new-variable exp rule-application-id))
16353 ((pair? exp)
16354 (cons (tree-walk (car exp))
16355 (tree-walk (cdr exp))))
16356 (else exp)))
16357 (tree-walk rule)))
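For instance, renaming the rule (rule (same ?x ?x)) of section 4.4.1 when the rule-application identifier happens to be 7 gives (a sketch; the actual identifier depends on how many rules have been applied so far):
(rename-variables-in '(rule (same (? x) (? x))))
; => (rule (same (? 7 x) (? 7 x)))
(contract-question-mark '(? 7 x))
; => ?x-7   ; how such a variable prints (section 4.4.4.7)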
16358 The unification algorithm is implemented as a procedure that takes as inputs two patterns and a frame
16359 and returns either the extended frame or the symbol failed. The unifier is like the pattern matcher
16360 except that it is symmetrical -- variables are allowed on both sides of the match. Unify-match is
16361 basically the same as pattern-match, except that there is extra code (marked ‘‘***’’ below) to
16362 handle the case where the object on the right side of the match is a variable.
(define (unify-match p1 p2 frame)
  (cond ((eq? frame 'failed) 'failed)
        ((equal? p1 p2) frame)
        ((var? p1) (extend-if-possible p1 p2 frame))
        ((var? p2) (extend-if-possible p2 p1 frame))   ; ***
        ((and (pair? p1) (pair? p2))
         (unify-match (cdr p1)
                      (cdr p2)
                      (unify-match (car p1)
                                   (car p2)
                                   frame)))
        (else 'failed)))

16378 In unification, as in one-sided pattern matching, we want to accept a proposed extension of the frame
16379 only if it is consistent with existing bindings. The procedure extend-if-possible used in
16380 unification is the same as the extend-if-consistent used in pattern matching except for two
16381 special checks, marked ‘‘***’’ in the program below. In the first case, if the variable we are trying to
16382 match is not bound, but the value we are trying to match it with is itself a (different) variable, it is
16383 necessary to check to see if the value is bound, and if so, to match its value. If both parties to the
match are unbound, we may bind either to the other.
16386 The second check deals with attempts to bind a variable to a pattern that includes that variable. Such a
16387 situation can occur whenever a variable is repeated in both patterns. Consider, for example, unifying
16388 the two patterns (?x ?x) and (?y <expression involving ?y>) in a frame where both ?x
16389 and ?y are unbound. First ?x is matched against ?y, making a binding of ?x to ?y. Next, the same
16390 ?x is matched against the given expression involving ?y. Since ?x is already bound to ?y, this results
16391 in matching ?y against the expression. If we think of the unifier as finding a set of values for the
16392 pattern variables that make the patterns the same, then these patterns imply instructions to find a ?y
16393 such that ?y is equal to the expression involving ?y. There is no general method for solving such
16394 equations, so we reject such bindings; these cases are recognized by the predicate depends-on?. 80
16395 On the other hand, we do not want to reject attempts to bind a variable to itself. For example, consider
16396 unifying (?x ?x) and (?y ?y). The second attempt to bind ?x to ?y matches ?y (the stored value
16397 of ?x) against ?y (the new value of ?x). This is taken care of by the equal? clause of
16398 unify-match.
(define (extend-if-possible var val frame)
  (let ((binding (binding-in-frame var frame)))
    (cond (binding
           (unify-match
            (binding-value binding) val frame))
          ((var? val)                     ; ***
           (let ((binding (binding-in-frame val frame)))
             (if binding
                 (unify-match
                  var (binding-value binding) frame)
                 (extend var val frame))))
          ((depends-on? val var frame)    ; ***
           'failed)
          (else (extend var val frame)))))
16415 Depends-on? is a predicate that tests whether an expression proposed to be the value of a pattern
16416 variable depends on the variable. This must be done relative to the current frame because the
16417 expression may contain occurrences of a variable that already has a value that depends on our test
16418 variable. The structure of depends-on? is a simple recursive tree walk in which we substitute for
16419 the values of variables whenever necessary.
16420 (define (depends-on? exp var frame)
16421 (define (tree-walk e)
16422 (cond ((var? e)
16423 (if (equal? var e)
16424 true
16425 (let ((b (binding-in-frame e frame)))
16426 (if b
16427 (tree-walk (binding-value b))
16428 false))))
16429 ((pair? e)
16430 (or (tree-walk (car e))
16431 (tree-walk (cdr e))))
16432 (else false)))
16433 (tree-walk exp))
16434
4.4.4.5 Maintaining the Data Base
16436 One important problem in designing logic programming languages is that of arranging things so that as
16437 few irrelevant data-base entries as possible will be examined in checking a given pattern. In our
16438 system, in addition to storing all assertions in one big stream, we store all assertions whose cars are
16439 constant symbols in separate streams, in a table indexed by the symbol. To fetch an assertion that may
16440 match a pattern, we first check to see if the car of the pattern is a constant symbol. If so, we return (to
16441 be tested using the matcher) all the stored assertions that have the same car. If the pattern’s car is
16442 not a constant symbol, we return all the stored assertions. Cleverer methods could also take advantage
16443 of information in the frame, or try also to optimize the case where the car of the pattern is not a
16444 constant symbol. We avoid building our criteria for indexing (using the car, handling only the case of
16445 constant symbols) into the program; instead we call on predicates and selectors that embody our
16446 criteria.
16447 (define THE-ASSERTIONS the-empty-stream)
16448 (define (fetch-assertions pattern frame)
16449 (if (use-index? pattern)
16450 (get-indexed-assertions pattern)
16451 (get-all-assertions)))
16452 (define (get-all-assertions) THE-ASSERTIONS)
16453 (define (get-indexed-assertions pattern)
(get-stream (index-key-of pattern) 'assertion-stream))
16455 Get-stream looks up a stream in the table and returns an empty stream if nothing is stored there.
16456 (define (get-stream key1 key2)
16457 (let ((s (get key1 key2)))
16458 (if s s the-empty-stream)))
16459 Rules are stored similarly, using the car of the rule conclusion. Rule conclusions are arbitrary
16460 patterns, however, so they differ from assertions in that they can contain variables. A pattern whose
16461 car is a constant symbol can match rules whose conclusions start with a variable as well as rules
16462 whose conclusions have the same car. Thus, when fetching rules that might match a pattern whose
16463 car is a constant symbol we fetch all rules whose conclusions start with a variable as well as those
16464 whose conclusions have the same car as the pattern. For this purpose we store all rules whose
16465 conclusions start with a variable in a separate stream in our table, indexed by the symbol ?.
16466 (define THE-RULES the-empty-stream)
16467 (define (fetch-rules pattern frame)
16468 (if (use-index? pattern)
16469 (get-indexed-rules pattern)
16470 (get-all-rules)))
16471 (define (get-all-rules) THE-RULES)
16472 (define (get-indexed-rules pattern)
16473 (stream-append
(get-stream (index-key-of pattern) 'rule-stream)
(get-stream '? 'rule-stream)))
16476 Add-rule-or-assertion! is used by query-driver-loop to add assertions and rules to the
16477 data base. Each item is stored in the index, if appropriate, and in a stream of all assertions or rules in
16478 the data base.
16479
(define (add-rule-or-assertion! assertion)
16481 (if (rule? assertion)
16482 (add-rule! assertion)
16483 (add-assertion! assertion)))
16484 (define (add-assertion! assertion)
16485 (store-assertion-in-index assertion)
16486 (let ((old-assertions THE-ASSERTIONS))
16487 (set! THE-ASSERTIONS
16488 (cons-stream assertion old-assertions))
'ok))
16490 (define (add-rule! rule)
16491 (store-rule-in-index rule)
16492 (let ((old-rules THE-RULES))
16493 (set! THE-RULES (cons-stream rule old-rules))
'ok))
16495 To actually store an assertion or a rule, we check to see if it can be indexed. If so, we store it in the
16496 appropriate stream.
16497 (define (store-assertion-in-index assertion)
16498 (if (indexable? assertion)
16499 (let ((key (index-key-of assertion)))
16500 (let ((current-assertion-stream
16501 (get-stream key ’assertion-stream)))
16502 (put key
'assertion-stream
16504 (cons-stream assertion
16505 current-assertion-stream))))))
16506 (define (store-rule-in-index rule)
16507 (let ((pattern (conclusion rule)))
16508 (if (indexable? pattern)
16509 (let ((key (index-key-of pattern)))
16510 (let ((current-rule-stream
16511 (get-stream key ’rule-stream)))
16512 (put key
'rule-stream
16514 (cons-stream rule
16515 current-rule-stream)))))))
16516 The following procedures define how the data-base index is used. A pattern (an assertion or a rule
16517 conclusion) will be stored in the table if it starts with a variable or a constant symbol.
16518 (define (indexable? pat)
16519 (or (constant-symbol? (car pat))
16520 (var? (car pat))))
16521 The key under which a pattern is stored in the table is either ? (if it starts with a variable) or the
16522 constant symbol with which it starts.
16523
(define (index-key-of pat)
  (let ((key (car pat)))
    (if (var? key) '? key)))
16527 The index will be used to retrieve items that might match a pattern if the pattern starts with a constant
16528 symbol.
16529 (define (use-index? pat)
16530 (constant-symbol? (car pat)))
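For instance (a sketch on two illustrative patterns in internal form):
(index-key-of '(job (? x) (computer programmer)))   ; => job
(use-index? '(job (? x) (computer programmer)))     ; => true
(index-key-of '((? x) (Bitdiddle Ben)))             ; => ?
(use-index? '((? x) (Bitdiddle Ben)))               ; => false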
16531 Exercise 4.70. What is the purpose of the let bindings in the procedures add-assertion! and
16532 add-rule! ? What would be wrong with the following implementation of add-assertion! ?
16533 Hint: Recall the definition of the infinite stream of ones in section 3.5.2: (define ones
16534 (cons-stream 1 ones)).
16535 (define (add-assertion! assertion)
16536 (store-assertion-in-index assertion)
16537 (set! THE-ASSERTIONS
16538 (cons-stream assertion THE-ASSERTIONS))
'ok)
16540
16541 4.4.4.6 Stream Operations
16542 The query system uses a few stream operations that were not presented in chapter 3.
16543 Stream-append-delayed and interleave-delayed are just like stream-append and
16544 interleave (section 3.5.3), except that they take a delayed argument (like the integral
16545 procedure in section 3.5.4). This postpones looping in some cases (see exercise 4.71).
16546 (define (stream-append-delayed s1 delayed-s2)
16547 (if (stream-null? s1)
16548 (force delayed-s2)
16549 (cons-stream
16550 (stream-car s1)
16551 (stream-append-delayed (stream-cdr s1) delayed-s2))))
16552 (define (interleave-delayed s1 delayed-s2)
16553 (if (stream-null? s1)
16554 (force delayed-s2)
16555 (cons-stream
16556 (stream-car s1)
16557 (interleave-delayed (force delayed-s2)
16558 (delay (stream-cdr s1))))))
16559 Stream-flatmap, which is used throughout the query evaluator to map a procedure over a stream
16560 of frames and combine the resulting streams of frames, is the stream analog of the flatmap
16561 procedure introduced for ordinary lists in section 2.2.3. Unlike ordinary flatmap, however, we
16562 accumulate the streams with an interleaving process, rather than simply appending them (see
16563 exercises 4.72 and 4.73).
16564 (define (stream-flatmap proc s)
16565 (flatten-stream (stream-map proc s)))
16566 (define (flatten-stream stream)
(if (stream-null? stream)
16569 the-empty-stream
16570 (interleave-delayed
16571 (stream-car stream)
16572 (delay (flatten-stream (stream-cdr stream))))))
16573 The evaluator also uses the following simple procedure to generate a stream consisting of a single
16574 element:
16575 (define (singleton-stream x)
16576 (cons-stream x the-empty-stream))
16577
16578 4.4.4.7 Query Syntax Procedures
16579 Type and contents, used by qeval (section 4.4.4.2), specify that a special form is identified by
16580 the symbol in its car. They are the same as the type-tag and contents procedures in
16581 section 2.4.2, except for the error message.
16582 (define (type exp)
16583 (if (pair? exp)
16584 (car exp)
16585 (error "Unknown expression TYPE" exp)))
16586 (define (contents exp)
16587 (if (pair? exp)
16588 (cdr exp)
16589 (error "Unknown expression CONTENTS" exp)))
16590 The following procedures, used by query-driver-loop (in section 4.4.4.1), specify that rules and
16591 assertions are added to the data base by expressions of the form (assert!
16592 <rule-or-assertion>):
16593 (define (assertion-to-be-added? exp)
(eq? (type exp) 'assert!))
16595 (define (add-assertion-body exp)
16596 (car (contents exp)))
16597 Here are the syntax definitions for the and, or, not, and lisp-value special forms
16598 (section 4.4.4.2):
(define (empty-conjunction? exps) (null? exps))
(define (first-conjunct exps) (car exps))
(define (rest-conjuncts exps) (cdr exps))
(define (empty-disjunction? exps) (null? exps))
(define (first-disjunct exps) (car exps))
(define (rest-disjuncts exps) (cdr exps))
(define (negated-query exps) (car exps))
(define (predicate exps) (car exps))
(define (args exps) (cdr exps))
16618
16619 The following three procedures define the syntax of rules:
16620
(define (rule? statement)
  (tagged-list? statement 'rule))
(define (conclusion rule) (cadr rule))
(define (rule-body rule)
  (if (null? (cddr rule))
      '(always-true)
      (caddr rule)))
16628 Query-driver-loop (section 4.4.4.1) calls query-syntax-process to transform pattern
16629 variables in the expression, which have the form ?symbol, into the internal format (? symbol).
16630 That is to say, a pattern such as (job ?x ?y) is actually represented internally by the system as
16631 (job (? x) (? y)). This increases the efficiency of query processing, since it means that the
16632 system can check to see if an expression is a pattern variable by checking whether the car of the
16633 expression is the symbol ?, rather than having to extract characters from the symbol. The syntax
16634 transformation is accomplished by the following procedure: 81
16635 (define (query-syntax-process exp)
16636 (map-over-symbols expand-question-mark exp))
16637 (define (map-over-symbols proc exp)
16638 (cond ((pair? exp)
16639 (cons (map-over-symbols proc (car exp))
16640 (map-over-symbols proc (cdr exp))))
16641 ((symbol? exp) (proc exp))
16642 (else exp)))
16643 (define (expand-question-mark symbol)
16644 (let ((chars (symbol->string symbol)))
16645 (if (string=? (substring chars 0 1) "?")
(list '?
16647 (string->symbol
16648 (substring chars 1 (string-length chars))))
16649 symbol)))
16650 Once the variables are transformed in this way, the variables in a pattern are lists starting with ?, and
16651 the constant symbols (which need to be recognized for data-base indexing, section 4.4.4.5) are just the
16652 symbols.
16653 (define (var? exp)
(tagged-list? exp '?))
16655 (define (constant-symbol? exp) (symbol? exp))
16656 Unique variables are constructed during rule application (in section 4.4.4.4) by means of the following
16657 procedures. The unique identifier for a rule application is a number, which is incremented each time a
16658 rule is applied.
16659 (define rule-counter 0)
16660 (define (new-rule-application-id)
16661 (set! rule-counter (+ 1 rule-counter))
16662 rule-counter)
16663 (define (make-new-variable var rule-application-id)
(cons '? (cons rule-application-id (cdr var))))
16665
When query-driver-loop instantiates the query to print the answer, it converts any unbound
16667 pattern variables back to the right form for printing, using
16668 (define (contract-question-mark variable)
16669 (string->symbol
16670 (string-append "?"
16671 (if (number? (cadr variable))
16672 (string-append (symbol->string (caddr variable))
16673 "-"
16674 (number->string (cadr variable)))
16675 (symbol->string (cadr variable))))))
16676
16677 4.4.4.8 Frames and Bindings
16678 Frames are represented as lists of bindings, which are variable-value pairs:
(define (make-binding variable value)
  (cons variable value))
(define (binding-variable binding)
  (car binding))
(define (binding-value binding)
  (cdr binding))
(define (binding-in-frame variable frame)
  (assoc variable frame))
(define (extend variable value frame)
  (cons (make-binding variable value) frame))
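A small illustration (not part of the original text): extending the empty frame with a binding for the
internal-format variable (? x) and looking it up again:
(define frame (extend '(? x) '(Bitdiddle Ben) '()))
(binding-value (binding-in-frame '(? x) frame))
; => (Bitdiddle Ben)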
16689 Exercise 4.71. Louis Reasoner wonders why the simple-query and disjoin procedures
16690 (section 4.4.4.2) are implemented using explicit delay operations, rather than being defined as
16691 follows:
(define (simple-query query-pattern frame-stream)
  (stream-flatmap
   (lambda (frame)
     (stream-append (find-assertions query-pattern frame)
                    (apply-rules query-pattern frame)))
   frame-stream))
(define (disjoin disjuncts frame-stream)
  (if (empty-disjunction? disjuncts)
      the-empty-stream
      (interleave
       (qeval (first-disjunct disjuncts) frame-stream)
       (disjoin (rest-disjuncts disjuncts) frame-stream))))
16704 Can you give examples of queries where these simpler definitions would lead to undesirable behavior?
16705 Exercise 4.72. Why do disjoin and stream-flatmap interleave the streams rather than simply
16706 append them? Give examples that illustrate why interleaving works better. (Hint: Why did we use
16707 interleave in section 3.5.3?)
16708
16709 \fExercise 4.73. Why does flatten-stream use delay explicitly? What would be wrong with
16710 defining it as follows:
(define (flatten-stream stream)
  (if (stream-null? stream)
      the-empty-stream
      (interleave
       (stream-car stream)
       (flatten-stream (stream-cdr stream)))))
16717 Exercise 4.74. Alyssa P. Hacker proposes to use a simpler version of stream-flatmap in
16718 negate, lisp-value, and find-assertions. She observes that the procedure that is mapped
16719 over the frame stream in these cases always produces either the empty stream or a singleton stream, so
16720 no interleaving is needed when combining these streams.
16721 a. Fill in the missing expressions in Alyssa’s program.
(define (simple-stream-flatmap proc s)
  (simple-flatten (stream-map proc s)))
(define (simple-flatten stream)
  (stream-map <??>
              (stream-filter <??> stream)))
16727 b. Does the query system’s behavior change if we change it in this way?
16728 Exercise 4.75. Implement for the query language a new special form called unique. Unique
16729 should succeed if there is precisely one item in the data base satisfying a specified query. For example,
16730 (unique (job ?x (computer wizard)))
16731 should print the one-item stream
16732 (unique (job (Bitdiddle Ben) (computer wizard)))
16733 since Ben is the only computer wizard, and
16734 (unique (job ?x (computer programmer)))
16735 should print the empty stream, since there is more than one computer programmer. Moreover,
16736 (and (job ?x ?j) (unique (job ?anyone ?j)))
16737 should list all the jobs that are filled by only one person, and the people who fill them.
16738 There are two parts to implementing unique. The first is to write a procedure that handles this
16739 special form, and the second is to make qeval dispatch to that procedure. The second part is trivial,
16740 since qeval does its dispatching in a data-directed way. If your procedure is called
16741 uniquely-asserted, all you need to do is
16742 (put ’unique ’qeval uniquely-asserted)
16743
16744 \fand qeval will dispatch to this procedure for every query whose type (car) is the symbol
16745 unique.
16746 The real problem is to write the procedure uniquely-asserted. This should take as input the
16747 contents (cdr) of the unique query, together with a stream of frames. For each frame in the
16748 stream, it should use qeval to find the stream of all extensions to the frame that satisfy the given
16749 query. Any stream that does not have exactly one item in it should be eliminated. The remaining
16750 streams should be passed back to be accumulated into one big stream that is the result of the unique
16751 query. This is similar to the implementation of the not special form.
16752 Test your implementation by forming a query that lists all people who supervise precisely one person.
16753 Exercise 4.76. Our implementation of and as a series combination of queries (figure 4.5) is elegant,
16754 but it is inefficient because in processing the second query of the and we must scan the data base for
16755 each frame produced by the first query. If the data base has N elements, and a typical query produces a
16756 number of output frames proportional to N (say N/k), then scanning the data base for each frame
produced by the first query will require N²/k calls to the pattern matcher. Another approach would be
16758 to process the two clauses of the and separately, then look for all pairs of output frames that are
compatible. If each query produces N/k output frames, then this means that we must perform N²/k²
16760 compatibility checks -- a factor of k fewer than the number of matches required in our current method.
16761 Devise an implementation of and that uses this strategy. You must implement a procedure that takes
16762 two frames as inputs, checks whether the bindings in the frames are compatible, and, if so, produces a
16763 frame that merges the two sets of bindings. This operation is similar to unification.
16764 Exercise 4.77. In section 4.4.3 we saw that not and lisp-value can cause the query language to
16765 give ‘‘wrong’’ answers if these filtering operations are applied to frames in which variables are
16766 unbound. Devise a way to fix this shortcoming. One idea is to perform the filtering in a ‘‘delayed’’
16767 manner by appending to the frame a ‘‘promise’’ to filter that is fulfilled only when enough variables
16768 have been bound to make the operation possible. We could wait to perform filtering until all other
16769 operations have been performed. However, for efficiency’s sake, we would like to perform filtering as
16770 soon as possible so as to cut down on the number of intermediate frames generated.
16771 Exercise 4.78. Redesign the query language as a nondeterministic program to be implemented using
16772 the evaluator of section 4.3, rather than as a stream process. In this approach, each query will produce
16773 a single answer (rather than the stream of all answers) and the user can type try-again to see more
16774 answers. You should find that much of the mechanism we built in this section is subsumed by
16775 nondeterministic search and backtracking. You will probably also find, however, that your new query
16776 language has subtle differences in behavior from the one implemented here. Can you find examples
16777 that illustrate this difference?
16778 Exercise 4.79. When we implemented the Lisp evaluator in section 4.1, we saw how to use local
16779 environments to avoid name conflicts between the parameters of procedures. For example, in
16780 evaluating
(define (square x)
  (* x x))
(define (sum-of-squares x y)
  (+ (square x) (square y)))
(sum-of-squares 3 4)
16786
16787 \fthere is no confusion between the x in square and the x in sum-of-squares, because we
16788 evaluate the body of each procedure in an environment that is specially constructed to contain bindings
16789 for the local variables. In the query system, we used a different strategy to avoid name conflicts in
16790 applying rules. Each time we apply a rule we rename the variables with new names that are guaranteed
16791 to be unique. The analogous strategy for the Lisp evaluator would be to do away with local
16792 environments and simply rename the variables in the body of a procedure each time we apply the
16793 procedure.
16794 Implement for the query language a rule-application method that uses environments rather than
16795 renaming. See if you can build on your environment structure to create constructs in the query
16796 language for dealing with large systems, such as the rule analog of block-structured procedures. Can
16797 you relate any of this to the problem of making deductions in a context (e.g., ‘‘If I supposed that P
16798 were true, then I would be able to deduce A and B.’’) as a method of problem solving? (This problem
16799 is open-ended. A good answer is probably worth a Ph.D.)
16800 58 Logic programming has grown out of a long history of research in automatic theorem proving.
16801
16802 Early theorem-proving programs could accomplish very little, because they exhaustively searched the
16803 space of possible proofs. The major breakthrough that made such a search plausible was the discovery
16804 in the early 1960s of the unification algorithm and the resolution principle (Robinson 1965).
16805 Resolution was used, for example, by Green and Raphael (1968) (see also Green 1969) as the basis for
16806 a deductive question-answering system. During most of this period, researchers concentrated on
16807 algorithms that are guaranteed to find a proof if one exists. Such algorithms were difficult to control
16808 and to direct toward a proof. Hewitt (1969) recognized the possibility of merging the control structure
16809 of a programming language with the operations of a logic-manipulation system, leading to the work in
16810 automatic search mentioned in section 4.3.1 (footnote 47). At the same time that this was being done,
16811 Colmerauer, in Marseille, was developing rule-based systems for manipulating natural language (see
16812 Colmerauer et al. 1973). He invented a programming language called Prolog for representing those
16813 rules. Kowalski (1973; 1979), in Edinburgh, recognized that execution of a Prolog program could be
16814 interpreted as proving theorems (using a proof technique called linear Horn-clause resolution). The
16815 merging of the last two strands led to the logic-programming movement. Thus, in assigning credit for
16816 the development of logic programming, the French can point to Prolog’s genesis at the University of
16817 Marseille, while the British can highlight the work at the University of Edinburgh. According to
16818 people at MIT, logic programming was developed by these groups in an attempt to figure out what
16819 Hewitt was talking about in his brilliant but impenetrable Ph.D. thesis. For a history of logic
16820 programming, see Robinson 1983.
16821 59 To see the correspondence between the rules and the procedure, let x in the procedure (where x is
16822
16823 nonempty) correspond to (cons u v) in the rule. Then z in the rule corresponds to the append of
16824 (cdr x) and y.
16825 60 This certainly does not relieve the user of the entire problem of how to compute the answer. There
16826
16827 are many different mathematically equivalent sets of rules for formulating the append relation, only
16828 some of which can be turned into effective devices for computing in any direction. In addition,
16829 sometimes ‘‘what is’’ information gives no clue ‘‘how to’’ compute an answer. For example, consider
the problem of computing the y such that y² = x.
16831 61 Interest in logic programming peaked during the early 80s when the Japanese government began an
16832
16833 ambitious project aimed at building superfast computers optimized to run logic programming
16834 languages. The speed of such computers was to be measured in LIPS (Logical Inferences Per Second)
16835 rather than the usual FLOPS (FLoating-point Operations Per Second). Although the project succeeded
16836
16837 \fin developing hardware and software as originally planned, the international computer industry moved
16838 in a different direction. See Feigenbaum and Shrobe 1993 for an overview evaluation of the Japanese
16839 project. The logic programming community has also moved on to consider relational programming
16840 based on techniques other than simple pattern matching, such as the ability to deal with numerical
16841 constraints such as the ones illustrated in the constraint-propagation system of section 3.3.5.
16842 62 This uses the dotted-tail notation introduced in exercise 2.20.
16843 63 Actually, this description of not is valid only for simple cases. The real behavior of not is more
16844
16845 complex. We will examine not’s peculiarities in sections 4.4.2 and 4.4.3.
16846 64 Lisp-value should be used only to perform an operation not provided in the query language. In
16847
16848 particular, it should not be used to test equality (since that is what the matching in the query language
16849 is designed to do) or inequality (since that can be done with the same rule shown below).
16850 65 Notice that we do not need same in order to make two things be the same: We just use the same
16851
16852 pattern variable for each -- in effect, we have one thing instead of two things in the first place. For
16853 example, see ?town in the lives-near rule and ?middle-manager in the wheel rule below.
16854 Same is useful when we want to force two things to be different, such as ?person-1 and
16855 ?person-2 in the lives-near rule. Although using the same pattern variable in two parts of a
16856 query forces the same value to appear in both places, using different pattern variables does not force
16857 different values to appear. (The values assigned to different pattern variables may be the same or
16858 different.)
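(For reference, same is defined in section 4.4.1 by a rule with no body: (rule (same ?x ?x)).)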
16859 66 We will also allow rules without bodies, as in same, and we will interpret such a rule to mean that
16860
16861 the rule conclusion is satisfied by any values of the variables.
16862 67 Because matching is generally very expensive, we would like to avoid applying the full matcher to
16863
16864 every element of the data base. This is usually arranged by breaking up the process into a fast, coarse
16865 match and the final match. The coarse match filters the data base to produce a small set of candidates
16866 for the final match. With care, we can arrange our data base so that some of the work of coarse
matching can be done when the data base is constructed rather than when we want to select the
16868 candidates. This is called indexing the data base. There is a vast technology built around
16869 data-base-indexing schemes. Our implementation, described in section 4.4.4, contains a
16870 simple-minded form of such an optimization.
16871 68 But this kind of exponential explosion is not common in and queries because the added conditions
16872
16873 tend to reduce rather than expand the number of frames produced.
16874 69 There is a large literature on data-base-management systems that is concerned with how to handle
16875
16876 complex queries efficiently.
16877 70 There is a subtle difference between this filter implementation of not and the usual meaning of
16878
16879 not in mathematical logic. See section 4.4.3.
16880 71 In one-sided pattern matching, all the equations that contain pattern variables are explicit and
16881
16882 already solved for the unknown (the pattern variable).
16883 72 Another way to think of unification is that it generates the most general pattern that is a
16884
16885 specialization of the two input patterns. That is, the unification of (?x a) and ((b ?y) ?z) is
16886 ((b ?y) a), and the unification of (?x a ?y) and (?y ?z a), discussed above, is (a a a).
16887 For our implementation, it is more convenient to think of the result of unification as a frame rather
16888
16889 \fthan a pattern.
16890 73 Since unification is a generalization of matching, we could simplify the system by using the unifier
16891
16892 to produce both streams. Treating the easy case with the simple matcher, however, illustrates how
16893 matching (as opposed to full-blown unification) can be useful in its own right.
16894 74 The reason we use streams (rather than lists) of frames is that the recursive application of rules can
16895
16896 generate infinite numbers of values that satisfy a query. The delayed evaluation embodied in streams is
16897 crucial here: The system will print responses one by one as they are generated, regardless of whether
16898 there are a finite or infinite number of responses.
16899 75 That a particular method of inference is legitimate is not a trivial assertion. One must prove that if
16900
16901 one starts with true premises, only true conclusions can be derived. The method of inference
16902 represented by rule applications is modus ponens, the familiar method of inference that says that if A is
16903 true and A implies B is true, then we may conclude that B is true.
16904 76 We must qualify this statement by agreeing that, in speaking of the ‘‘inference’’ accomplished by a
16905
16906 logic program, we assume that the computation terminates. Unfortunately, even this qualified
16907 statement is false for our implementation of the query language (and also false for programs in Prolog
16908 and most other current logic programming languages) because of our use of not and lisp-value.
16909 As we will describe below, the not implemented in the query language is not always consistent with
16910 the not of mathematical logic, and lisp-value introduces additional complications. We could
16911 implement a language consistent with mathematical logic by simply removing not and
16912 lisp-value from the language and agreeing to write programs using only simple queries, and, and
16913 or. However, this would greatly restrict the expressive power of the language. One of the major
16914 concerns of research in logic programming is to find ways to achieve more consistency with
16915 mathematical logic without unduly sacrificing expressive power.
16916 77 This is not a problem of the logic but one of the procedural interpretation of the logic provided by
16917
16918 our interpreter. We could write an interpreter that would not fall into a loop here. For example, we
16919 could enumerate all the proofs derivable from our assertions and our rules in a breadth-first rather than
16920 a depth-first order. However, such a system makes it more difficult to take advantage of the order of
16921 deductions in our programs. One attempt to build sophisticated control into such a program is
16922 described in deKleer et al. 1977. Another technique, which does not lead to such serious control
16923 problems, is to put in special knowledge, such as detectors for particular kinds of loops (exercise 4.67).
16924 However, there can be no general scheme for reliably preventing a system from going down infinite
16925 paths in performing deductions. Imagine a diabolical rule of the form ‘‘To show P(x) is true, show that
16926 P(f(x)) is true,’’ for some suitably chosen function f.
16927 78 Consider the query (not (baseball-fan (Bitdiddle Ben))). The system finds that
16928
16929 (baseball-fan (Bitdiddle Ben)) is not in the data base, so the empty frame does not
16930 satisfy the pattern and is not filtered out of the initial stream of frames. The result of the query is thus
16931 the empty frame, which is used to instantiate the input query to produce (not (baseball-fan
16932 (Bitdiddle Ben))).
16933 79 A discussion and justification of this treatment of not can be found in the article by Clark (1978).
16934 80 In general, unifying ?y with an expression involving ?y would require our being able to find a
16935
16936 fixed point of the equation ?y = <expression involving ?y>. It is sometimes possible to syntactically
16937 form an expression that appears to be the solution. For example, ?y = (f ?y) seems to have the
16938 fixed point (f (f (f ... ))), which we can produce by beginning with the expression (f ?y)
16939 and repeatedly substituting (f ?y) for ?y. Unfortunately, not every such equation has a meaningful
16940
16941 \ffixed point. The issues that arise here are similar to the issues of manipulating infinite series in
16942 mathematics. For example, we know that 2 is the solution to the equation y = 1 + y/2. Beginning with
the expression 1 + y/2 and repeatedly substituting 1 + y/2 for y gives
2 = 1 + y/2 = 1 + (1 + y/2)/2 = 1 + 1/2 + y/4 = ...
which leads to
2 = 1 + 1/2 + 1/4 + 1/8 + ...
However, if we try the same manipulation beginning with the observation that -1 is the solution to the
equation y = 1 + 2y, we obtain
-1 = 1 + 2y = 1 + 2(1 + 2y) = 3 + 4y = ...
which leads to
-1 = 1 + 2 + 4 + 8 + ...
Although the formal manipulations used in deriving these two equations are identical, the first result is
a valid assertion about infinite series but the second is not. Similarly, for our unification results,
reasoning with an arbitrary syntactically constructed expression may lead to errors.
16955 81 Most Lisp systems give the user the ability to modify the ordinary read procedure to perform such
16956
16957 transformations by defining reader macro characters. Quoted expressions are already handled in this
16958 way: The reader automatically translates ’expression into (quote expression) before the
16959 evaluator sees it. We could arrange for ?expression to be transformed into (? expression) in
16960 the same way; however, for the sake of clarity we have included the transformation procedure here
16961 explicitly.
16962 Expand-question-mark and contract-question-mark use several procedures with
16963 string in their names. These are Scheme primitives.
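For example (standard Scheme behavior, shown here for illustration):
(symbol->string '?x)    ; => "?x"
(substring "?x" 1 2)    ; => "x"
(string->symbol "x")    ; => the symbol x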
\f
16967
16968 Chapter 5
16969 Computing with Register Machines
16970 My aim is to show that the heavenly machine is not a
16971 kind of divine, live being, but a kind of clockwork (and
16972 he who believes that a clock has soul attributes the
16973 maker’s glory to the work), insofar as nearly all the
16974 manifold motions are caused by a most simple and
16975 material force, just as all motions of the clock are caused
16976 by a single weight.
16977 Johannes Kepler (letter to Herwart von Hohenburg, 1605)
16978 We began this book by studying processes and by describing processes in terms of procedures written
16979 in Lisp. To explain the meanings of these procedures, we used a succession of models of evaluation:
16980 the substitution model of chapter 1, the environment model of chapter 3, and the metacircular
16981 evaluator of chapter 4. Our examination of the metacircular evaluator, in particular, dispelled much of
16982 the mystery of how Lisp-like languages are interpreted. But even the metacircular evaluator leaves
16983 important questions unanswered, because it fails to elucidate the mechanisms of control in a Lisp
16984 system. For instance, the evaluator does not explain how the evaluation of a subexpression manages to
16985 return a value to the expression that uses this value, nor does the evaluator explain how some recursive
16986 procedures generate iterative processes (that is, are evaluated using constant space) whereas other
16987 recursive procedures generate recursive processes. These questions remain unanswered because the
16988 metacircular evaluator is itself a Lisp program and hence inherits the control structure of the
16989 underlying Lisp system. In order to provide a more complete description of the control structure of the
16990 Lisp evaluator, we must work at a more primitive level than Lisp itself.
16991 In this chapter we will describe processes in terms of the step-by-step operation of a traditional
16992 computer. Such a computer, or register machine, sequentially executes instructions that manipulate the
16993 contents of a fixed set of storage elements called registers. A typical register-machine instruction
16994 applies a primitive operation to the contents of some registers and assigns the result to another register.
16995 Our descriptions of processes executed by register machines will look very much like
16996 ‘‘machine-language’’ programs for traditional computers. However, instead of focusing on the
16997 machine language of any particular computer, we will examine several Lisp procedures and design a
16998 specific register machine to execute each procedure. Thus, we will approach our task from the
16999 perspective of a hardware architect rather than that of a machine-language computer programmer. In
17000 designing register machines, we will develop mechanisms for implementing important programming
17001 constructs such as recursion. We will also present a language for describing designs for register
17002 machines. In section 5.2 we will implement a Lisp program that uses these descriptions to simulate the
17003 machines we design.
17004 Most of the primitive operations of our register machines are very simple. For example, an operation
17005 might add the numbers fetched from two registers, producing a result to be stored into a third register.
17006 Such an operation can be performed by easily described hardware. In order to deal with list structure,
17007 however, we will also use the memory operations car, cdr, and cons, which require an elaborate
17008 storage-allocation mechanism. In section 5.3 we study their implementation in terms of more
17009
17010 \felementary operations.
17011 In section 5.4, after we have accumulated experience formulating simple procedures as register
17012 machines, we will design a machine that carries out the algorithm described by the metacircular
17013 evaluator of section 4.1. This will fill in the gap in our understanding of how Scheme expressions are
17014 interpreted, by providing an explicit model for the mechanisms of control in the evaluator. In
17015 section 5.5 we will study a simple compiler that translates Scheme programs into sequences of
17016 instructions that can be executed directly with the registers and operations of the evaluator register
17017 machine.
\f
17021
17022 5.1 Designing Register Machines
17023 To design a register machine, we must design its data paths (registers and operations) and the
17024 controller that sequences these operations. To illustrate the design of a simple register machine, let us
17025 examine Euclid’s Algorithm, which is used to compute the greatest common divisor (GCD) of two
17026 integers. As we saw in section 1.2.5, Euclid’s Algorithm can be carried out by an iterative process, as
17027 specified by the following procedure:
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
17032 A machine to carry out this algorithm must keep track of two numbers, a and b, so let us assume that
17033 these numbers are stored in two registers with those names. The basic operations required are testing
17034 whether the contents of register b is zero and computing the remainder of the contents of register a
17035 divided by the contents of register b. The remainder operation is a complex process, but assume for
17036 the moment that we have a primitive device that computes remainders. On each cycle of the GCD
17037 algorithm, the contents of register a must be replaced by the contents of register b, and the contents of
17038 b must be replaced by the remainder of the old contents of a divided by the old contents of b. It would
17039 be convenient if these replacements could be done simultaneously, but in our model of register
17040 machines we will assume that only one register can be assigned a new value at each step. To
17041 accomplish the replacements, our machine will use a third ‘‘temporary’’ register, which we call t.
17042 (First the remainder will be placed in t, then the contents of b will be placed in a, and finally the
17043 remainder stored in t will be placed in b.)
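In the register-machine notation introduced in section 5.1.1 below, this three-step replacement will
appear as
(assign t (op rem) (reg a) (reg b))
(assign a (reg b))
(assign b (reg t))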
17044 We can illustrate the registers and operations required for this machine by using the data-path diagram
17045 shown in figure 5.1. In this diagram, the registers (a, b, and t) are represented by rectangles. Each
17046 way to assign a value to a register is indicated by an arrow with an X behind the head, pointing from
17047 the source of data to the register. We can think of the X as a button that, when pushed, allows the value
17048 at the source to ‘‘flow’’ into the designated register. The label next to each button is the name we will
17049 use to refer to the button. The names are arbitrary, and can be chosen to have mnemonic value (for
17050 example, a<-b denotes pushing the button that assigns the contents of register b to register a). The
17051 source of data for a register can be another register (as in the a<-b assignment), an operation result
17052 (as in the t<-r assignment), or a constant (a built-in value that cannot be changed, represented in a
17053 data-path diagram by a triangle containing the constant).
17054 An operation that computes a value from constants and the contents of registers is represented in a
17055 data-path diagram by a trapezoid containing a name for the operation. For example, the box marked
17056 rem in figure 5.1 represents an operation that computes the remainder of the contents of the registers
17057 a and b to which it is attached. Arrows (without buttons) point from the input registers and constants
17058 to the box, and arrows connect the operation’s output value to registers. A test is represented by a
17059 circle containing a name for the test. For example, our GCD machine has an operation that tests
17060 whether the contents of register b is zero. A test also has arrows from its input registers and constants,
17061 but it has no output arrows; its value is used by the controller rather than by the data paths. Overall, the
17062 data-path diagram shows the registers and operations that are required for the machine and how they
17063 must be connected. If we view the arrows as wires and the X buttons as switches, the data-path
17064 diagram is very like the wiring diagram for a machine that could be constructed from electrical
17065 components.
17066
\fFigure 5.1: Data paths for a GCD machine.
In order for the data paths to actually compute GCDs, the buttons must be pushed in the correct
sequence. We will describe this sequence in terms of a controller diagram, as illustrated in
figure 5.2. The elements of the controller diagram indicate how the data-path components should be
operated. The rectangular boxes in the controller diagram identify data-path buttons to be pushed, and
the arrows describe the sequencing from one step to the next. The diamond in the diagram represents a
decision. One of the two sequencing arrows will be followed, depending on the value of the data-path
test identified in the diamond. We can interpret the controller in terms of a physical analogy: Think of
the diagram as a maze in which a marble is rolling. When the marble rolls into a box, it pushes the
data-path button that is named by the box. When the marble rolls into a decision node (such as the test
for b = 0), it leaves the node on the path determined by the result of the indicated test. Taken together,
the data paths and the controller completely describe a machine for computing GCDs. We start the
controller (the rolling marble) at the place marked start, after placing numbers in registers a and b.
When the controller reaches done, we will find the value of the GCD in register a.
Figure 5.2: Controller for a GCD machine.
Exercise 5.1. Design a register machine to compute factorials using the iterative algorithm specified
by the following procedure. Draw data-path and controller diagrams for this machine.
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))

5.1.1 A Language for Describing Register Machines
Data-path and controller diagrams are adequate for representing simple machines such as GCD, but
they are unwieldy for describing large machines such as a Lisp interpreter. To make it possible to deal
with complex machines, we will create a language that presents, in textual form, all the information
given by the data-path and controller diagrams. We will start with a notation that directly mirrors the
diagrams.
We define the data paths of a machine by describing the registers and the operations. To describe a
register, we give it a name and specify the buttons that control assignment to it. We give each of these
buttons a name and specify the source of the data that enters the register under the button’s control.
(The source is a register, a constant, or an operation.) To describe an operation, we give it a name and
specify its inputs (registers or constants).
We define the controller of a machine as a sequence of instructions together with labels that identify
entry points in the sequence. An instruction is one of the following:
The name of a data-path button to push to assign a value to a register. (This corresponds to a box
in the controller diagram.)
A test instruction, that performs a specified test.
A conditional branch (branch instruction) to a location indicated by a controller label, based on
the result of the previous test. (The test and branch together correspond to a diamond in the
controller diagram.) If the test is false, the controller should continue with the next instruction in
the sequence. Otherwise, the controller should continue with the instruction after the label.
An unconditional branch (goto instruction) naming a controller label at which to continue
execution.
The machine starts at the beginning of the controller instruction sequence and stops when execution
reaches the end of the sequence. Except when a branch changes the flow of control, instructions are
executed in the order in which they are listed.
17258
\f(data-paths
 (registers
  ((name a)
   (buttons ((name a<-b) (source (register b)))))
  ((name b)
   (buttons ((name b<-t) (source (register t)))))
  ((name t)
   (buttons ((name t<-r) (source (operation rem))))))
 (operations
  ((name rem)
   (inputs (register a) (register b)))
  ((name =)
   (inputs (register b) (constant 0)))))

(controller
 test-b                       ; label
   (test =)                   ; test
   (branch (label gcd-done))  ; conditional branch
   (t<-r)                     ; button push
   (a<-b)                     ; button push
   (b<-t)                     ; button push
   (goto (label test-b))      ; unconditional branch
 gcd-done)                    ; label
17289 Figure 5.3: A specification of the GCD machine.
17291 Figure 5.3 shows the GCD machine described in this way. This example only hints at the generality of
17292 these descriptions, since the GCD machine is a very simple case: Each register has only one button,
17293 and each button and test is used only once in the controller.
17294 Unfortunately, it is difficult to read such a description. In order to understand the controller
17295 instructions we must constantly refer back to the definitions of the button names and the operation
17296 names, and to understand what the buttons do we may have to refer to the definitions of the operation
17297 names. We will thus transform our notation to combine the information from the data-path and
17298 controller descriptions so that we see it all together.
17299 To obtain this form of description, we will replace the arbitrary button and operation names by the
17300 definitions of their behavior. That is, instead of saying (in the controller) ‘‘Push button t<-r’’ and
17301 separately saying (in the data paths) ‘‘Button t<-r assigns the value of the rem operation to register
17302 t’’ and ‘‘The rem operation’s inputs are the contents of registers a and b,’’ we will say (in the
17303 controller) ‘‘Push the button that assigns to register t the value of the rem operation on the contents
17304 of registers a and b.’’ Similarly, instead of saying (in the controller) ‘‘Perform the = test’’ and
17305 separately saying (in the data paths) ‘‘The = test operates on the contents of register b and the constant
17306 0,’’ we will say ‘‘Perform the = test on the contents of register b and the constant 0.’’ We will omit
17307 the data-path description, leaving only the controller sequence. Thus, the GCD machine is described as
17308 follows:
17309
\f(controller
 test-b
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label test-b))
 gcd-done)
17319 This form of description is easier to read than the kind illustrated in figure 5.3, but it also has
17320 disadvantages:
17321 It is more verbose for large machines, because complete descriptions of the data-path elements
17322 are repeated whenever the elements are mentioned in the controller instruction sequence. (This is
17323 not a problem in the GCD example, because each operation and button is used only once.)
17324 Moreover, repeating the data-path descriptions obscures the actual data-path structure of the
17325 machine; it is not obvious for a large machine how many registers, operations, and buttons there
17326 are and how they are interconnected.
17327 Because the controller instructions in a machine definition look like Lisp expressions, it is easy to
17328 forget that they are not arbitrary Lisp expressions. They can notate only legal machine operations.
17329 For example, operations can operate directly only on constants and the contents of registers, not
17330 on the results of other operations.
17331 In spite of these disadvantages, we will use this register-machine language throughout this chapter,
17332 because we will be more concerned with understanding controllers than with understanding the
17333 elements and connections in data paths. We should keep in mind, however, that data-path design is
17334 crucial in designing real machines.
17335 Exercise 5.2. Use the register-machine language to describe the iterative factorial machine of
17336 exercise 5.1.
17337
17338 Actions
17339 Let us modify the GCD machine so that we can type in the numbers whose GCD we want and get the
17340 answer printed at our terminal. We will not discuss how to make a machine that can read and print, but
17341 will assume (as we do when we use read and display in Scheme) that they are available as
17342 primitive operations. 1
17343 Read is like the operations we have been using in that it produces a value that can be stored in a
17344 register. But read does not take inputs from any registers; its value depends on something that
17345 happens outside the parts of the machine we are designing. We will allow our machine’s operations to
17346 have such behavior, and thus will draw and notate the use of read just as we do any other operation
17347 that computes a value.
17348 Print, on the other hand, differs from the operations we have been using in a fundamental way: It
17349 does not produce an output value to be stored in a register. Though it has an effect, this effect is not on
17350 a part of the machine we are designing. We will refer to this kind of operation as an action. We will
17351 represent an action in a data-path diagram just as we represent an operation that computes a value -- as
17352 a trapezoid that contains the name of the action. Arrows point to the action box from any inputs
17353 (registers or constants). We also associate a button with the action. Pushing the button makes the
17354
17355 \faction happen. To make a controller push an action button we use a new kind of instruction called
17356 perform. Thus, the action of printing the contents of register a is represented in a controller
17357 sequence by the instruction
17358 (perform (op print) (reg a))
17359 Figure 5.4 shows the data paths and controller for the new GCD machine. Instead of having the
17360 machine stop after printing the answer, we have made it start over, so that it repeatedly reads a pair of
17361 numbers, computes their GCD, and prints the result. This structure is like the driver loops we used in
17362 the interpreters of chapter 4.
17363
(controller
 gcd-loop
   (assign a (op read))
   (assign b (op read))
 test-b
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label test-b))
 gcd-done
   (perform (op print) (reg a))
   (goto (label gcd-loop)))
17378 Figure 5.4: A GCD machine that reads inputs and prints results.
17380
17381 \f5.1.2 Abstraction in Machine Design
17382 We will often define a machine to include ‘‘primitive’’ operations that are actually very complex. For
17383 example, in sections 5.4 and 5.5 we will treat Scheme’s environment manipulations as primitive. Such
17384 abstraction is valuable because it allows us to ignore the details of parts of a machine so that we can
17385 concentrate on other aspects of the design. The fact that we have swept a lot of complexity under the
17386 rug, however, does not mean that a machine design is unrealistic. We can always replace the complex
17387 ‘‘primitives’’ by simpler primitive operations.
17388 Consider the GCD machine. The machine has an instruction that computes the remainder of the
17389 contents of registers a and b and assigns the result to register t. If we want to construct the GCD
17390 machine without using a primitive remainder operation, we must specify how to compute remainders
17391 in terms of simpler operations, such as subtraction. Indeed, we can write a Scheme procedure that
17392 finds remainders in this way:
(define (remainder n d)
  (if (< n d)
      n
      (remainder (- n d) d)))
17397 We can thus replace the remainder operation in the GCD machine’s data paths with a subtraction
17398 operation and a comparison test. Figure 5.5 shows the data paths and controller for the elaborated
17399 machine. The instruction
17400
17401 \fFigure 5.5: Data paths and controller for the elaborated GCD machine.
17403 (assign t (op rem) (reg a) (reg b))
17404 in the GCD controller definition is replaced by a sequence of instructions that contains a loop, as
17405 shown in figure 5.6.
17406
\f(controller
 test-b
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (reg a))
 rem-loop
   (test (op <) (reg t) (reg b))
   (branch (label rem-done))
   (assign t (op -) (reg t) (reg b))
   (goto (label rem-loop))
 rem-done
   (assign a (reg b))
   (assign b (reg t))
   (goto (label test-b))
 gcd-done)
17422 Figure 5.6: Controller instruction sequence for the GCD machine in figure 5.5.
17424 Exercise 5.3. Design a machine to compute square roots using Newton’s method, as described in
17425 section 1.1.7:
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
17436 Begin by assuming that good-enough? and improve operations are available as primitives. Then
17437 show how to expand these in terms of arithmetic operations. Describe each version of the sqrt
17438 machine design by drawing a data-path diagram and writing a controller definition in the
17439 register-machine language.
17440
17441 5.1.3 Subroutines
17442 When designing a machine to perform a computation, we would often prefer to arrange for
17443 components to be shared by different parts of the computation rather than duplicate the components.
17444 Consider a machine that includes two GCD computations -- one that finds the GCD of the contents of
17445 registers a and b and one that finds the GCD of the contents of registers c and d. We might start by
17446 assuming we have a primitive gcd operation, then expand the two instances of gcd in terms of more
17447 primitive operations. Figure 5.7 shows just the GCD portions of the resulting machine’s data paths,
17448 without showing how they connect to the rest of the machine. The figure also shows the corresponding
17449 portions of the machine’s controller sequence.
17450
\fgcd-1
   (test (op =) (reg b) (const 0))
   (branch (label after-gcd-1))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label gcd-1))
after-gcd-1
gcd-2
   (test (op =) (reg d) (const 0))
   (branch (label after-gcd-2))
   (assign s (op rem) (reg c) (reg d))
   (assign c (reg d))
   (assign d (reg s))
   (goto (label gcd-2))
after-gcd-2
17467 Figure 5.7: Portions of the data paths and controller sequence for a machine with two GCD
17468 computations.
17471 This machine has two remainder operation boxes and two boxes for testing equality. If the duplicated
17472 components are complicated, as is the remainder box, this will not be an economical way to build the
17473 machine. We can avoid duplicating the data-path components by using the same components for both
17474 GCD computations, provided that doing so will not affect the rest of the larger machine’s computation.
17475 If the values in registers a and b are not needed by the time the controller gets to gcd-2 (or if these
17476 values can be moved to other registers for safekeeping), we can change the machine so that it uses
17477 registers a and b, rather than registers c and d, in computing the second GCD as well as the first. If
17478 we do this, we obtain the controller sequence shown in figure 5.8.
17479 We have removed the duplicate data-path components (so that the data paths are again as in
17480 figure 5.1), but the controller now has two GCD sequences that differ only in their entry-point labels. It
would be better to replace these two sequences by branches to a single sequence -- a gcd subroutine -- at the end of which we branch back to the correct place in the main instruction sequence. We can
17482
17483 \faccomplish this as follows: Before branching to gcd, we place a distinguishing value (such as 0 or 1)
17484 into a special register, continue. At the end of the gcd subroutine we return either to
17485 after-gcd-1 or to after-gcd-2, depending on the value of the continue register. Figure 5.9
17486 shows the relevant portion of the resulting controller sequence, which includes only a single copy of
17487 the gcd instructions.
gcd-1
   (test (op =) (reg b) (const 0))
   (branch (label after-gcd-1))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label gcd-1))
after-gcd-1
gcd-2
   (test (op =) (reg b) (const 0))
   (branch (label after-gcd-2))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label gcd-2))
after-gcd-2
17504 Figure 5.8: Portions of the controller sequence for a machine that uses the same data-path
17505 components for two different GCD computations.
17508
\fgcd
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label gcd))
gcd-done
   (test (op =) (reg continue) (const 0))
   (branch (label after-gcd-1))
   (goto (label after-gcd-2))
   ;; Before branching to gcd from the first place where
   ;; it is needed, we place 0 in the continue register
   (assign continue (const 0))
   (goto (label gcd))
after-gcd-1
   ;; Before the second use of gcd, we place 1 in the continue register
   (assign continue (const 1))
   (goto (label gcd))
after-gcd-2
17528 after-gcd-2
17529 Figure 5.9: Using a continue register to avoid the duplicate controller sequence in figure 5.8.
17531
\fgcd
   (test (op =) (reg b) (const 0))
   (branch (label gcd-done))
   (assign t (op rem) (reg a) (reg b))
   (assign a (reg b))
   (assign b (reg t))
   (goto (label gcd))
gcd-done
   (goto (reg continue))
   ;; Before calling gcd, we assign to continue
   ;; the label to which gcd should return.
   (assign continue (label after-gcd-1))
   (goto (label gcd))
after-gcd-1
   ;; Here is the second call to gcd, with a different continuation.
   (assign continue (label after-gcd-2))
   (goto (label gcd))
after-gcd-2
17550 Figure 5.10: Assigning labels to the continue register simplifies and generalizes the strategy
17551 shown in figure 5.9.
17554 This is a reasonable approach for handling small problems, but it would be awkward if there were
17555 many instances of GCD computations in the controller sequence. To decide where to continue
17556 executing after the gcd subroutine, we would need tests in the data paths and branch instructions in
17557 the controller for all the places that use gcd. A more powerful method for implementing subroutines is
17558 to have the continue register hold the label of the entry point in the controller sequence at which
17559 execution should continue when the subroutine is finished. Implementing this strategy requires a new
17560 kind of connection between the data paths and the controller of a register machine: There must be a
17561 way to assign to a register a label in the controller sequence in such a way that this value can be
17562 fetched from the register and used to continue execution at the designated entry point.
17563 To reflect this ability, we will extend the assign instruction of the register-machine language to
17564 allow a register to be assigned as value a label from the controller sequence (as a special kind of
17565 constant). We will also extend the goto instruction to allow execution to continue at the entry point
17566 described by the contents of a register rather than only at an entry point described by a constant label.
17567 Using these new constructs we can terminate the gcd subroutine with a branch to the location stored
17568 in the continue register. This leads to the controller sequence shown in figure 5.10.
17569 A machine with more than one subroutine could use multiple continuation registers (e.g.,
17570 gcd-continue, factorial-continue) or we could have all subroutines share a single
17571 continue register. Sharing is more economical, but we must be careful if we have a subroutine
17572 (sub1) that calls another subroutine (sub2). Unless sub1 saves the contents of continue in some
17573 other register before setting up continue for the call to sub2, sub1 will not know where to go
17574 when it is finished. The mechanism developed in the next section to handle recursion also provides a
17575 better solution to this problem of nested subroutine calls.
17576
17577 \f5.1.4 Using a Stack to Implement Recursion
17578 With the ideas illustrated so far, we can implement any iterative process by specifying a register
17579 machine that has a register corresponding to each state variable of the process. The machine repeatedly
17580 executes a controller loop, changing the contents of the registers, until some termination condition is
17581 satisfied. At each point in the controller sequence, the state of the machine (representing the state of
17582 the iterative process) is completely determined by the contents of the registers (the values of the state
17583 variables).
17584 Implementing recursive processes, however, requires an additional mechanism. Consider the following
17585 recursive method for computing factorials, which we first examined in section 1.2.1:
(define (factorial n)
  (if (= n 1)
      1
      (* (factorial (- n 1)) n)))
17590 As we see from the procedure, computing n! requires computing (n - 1)!. Our GCD machine, modeled
17591 on the procedure
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
17596 similarly had to compute another GCD. But there is an important difference between the gcd
17597 procedure, which reduces the original computation to a new GCD computation, and factorial,
17598 which requires computing another factorial as a subproblem. In GCD, the answer to the new GCD
17599 computation is the answer to the original problem. To compute the next GCD, we simply place the
17600 new arguments in the input registers of the GCD machine and reuse the machine’s data paths by
17601 executing the same controller sequence. When the machine is finished solving the final GCD problem,
17602 it has completed the entire computation.
17603 In the case of factorial (or any recursive process) the answer to the new factorial subproblem is not the
17604 answer to the original problem. The value obtained for (n - 1)! must be multiplied by n to get the final
17605 answer. If we try to imitate the GCD design, and solve the factorial subproblem by decrementing the n
17606 register and rerunning the factorial machine, we will no longer have available the old value of n by
17607 which to multiply the result. We thus need a second factorial machine to work on the subproblem. This
17608 second factorial computation itself has a factorial subproblem, which requires a third factorial
17609 machine, and so on. Since each factorial machine contains another factorial machine within it, the total
17610 machine contains an infinite nest of similar machines and hence cannot be constructed from a fixed,
17611 finite number of parts.
17612 Nevertheless, we can implement the factorial process as a register machine if we can arrange to use the
17613 same components for each nested instance of the machine. Specifically, the machine that computes n!
17614 should use the same components to work on the subproblem of computing (n - 1)!, on the subproblem
17615 for (n - 2)!, and so on. This is plausible because, although the factorial process dictates that an
17616 unbounded number of copies of the same machine are needed to perform a computation, only one of
17617 these copies needs to be active at any given time. When the machine encounters a recursive
17618 subproblem, it can suspend work on the main problem, reuse the same physical parts to work on the
17619 subproblem, then continue the suspended computation.
17620
17621 \fIn the subproblem, the contents of the registers will be different than they were in the main problem.
17622 (In this case the n register is decremented.) In order to be able to continue the suspended computation,
17623 the machine must save the contents of any registers that will be needed after the subproblem is solved
17624 so that these can be restored to continue the suspended computation. In the case of factorial, we will
17625 save the old value of n, to be restored when we are finished computing the factorial of the
17626 decremented n register. 2
17627 Since there is no a priori limit on the depth of nested recursive calls, we may need to save an arbitrary
17628 number of register values. These values must be restored in the reverse of the order in which they were
17629 saved, since in a nest of recursions the last subproblem to be entered is the first to be finished. This
17630 dictates the use of a stack, or ‘‘last in, first out’’ data structure, to save register values. We can extend
17631 the register-machine language to include a stack by adding two kinds of instructions: Values are
17632 placed on the stack using a save instruction and restored from the stack using a restore
17633 instruction. After a sequence of values has been saved on the stack, a sequence of restores will
17634 retrieve these values in reverse order. 3
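For example, the fragment around a recursive call in the factorial controller of figure 5.11 below follows exactly this last-in, first-out discipline; the restores mirror the saves in reverse order, so each register gets back its own value:

(save continue)
(save n)
;; ... set up and solve the subproblem ...
(restore n)
(restore continue)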
17635 With the aid of the stack, we can reuse a single copy of the factorial machine’s data paths for each
17636 factorial subproblem. There is a similar design issue in reusing the controller sequence that operates
17637 the data paths. To reexecute the factorial computation, the controller cannot simply loop back to the
17638 beginning, as with an iterative process, because after solving the (n - 1)! subproblem the machine must
17639 still multiply the result by n. The controller must suspend its computation of n!, solve the (n - 1)!
17640 subproblem, then continue its computation of n!. This view of the factorial computation suggests the
17641 use of the subroutine mechanism described in section 5.1.3, which has the controller use a continue
17642 register to transfer to the part of the sequence that solves a subproblem and then continue where it left
17643 off on the main problem. We can thus make a factorial subroutine that returns to the entry point stored
17644 in the continue register. Around each subroutine call, we save and restore continue just as we do
17645 the n register, since each ‘‘level’’ of the factorial computation will use the same continue register.
17646 That is, the factorial subroutine must put a new value in continue when it calls itself for a
17647 subproblem, but it will need the old value in order to return to the place that called it to solve a
17648 subproblem.
17649 Figure 5.11 shows the data paths and controller for a machine that implements the recursive
17650 factorial procedure. The machine has a stack and three registers, called n, val, and continue.
17651 To simplify the data-path diagram, we have not named the register-assignment buttons, only the
17652 stack-operation buttons (sc and sn to save registers, rc and rn to restore registers). To operate the
17653 machine, we put in register n the number whose factorial we wish to compute and start the machine.
17654 When the machine reaches fact-done, the computation is finished and the answer will be found in
17655 the val register. In the controller sequence, n and continue are saved before each recursive call
17656 and restored upon return from the call. Returning from a call is accomplished by branching to the
17657 location stored in continue. Continue is initialized when the machine starts so that the last return
17658 will go to fact-done. The val register, which holds the result of the factorial computation, is not
17659 saved before the recursive call, because the old contents of val is not useful after the subroutine
17660 returns. Only the new value, which is the value produced by the subcomputation, is needed. Although
17661 in principle the factorial computation requires an infinite machine, the machine in figure 5.11 is
17662 actually finite except for the stack, which is potentially unbounded. Any particular physical
17663 implementation of a stack, however, will be of finite size, and this will limit the depth of recursive
17664 calls that can be handled by the machine. This implementation of factorial illustrates the general
17665 strategy for realizing recursive algorithms as ordinary register machines augmented by stacks. When a
17666 recursive subproblem is encountered, we save on the stack the registers whose current values will be
17667 required after the subproblem is solved, solve the recursive subproblem, then restore the saved
17668 registers and continue execution on the main problem. The continue register must always be saved.
17669
17670 \fWhether there are other registers that need to be saved depends on the particular machine, since not all
17671 recursive computations need the original values of registers that are modified during solution of the
17672 subproblem (see exercise 5.4).
17673
17674 A double recursion
17675 Let us examine a more complex recursive process, the tree-recursive computation of the Fibonacci
17676 numbers, which we introduced in section 1.2.2:
17677 (define (fib n)
17678 (if (< n 2)
17679 n
17680 (+ (fib (- n 1)) (fib (- n 2)))))
17681 Just as with factorial, we can implement the recursive Fibonacci computation as a register machine
17682 with registers n, val, and continue. The machine is more complex than the one for factorial,
because there are two places in the controller sequence where we need to perform recursive calls -- once to compute Fib(n - 1) and once to compute Fib(n - 2). To set up for each of these calls, we save
17684 the registers whose values will be needed later, set the n register to the number whose Fib we need to
17685 compute recursively (n - 1 or n - 2), and assign to continue the entry point in the main sequence to
17686 which to return (afterfib-n-1 or afterfib-n-2, respectively). We then go to fib-loop.
17687 When we return from the recursive call, the answer is in val. Figure 5.12 shows the controller
17688 sequence for this machine.
17689
(controller
   (assign continue (label fact-done))     ; set up final return address
 fact-loop
   (test (op =) (reg n) (const 1))
   (branch (label base-case))
   ;; Set up for the recursive call by saving n and continue.
   ;; Set up continue so that the computation will continue
   ;; at after-fact when the subroutine returns.
   (save continue)
   (save n)
   (assign n (op -) (reg n) (const 1))
   (assign continue (label after-fact))
   (goto (label fact-loop))
 after-fact
   (restore n)
   (restore continue)
   (assign val (op *) (reg n) (reg val))   ; val now contains n(n - 1)!
   (goto (reg continue))                   ; return to caller
 base-case
   (assign val (const 1))                  ; base case: 1! = 1
   (goto (reg continue))                   ; return to caller
 fact-done)
Figure 5.11: A recursive factorial machine.
17719
(controller
   (assign continue (label fib-done))
 fib-loop
   (test (op <) (reg n) (const 2))
   (branch (label immediate-answer))
   ;; set up to compute Fib(n - 1)
   (save continue)
   (assign continue (label afterfib-n-1))
   (save n)                            ; save old value of n
   (assign n (op -) (reg n) (const 1)) ; clobber n to n - 1
   (goto (label fib-loop))             ; perform recursive call
 afterfib-n-1                          ; upon return, val contains Fib(n - 1)
   (restore n)
   (restore continue)
   ;; set up to compute Fib(n - 2)
   (assign n (op -) (reg n) (const 2))
   (save continue)
   (assign continue (label afterfib-n-2))
   (save val)                          ; save Fib(n - 1)
   (goto (label fib-loop))
 afterfib-n-2                          ; upon return, val contains Fib(n - 2)
   (assign n (reg val))                ; n now contains Fib(n - 2)
   (restore val)                       ; val now contains Fib(n - 1)
   (restore continue)
   (assign val                         ; Fib(n - 1) + Fib(n - 2)
           (op +) (reg val) (reg n))
   (goto (reg continue))               ; return to caller, answer is in val
 immediate-answer
   (assign val (reg n))                ; base case: Fib(n) = n
   (goto (reg continue))
 fib-done)
Figure 5.12: Controller for a machine to compute Fibonacci numbers.
17763
17764 Exercise 5.4. Specify register machines that implement each of the following procedures. For each
17765 machine, write a controller instruction sequence and draw a diagram showing the data paths.
17766 a. Recursive exponentiation:
17767 (define (expt b n)
17768 (if (= n 0)
17769 1
17770 (* b (expt b (- n 1)))))
17771
17772 \fb. Iterative exponentiation:
17773 (define (expt b n)
17774 (define (expt-iter counter product)
17775 (if (= counter 0)
17776 product
17777 (expt-iter (- counter 1) (* b product))))
17778 (expt-iter n 1))
17779 Exercise 5.5. Hand-simulate the factorial and Fibonacci machines, using some nontrivial input
17780 (requiring execution of at least one recursive call). Show the contents of the stack at each significant
17781 point in the execution.
17782 Exercise 5.6. Ben Bitdiddle observes that the Fibonacci machine’s controller sequence has an extra
17783 save and an extra restore, which can be removed to make a faster machine. Where are these
17784 instructions?
17785
17786 5.1.5 Instruction Summary
A controller instruction in our register-machine language has one of the following forms, where each
<input_i> is either (reg <register-name>) or (const <constant-value>).
These instructions were introduced in section 5.1.1:

(assign <register-name> (reg <register-name>))
(assign <register-name> (const <constant-value>))
(assign <register-name> (op <operation-name>) <input_1> ... <input_n>)
(perform (op <operation-name>) <input_1> ... <input_n>)
(test (op <operation-name>) <input_1> ... <input_n>)
(branch (label <label-name>))
(goto (label <label-name>))
17798 The use of registers to hold labels was introduced in section 5.1.3:
17799 (assign <register-name> (label <label-name>))
17800 (goto (reg <register-name>))
17801 Instructions to use the stack were introduced in section 5.1.4:
17802 (save <register-name>)
17803 (restore <register-name>)
17804 The only kind of <constant-value> we have seen so far is a number, but later we will use strings,
17805 symbols, and lists. For example, (const "abc") is the string "abc", (const abc) is the
17806 symbol abc, (const (a b c)) is the list (a b c), and (const ()) is the empty list.
1 This assumption glosses over a great deal of complexity. Usually a large portion of the implementation of a Lisp system is dedicated to making reading and printing work.

2 One might argue that we don't need to save the old n; after we decrement it and solve the subproblem, we could simply increment it to recover the old value. Although this strategy works for factorial, it cannot work in general, since the old value of a register cannot always be computed from the new one.
17816 3 In section 5.3 we will see how to implement a stack in terms of more primitive operations.
17817
17821
17822 5.2 A Register-Machine Simulator
17823 In order to gain a good understanding of the design of register machines, we must test the machines we
17824 design to see if they perform as expected. One way to test a design is to hand-simulate the operation of
17825 the controller, as in exercise 5.5. But this is extremely tedious for all but the simplest machines. In this
17826 section we construct a simulator for machines described in the register-machine language. The
17827 simulator is a Scheme program with four interface procedures. The first uses a description of a register
17828 machine to construct a model of the machine (a data structure whose parts correspond to the parts of
17829 the machine to be simulated), and the other three allow us to simulate the machine by manipulating the
17830 model:
17831 (make-machine <register-names> <operations> <controller>)
17832 constructs and returns a model of the machine with the given registers, operations, and controller.
17833 (set-register-contents! <machine-model> <register-name> <value>)
17834 stores a value in a simulated register in the given machine.
17835 (get-register-contents <machine-model> <register-name>)
17836 returns the contents of a simulated register in the given machine.
17837 (start <machine-model>)
17838 simulates the execution of the given machine, starting from the beginning of the controller
17839 sequence and stopping when it reaches the end of the sequence.
17840 As an example of how these procedures are used, we can define gcd-machine to be a model of the
17841 GCD machine of section 5.1.1 as follows:
(define gcd-machine
  (make-machine
   '(a b t)
   (list (list 'rem remainder) (list '= =))
   '(test-b
       (test (op =) (reg b) (const 0))
       (branch (label gcd-done))
       (assign t (op rem) (reg a) (reg b))
       (assign a (reg b))
       (assign b (reg t))
       (goto (label test-b))
     gcd-done)))
17854 The first argument to make-machine is a list of register names. The next argument is a table (a list
17855 of two-element lists) that pairs each operation name with a Scheme procedure that implements the
17856 operation (that is, produces the same output value given the same input values). The last argument
17857 specifies the controller as a list of labels and machine instructions, as in section 5.1.
17858 To compute GCDs with this machine, we set the input registers, start the machine, and examine the
17859 result when the simulation terminates:
17860
(set-register-contents! gcd-machine 'a 206)
done
(set-register-contents! gcd-machine 'b 40)
done
(start gcd-machine)
done
(get-register-contents gcd-machine 'a)
2
17869 This computation will run much more slowly than a gcd procedure written in Scheme, because we
17870 will simulate low-level machine instructions, such as assign, by much more complex operations.
17871 Exercise 5.7. Use the simulator to test the machines you designed in exercise 5.4.
17872
17873 5.2.1 The Machine Model
17874 The machine model generated by make-machine is represented as a procedure with local state
17875 using the message-passing techniques developed in chapter 3. To build this model, make-machine
17876 begins by calling the procedure make-new-machine to construct the parts of the machine model
17877 that are common to all register machines. This basic machine model constructed by
17878 make-new-machine is essentially a container for some registers and a stack, together with an
17879 execution mechanism that processes the controller instructions one by one.
17880 Make-machine then extends this basic model (by sending it messages) to include the registers,
17881 operations, and controller of the particular machine being defined. First it allocates a register in the
17882 new machine for each of the supplied register names and installs the designated operations in the
17883 machine. Then it uses an assembler (described below in section 5.2.2) to transform the controller list
17884 into instructions for the new machine and installs these as the machine’s instruction sequence.
17885 Make-machine returns as its value the modified machine model.
(define (make-machine register-names ops controller-text)
  (let ((machine (make-new-machine)))
    (for-each (lambda (register-name)
                ((machine 'allocate-register) register-name))
              register-names)
    ((machine 'install-operations) ops)
    ((machine 'install-instruction-sequence)
     (assemble controller-text machine))
    machine))
17895
17896 Registers
17897 We will represent a register as a procedure with local state, as in chapter 3. The procedure
17898 make-register creates a register that holds a value that can be accessed or changed:
(define (make-register name)
  (let ((contents '*unassigned*))
    (define (dispatch message)
      (cond ((eq? message 'get) contents)
            ((eq? message 'set)
             (lambda (value) (set! contents value)))
            (else
             (error "Unknown request -- REGISTER" message))))
    dispatch))
17909 The following procedures are used to access registers:
(define (get-contents register)
  (register 'get))
(define (set-contents! register value)
  ((register 'set) value))
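For instance, a small usage sketch (the register name acc is just for illustration):

(define r (make-register 'acc))
(set-contents! r 42)
(get-contents r)
42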
17914
17915 The stack
17916 We can also represent a stack as a procedure with local state. The procedure make-stack creates a
17917 stack whose local state consists of a list of the items on the stack. A stack accepts requests to push an
17918 item onto the stack, to pop the top item off the stack and return it, and to initialize the stack to
17919 empty.
(define (make-stack)
  (let ((s '()))
    (define (push x)
      (set! s (cons x s)))
    (define (pop)
      (if (null? s)
          (error "Empty stack -- POP")
          (let ((top (car s)))
            (set! s (cdr s))
            top)))
    (define (initialize)
      (set! s '())
      'done)
    (define (dispatch message)
      (cond ((eq? message 'push) push)
            ((eq? message 'pop) (pop))
            ((eq? message 'initialize) (initialize))
            (else (error "Unknown request -- STACK"
                         message))))
    dispatch))
17940 The following procedures are used to access stacks:
(define (pop stack)
  (stack 'pop))
(define (push stack value)
  ((stack 'push) value))
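As a brief illustration of the stack interface (a fresh stack is assumed):

(define s (make-stack))
(push s 1)
(push s 2)
(pop s)
2
(pop s)
1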
17945
17946 The basic machine
17947 The make-new-machine procedure, shown in figure 5.13, constructs an object whose local state
17948 consists of a stack, an initially empty instruction sequence, a list of operations that initially contains an
17949 operation to initialize the stack, and a register table that initially contains two registers, named flag
17950 and pc (for ‘‘program counter’’). The internal procedure allocate-register adds new entries to
17951
17952 \fthe register table, and the internal procedure lookup-register looks up registers in the table.
17953 The flag register is used to control branching in the simulated machine. Test instructions set the
17954 contents of flag to the result of the test (true or false). Branch instructions decide whether or not to
17955 branch by examining the contents of flag.
17956 The pc register determines the sequencing of instructions as the machine runs. This sequencing is
17957 implemented by the internal procedure execute. In the simulation model, each machine instruction
17958 is a data structure that includes a procedure of no arguments, called the instruction execution
17959 procedure, such that calling this procedure simulates executing the instruction. As the simulation runs,
17960 pc points to the place in the instruction sequence beginning with the next instruction to be executed.
17961 Execute gets that instruction, executes it by calling the instruction execution procedure, and repeats
17962 this cycle until there are no more instructions to execute (i.e., until pc points to the end of the
17963 instruction sequence).
17964
(define (make-new-machine)
  (let ((pc (make-register 'pc))
        (flag (make-register 'flag))
        (stack (make-stack))
        (the-instruction-sequence '()))
    (let ((the-ops
           (list (list 'initialize-stack
                       (lambda () (stack 'initialize)))))
          (register-table
           (list (list 'pc pc) (list 'flag flag))))
      (define (allocate-register name)
        (if (assoc name register-table)
            (error "Multiply defined register: " name)
            (set! register-table
                  (cons (list name (make-register name))
                        register-table)))
        'register-allocated)
      (define (lookup-register name)
        (let ((val (assoc name register-table)))
          (if val
              (cadr val)
              (error "Unknown register:" name))))
      (define (execute)
        (let ((insts (get-contents pc)))
          (if (null? insts)
              'done
              (begin
                ((instruction-execution-proc (car insts)))
                (execute)))))
      (define (dispatch message)
        (cond ((eq? message 'start)
               (set-contents! pc the-instruction-sequence)
               (execute))
              ((eq? message 'install-instruction-sequence)
               (lambda (seq) (set! the-instruction-sequence seq)))
              ((eq? message 'allocate-register) allocate-register)
              ((eq? message 'get-register) lookup-register)
              ((eq? message 'install-operations)
               (lambda (ops) (set! the-ops (append the-ops ops))))
              ((eq? message 'stack) stack)
              ((eq? message 'operations) the-ops)
              (else (error "Unknown request -- MACHINE" message))))
      dispatch)))
Figure 5.13: The make-new-machine procedure, which implements the basic machine model.
18010
18011 \fAs part of its operation, each instruction execution procedure modifies pc to indicate the next
18012 instruction to be executed. Branch and goto instructions change pc to point to the new destination.
18013 All other instructions simply advance pc, making it point to the next instruction in the sequence.
18014 Observe that each call to execute calls execute again, but this does not produce an infinite loop
18015 because running the instruction execution procedure changes the contents of pc.
18016 Make-new-machine returns a dispatch procedure that implements message-passing access to
18017 the internal state. Notice that starting the machine is accomplished by setting pc to the beginning of
18018 the instruction sequence and calling execute.
18019 For convenience, we provide an alternate procedural interface to a machine’s start operation, as
18020 well as procedures to set and examine register contents, as specified at the beginning of section 5.2:
(define (start machine)
  (machine 'start))
(define (get-register-contents machine register-name)
  (get-contents (get-register machine register-name)))
(define (set-register-contents! machine register-name value)
  (set-contents! (get-register machine register-name) value)
  'done)
18028 These procedures (and many procedures in sections 5.2.2 and 5.2.3) use the following to look up the
18029 register with a given name in a given machine:
(define (get-register machine reg-name)
  ((machine 'get-register) reg-name))
18032
18033 5.2.2 The Assembler
18034 The assembler transforms the sequence of controller expressions for a machine into a corresponding
18035 list of machine instructions, each with its execution procedure. Overall, the assembler is much like the
18036 evaluators we studied in chapter 4 -- there is an input language (in this case, the register-machine
18037 language) and we must perform an appropriate action for each type of expression in the language.
18038 The technique of producing an execution procedure for each instruction is just what we used in
18039 section 4.1.7 to speed up the evaluator by separating analysis from runtime execution. As we saw in
18040 chapter 4, much useful analysis of Scheme expressions could be performed without knowing the actual
18041 values of variables. Here, analogously, much useful analysis of register-machine-language expressions
18042 can be performed without knowing the actual contents of machine registers. For example, we can
18043 replace references to registers by pointers to the register objects, and we can replace references to
18044 labels by pointers to the place in the instruction sequence that the label designates.
18045 Before it can generate the instruction execution procedures, the assembler must know what all the
18046 labels refer to, so it begins by scanning the controller text to separate the labels from the instructions.
18047 As it scans the text, it constructs both a list of instructions and a table that associates each label with a
18048 pointer into that list. Then the assembler augments the instruction list by inserting the execution
18049 procedure for each instruction.
18050 The assemble procedure is the main entry to the assembler. It takes the controller text and the
18051 machine model as arguments and returns the instruction sequence to be stored in the model.
18052 Assemble calls extract-labels to build the initial instruction list and label table from the
18053 supplied controller text. The second argument to extract-labels is a procedure to be called to
18054
18055 \fprocess these results: This procedure uses update-insts! to generate the instruction execution
18056 procedures and insert them into the instruction list, and returns the modified list.
18057 (define (assemble controller-text machine)
18058 (extract-labels controller-text
18059 (lambda (insts labels)
18060 (update-insts! insts labels machine)
18061 insts)))
18062 Extract-labels takes as arguments a list text (the sequence of controller instruction
18063 expressions) and a receive procedure. Receive will be called with two values: (1) a list insts of
18064 instruction data structures, each containing an instruction from text; and (2) a table called labels,
18065 which associates each label from text with the position in the list insts that the label designates.
(define (extract-labels text receive)
  (if (null? text)
      (receive '() '())
      (extract-labels (cdr text)
        (lambda (insts labels)
          (let ((next-inst (car text)))
            (if (symbol? next-inst)
                (receive insts
                         (cons (make-label-entry next-inst insts)
                               labels))
                (receive (cons (make-instruction next-inst) insts)
                         labels)))))))
18080 Extract-labels works by sequentially scanning the elements of the text and accumulating the
18081 insts and the labels. If an element is a symbol (and thus a label) an appropriate entry is added to
18082 the labels table. Otherwise the element is accumulated onto the insts list. 4
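As a small illustration (not part of the simulator), consider the controller text

(start
 (assign a (const 3))
 done)

Scanning from the end, extract-labels builds insts as a one-element list containing the (as yet procedure-less) instruction for (assign a (const 3)), and labels as a table in which done designates the empty list following that instruction and start designates the full insts list.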
18083 Update-insts! modifies the instruction list, which initially contains only the text of the
18084 instructions, to include the corresponding execution procedures:
(define (update-insts! insts labels machine)
  (let ((pc (get-register machine 'pc))
        (flag (get-register machine 'flag))
        (stack (machine 'stack))
        (ops (machine 'operations)))
    (for-each
     (lambda (inst)
       (set-instruction-execution-proc!
        inst
        (make-execution-procedure
         (instruction-text inst) labels machine
         pc flag stack ops)))
     insts)))
18098
18099 \fThe machine instruction data structure simply pairs the instruction text with the corresponding
18100 execution procedure. The execution procedure is not yet available when extract-labels
18101 constructs the instruction, and is inserted later by update-insts!.
(define (make-instruction text)
  (cons text '()))
(define (instruction-text inst)
  (car inst))
(define (instruction-execution-proc inst)
  (cdr inst))
(define (set-instruction-execution-proc! inst proc)
  (set-cdr! inst proc))
18110 The instruction text is not used by our simulator, but it is handy to keep around for debugging (see
18111 exercise 5.16).
18112 Elements of the label table are pairs:
18113 (define (make-label-entry label-name insts)
18114 (cons label-name insts))
18115 Entries will be looked up in the table with
18116 (define (lookup-label labels label-name)
18117 (let ((val (assoc label-name labels)))
18118 (if val
18119 (cdr val)
18120 (error "Undefined label -- ASSEMBLE" label-name))))
18121 Exercise 5.8. The following register-machine code is ambiguous, because the label here is defined
18122 more than once:
18123 start
18124 (goto (label here))
18125 here
18126 (assign a (const 3))
18127 (goto (label there))
18128 here
18129 (assign a (const 4))
18130 (goto (label there))
18131 there
18132 With the simulator as written, what will the contents of register a be when control reaches there?
18133 Modify the extract-labels procedure so that the assembler will signal an error if the same label
18134 name is used to indicate two different locations.
18135
18136 5.2.3 Generating Execution Procedures for Instructions
18137 The assembler calls make-execution-procedure to generate the execution procedure for an
18138 instruction. Like the analyze procedure in the evaluator of section 4.1.7, this dispatches on the type
18139 of instruction to generate the appropriate execution procedure.
18140
(define (make-execution-procedure inst labels machine
                                  pc flag stack ops)
  (cond ((eq? (car inst) 'assign)
         (make-assign inst machine labels ops pc))
        ((eq? (car inst) 'test)
         (make-test inst machine labels ops flag pc))
        ((eq? (car inst) 'branch)
         (make-branch inst machine labels flag pc))
        ((eq? (car inst) 'goto)
         (make-goto inst machine labels pc))
        ((eq? (car inst) 'save)
         (make-save inst machine stack pc))
        ((eq? (car inst) 'restore)
         (make-restore inst machine stack pc))
        ((eq? (car inst) 'perform)
         (make-perform inst machine labels ops pc))
        (else (error "Unknown instruction type -- ASSEMBLE"
                     inst))))
18159 For each type of instruction in the register-machine language, there is a generator that builds an
18160 appropriate execution procedure. The details of these procedures determine both the syntax and
18161 meaning of the individual instructions in the register-machine language. We use data abstraction to
18162 isolate the detailed syntax of register-machine expressions from the general execution mechanism, as
18163 we did for evaluators in section 4.1.2, by using syntax procedures to extract and classify the parts of an
18164 instruction.
18165
18166 Assign instructions
18167 The make-assign procedure handles assign instructions:
(define (make-assign inst machine labels operations pc)
  (let ((target
         (get-register machine (assign-reg-name inst)))
        (value-exp (assign-value-exp inst)))
    (let ((value-proc
           (if (operation-exp? value-exp)
               (make-operation-exp
                value-exp machine labels operations)
               (make-primitive-exp
                (car value-exp) machine labels))))
      (lambda ()                ; execution procedure for assign
        (set-contents! target (value-proc))
        (advance-pc pc)))))
18182 Make-assign extracts the target register name (the second element of the instruction) and the value
18183 expression (the rest of the list that forms the instruction) from the assign instruction using the
18184 selectors
(define (assign-reg-name assign-instruction)
  (cadr assign-instruction))
(define (assign-value-exp assign-instruction)
  (cddr assign-instruction))
18190 The register name is looked up with get-register to produce the target register object. The value
18191 expression is passed to make-operation-exp if the value is the result of an operation, and to
18192 make-primitive-exp otherwise. These procedures (shown below) parse the value expression and
18193 produce an execution procedure for the value. This is a procedure of no arguments, called
18194 value-proc, which will be evaluated during the simulation to produce the actual value to be
18195 assigned to the register. Notice that the work of looking up the register name and parsing the value
18196 expression is performed just once, at assembly time, not every time the instruction is simulated. This
18197 saving of work is the reason we use execution procedures, and corresponds directly to the saving in
18198 work we obtained by separating program analysis from execution in the evaluator of section 4.1.7.
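For example, with inst bound to the instruction (assign val (op *) (reg n) (reg val)) from the factorial controller, the selectors pick the instruction apart as follows:

(assign-reg-name inst)
val
(assign-value-exp inst)
((op *) (reg n) (reg val))

Since this value expression is an operation expression, value-proc is constructed by make-operation-exp.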
18199 The result returned by make-assign is the execution procedure for the assign instruction. When
18200 this procedure is called (by the machine model’s execute procedure), it sets the contents of the
18201 target register to the result obtained by executing value-proc. Then it advances the pc to the next
18202 instruction by running the procedure
18203 (define (advance-pc pc)
18204 (set-contents! pc (cdr (get-contents pc))))
18205 Advance-pc is the normal termination for all instructions except branch and goto.
18206
18207 Test, branch, and goto instructions
18208 Make-test handles test instructions in a similar way. It extracts the expression that specifies the
18209 condition to be tested and generates an execution procedure for it. At simulation time, the procedure
18210 for the condition is called, the result is assigned to the flag register, and the pc is advanced:
18211 (define (make-test inst machine labels operations flag pc)
18212 (let ((condition (test-condition inst)))
18213 (if (operation-exp? condition)
18214 (let ((condition-proc
18215 (make-operation-exp
18216 condition machine labels operations)))
18217 (lambda ()
18218 (set-contents! flag (condition-proc))
18219 (advance-pc pc)))
18220 (error "Bad TEST instruction -- ASSEMBLE" inst))))
18221 (define (test-condition test-instruction)
18222 (cdr test-instruction))
18223 The execution procedure for a branch instruction checks the contents of the flag register and either
18224 sets the contents of the pc to the branch destination (if the branch is taken) or else just advances the
18225 pc (if the branch is not taken). Notice that the indicated destination in a branch instruction must be a
18226 label, and the make-branch procedure enforces this. Notice also that the label is looked up at
18227 assembly time, not each time the branch instruction is simulated.
(define (make-branch inst machine labels flag pc)
  (let ((dest (branch-dest inst)))
    (if (label-exp? dest)
        (let ((insts
               (lookup-label labels (label-exp-label dest))))
          (lambda ()
            (if (get-contents flag)
                (set-contents! pc insts)
                (advance-pc pc))))
        (error "Bad BRANCH instruction -- ASSEMBLE" inst))))
(define (branch-dest branch-instruction)
  (cadr branch-instruction))
18241 A goto instruction is similar to a branch, except that the destination may be specified either as a label
18242 or as a register, and there is no condition to check -- the pc is always set to the new destination.
18243 (define (make-goto inst machine labels pc)
18244 (let ((dest (goto-dest inst)))
18245 (cond ((label-exp? dest)
18246 (let ((insts
18247 (lookup-label labels
18248 (label-exp-label dest))))
18249 (lambda () (set-contents! pc insts))))
18250 ((register-exp? dest)
18251 (let ((reg
18252 (get-register machine
18253 (register-exp-reg dest))))
18254 (lambda ()
18255 (set-contents! pc (get-contents reg)))))
18256 (else (error "Bad GOTO instruction -- ASSEMBLE"
18257 inst)))))
18258 (define (goto-dest goto-instruction)
18259 (cadr goto-instruction))
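Both destination forms appear in the controllers of section 5.1.4; for example:

(goto (label fib-loop))    ; label resolved to an instruction list at assembly time
(goto (reg continue))      ; register contents fetched each time the goto is simulated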
18260
18261 Other instructions
18262 The stack instructions save and restore simply use the stack with the designated register and
18263 advance the pc:
18264 (define (make-save inst machine stack pc)
18265 (let ((reg (get-register machine
18266 (stack-inst-reg-name inst))))
18267 (lambda ()
18268 (push stack (get-contents reg))
18269 (advance-pc pc))))
18270 (define (make-restore inst machine stack pc)
18271 (let ((reg (get-register machine
18272 (stack-inst-reg-name inst))))
18273 (lambda ()
18274 (set-contents! reg (pop stack))
18275 (advance-pc pc))))
18276 (define (stack-inst-reg-name stack-instruction)
18277 (cadr stack-instruction))
18278
18279 \fThe final instruction type, handled by make-perform, generates an execution procedure for the
18280 action to be performed. At simulation time, the action procedure is executed and the pc advanced.
18281 (define (make-perform inst machine labels operations pc)
18282 (let ((action (perform-action inst)))
18283 (if (operation-exp? action)
18284 (let ((action-proc
18285 (make-operation-exp
18286 action machine labels operations)))
18287 (lambda ()
18288 (action-proc)
18289 (advance-pc pc)))
18290 (error "Bad PERFORM instruction -- ASSEMBLE" inst))))
18291 (define (perform-action inst) (cdr inst))
18292
18293 Execution procedures for subexpressions
18294 The value of a reg, label, or const expression may be needed for assignment to a register
18295 (make-assign) or for input to an operation (make-operation-exp, below). The following
18296 procedure generates execution procedures to produce values for these expressions during the
18297 simulation:
18298 (define (make-primitive-exp exp machine labels)
18299 (cond ((constant-exp? exp)
18300 (let ((c (constant-exp-value exp)))
18301 (lambda () c)))
18302 ((label-exp? exp)
18303 (let ((insts
18304 (lookup-label labels
18305 (label-exp-label exp))))
18306 (lambda () insts)))
18307 ((register-exp? exp)
18308 (let ((r (get-register machine
18309 (register-exp-reg exp))))
18310 (lambda () (get-contents r))))
18311 (else
18312 (error "Unknown expression type -- ASSEMBLE" exp))))
18313 The syntax of reg, label, and const expressions is determined by
(define (register-exp? exp) (tagged-list? exp 'reg))
(define (register-exp-reg exp) (cadr exp))
(define (constant-exp? exp) (tagged-list? exp 'const))
(define (constant-exp-value exp) (cadr exp))
(define (label-exp? exp) (tagged-list? exp 'label))
(define (label-exp-label exp) (cadr exp))
18327
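These syntax procedures rely on the tagged-list? predicate used for the evaluators of chapter 4 (section 4.1.2); for reference, a definition consistent with its use here is

(define (tagged-list? exp tag)
  (if (pair? exp)
      (eq? (car exp) tag)
      false))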
18328 Assign, perform, and test instructions may include the application of a machine operation
18329 (specified by an op expression) to some operands (specified by reg and const expressions). The
18330 following procedure produces an execution procedure for an ‘‘operation expression’’ -- a list
18331 containing the operation and operand expressions from the instruction:
18332
18333 \f(define (make-operation-exp exp machine labels operations)
18334 (let ((op (lookup-prim (operation-exp-op exp) operations))
18335 (aprocs
18336 (map (lambda (e)
18337 (make-primitive-exp e machine labels))
18338 (operation-exp-operands exp))))
18339 (lambda ()
18340 (apply op (map (lambda (p) (p)) aprocs)))))
18341 The syntax of operation expressions is determined by
(define (operation-exp? exp)
  (and (pair? exp) (tagged-list? (car exp) 'op)))
(define (operation-exp-op operation-exp)
  (cadr (car operation-exp)))
(define (operation-exp-operands operation-exp)
  (cdr operation-exp))
18348 Observe that the treatment of operation expressions is very much like the treatment of procedure
18349 applications by the analyze-application procedure in the evaluator of section 4.1.7 in that we
18350 generate an execution procedure for each operand. At simulation time, we call the operand procedures
18351 and apply the Scheme procedure that simulates the operation to the resulting values. The simulation
18352 procedure is found by looking up the operation name in the operation table for the machine:
18353 (define (lookup-prim symbol operations)
18354 (let ((val (assoc symbol operations)))
18355 (if val
18356 (cadr val)
18357 (error "Unknown operation -- ASSEMBLE" symbol))))
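For instance, the test in the GCD controller contains the operation expression ((op =) (reg b) (const 0)); the selectors take it apart as follows (an illustrative trace):

(operation-exp-op '((op =) (reg b) (const 0)))
=
(operation-exp-operands '((op =) (reg b) (const 0)))
((reg b) (const 0))

Lookup-prim then maps the name = to the Scheme procedure = installed when the machine was defined.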
18358 Exercise 5.9. The treatment of machine operations above permits them to operate on labels as well as
18359 on constants and the contents of registers. Modify the expression-processing procedures to enforce the
18360 condition that operations can be used only with registers and constants.
18361 Exercise 5.10. Design a new syntax for register-machine instructions and modify the simulator to use
18362 your new syntax. Can you implement your new syntax without changing any part of the simulator
18363 except the syntax procedures in this section?
18364 Exercise 5.11. When we introduced save and restore in section 5.1.4, we didn’t specify what
18365 would happen if you tried to restore a register that was not the last one saved, as in the sequence
18366 (save y)
18367 (save x)
18368 (restore y)
18369 There are several reasonable possibilities for the meaning of restore:
18370 a. (restore y) puts into y the last value saved on the stack, regardless of what register that value
18371 came from. This is the way our simulator behaves. Show how to take advantage of this behavior to
18372 eliminate one instruction from the Fibonacci machine of section 5.1.4 (figure 5.12).
18373
18374 \fb. (restore y) puts into y the last value saved on the stack, but only if that value was saved from
18375 y; otherwise, it signals an error. Modify the simulator to behave this way. You will have to change
18376 save to put the register name on the stack along with the value.
18377 c. (restore y) puts into y the last value saved from y regardless of what other registers were
18378 saved after y and not restored. Modify the simulator to behave this way. You will have to associate a
18379 separate stack with each register. You should make the initialize-stack operation initialize all
18380 the register stacks.
18381 Exercise 5.12. The simulator can be used to help determine the data paths required for implementing
18382 a machine with a given controller. Extend the assembler to store the following information in the
18383 machine model:
- a list of all instructions, with duplicates removed, sorted by instruction type (assign, goto, and so on);
- a list (without duplicates) of the registers used to hold entry points (these are the registers referenced by goto instructions);
- a list (without duplicates) of the registers that are saved or restored;
- for each register, a list (without duplicates) of the sources from which it is assigned (for example, the sources for register val in the factorial machine of figure 5.11 are (const 1) and ((op *) (reg n) (reg val))).
18392 Extend the message-passing interface to the machine to provide access to this new information. To test
18393 your analyzer, define the Fibonacci machine from figure 5.12 and examine the lists you constructed.
18394 Exercise 5.13. Modify the simulator so that it uses the controller sequence to determine what registers
18395 the machine has rather than requiring a list of registers as an argument to make-machine. Instead of
18396 pre-allocating the registers in make-machine, you can allocate them one at a time when they are
18397 first seen during assembly of the instructions.
18398
18399 5.2.4 Monitoring Machine Performance
18400 Simulation is useful not only for verifying the correctness of a proposed machine design but also for
18401 measuring the machine’s performance. For example, we can install in our simulation program a
18402 ‘‘meter’’ that measures the number of stack operations used in a computation. To do this, we modify
18403 our simulated stack to keep track of the number of times registers are saved on the stack and the
18404 maximum depth reached by the stack, and add a message to the stack’s interface that prints the
18405 statistics, as shown below. We also add an operation to the basic machine model to print the stack
18406 statistics, by initializing the-ops in make-new-machine to
(list (list 'initialize-stack
            (lambda () (stack 'initialize)))
      (list 'print-stack-statistics
            (lambda () (stack 'print-statistics))))
18411 Here is the new version of make-stack:
18412
(define (make-stack)
  (let ((s '())
        (number-pushes 0)
        (max-depth 0)
        (current-depth 0))
    (define (push x)
      (set! s (cons x s))
      (set! number-pushes (+ 1 number-pushes))
      (set! current-depth (+ 1 current-depth))
      (set! max-depth (max current-depth max-depth)))
    (define (pop)
      (if (null? s)
          (error "Empty stack -- POP")
          (let ((top (car s)))
            (set! s (cdr s))
            (set! current-depth (- current-depth 1))
            top)))
    (define (initialize)
      (set! s '())
      (set! number-pushes 0)
      (set! max-depth 0)
      (set! current-depth 0)
      'done)
    (define (print-statistics)
      (newline)
      (display (list 'total-pushes '= number-pushes
                     'maximum-depth '= max-depth)))
    (define (dispatch message)
      (cond ((eq? message 'push) push)
            ((eq? message 'pop) (pop))
            ((eq? message 'initialize) (initialize))
            ((eq? message 'print-statistics)
             (print-statistics))
            (else
             (error "Unknown request -- STACK" message))))
    dispatch))
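With this stack installed, the statistics can be printed from a controller by invoking the operation added to the-ops above; for example, appending

(perform (op print-stack-statistics))

to the end of a controller sequence will display a line of the form (total-pushes = ... maximum-depth = ...) when the machine finishes.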
18449 Exercises 5.15 through 5.19 describe other useful monitoring and debugging features that can be added
18450 to the register-machine simulator.
18451 Exercise 5.14. Measure the number of pushes and the maximum stack depth required to compute n!
18452 for various small values of n using the factorial machine shown in figure 5.11. From your data
18453 determine formulas in terms of n for the total number of push operations and the maximum stack depth
18454 used in computing n! for any n > 1. Note that each of these is a linear function of n and is thus
18455 determined by two constants. In order to get the statistics printed, you will have to augment the
18456 factorial machine with instructions to initialize the stack and print the statistics. You may want to also
18457 modify the machine so that it repeatedly reads a value for n, computes the factorial, and prints the
18458 result (as we did for the GCD machine in figure 5.4), so that you will not have to repeatedly invoke
18459 get-register-contents, set-register-contents!, and start.
18460
18461 \fExercise 5.15. Add instruction counting to the register machine simulation. That is, have the machine
18462 model keep track of the number of instructions executed. Extend the machine model’s interface to
18463 accept a new message that prints the value of the instruction count and resets the count to zero.
18464 Exercise 5.16. Augment the simulator to provide for instruction tracing. That is, before each
18465 instruction is executed, the simulator should print the text of the instruction. Make the machine model
18466 accept trace-on and trace-off messages to turn tracing on and off.
18467 Exercise 5.17. Extend the instruction tracing of exercise 5.16 so that before printing an instruction,
18468 the simulator prints any labels that immediately precede that instruction in the controller sequence. Be
18469 careful to do this in a way that does not interfere with instruction counting (exercise 5.15). You will
18470 have to make the simulator retain the necessary label information.
18471 Exercise 5.18. Modify the make-register procedure of section 5.2.1 so that registers can be
18472 traced. Registers should accept messages that turn tracing on and off. When a register is traced,
18473 assigning a value to the register should print the name of the register, the old contents of the register,
18474 and the new contents being assigned. Extend the interface to the machine model to permit you to turn
18475 tracing on and off for designated machine registers.
18476 Exercise 5.19. Alyssa P. Hacker wants a breakpoint feature in the simulator to help her debug her
18477 machine designs. You have been hired to install this feature for her. She wants to be able to specify a
18478 place in the controller sequence where the simulator will stop and allow her to examine the state of the
18479 machine. You are to implement a procedure
18480 (set-breakpoint <machine> <label> <n>)
18481 that sets a breakpoint just before the nth instruction after the given label. For example,
(set-breakpoint gcd-machine 'test-b 4)
18483 installs a breakpoint in gcd-machine just before the assignment to register a. When the simulator
18484 reaches the breakpoint it should print the label and the offset of the breakpoint and stop executing
18485 instructions. Alyssa can then use get-register-contents and set-register-contents!
18486 to manipulate the state of the simulated machine. She should then be able to continue execution by
18487 saying
18488 (proceed-machine <machine>)
18489 She should also be able to remove a specific breakpoint by means of
18490 (cancel-breakpoint <machine> <label> <n>)
18491 or to remove all breakpoints by means of
18492 (cancel-all-breakpoints <machine>)
4 Using the receive procedure here is a way to get extract-labels to effectively return two values -- labels and insts -- without explicitly making a compound data structure to hold them. An alternative implementation, which returns an explicit pair of values, is
18497
(define (extract-labels text)
  (if (null? text)
      (cons '() '())
      (let ((result (extract-labels (cdr text))))
        (let ((insts (car result)) (labels (cdr result)))
          (let ((next-inst (car text)))
            (if (symbol? next-inst)
                (cons insts
                      (cons (make-label-entry next-inst insts) labels))
                (cons (cons (make-instruction next-inst) insts)
                      labels)))))))
18509 which would be called by assemble as follows:
18510 (define (assemble controller-text machine)
18511 (let ((result (extract-labels controller-text)))
18512 (let ((insts (car result)) (labels (cdr result)))
18513 (update-insts! insts labels machine)
18514 insts)))
18515 You can consider our use of receive as demonstrating an elegant way to return multiple values, or
18516 simply an excuse to show off a programming trick. An argument like receive that is the next
18517 procedure to be invoked is called a ‘‘continuation.’’ Recall that we also used continuations to
18518 implement the backtracking control structure in the amb evaluator in section 4.3.3.
18522
18523 5.3 Storage Allocation and Garbage Collection
18524 In section 5.4, we will show how to implement a Scheme evaluator as a register machine. In order to
18525 simplify the discussion, we will assume that our register machines can be equipped with a
18526 list-structured memory, in which the basic operations for manipulating list-structured data are
18527 primitive. Postulating the existence of such a memory is a useful abstraction when one is focusing on
18528 the mechanisms of control in a Scheme interpreter, but this does not reflect a realistic view of the
18529 actual primitive data operations of contemporary computers. To obtain a more complete picture of
18530 how a Lisp system operates, we must investigate how list structure can be represented in a way that is
18531 compatible with conventional computer memories.
18532 There are two considerations in implementing list structure. The first is purely an issue of
18533 representation: how to represent the ‘‘box-and-pointer’’ structure of Lisp pairs, using only the storage
18534 and addressing capabilities of typical computer memories. The second issue concerns the management
18535 of memory as a computation proceeds. The operation of a Lisp system depends crucially on the ability
18536 to continually create new data objects. These include objects that are explicitly created by the Lisp
18537 procedures being interpreted as well as structures created by the interpreter itself, such as
18538 environments and argument lists. Although the constant creation of new data objects would pose no
18539 problem on a computer with an infinite amount of rapidly addressable memory, computer memories
18540 are available only in finite sizes (more’s the pity). Lisp systems thus provide an automatic storage
18541 allocation facility to support the illusion of an infinite memory. When a data object is no longer
18542 needed, the memory allocated to it is automatically recycled and used to construct new data objects.
18543 There are various techniques for providing such automatic storage allocation. The method we shall
18544 discuss in this section is called garbage collection.
18545
18546 5.3.1 Memory as Vectors
18547 A conventional computer memory can be thought of as an array of cubbyholes, each of which can
18548 contain a piece of information. Each cubbyhole has a unique name, called its address or location.
18549 Typical memory systems provide two primitive operations: one that fetches the data stored in a
18550 specified location and one that assigns new data to a specified location. Memory addresses can be
18551 incremented to support sequential access to some set of the cubbyholes. More generally, many
18552 important data operations require that memory addresses be treated as data, which can be stored in
18553 memory locations and manipulated in machine registers. The representation of list structure is one
18554 application of such address arithmetic.
18555 To model computer memory, we use a new kind of data structure called a vector. Abstractly, a vector
18556 is a compound data object whose individual elements can be accessed by means of an integer index in
18557 an amount of time that is independent of the index. 5 In order to describe memory operations, we use
18558 two primitive Scheme procedures for manipulating vectors:
18559 (vector-ref <vector> <n>) returns the nth element of the vector.
18560 (vector-set! <vector> <n> <value>) sets the nth element of the vector to the
18561 designated value.
18562
18563 \fFor example, if v is a vector, then (vector-ref v 5) gets the fifth entry in the vector v and
18564 (vector-set! v 5 7) changes the value of the fifth entry of the vector v to 7. 6 For computer
18565 memory, this access can be implemented through the use of address arithmetic to combine a base
18566 address that specifies the beginning location of a vector in memory with an index that specifies the
18567 offset of a particular element of the vector.
18568
18569 Representing Lisp data
18570 We can use vectors to implement the basic pair structures required for a list-structured memory. Let us
18571 imagine that computer memory is divided into two vectors: the-cars and the-cdrs. We will
18572 represent list structure as follows: A pointer to a pair is an index into the two vectors. The car of the
18573 pair is the entry in the-cars with the designated index, and the cdr of the pair is the entry in
18574 the-cdrs with the designated index. We also need a representation for objects other than pairs (such
18575 as numbers and symbols) and a way to distinguish one kind of data from another. There are many
18576 methods of accomplishing this, but they all reduce to using typed pointers, that is, to extending the
18577 notion of ‘‘pointer’’ to include information on data type. 7 The data type enables the system to
18578 distinguish a pointer to a pair (which consists of the ‘‘pair’’ data type and an index into the memory
18579 vectors) from pointers to other kinds of data (which consist of some other data type and whatever is
18580 being used to represent data of that type). Two data objects are considered to be the same (eq?) if
18581 their pointers are identical. 8 Figure 5.14 illustrates the use of this method to represent the list ((1 2)
18582 3 4), whose box-and-pointer diagram is also shown. We use letter prefixes to denote the data-type
18583 information. Thus, a pointer to the pair with index 5 is denoted p5, the empty list is denoted by the
18584 pointer e0, and a pointer to the number 4 is denoted n4. In the box-and-pointer diagram, we have
18585 indicated at the lower left of each pair the vector index that specifies where the car and cdr of the
18586 pair are stored. The blank locations in the-cars and the-cdrs may contain parts of other list
18587 structures (not of interest here).
18588
Figure 5.14: Box-and-pointer and memory-vector representations of the list ((1 2) 3 4).
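One consistent memory-vector layout for ((1 2) 3 4), in the spirit of the figure (the particular indices are illustrative), is

index:      1    2    4    5    7
the-cars:   p5   n3   n4   n1   n2
the-cdrs:   p2   p4   e0   p7   e0

where p1 points to the whole list, p5 points to the pair (1 2), and the unlisted indices are free or in use by other structures.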
18591 A pointer to a number, such as n4, might consist of a type indicating numeric data together with the
18592 actual representation of the number 4. 9 To deal with numbers that are too large to be represented in
18593 the fixed amount of space allocated for a single pointer, we could use a distinct bignum data type, for
18594 which the pointer designates a list in which the parts of the number are stored. 10
18595
18596 \fA symbol might be represented as a typed pointer that designates a sequence of the characters that
18597 form the symbol’s printed representation. This sequence is constructed by the Lisp reader when the
18598 character string is initially encountered in input. Since we want two instances of a symbol to be
18599 recognized as the ‘‘same’’ symbol by eq? and we want eq? to be a simple test for equality of
18600 pointers, we must ensure that if the reader sees the same character string twice, it will use the same
18601 pointer (to the same sequence of characters) to represent both occurrences. To accomplish this, the
18602 reader maintains a table, traditionally called the obarray, of all the symbols it has ever encountered.
18603 When the reader encounters a character string and is about to construct a symbol, it checks the obarray
18604 to see if it has ever before seen the same character string. If it has not, it uses the characters to
18605 construct a new symbol (a typed pointer to a new character sequence) and enters this pointer in the
18606 obarray. If the reader has seen the string before, it returns the symbol pointer stored in the obarray.
18607 This process of replacing character strings by unique pointers is called interning symbols.
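A minimal sketch of that interning discipline follows; the obarray is represented here as an association list, and make-symbol-pointer is a hypothetical constructor for a typed pointer:

(define the-obarray '())              ; every symbol ever read
(define (intern chars)
  (let ((entry (assoc chars the-obarray)))  ; assoc compares the strings with equal?
    (if entry
        (cdr entry)                         ; seen before: reuse the same pointer
        (let ((sym (make-symbol-pointer chars))) ; hypothetical typed-pointer constructor
          (set! the-obarray (cons (cons chars sym) the-obarray))
          sym))))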
18608
18609 Implementing the primitive list operations
18610 Given the above representation scheme, we can replace each ‘‘primitive’’ list operation of a register
18611 machine with one or more primitive vector operations. We will use two registers, the-cars and
18612 the-cdrs, to identify the memory vectors, and will assume that vector-ref and vector-set!
18613 are available as primitive operations. We also assume that numeric operations on pointers (such as
18614 incrementing a pointer, using a pair pointer to index a vector, or adding two numbers) use only the
18615 index portion of the typed pointer.
18616 For example, we can make a register machine support the instructions
(assign <reg1> (op car) (reg <reg2>))
(assign <reg1> (op cdr) (reg <reg2>))
if we implement these, respectively, as
(assign <reg1> (op vector-ref) (reg the-cars) (reg <reg2>))
(assign <reg1> (op vector-ref) (reg the-cdrs) (reg <reg2>))
18622 The instructions
(perform (op set-car!) (reg <reg1>) (reg <reg2>))
(perform (op set-cdr!) (reg <reg1>) (reg <reg2>))
are implemented as
(perform
 (op vector-set!) (reg the-cars) (reg <reg1>) (reg <reg2>))
(perform
 (op vector-set!) (reg the-cdrs) (reg <reg1>) (reg <reg2>))
18630 Cons is performed by allocating an unused index and storing the arguments to cons in the-cars
18631 and the-cdrs at that indexed vector position. We presume that there is a special register, free, that
18632 always holds a pair pointer containing the next available index, and that we can increment the index
18633 part of that pointer to find the next free location. 11 For example, the instruction
18634
(assign <reg1> (op cons) (reg <reg2>) (reg <reg3>))
is implemented as the following sequence of vector operations: 12
(perform
 (op vector-set!) (reg the-cars) (reg free) (reg <reg2>))
(perform
 (op vector-set!) (reg the-cdrs) (reg free) (reg <reg3>))
(assign <reg1> (reg free))
(assign free (op +) (reg free) (const 1))
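Continuing the earlier Scheme sketch (with free simplified to a bare index rather than a typed pointer), the same allocation step reads:

(define free 0)                             ; index of the next unused location
(define (memory-cons a d)
  (vector-set! the-cars free a)             ; store the car at the free index
  (vector-set! the-cdrs free d)             ; store the cdr at the free index
  (let ((p (make-pointer 'pair free)))
    (set! free (+ free 1))                  ; advance to the next free location
    p))
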
18643 The eq? operation
(op eq?) (reg <reg1>) (reg <reg2>)
18645 simply tests the equality of all fields in the registers, and predicates such as pair?, null?,
18646 symbol?, and number? need only check the type field.
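Under the tagged-pointer sketch above, such predicates are one-line checks on the type field (the particular tag names are our assumptions):

(define (pair-pointer? p)   (eq? (pointer-type p) 'pair))
(define (null-pointer? p)   (eq? (pointer-type p) 'empty-list))
(define (number-pointer? p) (eq? (pointer-type p) 'number))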
18647
18648 Implementing stacks
18649 Although our register machines use stacks, we need do nothing special here, since stacks can be
18650 modeled in terms of lists. The stack can be a list of the saved values, pointed to by a special register
18651 the-stack. Thus, (save <reg>) can be implemented as
18652 (assign the-stack (op cons) (reg <reg>) (reg the-stack))
18653 Similarly, (restore <reg>) can be implemented as
18654 (assign <reg> (op car) (reg the-stack))
18655 (assign the-stack (op cdr) (reg the-stack))
18656 and (perform (op initialize-stack)) can be implemented as
18657 (assign the-stack (const ()))
18658 These operations can be further expanded in terms of the vector operations given above. In
18659 conventional computer architectures, however, it is usually advantageous to allocate the stack as a
18660 separate vector. Then pushing and popping the stack can be accomplished by incrementing or
18661 decrementing an index into that vector.
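A sketch of such a vector-allocated stack (our own illustration; the register machines in the text continue to use the list representation):

(define stack (make-vector 64))             ; arbitrary fixed size for the sketch
(define stack-top 0)                        ; index of the next free slot
(define (push! val)
  (vector-set! stack stack-top val)
  (set! stack-top (+ stack-top 1)))
(define (pop!)
  (set! stack-top (- stack-top 1))
  (vector-ref stack stack-top))
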
18662 Exercise 5.20. Draw the box-and-pointer representation and the memory-vector representation (as in
18663 figure 5.14) of the list structure produced by
18664 (define x (cons 1 2))
18665 (define y (list x x))
with the free pointer initially p1. What is the final value of free? What pointers represent the
values of x and y?
18668 Exercise 5.21. Implement register machines for the following procedures. Assume that the
18669 list-structure memory operations are available as machine primitives.
18670
a. Recursive count-leaves:
(define (count-leaves tree)
  (cond ((null? tree) 0)
        ((not (pair? tree)) 1)
        (else (+ (count-leaves (car tree))
                 (count-leaves (cdr tree))))))
b. Recursive count-leaves with explicit counter:
(define (count-leaves tree)
  (define (count-iter tree n)
    (cond ((null? tree) n)
          ((not (pair? tree)) (+ n 1))
          (else (count-iter (cdr tree)
                            (count-iter (car tree) n)))))
  (count-iter tree 0))
18685 Exercise 5.22. Exercise 3.12 of section 3.3.1 presented an append procedure that appends two lists
18686 to form a new list and an append! procedure that splices two lists together. Design a register
18687 machine to implement each of these procedures. Assume that the list-structure memory operations are
18688 available as primitive operations.
18689
18690 5.3.2 Maintaining the Illusion of Infinite Memory
18691 The representation method outlined in section 5.3.1 solves the problem of implementing list structure,
18692 provided that we have an infinite amount of memory. With a real computer we will eventually run out
18693 of free space in which to construct new pairs. 13 However, most of the pairs generated in a typical
18694 computation are used only to hold intermediate results. After these results are accessed, the pairs are
18695 no longer needed -- they are garbage. For instance, the computation
18696 (accumulate + 0 (filter odd? (enumerate-interval 0 n)))
18697 constructs two lists: the enumeration and the result of filtering the enumeration. When the
18698 accumulation is complete, these lists are no longer needed, and the allocated memory can be
18699 reclaimed. If we can arrange to collect all the garbage periodically, and if this turns out to recycle
18700 memory at about the same rate at which we construct new pairs, we will have preserved the illusion
18701 that there is an infinite amount of memory.
18702 In order to recycle pairs, we must have a way to determine which allocated pairs are not needed (in the
18703 sense that their contents can no longer influence the future of the computation). The method we shall
18704 examine for accomplishing this is known as garbage collection. Garbage collection is based on the
18705 observation that, at any moment in a Lisp interpretation, the only objects that can affect the future of
18706 the computation are those that can be reached by some succession of car and cdr operations starting
18707 from the pointers that are currently in the machine registers. 14 Any memory cell that is not so
18708 accessible may be recycled.
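In terms of the memory sketch from section 5.3.1, this reachability condition is a car/cdr traversal (an illustration only; the memv check keeps the traversal from looping on shared or cyclic structure):

(define (reachable-indices p acc)           ; indices of pairs reachable from pointer p
  (if (or (not (pair-pointer? p))
          (memv (pointer-index p) acc))
      acc
      (let ((acc1 (cons (pointer-index p) acc)))
        (reachable-indices (memory-cdr p)
                           (reachable-indices (memory-car p) acc1)))))
;; Any pair index not produced by this traversal, started from each
;; machine register's pointer, is garbage.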
18709 There are many ways to perform garbage collection. The method we shall examine here is called
18710 stop-and-copy. The basic idea is to divide memory into two halves: ‘‘working memory’’ and ‘‘free
18711 memory.’’ When cons constructs pairs, it allocates these in working memory. When working
18712 memory is full, we perform garbage collection by locating all the useful pairs in working memory and
18713 copying these into consecutive locations in free memory. (The useful pairs are located by tracing all
the car and cdr pointers, starting with the machine registers.) Since we do not copy the garbage,
18716 there will presumably be additional free memory that we can use to allocate new pairs. In addition,
18717 nothing in the working memory is needed, since all the useful pairs in it have been copied. Thus, if we
18718 interchange the roles of working memory and free memory, we can continue processing; new pairs
18719 will be allocated in the new working memory (which was the old free memory). When this is full, we
18720 can copy the useful pairs into the new free memory (which was the old working memory). 15
18721
18722 Implementation of a stop-and-copy garbage collector
18723 We now use our register-machine language to describe the stop-and-copy algorithm in more detail. We
18724 will assume that there is a register called root that contains a pointer to a structure that eventually
18725 points at all accessible data. This can be arranged by storing the contents of all the machine registers in
18726 a pre-allocated list pointed at by root just before starting garbage collection. 16 We also assume that,
18727 in addition to the current working memory, there is free memory available into which we can copy the
18728 useful data. The current working memory consists of vectors whose base addresses are in registers
18729 called the-cars and the-cdrs, and the free memory is in registers called new-cars and
18730 new-cdrs.
18731 Garbage collection is triggered when we exhaust the free cells in the current working memory, that is,
18732 when a cons operation attempts to increment the free pointer beyond the end of the memory vector.
18733 When the garbage-collection process is complete, the root pointer will point into the new memory,
18734 all objects accessible from the root will have been moved to the new memory, and the free pointer
18735 will indicate the next place in the new memory where a new pair can be allocated. In addition, the
18736 roles of working memory and new memory will have been interchanged -- new pairs will be
18737 constructed in the new memory, beginning at the place indicated by free, and the (previous) working
18738 memory will be available as the new memory for the next garbage collection. Figure 5.15 shows the
18739 arrangement of memory just before and just after garbage collection.
18740
Figure 5.15: Reconfiguration of memory by the garbage-collection process.
18743 The state of the garbage-collection process is controlled by maintaining two pointers: free and
18744 scan. These are initialized to point to the beginning of the new memory. The algorithm begins by
18745 relocating the pair pointed at by root to the beginning of the new memory. The pair is copied, the
18746 root pointer is adjusted to point to the new location, and the free pointer is incremented. In
18747 addition, the old location of the pair is marked to show that its contents have been moved. This
18748 marking is done as follows: In the car position, we place a special tag that signals that this is an
18749 already-moved object. (Such an object is traditionally called a broken heart.) 17 In the cdr position
18750 we place a forwarding address that points at the location to which the object has been moved.
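In the vector picture, marking a moved pair is just two stores (a sketch; broken-heart stands for whatever distinguished tag the system reserves):

(define (mark-moved! old-index new-pointer)
  (vector-set! the-cars old-index 'broken-heart)  ; tag in the car slot
  (vector-set! the-cdrs old-index new-pointer))   ; forwarding address in the cdr slot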
18751 After relocating the root, the garbage collector enters its basic cycle. At each step in the algorithm, the
18752 scan pointer (initially pointing at the relocated root) points at a pair that has been moved to the new
18753 memory but whose car and cdr pointers still refer to objects in the old memory. These objects are
18754 each relocated, and the scan pointer is incremented. To relocate an object (for example, the object
18755 indicated by the car pointer of the pair we are scanning) we check to see if the object has already
18756 been moved (as indicated by the presence of a broken-heart tag in the car position of the object). If
18757 the object has not already been moved, we copy it to the place indicated by free, update free, set
up a broken heart at the object’s old location, and update the pointer to the object (in this example, the
18760 car pointer of the pair we are scanning) to point to the new location. If the object has already been
18761 moved, its forwarding address (found in the cdr position of the broken heart) is substituted for the
18762 pointer in the pair being scanned. Eventually, all accessible objects will have been moved and scanned,
18763 at which point the scan pointer will overtake the free pointer and the process will terminate.
18764 We can specify the stop-and-copy algorithm as a sequence of instructions for a register machine. The
18765 basic step of relocating an object is accomplished by a subroutine called
18766 relocate-old-result-in-new. This subroutine gets its argument, a pointer to the object to be
18767 relocated, from a register named old. It relocates the designated object (incrementing free in the
18768 process), puts a pointer to the relocated object into a register called new, and returns by branching to
18769 the entry point stored in the register relocate-continue. To begin garbage collection, we invoke
18770 this subroutine to relocate the root pointer, after initializing free and scan. When the relocation of
18771 root has been accomplished, we install the new pointer as the new root and enter the main loop of
18772 the garbage collector.
18773 begin-garbage-collection
18774 (assign free (const 0))
18775 (assign scan (const 0))
18776 (assign old (reg root))
18777 (assign relocate-continue (label reassign-root))
18778 (goto (label relocate-old-result-in-new))
18779 reassign-root
18780 (assign root (reg new))
18781 (goto (label gc-loop))
18782 In the main loop of the garbage collector we must determine whether there are any more objects to be
18783 scanned. We do this by testing whether the scan pointer is coincident with the free pointer. If the
18784 pointers are equal, then all accessible objects have been relocated, and we branch to gc-flip, which
18785 cleans things up so that we can continue the interrupted computation. If there are still pairs to be
18786 scanned, we call the relocate subroutine to relocate the car of the next pair (by placing the car
18787 pointer in old). The relocate-continue register is set up so that the subroutine will return to
18788 update the car pointer.
18789 gc-loop
18790 (test (op =) (reg scan) (reg free))
18791 (branch (label gc-flip))
18792 (assign old (op vector-ref) (reg new-cars) (reg scan))
18793 (assign relocate-continue (label update-car))
18794 (goto (label relocate-old-result-in-new))
18795 At update-car, we modify the car pointer of the pair being scanned, then proceed to relocate the
18796 cdr of the pair. We return to update-cdr when that relocation has been accomplished. After
18797 relocating and updating the cdr, we are finished scanning that pair, so we continue with the main
18798 loop.
18799 update-car
18800 (perform
18801 (op vector-set!) (reg new-cars) (reg scan) (reg new))
18802 (assign old (op vector-ref) (reg new-cdrs) (reg scan))
18803 (assign relocate-continue (label update-cdr))
(goto (label relocate-old-result-in-new))
18806 update-cdr
18807 (perform
18808 (op vector-set!) (reg new-cdrs) (reg scan) (reg new))
18809 (assign scan (op +) (reg scan) (const 1))
18810 (goto (label gc-loop))
18811 The subroutine relocate-old-result-in-new relocates objects as follows: If the object to be
18812 relocated (pointed at by old) is not a pair, then we return the same pointer to the object unchanged (in
18813 new). (For example, we may be scanning a pair whose car is the number 4. If we represent the car
18814 by n4, as described in section 5.3.1, then we want the ‘‘relocated’’ car pointer to still be n4.)
18815 Otherwise, we must perform the relocation. If the car position of the pair to be relocated contains a
18816 broken-heart tag, then the pair has in fact already been moved, so we retrieve the forwarding address
18817 (from the cdr position of the broken heart) and return this in new. If the pointer in old points at a
18818 yet-unmoved pair, then we move the pair to the first free cell in new memory (pointed at by free)
18819 and set up the broken heart by storing a broken-heart tag and forwarding address at the old location.
18820 Relocate-old-result-in-new uses a register oldcr to hold the car or the cdr of the object
18821 pointed at by old. 18
18822 relocate-old-result-in-new
18823 (test (op pointer-to-pair?) (reg old))
18824 (branch (label pair))
18825 (assign new (reg old))
18826 (goto (reg relocate-continue))
18827 pair
18828 (assign oldcr (op vector-ref) (reg the-cars) (reg old))
18829 (test (op broken-heart?) (reg oldcr))
18830 (branch (label already-moved))
18831 (assign new (reg free)) ; new location for pair
18832 ;; Update free pointer.
18833 (assign free (op +) (reg free) (const 1))
18834 ;; Copy the car and cdr to new memory.
18835 (perform (op vector-set!)
18836 (reg new-cars) (reg new) (reg oldcr))
18837 (assign oldcr (op vector-ref) (reg the-cdrs) (reg old))
18838 (perform (op vector-set!)
18839 (reg new-cdrs) (reg new) (reg oldcr))
18840 ;; Construct the broken heart.
18841 (perform (op vector-set!)
18842 (reg the-cars) (reg old) (const broken-heart))
18843 (perform
18844 (op vector-set!) (reg the-cdrs) (reg old) (reg new))
18845 (goto (reg relocate-continue))
18846 already-moved
18847 (assign new (op vector-ref) (reg the-cdrs) (reg old))
18848 (goto (reg relocate-continue))
18849 At the very end of the garbage-collection process, we interchange the role of old and new memories by
18850 interchanging pointers: interchanging the-cars with new-cars, and the-cdrs with
18851 new-cdrs. We will then be ready to perform another garbage collection the next time memory runs
18852 out.
18853
gc-flip
(assign temp (reg the-cdrs))
(assign the-cdrs (reg new-cdrs))
(assign new-cdrs (reg temp))
(assign temp (reg the-cars))
(assign the-cars (reg new-cars))
(assign new-cars (reg temp))

5 We could represent memory as lists of items. However, the access time would then not be
independent of the index, since accessing the nth element of a list requires n - 1 cdr operations.

6 For completeness, we should specify a make-vector operation that constructs vectors. However,
in the present application we will use vectors only to model fixed divisions of the computer memory.

7 This is precisely the same ‘‘tagged data’’ idea we introduced in chapter 2 for dealing with generic
operations. Here, however, the data types are included at the primitive machine level rather than
constructed through the use of lists.

8 Type information may be encoded in a variety of ways, depending on the details of the machine on
which the Lisp system is to be implemented. The execution efficiency of Lisp programs will be
strongly dependent on how cleverly this choice is made, but it is difficult to formulate general design
rules for good choices. The most straightforward way to implement typed pointers is to allocate a fixed
set of bits in each pointer to be a type field that encodes the data type. Important questions to be
addressed in designing such a representation include the following: How many type bits are required?
How large must the vector indices be? How efficiently can the primitive machine instructions be used
to manipulate the type fields of pointers? Machines that include special hardware for the efficient
handling of type fields are said to have tagged architectures.

9 This decision on the representation of numbers determines whether eq?, which tests equality of
pointers, can be used to test for equality of numbers. If the pointer contains the number itself, then
equal numbers will have the same pointer. But if the pointer contains the index of a location where the
number is stored, equal numbers will be guaranteed to have equal pointers only if we are careful never
to store the same number in more than one location.

10 This is just like writing a number as a sequence of digits, except that each ‘‘digit’’ is a number
between 0 and the largest number that can be stored in a single pointer.

11 There are other ways of finding free storage. For example, we could link together all the unused
pairs into a free list. Our free locations are consecutive (and hence can be accessed by incrementing a
pointer) because we are using a compacting garbage collector, as we will see in section 5.3.2.

12 This is essentially the implementation of cons in terms of set-car! and set-cdr!, as
described in section 3.3.1. The operation get-new-pair used in that implementation is realized
here by the free pointer.

13 This may not be true eventually, because memories may get large enough so that it would be
impossible to run out of free memory in the lifetime of the computer. For example, there are about
3 × 10^13 microseconds in a year, so if we were to cons once per microsecond we would need about
10^15 cells of memory to build a machine that could operate for 30 years without running out of
memory. That much memory seems absurdly large by today’s standards, but it is not physically
impossible. On the other hand, processors are getting faster and a future computer may have large
numbers of processors operating in parallel on a single memory, so it may be possible to use up
memory much faster than we have postulated.

14 We assume here that the stack is represented as a list as described in section 5.3.1, so that items on
the stack are accessible via the pointer in the stack register.

15 This idea was invented and first implemented by Minsky, as part of the implementation of Lisp for
the PDP-1 at the MIT Research Laboratory of Electronics. It was further developed by Fenichel and
Yochelson (1969) for use in the Lisp implementation for the Multics time-sharing system. Later, Baker
(1978) developed a ‘‘real-time’’ version of the method, which does not require the computation to stop
during garbage collection. Baker’s idea was extended by Hewitt, Lieberman, and Moon (see
Lieberman and Hewitt 1983) to take advantage of the fact that some structure is more volatile and
other structure is more permanent.

An alternative commonly used garbage-collection technique is the mark-sweep method. This consists
of tracing all the structure accessible from the machine registers and marking each pair we reach. We
then scan all of memory, and any location that is unmarked is ‘‘swept up’’ as garbage and made
available for reuse. A full discussion of the mark-sweep method can be found in Allen 1978.

The Minsky-Fenichel-Yochelson algorithm is the dominant algorithm in use for large-memory
systems because it examines only the useful part of memory. This is in contrast to mark-sweep, in
which the sweep phase must check all of memory. A second advantage of stop-and-copy is that it is a
compacting garbage collector. That is, at the end of the garbage-collection phase the useful data will
have been moved to consecutive memory locations, with all garbage pairs compressed out. This can be
an extremely important performance consideration in machines with virtual memory, in which
accesses to widely separated memory addresses may require extra paging operations.

16 This list of registers does not include the registers used by the storage-allocation system -- root,
the-cars, the-cdrs, and the other registers that will be introduced in this section.

17 The term broken heart was coined by David Cressey, who wrote a garbage collector for MDL, a
dialect of Lisp developed at MIT during the early 1970s.

18 The garbage collector uses the low-level predicate pointer-to-pair? instead of the
list-structure pair? operation because in a real system there might be various things that are treated
as pairs for garbage-collection purposes. For example, in a Scheme system that conforms to the IEEE
standard a procedure object may be implemented as a special kind of ‘‘pair’’ that doesn’t satisfy the
pair? predicate. For simulation purposes, pointer-to-pair? can be implemented as pair?.
18954 5.4 The Explicit-Control Evaluator
18955 In section 5.1 we saw how to transform simple Scheme programs into descriptions of register
18956 machines. We will now perform this transformation on a more complex program, the metacircular
18957 evaluator of sections 4.1.1-4.1.4, which shows how the behavior of a Scheme interpreter can be
18958 described in terms of the procedures eval and apply. The explicit-control evaluator that we develop
18959 in this section shows how the underlying procedure-calling and argument-passing mechanisms used in
18960 the evaluation process can be described in terms of operations on registers and stacks. In addition, the
18961 explicit-control evaluator can serve as an implementation of a Scheme interpreter, written in a
18962 language that is very similar to the native machine language of conventional computers. The evaluator
18963 can be executed by the register-machine simulator of section 5.2. Alternatively, it can be used as a
18964 starting point for building a machine-language implementation of a Scheme evaluator, or even a
18965 special-purpose machine for evaluating Scheme expressions. Figure 5.16 shows such a hardware
18966 implementation: a silicon chip that acts as an evaluator for Scheme. The chip designers started with the
18967 data-path and controller specifications for a register machine similar to the evaluator described in this
18968 section and used design automation programs to construct the integrated-circuit layout. 19
18969
18970 Registers and operations
18971 In designing the explicit-control evaluator, we must specify the operations to be used in our register
18972 machine. We described the metacircular evaluator in terms of abstract syntax, using procedures such
18973 as quoted? and make-procedure. In implementing the register machine, we could expand these
18974 procedures into sequences of elementary list-structure memory operations, and implement these
18975 operations on our register machine. However, this would make our evaluator very long, obscuring the
18976 basic structure with details. To clarify the presentation, we will include as primitive operations of the
18977 register machine the syntax procedures given in section 4.1.2 and the procedures for representing
18978 environments and other run-time data given in sections 4.1.3 and 4.1.4. In order to completely specify
18979 an evaluator that could be programmed in a low-level machine language or implemented in hardware,
18980 we would replace these operations by more elementary operations, using the list-structure
18981 implementation we described in section 5.3.
18982
Figure 5.16: A silicon-chip implementation of an evaluator for Scheme.
18985
Our Scheme evaluator register machine includes a stack and seven registers: exp, env, val,
18987 continue, proc, argl, and unev. Exp is used to hold the expression to be evaluated, and env
18988 contains the environment in which the evaluation is to be performed. At the end of an evaluation, val
18989 contains the value obtained by evaluating the expression in the designated environment. The
18990 continue register is used to implement recursion, as explained in section 5.1.4. (The evaluator
18991 needs to call itself recursively, since evaluating an expression requires evaluating its subexpressions.)
18992 The registers proc, argl, and unev are used in evaluating combinations.
18993 We will not provide a data-path diagram to show how the registers and operations of the evaluator are
18994 connected, nor will we give the complete list of machine operations. These are implicit in the
18995 evaluator’s controller, which will be presented in detail.
18996
18997 5.4.1 The Core of the Explicit-Control Evaluator
18998 The central element in the evaluator is the sequence of instructions beginning at eval-dispatch.
18999 This corresponds to the eval procedure of the metacircular evaluator described in section 4.1.1.
19000 When the controller starts at eval-dispatch, it evaluates the expression specified by exp in the
19001 environment specified by env. When evaluation is complete, the controller will go to the entry point
19002 stored in continue, and the val register will hold the value of the expression. As with the
19003 metacircular eval, the structure of eval-dispatch is a case analysis on the syntactic type of the
19004 expression to be evaluated. 20
19005 eval-dispatch
19006 (test (op self-evaluating?) (reg exp))
19007 (branch (label ev-self-eval))
19008 (test (op variable?) (reg exp))
19009 (branch (label ev-variable))
19010 (test (op quoted?) (reg exp))
19011 (branch (label ev-quoted))
19012 (test (op assignment?) (reg exp))
19013 (branch (label ev-assignment))
19014 (test (op definition?) (reg exp))
19015 (branch (label ev-definition))
19016 (test (op if?) (reg exp))
19017 (branch (label ev-if))
19018 (test (op lambda?) (reg exp))
19019 (branch (label ev-lambda))
19020 (test (op begin?) (reg exp))
19021 (branch (label ev-begin))
19022 (test (op application?) (reg exp))
19023 (branch (label ev-application))
19024 (goto (label unknown-expression-type))
19025
19026 Evaluating simple expressions
19027 Numbers and strings (which are self-evaluating), variables, quotations, and lambda expressions have
19028 no subexpressions to be evaluated. For these, the evaluator simply places the correct value in the val
19029 register and continues execution at the entry point specified by continue. Evaluation of simple
19030 expressions is performed by the following controller code:
19031
ev-self-eval
19033 (assign val (reg exp))
19034 (goto (reg continue))
19035 ev-variable
19036 (assign val (op lookup-variable-value) (reg exp) (reg env))
19037 (goto (reg continue))
19038 ev-quoted
19039 (assign val (op text-of-quotation) (reg exp))
19040 (goto (reg continue))
19041 ev-lambda
19042 (assign unev (op lambda-parameters) (reg exp))
19043 (assign exp (op lambda-body) (reg exp))
19044 (assign val (op make-procedure)
19045 (reg unev) (reg exp) (reg env))
19046 (goto (reg continue))
19047 Observe how ev-lambda uses the unev and exp registers to hold the parameters and body of the
19048 lambda expression so that they can be passed to the make-procedure operation, along with the
19049 environment in env.
19050
19051 Evaluating procedure applications
19052 A procedure application is specified by a combination containing an operator and operands. The
19053 operator is a subexpression whose value is a procedure, and the operands are subexpressions whose
19054 values are the arguments to which the procedure should be applied. The metacircular eval handles
19055 applications by calling itself recursively to evaluate each element of the combination, and then passing
19056 the results to apply, which performs the actual procedure application. The explicit-control evaluator
19057 does the same thing; these recursive calls are implemented by goto instructions, together with use of
19058 the stack to save registers that will be restored after the recursive call returns. Before each call we will
19059 be careful to identify which registers must be saved (because their values will be needed later). 21
19060 We begin the evaluation of an application by evaluating the operator to produce a procedure, which
19061 will later be applied to the evaluated operands. To evaluate the operator, we move it to the exp
19062 register and go to eval-dispatch. The environment in the env register is already the correct one
19063 in which to evaluate the operator. However, we save env because we will need it later to evaluate the
19064 operands. We also extract the operands into unev and save this on the stack. We set up continue so
19065 that eval-dispatch will resume at ev-appl-did-operator after the operator has been
19066 evaluated. First, however, we save the old value of continue, which tells the controller where to
19067 continue after the application.
19068 ev-application
19069 (save continue)
19070 (save env)
19071 (assign unev (op operands) (reg exp))
19072 (save unev)
19073 (assign exp (op operator) (reg exp))
19074 (assign continue (label ev-appl-did-operator))
19075 (goto (label eval-dispatch))
19076
Upon returning from evaluating the operator subexpression, we proceed to evaluate the operands of
19078 the combination and to accumulate the resulting arguments in a list, held in argl. First we restore the
19079 unevaluated operands and the environment. We initialize argl to an empty list. Then we assign to the
19080 proc register the procedure that was produced by evaluating the operator. If there are no operands,
19081 we go directly to apply-dispatch. Otherwise we save proc on the stack and start the
19082 argument-evaluation loop: 22
ev-appl-did-operator
(restore unev)                       ; the operands
(restore env)
(assign argl (op empty-arglist))
(assign proc (reg val))              ; the operator
(test (op no-operands?) (reg unev))
(branch (label apply-dispatch))
(save proc)
19093 Each cycle of the argument-evaluation loop evaluates an operand from the list in unev and
19094 accumulates the result into argl. To evaluate an operand, we place it in the exp register and go to
19095 eval-dispatch, after setting continue so that execution will resume with the
19096 argument-accumulation phase. But first we save the arguments accumulated so far (held in argl), the
19097 environment (held in env), and the remaining operands to be evaluated (held in unev). A special case
19098 is made for the evaluation of the last operand, which is handled at ev-appl-last-arg.
19099 ev-appl-operand-loop
19100 (save argl)
19101 (assign exp (op first-operand) (reg unev))
19102 (test (op last-operand?) (reg unev))
19103 (branch (label ev-appl-last-arg))
19104 (save env)
19105 (save unev)
19106 (assign continue (label ev-appl-accumulate-arg))
19107 (goto (label eval-dispatch))
19108 When an operand has been evaluated, the value is accumulated into the list held in argl. The operand
19109 is then removed from the list of unevaluated operands in unev, and the argument-evaluation
19110 continues.
19111 ev-appl-accumulate-arg
19112 (restore unev)
19113 (restore env)
19114 (restore argl)
19115 (assign argl (op adjoin-arg) (reg val) (reg argl))
19116 (assign unev (op rest-operands) (reg unev))
19117 (goto (label ev-appl-operand-loop))
19118 Evaluation of the last argument is handled differently. There is no need to save the environment or the
19119 list of unevaluated operands before going to eval-dispatch, since they will not be required after
19120 the last operand is evaluated. Thus, we return from the evaluation to a special entry point
19121 ev-appl-accum-last-arg, which restores the argument list, accumulates the new argument,
19122 restores the saved procedure, and goes off to perform the application. 23
19123
ev-appl-last-arg
19125 (assign continue (label ev-appl-accum-last-arg))
19126 (goto (label eval-dispatch))
19127 ev-appl-accum-last-arg
19128 (restore argl)
19129 (assign argl (op adjoin-arg) (reg val) (reg argl))
19130 (restore proc)
19131 (goto (label apply-dispatch))
19132 The details of the argument-evaluation loop determine the order in which the interpreter evaluates the
19133 operands of a combination (e.g., left to right or right to left -- see exercise 3.8). This order is not
19134 determined by the metacircular evaluator, which inherits its control structure from the underlying
19135 Scheme in which it is implemented. 24 Because the first-operand selector (used in
19136 ev-appl-operand-loop to extract successive operands from unev) is implemented as car and
19137 the rest-operands selector is implemented as cdr, the explicit-control evaluator will evaluate the
19138 operands of a combination in left-to-right order.
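One way to observe this order is with a side-effecting procedure in the spirit of exercise 3.8 (a sketch; f returns its argument the first time it is called and 0 thereafter):

(define f
  (let ((called #f))
    (lambda (x)
      (if called
          0
          (begin (set! called #t) x)))))
;; (+ (f 0) (f 1)) => 0 under left-to-right operand evaluation,
;; but would be 1 if operands were evaluated right to left.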
19139
19140 Procedure application
19141 The entry point apply-dispatch corresponds to the apply procedure of the metacircular
19142 evaluator. By the time we get to apply-dispatch, the proc register contains the procedure to
19143 apply and argl contains the list of evaluated arguments to which it must be applied. The saved value
19144 of continue (originally passed to eval-dispatch and saved at ev-application), which
19145 tells where to return with the result of the procedure application, is on the stack. When the application
19146 is complete, the controller transfers to the entry point specified by the saved continue, with the
19147 result of the application in val. As with the metacircular apply, there are two cases to consider.
19148 Either the procedure to be applied is a primitive or it is a compound procedure.
19149 apply-dispatch
19150 (test (op primitive-procedure?) (reg proc))
19151 (branch (label primitive-apply))
19152 (test (op compound-procedure?) (reg proc))
19153 (branch (label compound-apply))
19154 (goto (label unknown-procedure-type))
19155 We assume that each primitive is implemented so as to obtain its arguments from argl and place its
19156 result in val. To specify how the machine handles primitives, we would have to provide a sequence
19157 of controller instructions to implement each primitive and arrange for primitive-apply to
19158 dispatch to the instructions for the primitive identified by the contents of proc. Since we are
19159 interested in the structure of the evaluation process rather than the details of the primitives, we will
19160 instead just use an apply-primitive-procedure operation that applies the procedure in proc
19161 to the arguments in argl. For the purpose of simulating the evaluator with the simulator of
19162 section 5.2 we use the procedure apply-primitive-procedure, which calls on the underlying
19163 Scheme system to perform the application, just as we did for the metacircular evaluator in
19164 section 4.1.4. After computing the value of the primitive application, we restore continue and go to
19165 the designated entry point.
19166 primitive-apply
19167 (assign val (op apply-primitive-procedure)
19168 (reg proc)
19169 (reg argl))
(restore continue)
19172 (goto (reg continue))
19173 To apply a compound procedure, we proceed just as with the metacircular evaluator. We construct a
19174 frame that binds the procedure’s parameters to the arguments, use this frame to extend the
19175 environment carried by the procedure, and evaluate in this extended environment the sequence of
19176 expressions that forms the body of the procedure. Ev-sequence, described below in section 5.4.2,
19177 handles the evaluation of the sequence.
19178 compound-apply
19179 (assign unev (op procedure-parameters) (reg proc))
19180 (assign env (op procedure-environment) (reg proc))
19181 (assign env (op extend-environment)
19182 (reg unev) (reg argl) (reg env))
19183 (assign unev (op procedure-body) (reg proc))
19184 (goto (label ev-sequence))
19185 Compound-apply is the only place in the interpreter where the env register is ever assigned a new
19186 value. Just as in the metacircular evaluator, the new environment is constructed from the environment
19187 carried by the procedure, together with the argument list and the corresponding list of variables to be
19188 bound.
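Stated as a single Scheme expression in terms of the chapter 4 procedures (which this machine takes as primitive operations, and writing argl for the contents of the argl register), the environment installed by compound-apply is:

(extend-environment (procedure-parameters proc)   ; the variables to be bound
                    argl                          ; the evaluated arguments
                    (procedure-environment proc)) ; the environment the procedure carries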
19189
19190 5.4.2 Sequence Evaluation and Tail Recursion
19191 The portion of the explicit-control evaluator at ev-sequence is analogous to the metacircular
19192 evaluator’s eval-sequence procedure. It handles sequences of expressions in procedure bodies or
19193 in explicit begin expressions.
19194 Explicit begin expressions are evaluated by placing the sequence of expressions to be evaluated in
19195 unev, saving continue on the stack, and jumping to ev-sequence.
19196 ev-begin
19197 (assign unev (op begin-actions) (reg exp))
19198 (save continue)
19199 (goto (label ev-sequence))
19200 The implicit sequences in procedure bodies are handled by jumping to ev-sequence from
19201 compound-apply, at which point continue is already on the stack, having been saved at
19202 ev-application.
19203 The entries at ev-sequence and ev-sequence-continue form a loop that successively
19204 evaluates each expression in a sequence. The list of unevaluated expressions is kept in unev. Before
19205 evaluating each expression, we check to see if there are additional expressions to be evaluated in the
19206 sequence. If so, we save the rest of the unevaluated expressions (held in unev) and the environment in
19207 which these must be evaluated (held in env) and call eval-dispatch to evaluate the expression.
19208 The two saved registers are restored upon the return from this evaluation, at
19209 ev-sequence-continue.
19210 The final expression in the sequence is handled differently, at the entry point
19211 ev-sequence-last-exp. Since there are no more expressions to be evaluated after this one, we
19212 need not save unev or env before going to eval-dispatch. The value of the whole sequence is
19213 the value of the last expression, so after the evaluation of the last expression there is nothing left to do
except continue at the entry point currently held on the stack (which was saved by
ev-application or ev-begin). Rather than setting up continue to arrange for
19217 eval-dispatch to return here and then restoring continue from the stack and continuing at that
19218 entry point, we restore continue from the stack before going to eval-dispatch, so that
19219 eval-dispatch will continue at that entry point after evaluating the expression.
19220 ev-sequence
19221 (assign exp (op first-exp) (reg unev))
19222 (test (op last-exp?) (reg unev))
19223 (branch (label ev-sequence-last-exp))
19224 (save unev)
19225 (save env)
19226 (assign continue (label ev-sequence-continue))
19227 (goto (label eval-dispatch))
19228 ev-sequence-continue
19229 (restore env)
19230 (restore unev)
19231 (assign unev (op rest-exps) (reg unev))
19232 (goto (label ev-sequence))
19233 ev-sequence-last-exp
19234 (restore continue)
19235 (goto (label eval-dispatch))
19236
19237 Tail recursion
19238 In chapter 1 we said that the process described by a procedure such as
(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))
19244 is an iterative process. Even though the procedure is syntactically recursive (defined in terms of itself),
19245 it is not logically necessary for an evaluator to save information in passing from one call to
19246 sqrt-iter to the next. 25 An evaluator that can execute a procedure such as sqrt-iter without
19247 requiring increasing storage as the procedure continues to call itself is called a tail-recursive evaluator.
19248 The metacircular implementation of the evaluator in chapter 4 does not specify whether the evaluator
19249 is tail-recursive, because that evaluator inherits its mechanism for saving state from the underlying
19250 Scheme. With the explicit-control evaluator, however, we can trace through the evaluation process to
19251 see when procedure calls cause a net accumulation of information on the stack.
19252 Our evaluator is tail-recursive, because in order to evaluate the final expression of a sequence we
19253 transfer directly to eval-dispatch without saving any information on the stack. Hence, evaluating
19254 the final expression in a sequence -- even if it is a procedure call (as in sqrt-iter, where the if
expression, which is the last expression in the procedure body, reduces to a call to sqrt-iter) -- will not cause any information to be accumulated on the stack. 26
19256 If we did not think to take advantage of the fact that it was unnecessary to save information in this
19257 case, we might have implemented eval-sequence by treating all the expressions in a sequence in
19258 the same way -- saving the registers, evaluating the expression, returning to restore the registers, and
repeating this until all the expressions have been evaluated: 27
19261 ev-sequence
19262 (test (op no-more-exps?) (reg unev))
19263 (branch (label ev-sequence-end))
19264 (assign exp (op first-exp) (reg unev))
19265 (save unev)
19266 (save env)
19267 (assign continue (label ev-sequence-continue))
19268 (goto (label eval-dispatch))
19269 ev-sequence-continue
19270 (restore env)
19271 (restore unev)
19272 (assign unev (op rest-exps) (reg unev))
19273 (goto (label ev-sequence))
19274 ev-sequence-end
19275 (restore continue)
19276 (goto (reg continue))
19277 This may seem like a minor change to our previous code for evaluation of a sequence: The only
19278 difference is that we go through the save-restore cycle for the last expression in a sequence as well as
19279 for the others. The interpreter will still give the same value for any expression. But this change is fatal
19280 to the tail-recursive implementation, because we must now return after evaluating the final expression
19281 in a sequence in order to undo the (useless) register saves. These extra saves will accumulate during a
19282 nest of procedure calls. Consequently, processes such as sqrt-iter will require space proportional
19283 to the number of iterations rather than requiring constant space. This difference can be significant. For
19284 example, with tail recursion, an infinite loop can be expressed using only the procedure-call
19285 mechanism:
(define (count n)
  (newline)
  (display n)
  (count (+ n 1)))
19290 Without tail recursion, such a procedure would eventually run out of stack space, and expressing a true
19291 iteration would require some control mechanism other than procedure call.
19292
19293 5.4.3 Conditionals, Assignments, and Definitions
19294 As with the metacircular evaluator, special forms are handled by selectively evaluating fragments of
19295 the expression. For an if expression, we must evaluate the predicate and decide, based on the value of
the predicate, whether to evaluate the consequent or the alternative.
19297 Before evaluating the predicate, we save the if expression itself so that we can later extract the
19298 consequent or alternative. We also save the environment, which we will need later in order to evaluate
19299 the consequent or the alternative, and we save continue, which we will need later in order to return
19300 to the evaluation of the expression that is waiting for the value of the if.
ev-if
(save exp)                           ; save expression for later
(save env)
(save continue)
(assign continue (label ev-if-decide))
(assign exp (op if-predicate) (reg exp))
(goto (label eval-dispatch))         ; evaluate the predicate
19311 When we return from evaluating the predicate, we test whether it was true or false and, depending on
19312 the result, place either the consequent or the alternative in exp before going to eval-dispatch.
19313 Notice that restoring env and continue here sets up eval-dispatch to have the correct
19314 environment and to continue at the right place to receive the value of the if expression.
19315 ev-if-decide
19316 (restore continue)
19317 (restore env)
19318 (restore exp)
19319 (test (op true?) (reg val))
19320 (branch (label ev-if-consequent))
19321 ev-if-alternative
19322 (assign exp (op if-alternative) (reg exp))
19323 (goto (label eval-dispatch))
19324 ev-if-consequent
19325 (assign exp (op if-consequent) (reg exp))
19326 (goto (label eval-dispatch))
19327
19328 Assignments and definitions
19329 Assignments are handled by ev-assignment, which is reached from eval-dispatch with the
19330 assignment expression in exp. The code at ev-assignment first evaluates the value part of the
19331 expression and then installs the new value in the environment. Set-variable-value! is assumed
19332 to be available as a machine operation.
19333 ev-assignment
19334 (assign unev (op assignment-variable) (reg exp))
(save unev)                          ; save variable for later
19337 (assign exp (op assignment-value) (reg exp))
19338 (save env)
19339 (save continue)
19340 (assign continue (label ev-assignment-1))
19341 (goto (label eval-dispatch)) ; evaluate the assignment value
19342 ev-assignment-1
19343 (restore continue)
19344 (restore env)
19345 (restore unev)
19346 (perform
19347 (op set-variable-value!) (reg unev) (reg val) (reg env))
19348 (assign val (const ok))
19349 (goto (reg continue))
19350 Definitions are handled in a similar way:
19351
ev-definition
(assign unev (op definition-variable) (reg exp))
(save unev)                          ; save variable for later
19356 (assign exp (op definition-value) (reg exp))
19357 (save env)
19358 (save continue)
19359 (assign continue (label ev-definition-1))
19360 (goto (label eval-dispatch)) ; evaluate the definition value
19361 ev-definition-1
19362 (restore continue)
19363 (restore env)
19364 (restore unev)
19365 (perform
19366 (op define-variable!) (reg unev) (reg val) (reg env))
19367 (assign val (const ok))
19368 (goto (reg continue))
19369 Exercise 5.23. Extend the evaluator to handle derived expressions such as cond, let, and so on
19370 (section 4.1.2). You may ‘‘cheat’’ and assume that the syntax transformers such as cond->if are
19371 available as machine operations. 28
19372 Exercise 5.24. Implement cond as a new basic special form without reducing it to if. You will have
19373 to construct a loop that tests the predicates of successive cond clauses until you find one that is true,
19374 and then use ev-sequence to evaluate the actions of the clause.
19375 Exercise 5.25. Modify the evaluator so that it uses normal-order evaluation, based on the lazy
19376 evaluator of section 4.2.
19377
19378 5.4.4 Running the Evaluator
19379 With the implementation of the explicit-control evaluator we come to the end of a development, begun
19380 in chapter 1, in which we have explored successively more precise models of the evaluation process.
19381 We started with the relatively informal substitution model, then extended this in chapter 3 to the
19382 environment model, which enabled us to deal with state and change. In the metacircular evaluator of
19383 chapter 4, we used Scheme itself as a language for making more explicit the environment structure
19384 constructed during evaluation of an expression. Now, with register machines, we have taken a close
19385 look at the evaluator’s mechanisms for storage management, argument passing, and control. At each
19386 new level of description, we have had to raise issues and resolve ambiguities that were not apparent at
19387 the previous, less precise treatment of evaluation. To understand the behavior of the explicit-control
19388 evaluator, we can simulate it and monitor its performance.
19389 We will install a driver loop in our evaluator machine. This plays the role of the driver-loop
19390 procedure of section 4.1.4. The evaluator will repeatedly print a prompt, read an expression, evaluate
19391 the expression by going to eval-dispatch, and print the result. The following instructions form
19392 the beginning of the explicit-control evaluator’s controller sequence: 29
19393 read-eval-print-loop
19394 (perform (op initialize-stack))
19395 (perform
19396 (op prompt-for-input) (const ";;; EC-Eval input:"))
19397 (assign exp (op read))
(assign env (op get-global-environment))
19400 (assign continue (label print-result))
19401 (goto (label eval-dispatch))
19402 print-result
19403 (perform
19404 (op announce-output) (const ";;; EC-Eval value:"))
19405 (perform (op user-print) (reg val))
19406 (goto (label read-eval-print-loop))
19407 When we encounter an error in a procedure (such as the ‘‘unknown procedure type error’’ indicated at
19408 apply-dispatch), we print an error message and return to the driver loop. 30
19409 unknown-expression-type
19410 (assign val (const unknown-expression-type-error))
19411 (goto (label signal-error))
19412 unknown-procedure-type
(restore continue)                   ; clean up stack (from apply-dispatch)
19415 (assign val (const unknown-procedure-type-error))
19416 (goto (label signal-error))
19417 signal-error
19418 (perform (op user-print) (reg val))
19419 (goto (label read-eval-print-loop))
19420 For the purposes of the simulation, we initialize the stack each time through the driver loop, since it
19421 might not be empty after an error (such as an undefined variable) interrupts an evaluation. 31
19422 If we combine all the code fragments presented in sections 5.4.1-5.4.4, we can create an evaluator
19423 machine model that we can run using the register-machine simulator of section 5.2.
(define eceval
  (make-machine
   '(exp env val proc argl continue unev)
   eceval-operations
   '(read-eval-print-loop
     <entire machine controller as given above>)))
19432 We must define Scheme procedures to simulate the operations used as primitives by the evaluator.
19433 These are the same procedures we used for the metacircular evaluator in section 4.1, together with the
19434 few additional ones defined in footnotes throughout section 5.4.
(define eceval-operations
  (list (list 'self-evaluating? self-evaluating)
        <complete list of operations for eceval machine>))
19438 Finally, we can initialize the global environment and run the evaluator:
19439 (define the-global-environment (setup-environment))
19440 (start eceval)
19441 ;;; EC-Eval input:
(define (append x y)
  (if (null? x)
      y
      (cons (car x)
            (append (cdr x) y))))
19448 ;;; EC-Eval value:
19449 ok
19450 ;;; EC-Eval input:
(append '(a b c) '(d e f))
19452 ;;; EC-Eval value:
19453 (a b c d e f)
19454 Of course, evaluating expressions in this way will take much longer than if we had directly typed them
19455 into Scheme, because of the multiple levels of simulation involved. Our expressions are evaluated by
19456 the explicit-control-evaluator machine, which is being simulated by a Scheme program, which is itself
19457 being evaluated by the Scheme interpreter.
19458
19459 Monitoring the performance of the evaluator
19460 Simulation can be a powerful tool to guide the implementation of evaluators. Simulations make it easy
19461 not only to explore variations of the register-machine design but also to monitor the performance of
19462 the simulated evaluator. For example, one important factor in performance is how efficiently the
19463 evaluator uses the stack. We can observe the number of stack operations required to evaluate various
19464 expressions by defining the evaluator register machine with the version of the simulator that collects
19465 statistics on stack use (section 5.2.4), and adding an instruction at the evaluator’s print-result
19466 entry point to print the statistics:
19467 print-result
(perform (op print-stack-statistics)) ; added instruction
19469 (perform
19470 (op announce-output) (const ";;; EC-Eval value:"))
19471 ... ; same as before
19472 Interactions with the evaluator now look like this:
19473 ;;; EC-Eval input:
(define (factorial n)
  (if (= n 1)
      1
      (* (factorial (- n 1)) n)))
19478 (total-pushes = 3 maximum-depth = 3)
19479 ;;; EC-Eval value:
19480 ok
19481 ;;; EC-Eval input:
19482 (factorial 5)
19483 (total-pushes = 144 maximum-depth = 28)
19484 ;;; EC-Eval value:
19485 120
19486 Note that the driver loop of the evaluator reinitializes the stack at the start of each interaction, so that
19487 the statistics printed will refer only to stack operations used to evaluate the previous expression.
19488
Exercise 5.26. Use the monitored stack to explore the tail-recursive property of the evaluator
19490 (section 5.4.2). Start the evaluator and define the iterative factorial procedure from section 1.2.1:
(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)
              (+ counter 1))))
  (iter 1 1))
19498 Run the procedure with some small values of n. Record the maximum stack depth and the number of
19499 pushes required to compute n! for each of these values.
19500 a. You will find that the maximum depth required to evaluate n! is independent of n. What is that
19501 depth?
19502 b. Determine from your data a formula in terms of n for the total number of push operations used in
19503 evaluating n! for any n > 1. Note that the number of operations used is a linear function of n and is
19504 thus determined by two constants.
19505 Exercise 5.27. For comparison with exercise 5.26, explore the behavior of the following procedure
19506 for computing factorials recursively:
(define (factorial n)
  (if (= n 1)
      1
      (* (factorial (- n 1)) n)))
19511 By running this procedure with the monitored stack, determine, as a function of n, the maximum depth
19512 of the stack and the total number of pushes used in evaluating n! for n > 1. (Again, these functions will
19513 be linear.) Summarize your experiments by filling in the following table with the appropriate
19514 expressions in terms of n:
                      Maximum depth    Number of pushes
Recursive factorial
Iterative factorial

19521 The maximum depth is a measure of the amount of space used by the evaluator in carrying out the
19522 computation, and the number of pushes correlates well with the time required.
19523 Exercise 5.28. Modify the definition of the evaluator by changing eval-sequence as described in
19524 section 5.4.2 so that the evaluator is no longer tail-recursive. Rerun your experiments from
19525 exercises 5.26 and 5.27 to demonstrate that both versions of the factorial procedure now require
19526 space that grows linearly with their input.
19527
19528 \fExercise 5.29. Monitor the stack operations in the tree-recursive Fibonacci computation:
19529 (define (fib n)
19530 (if (< n 2)
19531 n
19532 (+ (fib (- n 1)) (fib (- n 2)))))
19533 a. Give a formula in terms of n for the maximum depth of the stack required to compute Fib(n) for n >
19534 2. Hint: In section 1.2.2 we argued that the space used by this process grows linearly with n.
19535 b. Give a formula for the total number of pushes used to compute Fib(n) for n > 2. You should find
19536 that the number of pushes (which correlates well with the time used) grows exponentially with n. Hint:
19537 Let S(n) be the number of pushes used in computing Fib(n). You should be able to argue that there is a
19538 formula that expresses S(n) in terms of S(n - 1), S(n - 2), and some fixed ‘‘overhead’’ constant k that is
19539 independent of n. Give the formula, and say what k is. Then show that S(n) can be expressed as a
19540 Fib(n + 1) + b and give the values of a and b.
19541 Exercise 5.30. Our evaluator currently catches and signals only two kinds of errors -- unknown
19542 expression types and unknown procedure types. Other errors will take us out of the evaluator
19543 read-eval-print loop. When we run the evaluator using the register-machine simulator, these errors are
19544 caught by the underlying Scheme system. This is analogous to the computer crashing when a user
19545 program makes an error. 32 It is a large project to make a real error system work, but it is well worth
19546 the effort to understand what is involved here.
19547 a. Errors that occur in the evaluation process, such as an attempt to access an unbound variable, could
19548 be caught by changing the lookup operation to make it return a distinguished condition code, which
19549 cannot be a possible value of any user variable. The evaluator can test for this condition code and then
19550 do what is necessary to go to signal-error. Find all of the places in the evaluator where such a
19551 change is necessary and fix them. This is lots of work.
19552 b. Much worse is the problem of handling errors that are signaled by applying primitive procedures,
19553 such as an attempt to divide by zero or an attempt to extract the car of a symbol. In a professionally
19554 written high-quality system, each primitive application is checked for safety as part of the primitive.
19555 For example, every call to car could first check that the argument is a pair. If the argument is not a
19556 pair, the application would return a distinguished condition code to the evaluator, which would then
19557 report the failure. We could arrange for this in our register-machine simulator by making each
19558 primitive procedure check for applicability and returning an appropriate distinguished condition code
19559 on failure. Then the primitive-apply code in the evaluator can check for the condition code and
19560 go to signal-error if necessary. Build this structure and make it work. This is a major project.
19561 19 See Batali et al. 1982 for more information on the chip and the method by which it was designed.
19562 20 In our controller, the dispatch is written as a sequence of test and branch instructions.
19563
19564 Alternatively, it could have been written in a data-directed style (and in a real system it probably
19565 would have been) to avoid the need to perform sequential tests and to facilitate the definition of new
19566 expression types. A machine designed to run Lisp would probably include a dispatch-on-type
19567 instruction that would efficiently execute such data-directed dispatches.
19568 21 This is an important but subtle point in translating algorithms from a procedural language, such as
19569
19570 Lisp, to a register-machine language. As an alternative to saving only what is needed, we could save
19571 all the registers (except val) before each recursive call. This is called a framed-stack discipline. This
19572
19573 \fwould work but might save more registers than necessary; this could be an important consideration in
19574 a system where stack operations are expensive. Saving registers whose contents will not be needed
19575 later may also hold onto useless data that could otherwise be garbage-collected, freeing space to be
19576 reused.
19577 22 We add to the evaluator data-structure procedures in section 4.1.3 the following two procedures for
19578
19579 manipulating argument lists:
19580 (define (empty-arglist) ’())
19581 (define (adjoin-arg arg arglist)
19582 (append arglist (list arg)))
19583 We also use an additional syntax procedure to test for the last operand in a combination:
19584 (define (last-operand? ops)
19585 (null? (cdr ops)))
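A quick check of these procedures (a minimal illustration, assuming the definitions above): although
each argument is adjoined at the end, the resulting list comes out in left-to-right order:
(adjoin-arg 'c (adjoin-arg 'b (adjoin-arg 'a (empty-arglist))))
; => (a b c)
(last-operand? '(n))
; => #t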
19586 23 The optimization of treating the last operand specially is known as evlis tail recursion (see Wand
19587
19588 1980). We could be somewhat more efficient in the argument evaluation loop if we made evaluation of
19589 the first operand a special case too. This would permit us to postpone initializing argl until after
19590 evaluating the first operand, so as to avoid saving argl in this case. The compiler in section 5.5
19591 performs this optimization. (Compare the construct-arglist procedure of section 5.5.3.)
19592 24 The order of operand evaluation in the metacircular evaluator is determined by the order of
19593
19594 evaluation of the arguments to cons in the procedure list-of-values of section 4.1.1 (see
19595 exercise 4.1).
19596 25 We saw in section 5.1 how to implement such a process with a register machine that had no stack;
19597
19598 the state of the process was stored in a fixed set of registers.
19599 26 This implementation of tail recursion in ev-sequence is one variety of a well-known
19600
19601 optimization technique used by many compilers. In compiling a procedure that ends with a procedure
19602 call, one can replace the call by a jump to the called procedure’s entry point. Building this strategy into
19603 the interpreter, as we have done in this section, provides the optimization uniformly throughout the
19604 language.
19605 27 We can define no-more-exps? as follows:
19606
19607 (define (no-more-exps? seq) (null? seq))
19608 28 This isn’t really cheating. In an actual implementation built from scratch, we would use our
19609
19610 explicit-control evaluator to interpret a Scheme program that performs source-level transformations
19611 like cond->if in a syntax phase that runs before execution.
19612 29 We assume here that read and the various printing operations are available as primitive machine
19613
19614 operations, which is useful for our simulation, but completely unrealistic in practice. These are
19615 actually extremely complex operations. In practice, they would be implemented using low-level
19616 input-output operations such as transferring single characters to and from a device.
19617 To support the get-global-environment operation we define
19618
19619 \f(define the-global-environment (setup-environment))
19620 (define (get-global-environment)
19621 the-global-environment)
19622 30 There are other errors that we would like the interpreter to handle, but these are not so simple. See
19623
19624 exercise 5.30.
19625 31 We could perform the stack initialization only after errors, but doing it in the driver loop will be
19626
19627 convenient for monitoring the evaluator’s performance, as described below.
19628 32 Regrettably, this is the normal state of affairs in conventional compiler-based language systems
19629
such as C. In UNIX(TM) the system ‘‘dumps core,’’ and in DOS/Windows(TM) it becomes catatonic.
The Macintosh(TM) displays a picture of an exploding bomb and offers you the opportunity to reboot
19632 the computer -- if you’re lucky.
19633 [Go to first, previous, next page; contents; index]
19634
19635 \f[Go to first, previous, next page; contents; index]
19636
19637 5.5 Compilation
19638 The explicit-control evaluator of section 5.4 is a register machine whose controller interprets Scheme
19639 programs. In this section we will see how to run Scheme programs on a register machine whose
19640 controller is not a Scheme interpreter.
19641 The explicit-control evaluator machine is universal -- it can carry out any computational process that
19642 can be described in Scheme. The evaluator’s controller orchestrates the use of its data paths to perform
19643 the desired computation. Thus, the evaluator’s data paths are universal: They are sufficient to perform
19644 any computation we desire, given an appropriate controller. 33
19645 Commercial general-purpose computers are register machines organized around a collection of
19646 registers and operations that constitute an efficient and convenient universal set of data paths. The
19647 controller for a general-purpose machine is an interpreter for a register-machine language like the one
19648 we have been using. This language is called the native language of the machine, or simply machine
19649 language. Programs written in machine language are sequences of instructions that use the machine’s
19650 data paths. For example, the explicit-control evaluator’s instruction sequence can be thought of as a
19651 machine-language program for a general-purpose computer rather than as the controller for a
19652 specialized interpreter machine.
19653 There are two common strategies for bridging the gap between higher-level languages and
19654 register-machine languages. The explicit-control evaluator illustrates the strategy of interpretation. An
19655 interpreter written in the native language of a machine configures the machine to execute programs
19656 written in a language (called the source language) that may differ from the native language of the
19657 machine performing the evaluation. The primitive procedures of the source language are implemented
19658 as a library of subroutines written in the native language of the given machine. A program to be
19659 interpreted (called the source program) is represented as a data structure. The interpreter traverses this
19660 data structure, analyzing the source program. As it does so, it simulates the intended behavior of the
19661 source program by calling appropriate primitive subroutines from the library.
19662 In this section, we explore the alternative strategy of compilation. A compiler for a given source
19663 language and machine translates a source program into an equivalent program (called the object
19664 program) written in the machine’s native language. The compiler that we implement in this section
19665 translates programs written in Scheme into sequences of instructions to be executed using the
19666 explicit-control evaluator machine’s data paths. 34
19667 Compared with interpretation, compilation can provide a great increase in the efficiency of program
19668 execution, as we will explain below in the overview of the compiler. On the other hand, an interpreter
19669 provides a more powerful environment for interactive program development and debugging, because
19670 the source program being executed is available at run time to be examined and modified. In addition,
19671 because the entire library of primitives is present, new programs can be constructed and added to the
19672 system during debugging.
19673 In view of the complementary advantages of compilation and interpretation, modern
19674 program-development environments pursue a mixed strategy. Lisp interpreters are generally organized
19675 so that interpreted procedures and compiled procedures can call each other. This enables a
19676 programmer to compile those parts of a program that are assumed to be debugged, thus gaining the
19677 efficiency advantage of compilation, while retaining the interpretive mode of execution for those parts
19678 of the program that are in the flux of interactive development and debugging. In section 5.5.7, after we
19679
19680 \fhave implemented the compiler, we will show how to interface it with our interpreter to produce an
19681 integrated interpreter-compiler development system.
19682
19683 An overview of the compiler
19684 Our compiler is much like our interpreter, both in its structure and in the function it performs.
19685 Accordingly, the mechanisms used by the compiler for analyzing expressions will be similar to those
19686 used by the interpreter. Moreover, to make it easy to interface compiled and interpreted code, we will
19687 design the compiler to generate code that obeys the same conventions of register usage as the
19688 interpreter: The environment will be kept in the env register, argument lists will be accumulated in
19689 argl, a procedure to be applied will be in proc, procedures will return their answers in val, and the
19690 location to which a procedure should return will be kept in continue. In general, the compiler
19691 translates a source program into an object program that performs essentially the same register
19692 operations as would the interpreter in evaluating the same source program.
19693 This description suggests a strategy for implementing a rudimentary compiler: We traverse the
19694 expression in the same way the interpreter does. When we encounter a register instruction that the
19695 interpreter would perform in evaluating the expression, we do not execute the instruction but instead
19696 accumulate it into a sequence. The resulting sequence of instructions will be the object code. Observe
19697 the efficiency advantage of compilation over interpretation. Each time the interpreter evaluates an
19698 expression -- for example, (f 84 96) -- it performs the work of classifying the expression
19699 (discovering that this is a procedure application) and testing for the end of the operand list (discovering
19700 that there are two operands). With a compiler, the expression is analyzed only once, when the
19701 instruction sequence is generated at compile time. The object code produced by the compiler contains
19702 only the instructions that evaluate the operator and the two operands, assemble the argument list, and
19703 apply the procedure (in proc) to the arguments (in argl).
19704 This is the same kind of optimization we implemented in the analyzing evaluator of section 4.1.7. But
19705 there are further opportunities to gain efficiency in compiled code. As the interpreter runs, it follows a
19706 process that must be applicable to any expression in the language. In contrast, a given segment of
19707 compiled code is meant to execute some particular expression. This can make a big difference, for
19708 example in the use of the stack to save registers. When the interpreter evaluates an expression, it must
19709 be prepared for any contingency. Before evaluating a subexpression, the interpreter saves all registers
19710 that will be needed later, because the subexpression might require an arbitrary evaluation. A compiler,
19711 on the other hand, can exploit the structure of the particular expression it is processing to generate
19712 code that avoids unnecessary stack operations.
19713 As a case in point, consider the combination (f 84 96). Before the interpreter evaluates the
19714 operator of the combination, it prepares for this evaluation by saving the registers containing the
19715 operands and the environment, whose values will be needed later. The interpreter then evaluates the
19716 operator to obtain the result in val, restores the saved registers, and finally moves the result from val
19717 to proc. However, in the particular expression we are dealing with, the operator is the symbol f,
19718 whose evaluation is accomplished by the machine operation lookup-variable-value, which
19719 does not alter any registers. The compiler that we implement in this section will take advantage of this
19720 fact and generate code that evaluates the operator using the instruction
19721 (assign proc (op lookup-variable-value) (const f) (reg env))
19722 This code not only avoids the unnecessary saves and restores but also assigns the value of the lookup
19723 directly to proc, whereas the interpreter would obtain the result in val and then move this to proc.
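Schematically, the interpreter's handling of the same operator evaluation is bracketed by stack
traffic along these lines (a paraphrase of the ev-application code of section 5.4.1, not a
verbatim excerpt):
(save env)                       ; environment, needed later for the operands
(save unev)                      ; the operand list
<evaluate the operator, leaving the result in val>
(restore unev)
(restore env)
(assign proc (reg val))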
19724
19725 \fA compiler can also optimize access to the environment. Having analyzed the code, the compiler can
19726 in many cases know in which frame a particular variable will be located and access that frame directly,
19727 rather than performing the lookup-variable-value search. We will discuss how to implement
19728 such variable access in section 5.5.6. Until then, however, we will focus on the kind of register and
19729 stack optimizations described above. There are many other optimizations that can be performed by a
19730 compiler, such as coding primitive operations ‘‘in line’’ instead of using a general apply mechanism
19731 (see exercise 5.38); but we will not emphasize these here. Our main goal in this section is to illustrate
19732 the compilation process in a simplified (but still interesting) context.
19733
19734 5.5.1 Structure of the Compiler
19735 In section 4.1.7 we modified our original metacircular interpreter to separate analysis from execution.
19736 We analyzed each expression to produce an execution procedure that took an environment as argument
19737 and performed the required operations. In our compiler, we will do essentially the same analysis.
19738 Instead of producing execution procedures, however, we will generate sequences of instructions to be
19739 run by our register machine.
19740 The procedure compile is the top-level dispatch in the compiler. It corresponds to the eval
19741 procedure of section 4.1.1, the analyze procedure of section 4.1.7, and the eval-dispatch entry
point of the explicit-control evaluator in section 5.4.1. The compiler, like the interpreters, uses the
19743 expression-syntax procedures defined in section 4.1.2. 35 Compile performs a case analysis on the
19744 syntactic type of the expression to be compiled. For each type of expression, it dispatches to a
19745 specialized code generator:
19746 (define (compile exp target linkage)
19747 (cond ((self-evaluating? exp)
19748 (compile-self-evaluating exp target linkage))
19749 ((quoted? exp) (compile-quoted exp target linkage))
19750 ((variable? exp)
19751 (compile-variable exp target linkage))
19752 ((assignment? exp)
19753 (compile-assignment exp target linkage))
19754 ((definition? exp)
19755 (compile-definition exp target linkage))
19756 ((if? exp) (compile-if exp target linkage))
19757 ((lambda? exp) (compile-lambda exp target linkage))
19758 ((begin? exp)
19759 (compile-sequence (begin-actions exp)
19760 target
19761 linkage))
19762 ((cond? exp) (compile (cond->if exp) target linkage))
19763 ((application? exp)
19764 (compile-application exp target linkage))
19765 (else
19766 (error "Unknown expression type -- COMPILE" exp))))
19767
19768 \fTargets and linkages
19769 Compile and the code generators that it calls take two arguments in addition to the expression to
19770 compile. There is a target, which specifies the register in which the compiled code is to return the
19771 value of the expression. There is also a linkage descriptor, which describes how the code resulting
19772 from the compilation of the expression should proceed when it has finished its execution. The linkage
19773 descriptor can require that the code do one of the following three things:
19774 continue at the next instruction in sequence (this is specified by the linkage descriptor next),
19775 return from the procedure being compiled (this is specified by the linkage descriptor return), or
19776 jump to a named entry point (this is specified by using the designated label as the linkage
19777 descriptor).
19778 For example, compiling the expression 5 (which is self-evaluating) with a target of the val register
19779 and a linkage of next should produce the instruction
19780 (assign val (const 5))
19781 Compiling the same expression with a linkage of return should produce the instructions
19782 (assign val (const 5))
19783 (goto (reg continue))
19784 In the first case, execution will continue with the next instruction in the sequence. In the second case,
19785 we will return from a procedure call. In both cases, the value of the expression will be placed into the
19786 target val register.
19787
19788 Instruction sequences and stack usage
19789 Each code generator returns an instruction sequence containing the object code it has generated for the
19790 expression. Code generation for a compound expression is accomplished by combining the output
19791 from simpler code generators for component expressions, just as evaluation of a compound expression
19792 is accomplished by evaluating the component expressions.
19793 The simplest method for combining instruction sequences is a procedure called
19794 append-instruction-sequences. It takes as arguments any number of instruction sequences
19795 that are to be executed sequentially; it appends them and returns the combined sequence. That is, if
<seq1> and <seq2> are sequences of instructions, then evaluating
(append-instruction-sequences <seq1> <seq2>)
produces the sequence
<seq1>
<seq2>
19801 Whenever registers might need to be saved, the compiler’s code generators use preserving, which
19802 is a more subtle method for combining instruction sequences. Preserving takes three arguments: a
19803 set of registers and two instruction sequences that are to be executed sequentially. It appends the
19804 sequences in such a way that the contents of each register in the set is preserved over the execution of
19805
19806 \fthe first sequence, if this is needed for the execution of the second sequence. That is, if the first
19807 sequence modifies the register and the second sequence actually needs the register’s original contents,
19808 then preserving wraps a save and a restore of the register around the first sequence before
19809 appending the sequences. Otherwise, preserving simply returns the appended instruction
19810 sequences. Thus, for example,
(preserving (list <reg1> <reg2>) <seq1> <seq2>)
produces one of the following four sequences of instructions, depending on how <seq1> and <seq2>
use <reg1> and <reg2>:
<seq1>            (save <reg1>)       (save <reg2>)       (save <reg2>)
<seq2>            <seq1>              <seq1>              (save <reg1>)
                  (restore <reg1>)    (restore <reg2>)    <seq1>
                  <seq2>              <seq2>              (restore <reg1>)
                                                          (restore <reg2>)
                                                          <seq2>
19815 By using preserving to combine instruction sequences the compiler avoids unnecessary stack
19816 operations. This also isolates the details of whether or not to generate save and restore
19817 instructions within the preserving procedure, separating them from the concerns that arise in
19818 writing each of the individual code generators. In fact no save or restore instructions are
19819 explicitly produced by the code generators.
19820 In principle, we could represent an instruction sequence simply as a list of instructions.
19821 Append-instruction-sequences could then combine instruction sequences by performing an
19822 ordinary list append. However, preserving would then be a complex operation, because it would
19823 have to analyze each instruction sequence to determine how the sequence uses its registers.
19824 Preserving would be inefficient as well as complex, because it would have to analyze each of its
19825 instruction sequence arguments, even though these sequences might themselves have been constructed
19826 by calls to preserving, in which case their parts would have already been analyzed. To avoid such
19827 repetitious analysis we will associate with each instruction sequence some information about its
19828 register use. When we construct a basic instruction sequence we will provide this information
19829 explicitly, and the procedures that combine instruction sequences will derive register-use information
19830 for the combined sequence from the information associated with the component sequences.
19831 An instruction sequence will contain three pieces of information:
19832 the set of registers that must be initialized before the instructions in the sequence are executed
19833 (these registers are said to be needed by the sequence),
19834 the set of registers whose values are modified by the instructions in the sequence, and
19835 the actual instructions (also called statements) in the sequence.
19836 We will represent an instruction sequence as a list of its three parts. The constructor for instruction
19837 sequences is thus
19838 (define (make-instruction-sequence needs modifies statements)
19839 (list needs modifies statements))
19840
19841 \fFor example, the two-instruction sequence that looks up the value of the variable x in the current
19842 environment, assigns the result to val, and then returns, requires registers env and continue to
19843 have been initialized, and modifies register val. This sequence would therefore be constructed as
19844 (make-instruction-sequence ’(env continue) ’(val)
19845 ’((assign val
19846 (op lookup-variable-value) (const x) (reg env))
19847 (goto (reg continue))))
19848 We sometimes need to construct an instruction sequence with no statements:
19849 (define (empty-instruction-sequence)
19850 (make-instruction-sequence ’() ’() ’()))
19851 The procedures for combining instruction sequences are shown in section 5.5.4.
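As a concrete illustration (assuming the code generators of section 5.5.2 and the combiners of
section 5.5.4 are loaded), compiling a quoted constant produces just such a three-part list:
(compile '(quote a) 'val 'next)
; => (()                               ; needs no registers
;     (val)                            ; modifies val
;     ((assign val (const a))))        ; the statements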
19852 Exercise 5.31. In evaluating a procedure application, the explicit-control evaluator always saves and
19853 restores the env register around the evaluation of the operator, saves and restores env around the
19854 evaluation of each operand (except the final one), saves and restores argl around the evaluation of
19855 each operand, and saves and restores proc around the evaluation of the operand sequence. For each
19856 of the following combinations, say which of these save and restore operations are superfluous
19857 and thus could be eliminated by the compiler’s preserving mechanism:
19858 (f ’x ’y)
19859 ((f) ’x ’y)
19860 (f (g ’x) y)
19861 (f (g ’x) ’y)
19862 Exercise 5.32. Using the preserving mechanism, the compiler will avoid saving and restoring
19863 env around the evaluation of the operator of a combination in the case where the operator is a symbol.
19864 We could also build such optimizations into the evaluator. Indeed, the explicit-control evaluator of
19865 section 5.4 already performs a similar optimization, by treating combinations with no operands as a
19866 special case.
19867 a. Extend the explicit-control evaluator to recognize as a separate class of expressions combinations
19868 whose operator is a symbol, and to take advantage of this fact in evaluating such expressions.
19869 b. Alyssa P. Hacker suggests that by extending the evaluator to recognize more and more special cases
19870 we could incorporate all the compiler’s optimizations, and that this would eliminate the advantage of
19871 compilation altogether. What do you think of this idea?
19872
19873 5.5.2 Compiling Expressions
19874 In this section and the next we implement the code generators to which the compile procedure
19875 dispatches.
19876
19877 Compiling linkage code
19878 In general, the output of each code generator will end with instructions -- generated by the procedure
19879 compile-linkage -- that implement the required linkage. If the linkage is return then we must
19880 generate the instruction (goto (reg continue)). This needs the continue register and does
19881 not modify any registers. If the linkage is next, then we needn’t include any additional instructions.
19882
19883 \fOtherwise, the linkage is a label, and we generate a goto to that label, an instruction that does not
19884 need or modify any registers. 36
19885 (define (compile-linkage linkage)
19886 (cond ((eq? linkage ’return)
19887 (make-instruction-sequence ’(continue) ’()
19888 ’((goto (reg continue)))))
19889 ((eq? linkage ’next)
19890 (empty-instruction-sequence))
19891 (else
19892 (make-instruction-sequence ’() ’()
19893 ‘((goto (label ,linkage)))))))
19894 The linkage code is appended to an instruction sequence by preserving the continue register,
19895 since a return linkage will require the continue register: If the given instruction sequence
19896 modifies continue and the linkage code needs it, continue will be saved and restored.
19897 (define (end-with-linkage linkage instruction-sequence)
19898 (preserving ’(continue)
19899 instruction-sequence
19900 (compile-linkage linkage)))
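For instance, each kind of linkage descriptor yields the following small sequences (direct
evaluations of compile-linkage; the label name is arbitrary):
(compile-linkage 'return)
; => ((continue) () ((goto (reg continue))))
(compile-linkage 'next)
; => (() () ())
(compile-linkage 'after-call)
; => (() () ((goto (label after-call))))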
19901
19902 Compiling simple expressions
19903 The code generators for self-evaluating expressions, quotations, and variables construct instruction
19904 sequences that assign the required value to the target register and then proceed as specified by the
19905 linkage descriptor.
19906 (define (compile-self-evaluating exp target linkage)
19907 (end-with-linkage linkage
19908 (make-instruction-sequence ’() (list target)
19909 ‘((assign ,target (const ,exp))))))
19910 (define (compile-quoted exp target linkage)
19911 (end-with-linkage linkage
19912 (make-instruction-sequence ’() (list target)
19913 ‘((assign ,target (const ,(text-of-quotation exp)))))))
19914 (define (compile-variable exp target linkage)
19915 (end-with-linkage linkage
19916 (make-instruction-sequence ’(env) (list target)
19917 ‘((assign ,target
19918 (op lookup-variable-value)
19919 (const ,exp)
19920 (reg env))))))
19921 All these assignment instructions modify the target register, and the one that looks up a variable needs
19922 the env register.
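As a consistency check (assuming the full compiler is loaded), compiling a bare variable reference
with a return linkage reproduces exactly the two-instruction sequence constructed by hand in
section 5.5.1:
(compile 'x 'val 'return)
; => ((env continue)
;     (val)
;     ((assign val (op lookup-variable-value) (const x) (reg env))
;      (goto (reg continue))))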
19923 Assignments and definitions are handled much as they are in the interpreter. We recursively generate
19924 code that computes the value to be assigned to the variable, and append to it a two-instruction
19925 sequence that actually sets or defines the variable and assigns the value of the whole expression (the
19926 symbol ok) to the target register. The recursive compilation has target val and linkage next so that
19927 the code will put its result into val and continue with the code that is appended after it. The
19928
19929 \fappending is done preserving env, since the environment is needed for setting or defining the variable
19930 and the code for the variable value could be the compilation of a complex expression that might
19931 modify the registers in arbitrary ways.
19932 (define (compile-assignment exp target linkage)
19933 (let ((var (assignment-variable exp))
19934 (get-value-code
19935 (compile (assignment-value exp) ’val ’next)))
19936 (end-with-linkage linkage
19937 (preserving ’(env)
19938 get-value-code
19939 (make-instruction-sequence ’(env val) (list target)
19940 ‘((perform (op set-variable-value!)
19941 (const ,var)
19942 (reg val)
19943 (reg env))
19944 (assign ,target (const ok))))))))
19945 (define (compile-definition exp target linkage)
19946 (let ((var (definition-variable exp))
19947 (get-value-code
19948 (compile (definition-value exp) ’val ’next)))
19949 (end-with-linkage linkage
19950 (preserving ’(env)
19951 get-value-code
19952 (make-instruction-sequence ’(env val) (list target)
19953 ‘((perform (op define-variable!)
19954 (const ,var)
19955 (reg val)
19956 (reg env))
19957 (assign ,target (const ok))))))))
19958 The appended two-instruction sequence requires env and val and modifies the target. Note that
19959 although we preserve env for this sequence, we do not preserve val, because the
19960 get-value-code is designed to explicitly place its result in val for use by this sequence. (In fact,
19961 if we did preserve val, we would have a bug, because this would cause the previous contents of val
19962 to be restored right after the get-value-code is run.)
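For example (again assuming the full compiler), a simple definition compiles to the value code
followed by the two-instruction installation sequence; no save of env is generated, since
compiling the constant 5 does not modify env:
(compile '(define x 5) 'val 'next)
; => ((env)
;     (val)
;     ((assign val (const 5))
;      (perform (op define-variable!) (const x) (reg val) (reg env))
;      (assign val (const ok))))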
19963
19964 Compiling conditional expressions
19965 The code for an if expression compiled with a given target and linkage has the form
19966 <compilation of predicate, target val, linkage next>
19967 (test (op false?) (reg val))
19968 (branch (label false-branch))
19969 true-branch
19970 <compilation of consequent with given target and given linkage or after-if>
19971
19972 false-branch
19973 <compilation of alternative with given target and linkage>
19974 after-if
19975
19976 \fTo generate this code, we compile the predicate, consequent, and alternative, and combine the
19977 resulting code with instructions to test the predicate result and with newly generated labels to mark the
19978 true and false branches and the end of the conditional. 37 In this arrangement of code, we must branch
19979 around the true branch if the test is false. The only slight complication is in how the linkage for the
19980 true branch should be handled. If the linkage for the conditional is return or a label, then the true
19981 and false branches will both use this same linkage. If the linkage is next, the true branch ends with a
19982 jump around the code for the false branch to the label at the end of the conditional.
19983 (define (compile-if exp target linkage)
19984 (let ((t-branch (make-label ’true-branch))
19985 (f-branch (make-label ’false-branch))
19986 (after-if (make-label ’after-if)))
19987 (let ((consequent-linkage
19988 (if (eq? linkage ’next) after-if linkage)))
19989 (let ((p-code (compile (if-predicate exp) ’val ’next))
19990 (c-code
19991 (compile
19992 (if-consequent exp) target consequent-linkage))
19993 (a-code
19994 (compile (if-alternative exp) target linkage)))
19995 (preserving ’(env continue)
19996 p-code
19997 (append-instruction-sequences
19998 (make-instruction-sequence ’(val) ’()
19999 ‘((test (op false?) (reg val))
20000 (branch (label ,f-branch))))
20001 (parallel-instruction-sequences
20002 (append-instruction-sequences t-branch c-code)
20003 (append-instruction-sequences f-branch a-code))
20004 after-if))))))
20005 Env is preserved around the predicate code because it could be needed by the true and false branches,
20006 and continue is preserved because it could be needed by the linkage code in those branches. The
20007 code for the true and false branches (which are not executed sequentially) is appended using a special
20008 combiner parallel-instruction-sequences described in section 5.5.4.
Note that cond is a derived expression, so all that the compiler needs to do to handle it is to apply the
cond->if transformer (from section 4.1.2) and compile the resulting if expression.
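Thus, for example, these two calls generate the same instruction sequence (up to the names of the
generated labels, since make-label produces a fresh name on each call):
(compile '(cond ((null? x) '()) (else (car x))) 'val 'next)
(compile '(if (null? x) '() (car x)) 'val 'next)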
20011
20012 Compiling sequences
20013 The compilation of sequences (from procedure bodies or explicit begin expressions) parallels their
20014 evaluation. Each expression of the sequence is compiled -- the last expression with the linkage
20015 specified for the sequence, and the other expressions with linkage next (to execute the rest of the
20016 sequence). The instruction sequences for the individual expressions are appended to form a single
20017 instruction sequence, such that env (needed for the rest of the sequence) and continue (possibly
20018 needed for the linkage at the end of the sequence) are preserved.
20019 (define (compile-sequence seq target linkage)
20020 (if (last-exp? seq)
20021 (compile (first-exp seq) target linkage)
20022
20023 \f(preserving ’(env continue)
20024 (compile (first-exp seq) target ’next)
20025 (compile-sequence (rest-exps seq) target linkage))))
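A small illustration (assuming the full compiler): in a two-expression sequence the first expression
is compiled with linkage next, so its result is simply overwritten by the second:
(compile '(begin 1 2) 'val 'next)
; => (() (val) ((assign val (const 1)) (assign val (const 2))))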
20026
20027 Compiling lambda expressions
20028 Lambda expressions construct procedures. The object code for a lambda expression must have the
20029 form
20030 <construct procedure object and assign it to target register>
20031 <linkage>
20032 When we compile the lambda expression, we also generate the code for the procedure body.
20033 Although the body won’t be executed at the time of procedure construction, it is convenient to insert it
20034 into the object code right after the code for the lambda. If the linkage for the lambda expression is a
20035 label or return, this is fine. But if the linkage is next, we will need to skip around the code for the
20036 procedure body by using a linkage that jumps to a label that is inserted after the body. The object code
20037 thus has the form
20038 <construct procedure object and assign it to target register>
<code for given linkage> or (goto (label after-lambda))
20040 <compilation of procedure body>
20041 after-lambda
20042 Compile-lambda generates the code for constructing the procedure object followed by the code for
20043 the procedure body. The procedure object will be constructed at run time by combining the current
20044 environment (the environment at the point of definition) with the entry point to the compiled
20045 procedure body (a newly generated label). 38
20046 (define (compile-lambda exp target linkage)
20047 (let ((proc-entry (make-label ’entry))
20048 (after-lambda (make-label ’after-lambda)))
20049 (let ((lambda-linkage
20050 (if (eq? linkage ’next) after-lambda linkage)))
20051 (append-instruction-sequences
20052 (tack-on-instruction-sequence
20053 (end-with-linkage lambda-linkage
20054 (make-instruction-sequence ’(env) (list target)
20055 ‘((assign ,target
20056 (op make-compiled-procedure)
20057 (label ,proc-entry)
20058 (reg env)))))
20059 (compile-lambda-body exp proc-entry))
20060 after-lambda))))
20061 Compile-lambda uses the special combiner tack-on-instruction-sequence
20062 (section 5.5.4) rather than append-instruction-sequences to append the procedure body to
20063 the lambda expression code, because the body is not part of the sequence of instructions that will be
20064 executed when the combined sequence is entered; rather, it is in the sequence only because that was a
20065 convenient place to put it.
20066
20067 \fCompile-lambda-body constructs the code for the body of the procedure. This code begins with a
20068 label for the entry point. Next come instructions that will cause the run-time evaluation environment to
20069 switch to the correct environment for evaluating the procedure body -- namely, the definition
20070 environment of the procedure, extended to include the bindings of the formal parameters to the
20071 arguments with which the procedure is called. After this comes the code for the sequence of
20072 expressions that makes up the procedure body. The sequence is compiled with linkage return and
20073 target val so that it will end by returning from the procedure with the procedure result in val.
20074 (define (compile-lambda-body exp proc-entry)
20075 (let ((formals (lambda-parameters exp)))
20076 (append-instruction-sequences
20077 (make-instruction-sequence ’(env proc argl) ’(env)
20078 ‘(,proc-entry
20079 (assign env (op compiled-procedure-env) (reg proc))
20080 (assign env
20081 (op extend-environment)
20082 (const ,formals)
20083 (reg argl)
20084 (reg env))))
20085 (compile-sequence (lambda-body exp) ’val ’return))))
20086
20087 5.5.3 Compiling Combinations
20088 The essence of the compilation process is the compilation of procedure applications. The code for a
20089 combination compiled with a given target and linkage has the form
20090 <compilation of operator, target proc, linkage next>
20091 <evaluate operands and construct argument list in argl>
20092 <compilation of procedure call with given target and linkage>
20093 The registers env, proc, and argl may have to be saved and restored during evaluation of the
20094 operator and operands. Note that this is the only place in the compiler where a target other than val is
20095 specified.
20096 The required code is generated by compile-application. This recursively compiles the
20097 operator, to produce code that puts the procedure to be applied into proc, and compiles the operands,
20098 to produce code that evaluates the individual operands of the application. The instruction sequences
20099 for the operands are combined (by construct-arglist) with code that constructs the list of
20100 arguments in argl, and the resulting argument-list code is combined with the procedure code and the
20101 code that performs the procedure call (produced by compile-procedure-call). In appending
20102 the code sequences, the env register must be preserved around the evaluation of the operator (since
20103 evaluating the operator might modify env, which will be needed to evaluate the operands), and the
20104 proc register must be preserved around the construction of the argument list (since evaluating the
20105 operands might modify proc, which will be needed for the actual procedure application). Continue
20106 must also be preserved throughout, since it is needed for the linkage in the procedure call.
20107 (define (compile-application exp target linkage)
20108 (let ((proc-code (compile (operator exp) ’proc ’next))
20109 (operand-codes
20110 (map (lambda (operand) (compile operand ’val ’next))
20111 (operands exp))))
20112
20113 \f(preserving ’(env continue)
20114 proc-code
20115 (preserving ’(proc continue)
20116 (construct-arglist operand-codes)
20117 (compile-procedure-call target linkage)))))
20118 The code to construct the argument list will evaluate each operand into val and then cons that value
20119 onto the argument list being accumulated in argl. Since we cons the arguments onto argl in
20120 sequence, we must start with the last argument and end with the first, so that the arguments will appear
20121 in order from first to last in the resulting list. Rather than waste an instruction by initializing argl to
20122 the empty list to set up for this sequence of evaluations, we make the first code sequence construct the
20123 initial argl. The general form of the argument-list construction is thus as follows:
20124 <compilation of last operand, targeted to val>
20125 (assign argl (op list) (reg val))
20126 <compilation of next operand, targeted to val>
20127 (assign argl (op cons) (reg val) (reg argl))
...
<compilation of first operand, targeted to val>
20129 (assign argl (op cons) (reg val) (reg argl))
20130 Argl must be preserved around each operand evaluation except the first (so that arguments
20131 accumulated so far won’t be lost), and env must be preserved around each operand evaluation except
20132 the last (for use by subsequent operand evaluations).
20133 Compiling this argument code is a bit tricky, because of the special treatment of the first operand to be
20134 evaluated and the need to preserve argl and env in different places. The construct-arglist
20135 procedure takes as arguments the code that evaluates the individual operands. If there are no operands
20136 at all, it simply emits the instruction
20137 (assign argl (const ()))
20138 Otherwise, construct-arglist creates code that initializes argl with the last argument, and
20139 appends code that evaluates the rest of the arguments and adjoins them to argl in succession. In
20140 order to process the arguments from last to first, we must reverse the list of operand code sequences
20141 from the order supplied by compile-application.
20142 (define (construct-arglist operand-codes)
20143 (let ((operand-codes (reverse operand-codes)))
20144 (if (null? operand-codes)
20145 (make-instruction-sequence ’() ’(argl)
20146 ’((assign argl (const ()))))
20147 (let ((code-to-get-last-arg
20148 (append-instruction-sequences
20149 (car operand-codes)
20150 (make-instruction-sequence ’(val) ’(argl)
20151 ’((assign argl (op list) (reg val)))))))
20152 (if (null? (cdr operand-codes))
20153 code-to-get-last-arg
20154 (preserving ’(env)
20155 code-to-get-last-arg
20156 (code-to-get-rest-args
20157 (cdr operand-codes))))))))
20158
20159 \f(define (code-to-get-rest-args operand-codes)
20160 (let ((code-for-next-arg
20161 (preserving ’(argl)
20162 (car operand-codes)
20163 (make-instruction-sequence ’(val argl) ’(argl)
20164 ’((assign argl
20165 (op cons) (reg val) (reg argl)))))))
20166 (if (null? (cdr operand-codes))
20167 code-for-next-arg
20168 (preserving ’(env)
20169 code-for-next-arg
20170 (code-to-get-rest-args (cdr operand-codes))))))
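To see the last-to-first processing concretely (assuming the full compiler, and using the
statements selector of section 5.5.4), two constant operands compile as follows:
(statements
 (construct-arglist
  (list (compile '1 'val 'next) (compile '2 'val 'next))))
; => ((assign val (const 2))
;     (assign argl (op list) (reg val))
;     (assign val (const 1))
;     (assign argl (op cons) (reg val) (reg argl)))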
20171
20172 Applying procedures
20173 After evaluating the elements of a combination, the compiled code must apply the procedure in proc
20174 to the arguments in argl. The code performs essentially the same dispatch as the apply procedure
20175 in the metacircular evaluator of section 4.1.1 or the apply-dispatch entry point in the
20176 explicit-control evaluator of section 5.4.1. It checks whether the procedure to be applied is a primitive
20177 procedure or a compiled procedure. For a primitive procedure, it uses
20178 apply-primitive-procedure; we will see shortly how it handles compiled procedures. The
20179 procedure-application code has the following form:
20180 (test (op primitive-procedure?) (reg proc))
20181 (branch (label primitive-branch))
20182 compiled-branch
20183 <code to apply compiled procedure with given target and appropriate linkage>
20184
20185 primitive-branch
20186 (assign <target>
20187 (op apply-primitive-procedure)
20188 (reg proc)
20189 (reg argl))
20190 <linkage>
20191 after-call
20192 Observe that the compiled branch must skip around the primitive branch. Therefore, if the linkage for
20193 the original procedure call was next, the compound branch must use a linkage that jumps to a label
20194 that is inserted after the primitive branch. (This is similar to the linkage used for the true branch in
20195 compile-if.)
20196 (define (compile-procedure-call target linkage)
20197 (let ((primitive-branch (make-label ’primitive-branch))
20198 (compiled-branch (make-label ’compiled-branch))
20199 (after-call (make-label ’after-call)))
20200 (let ((compiled-linkage
20201 (if (eq? linkage ’next) after-call linkage)))
20202 (append-instruction-sequences
20203 (make-instruction-sequence ’(proc) ’()
20204 ‘((test (op primitive-procedure?) (reg proc))
20205 (branch (label ,primitive-branch))))
20206 (parallel-instruction-sequences
20207
20208 \f(append-instruction-sequences
20209 compiled-branch
20210 (compile-proc-appl target compiled-linkage))
20211 (append-instruction-sequences
20212 primitive-branch
20213 (end-with-linkage linkage
20214 (make-instruction-sequence ’(proc argl)
20215 (list target)
20216 ‘((assign ,target
20217 (op apply-primitive-procedure)
20218 (reg proc)
20219 (reg argl)))))))
20220 after-call))))
20221 The primitive and compound branches, like the true and false branches in compile-if, are
20222 appended using parallel-instruction-sequences rather than the ordinary
20223 append-instruction-sequences, because they will not be executed sequentially.
20224
20225 Applying compiled procedures
20226 The code that handles procedure application is the most subtle part of the compiler, even though the
20227 instruction sequences it generates are very short. A compiled procedure (as constructed by
20228 compile-lambda) has an entry point, which is a label that designates where the code for the
20229 procedure starts. The code at this entry point computes a result in val and returns by executing the
20230 instruction (goto (reg continue)). Thus, we might expect the code for a compiled-procedure
20231 application (to be generated by compile-proc-appl) with a given target and linkage to look like
20232 this if the linkage is a label
20233 (assign continue (label proc-return))
20234 (assign val (op compiled-procedure-entry) (reg proc))
20235 (goto (reg val))
20236 proc-return
(assign <target> (reg val))    ; included if target is not val
(goto (label <linkage>))       ; linkage code
20241 or like this if the linkage is return.
20242 (save continue)
20243 (assign continue (label proc-return))
20244 (assign val (op compiled-procedure-entry) (reg proc))
20245 (goto (reg val))
20246 proc-return
(assign <target> (reg val))    ; included if target is not val
(restore continue)
(goto (reg continue))          ; linkage code
20252 This code sets up continue so that the procedure will return to a label proc-return and jumps to
20253 the procedure’s entry point. The code at proc-return transfers the procedure’s result from val to
20254 the target register (if necessary) and then jumps to the location specified by the linkage. (The linkage is
20255 always return or a label, because compile-procedure-call replaces a next linkage for the
20256 compound-procedure branch by an after-call label.)
20257
20258 \fIn fact, if the target is not val, that is exactly the code our compiler will generate. 39 Usually,
20259 however, the target is val (the only time the compiler specifies a different register is when targeting
20260 the evaluation of an operator to proc), so the procedure result is put directly into the target register
20261 and there is no need to return to a special location that copies it. Instead, we simplify the code by
20262 setting up continue so that the procedure will ‘‘return’’ directly to the place specified by the
20263 caller’s linkage:
20264 <set up continue for linkage>
20265 (assign val (op compiled-procedure-entry) (reg proc))
20266 (goto (reg val))
20267 If the linkage is a label, we set up continue so that the procedure will return to that label. (That is,
20268 the (goto (reg continue)) the procedure ends with becomes equivalent to the (goto
20269 (label <linkage>)) at proc-return above.)
20270 (assign continue (label <linkage>))
20271 (assign val (op compiled-procedure-entry) (reg proc))
20272 (goto (reg val))
20273 If the linkage is return, we don’t need to set up continue at all: It already holds the desired
20274 location. (That is, the (goto (reg continue)) the procedure ends with goes directly to the
20275 place where the (goto (reg continue)) at proc-return would have gone.)
20276 (assign val (op compiled-procedure-entry) (reg proc))
20277 (goto (reg val))
20278 With this implementation of the return linkage, the compiler generates tail-recursive code. Calling a
20279 procedure as the final step in a procedure body does a direct transfer, without saving any information
20280 on the stack.
20281 Suppose instead that we had handled the case of a procedure call with a linkage of return and a
20282 target of val as shown above for a non-val target. This would destroy tail recursion. Our system
20283 would still give the same value for any expression. But each time we called a procedure, we would
20284 save continue and return after the call to undo the (useless) save. These extra saves would
20285 accumulate during a nest of procedure calls. 40
20286 Compile-proc-appl generates the above procedure-application code by considering four cases,
20287 depending on whether the target for the call is val and whether the linkage is return. Observe that
20288 the instruction sequences are declared to modify all the registers, since executing the procedure body
20289 can change the registers in arbitrary ways. 41 Also note that the code sequence for the case with target
20290 val and linkage return is declared to need continue: Even though continue is not explicitly
20291 used in the two-instruction sequence, we must be sure that continue will have the correct value
20292 when we enter the compiled procedure.
20293 (define (compile-proc-appl target linkage)
20294 (cond ((and (eq? target ’val) (not (eq? linkage ’return)))
20295 (make-instruction-sequence ’(proc) all-regs
20296 ‘((assign continue (label ,linkage))
20297 (assign val (op compiled-procedure-entry)
20298 (reg proc))
20299 (goto (reg val)))))
20300 ((and (not (eq? target ’val))
20301
20302 \f(not (eq? linkage ’return)))
20303 (let ((proc-return (make-label ’proc-return)))
20304 (make-instruction-sequence ’(proc) all-regs
20305 ‘((assign continue (label ,proc-return))
20306 (assign val (op compiled-procedure-entry)
20307 (reg proc))
20308 (goto (reg val))
20309 ,proc-return
20310 (assign ,target (reg val))
20311 (goto (label ,linkage))))))
20312 ((and (eq? target ’val) (eq? linkage ’return))
20313 (make-instruction-sequence ’(proc continue) all-regs
20314 ’((assign val (op compiled-procedure-entry)
20315 (reg proc))
20316 (goto (reg val)))))
20317 ((and (not (eq? target ’val)) (eq? linkage ’return))
20318 (error "return linkage, target not val -- COMPILE"
20319 target))))
20320
20321 5.5.4 Combining Instruction Sequences
20322 This section describes the details on how instruction sequences are represented and combined. Recall
20323 from section 5.5.1 that an instruction sequence is represented as a list of the registers needed, the
20324 registers modified, and the actual instructions. We will also consider a label (symbol) to be a
20325 degenerate case of an instruction sequence, which doesn’t need or modify any registers. So to
20326 determine the registers needed and modified by instruction sequences we use the selectors
20327 (define (registers-needed s)
20328 (if (symbol? s) ’() (car s)))
20329 (define (registers-modified s)
20330 (if (symbol? s) ’() (cadr s)))
20331 (define (statements s)
20332 (if (symbol? s) (list s) (caddr s)))
20333 and to determine whether a given sequence needs or modifies a given register we use the predicates
(define (needs-register? seq reg)
  (memq reg (registers-needed seq)))
(define (modifies-register? seq reg)
  (memq reg (registers-modified seq)))
20343
20344 In terms of these predicates and selectors, we can implement the various instruction sequence
20345 combiners used throughout the compiler.
20346 The basic combiner is append-instruction-sequences. This takes as arguments an arbitrary
20347 number of instruction sequences that are to be executed sequentially and returns an instruction
20348 sequence whose statements are the statements of all the sequences appended together. The subtle point
20349 is to determine the registers that are needed and modified by the resulting sequence. It modifies those
20350 registers that are modified by any of the sequences; it needs those registers that must be initialized
20351 before the first sequence can be run (the registers needed by the first sequence), together with those
20352 registers needed by any of the other sequences that are not initialized (modified) by sequences
20353
20354 \fpreceding it.
20355 The sequences are appended two at a time by append-2-sequences. This takes two instruction
20356 sequences seq1 and seq2 and returns the instruction sequence whose statements are the statements
20357 of seq1 followed by the statements of seq2, whose modified registers are those registers that are
20358 modified by either seq1 or seq2, and whose needed registers are the registers needed by seq1
20359 together with those registers needed by seq2 that are not modified by seq1. (In terms of set
20360 operations, the new set of needed registers is the union of the set of registers needed by seq1 with the
20361 set difference of the registers needed by seq2 and the registers modified by seq1.) Thus,
20362 append-instruction-sequences is implemented as follows:
20363 (define (append-instruction-sequences . seqs)
20364 (define (append-2-sequences seq1 seq2)
20365 (make-instruction-sequence
20366 (list-union (registers-needed seq1)
20367 (list-difference (registers-needed seq2)
20368 (registers-modified seq1)))
20369 (list-union (registers-modified seq1)
20370 (registers-modified seq2))
20371 (append (statements seq1) (statements seq2))))
20372 (define (append-seq-list seqs)
20373 (if (null? seqs)
20374 (empty-instruction-sequence)
20375 (append-2-sequences (car seqs)
20376 (append-seq-list (cdr seqs)))))
20377 (append-seq-list seqs))
20378 This procedure uses some simple operations for manipulating sets represented as lists, similar to the
20379 (unordered) set representation described in section 2.3.3:
20380 (define (list-union s1 s2)
20381 (cond ((null? s1) s2)
20382 ((memq (car s1) s2) (list-union (cdr s1) s2))
20383 (else (cons (car s1) (list-union (cdr s1) s2)))))
20384 (define (list-difference s1 s2)
20385 (cond ((null? s1) ’())
20386 ((memq (car s1) s2) (list-difference (cdr s1) s2))
20387 (else (cons (car s1)
20388 (list-difference (cdr s1) s2)))))
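A couple of sample evaluations of these set operations:
(list-union '(env val) '(val proc))
; => (env val proc)
(list-difference '(env val) '(val))
; => (env)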
20389 Preserving, the second major instruction sequence combiner, takes a list of registers regs and
20390 two instruction sequences seq1 and seq2 that are to be executed sequentially. It returns an
20391 instruction sequence whose statements are the statements of seq1 followed by the statements of
20392 seq2, with appropriate save and restore instructions around seq1 to protect the registers in
20393 regs that are modified by seq1 but needed by seq2. To accomplish this, preserving first
20394 creates a sequence that has the required saves followed by the statements of seq1 followed by the
20395 required restores. This sequence needs the registers being saved and restored in addition to the
20396 registers needed by seq1, and modifies the registers modified by seq1 except for the ones being
20397 saved and restored. This augmented sequence and seq2 are then appended in the usual way. The
20398 following procedure implements this strategy recursively, walking down the list of registers to be
20399 preserved: 42
20400
20401 \f(define (preserving regs seq1 seq2)
20402 (if (null? regs)
20403 (append-instruction-sequences seq1 seq2)
20404 (let ((first-reg (car regs)))
20405 (if (and (needs-register? seq2 first-reg)
20406 (modifies-register? seq1 first-reg))
20407 (preserving (cdr regs)
20408 (make-instruction-sequence
20409 (list-union (list first-reg)
20410 (registers-needed seq1))
20411 (list-difference (registers-modified seq1)
20412 (list first-reg))
20413 (append ‘((save ,first-reg))
20414 (statements seq1)
20415 ‘((restore ,first-reg))))
20416 seq2)
20417 (preserving (cdr regs) seq1 seq2)))))
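For instance, preserving env around a hypothetical sequence that modifies it, when the following
sequence needs it, wraps the first sequence's statements in a save/restore pair (here
get-global-environment stands in for any operation that clobbers env):
(define clobbers-env ; modifies env
  (make-instruction-sequence '() '(env)
   '((assign env (op get-global-environment)))))
(define uses-env ; needs env
  (make-instruction-sequence '(env) '(val)
   '((assign val (op lookup-variable-value) (const y) (reg env)))))
(statements (preserving '(env) clobbers-env uses-env))
((save env)
 (assign env (op get-global-environment))
 (restore env)
 (assign val (op lookup-variable-value) (const y) (reg env)))
If uses-env did not need env, or clobbers-env did not modify it, preserving would simply append
the two sequences with no stack operations.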
20418 Another sequence combiner, tack-on-instruction-sequence, is used by
20419 compile-lambda to append a procedure body to another sequence. Because the procedure body is
20420 not ‘‘in line’’ to be executed as part of the combined sequence, its register use has no impact on the
20421 register use of the sequence in which it is embedded. We thus ignore the procedure body’s sets of
20422 needed and modified registers when we tack it onto the other sequence.
20423 (define (tack-on-instruction-sequence seq body-seq)
20424 (make-instruction-sequence
20425 (registers-needed seq)
20426 (registers-modified seq)
20427 (append (statements seq) (statements body-seq))))
20428 Compile-if and compile-procedure-call use a special combiner called
20429 parallel-instruction-sequences to append the two alternative branches that follow a test.
20430 The two branches will never be executed sequentially; for any particular evaluation of the test, one
20431 branch or the other will be entered. Because of this, the registers needed by the second branch are still
20432 needed by the combined sequence, even if these are modified by the first branch.
20433 (define (parallel-instruction-sequences seq1 seq2)
20434 (make-instruction-sequence
20435 (list-union (registers-needed seq1)
20436 (registers-needed seq2))
20437 (list-union (registers-modified seq1)
20438 (registers-modified seq2))
20439 (append (statements seq1) (statements seq2))))
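To see how this differs from sequential appending, recall the hypothetical sequences seq-a and
seq-b from the illustration above: seq-a modifies val and seq-b needs it. Combined as parallel
branches, the result still needs val, because on the path that enters seq-b the assignment in
seq-a will not have happened:
(registers-needed (parallel-instruction-sequences seq-a seq-b))
(env val)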
20440
20441 5.5.5 An Example of Compiled Code
20442 Now that we have seen all the elements of the compiler, let us examine an example of compiled code
20443 to see how things fit together. We will compile the definition of a recursive factorial procedure by
20444 calling compile:
20445
20446 \f(compile
20447 ’(define (factorial n)
20448 (if (= n 1)
20449 1
20450 (* (factorial (- n 1)) n)))
20451 ’val
20452 ’next)
20453 We have specified that the value of the define expression should be placed in the val register. We
20454 don’t care what the compiled code does after executing the define, so our choice of next as the
20455 linkage descriptor is arbitrary.
20456 Compile determines that the expression is a definition, so it calls compile-definition to
20457 compile code to compute the value to be assigned (targeted to val), followed by code to install the
20458 definition, followed by code to put the value of the define (which is the symbol ok) into the target
20459 register, followed finally by the linkage code. Env is preserved around the computation of the value,
20460 because it is needed in order to install the definition. Because the linkage is next, there is no linkage
20461 code in this case. The skeleton of the compiled code is thus
20462 <save env if modified by code to compute value>
20463 <compilation of definition value, target val, linkage next>
20464 <restore env if saved above>
20465 (perform (op define-variable!)
20466 (const factorial)
20467 (reg val)
20468 (reg env))
20469 (assign val (const ok))
20470 The expression that is to be compiled to produce the value for the variable factorial is a lambda
20471 expression whose value is the procedure that computes factorials. Compile handles this by calling
20472 compile-lambda, which compiles the procedure body, labels it as a new entry point, and generates
20473 the instruction that will combine the procedure body at the new entry point with the run-time
20474 environment and assign the result to val. The sequence then skips around the compiled procedure
20475 code, which is inserted at this point. The procedure code itself begins by extending the procedure’s
20476 definition environment by a frame that binds the formal parameter n to the procedure argument. Then
20477 comes the actual procedure body. Since this code for the value of the variable doesn’t modify the env
20478 register, the optional save and restore shown above aren’t generated. (The procedure code at
20479 entry2 isn’t executed at this point, so its use of env is irrelevant.) Therefore, the skeleton for the
20480 compiled code becomes
20481 (assign val (op make-compiled-procedure)
20482 (label entry2)
20483 (reg env))
20484 (goto (label after-lambda1))
20485 entry2
20486 (assign env (op compiled-procedure-env) (reg proc))
20487 (assign env (op extend-environment)
20488 (const (n))
20489 (reg argl)
20490 (reg env))
20491 <compilation of procedure body>
20492
20493 \fafter-lambda1
20494 (perform (op define-variable!)
20495 (const factorial)
20496 (reg val)
20497 (reg env))
20498 (assign val (const ok))
20499 A procedure body is always compiled (by compile-lambda-body) as a sequence with target val
20500 and linkage return. The sequence in this case consists of a single if expression:
20501 (if (= n 1)
20502 1
20503 (* (factorial (- n 1)) n))
20504 Compile-if generates code that first computes the predicate (targeted to val), then checks the
20505 result and branches around the true branch if the predicate is false. Env and continue are preserved
20506 around the predicate code, since they may be needed for the rest of the if expression. Since the if
20507 expression is the final expression (and only expression) in the sequence making up the procedure
20508 body, its target is val and its linkage is return, so the true and false branches are both compiled
20509 with target val and linkage return. (That is, the value of the conditional, which is the value
20510 computed by either of its branches, is the value of the procedure.)
20511 <save continue, env if modified by predicate and needed by branches>
20512 <compilation of predicate, target val, linkage next>
20513 <restore continue, env if saved above>
20514 (test (op false?) (reg val))
20515 (branch (label false-branch4))
20516 true-branch5
20517 <compilation of true branch, target val, linkage return>
20518 false-branch4
20519 <compilation of false branch, target val, linkage return>
20520 after-if3
20521
20522 The predicate (= n 1) is a procedure call. This looks up the operator (the symbol =) and places this
20523 value in proc. It then assembles the arguments 1 and the value of n into argl. Then it tests whether
20524 proc contains a primitive or a compound procedure, and dispatches to a primitive branch or a
20525 compound branch accordingly. Both branches resume at the after-call label. The requirements to
20526 preserve registers around the evaluation of the operator and operands don’t result in any saving of
20527 registers, because in this case those evaluations don’t modify the registers in question.
20528 (assign proc
20529 (op lookup-variable-value) (const =) (reg env))
20530 (assign val (const 1))
20531 (assign argl (op list) (reg val))
20532 (assign val (op lookup-variable-value) (const n) (reg env))
20533 (assign argl (op cons) (reg val) (reg argl))
20534 (test (op primitive-procedure?) (reg proc))
20535 (branch (label primitive-branch17))
20536 compiled-branch16
20537 (assign continue (label after-call15))
20538 (assign val (op compiled-procedure-entry) (reg proc))
20539
20540 \f(goto (reg val))
20541 primitive-branch17
20542 (assign val (op apply-primitive-procedure)
20543 (reg proc)
20544 (reg argl))
20545 after-call15
20546 The true branch, which is the constant 1, compiles (with target val and linkage return) to
20547 (assign val (const 1))
20548 (goto (reg continue))
The code for the false branch is another procedure call, where the procedure is the value of the
20550 symbol *, and the arguments are n and the result of another procedure call (a call to factorial).
20551 Each of these calls sets up proc and argl and its own primitive and compound branches.
20552 Figure 5.17 shows the complete compilation of the definition of the factorial procedure. Notice
20553 that the possible save and restore of continue and env around the predicate, shown above, are
20554 in fact generated, because these registers are modified by the procedure call in the predicate and
20555 needed for the procedure call and the return linkage in the branches.
20556 Exercise 5.33. Consider the following definition of a factorial procedure, which is slightly different
20557 from the one given above:
20558 (define (factorial-alt n)
20559 (if (= n 1)
20560 1
20561 (* n (factorial-alt (- n 1)))))
20562 Compile this procedure and compare the resulting code with that produced for factorial. Explain
20563 any differences you find. Does either program execute more efficiently than the other?
20564 Exercise 5.34. Compile the iterative factorial procedure
20565 (define (factorial n)
20566 (define (iter product counter)
20567 (if (> counter n)
20568 product
20569 (iter (* counter product)
20570 (+ counter 1))))
20571 (iter 1 1))
20572 Annotate the resulting code, showing the essential difference between the code for iterative and
20573 recursive versions of factorial that makes one process build up stack space and the other run in
20574 constant stack space.
20575
20576 \f;; construct the procedure and skip over code for the procedure body
20577 (assign val
20578 (op make-compiled-procedure) (label entry2) (reg env))
20579 (goto (label after-lambda1))
entry2 ; calls to factorial will enter here
20582 (assign env (op compiled-procedure-env) (reg proc))
20583 (assign env
20584 (op extend-environment) (const (n)) (reg argl) (reg env))
20585 ;; begin actual procedure body
20586 (save continue)
20587 (save env)
20588 ;; compute (= n 1)
20589 (assign proc (op lookup-variable-value) (const =) (reg env))
20590 (assign val (const 1))
20591 (assign argl (op list) (reg val))
20592 (assign val (op lookup-variable-value) (const n) (reg env))
20593 (assign argl (op cons) (reg val) (reg argl))
20594 (test (op primitive-procedure?) (reg proc))
20595 (branch (label primitive-branch17))
20596 compiled-branch16
20597 (assign continue (label after-call15))
20598 (assign val (op compiled-procedure-entry) (reg proc))
20599 (goto (reg val))
20600 primitive-branch17
20601 (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
after-call15 ; val now contains result of (= n 1)
20604 (restore env)
20605 (restore continue)
20606 (test (op false?) (reg val))
20607 (branch (label false-branch4))
20608 true-branch5 ; return 1
20609 (assign val (const 1))
20610 (goto (reg continue))
20611 false-branch4
20612 ;; compute and return (* (factorial (- n 1)) n)
20613 (assign proc (op lookup-variable-value) (const *) (reg env))
20614 (save continue)
(save proc) ; save * procedure
20617 (assign val (op lookup-variable-value) (const n) (reg env))
20618 (assign argl (op list) (reg val))
(save argl) ; save partial argument list for *
20621 ;; compute (factorial (- n 1)), which is the other argument for *
20622 (assign proc
20623 (op lookup-variable-value) (const factorial) (reg env))
20624 (save proc) ; save factorial procedure
Figure 5.17: Compilation of the definition of the factorial procedure (continued on next page).
20627
\f;; compute (- n 1), which is the argument for factorial
(assign proc (op lookup-variable-value) (const -) (reg env))
(assign val (const 1))
(assign argl (op list) (reg val))
(assign val (op lookup-variable-value) (const n) (reg env))
(assign argl (op cons) (reg val) (reg argl))
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch8))
compiled-branch7
(assign continue (label after-call6))
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch8
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
after-call6 ; val now contains result of (- n 1)
(assign argl (op list) (reg val))
(restore proc) ; restore factorial
;; apply factorial
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch11))
compiled-branch10
(assign continue (label after-call9))
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch11
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
after-call9 ; val now contains result of (factorial (- n 1))
(restore argl) ; restore partial argument list for *
(assign argl (op cons) (reg val) (reg argl))
(restore proc) ; restore *
(restore continue)
;; apply * and return its value
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch14))
compiled-branch13
;; note that a compound procedure here is called tail-recursively
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch14
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
(goto (reg continue))
after-call12
after-if3
after-lambda1
;; assign the procedure to the variable factorial
(perform
 (op define-variable!) (const factorial) (reg val) (reg env))
(assign val (const ok))
Figure 5.17: (continued)

Exercise 5.35. What expression was compiled to produce the code shown in figure 5.18?

\f(assign val
        (op make-compiled-procedure) (label entry16) (reg env))
(goto (label after-lambda15))
entry16
(assign env (op compiled-procedure-env) (reg proc))
(assign env
        (op extend-environment) (const (x)) (reg argl) (reg env))
(assign proc (op lookup-variable-value) (const +) (reg env))
(save continue)
(save proc)
(save env)
(assign proc (op lookup-variable-value) (const g) (reg env))
(save proc)
(assign proc (op lookup-variable-value) (const +) (reg env))
(assign val (const 2))
(assign argl (op list) (reg val))
(assign val (op lookup-variable-value) (const x) (reg env))
(assign argl (op cons) (reg val) (reg argl))
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch19))
compiled-branch18
(assign continue (label after-call17))
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch19
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
after-call17
(assign argl (op list) (reg val))
(restore proc)
(test (op primitive-procedure?) (reg proc))
(branch (label primitive-branch22))
compiled-branch21
(assign continue (label after-call20))
(assign val (op compiled-procedure-entry) (reg proc))
(goto (reg val))
primitive-branch22
(assign val (op apply-primitive-procedure) (reg proc) (reg argl))
Figure 5.18: An example of compiler output (continued on next page). See exercise 5.35.
20795
20796 \fafter-call20
20797 (assign argl (op list) (reg val))
20798 (restore env)
20799 (assign val (op lookup-variable-value) (const x) (reg env))
20800 (assign argl (op cons) (reg val) (reg argl))
20801 (restore proc)
20802 (restore continue)
20803 (test (op primitive-procedure?) (reg proc))
20804 (branch (label primitive-branch25))
20805 compiled-branch24
20806 (assign val (op compiled-procedure-entry) (reg proc))
20807 (goto (reg val))
20808 primitive-branch25
20809 (assign val (op apply-primitive-procedure) (reg proc) (reg argl))
20810 (goto (reg continue))
20811 after-call23
20812 after-lambda15
20813 (perform (op define-variable!) (const f) (reg val) (reg env))
20814 (assign val (const ok))
Figure 5.18: (continued)
20817
20818 Exercise 5.36. What order of evaluation does our compiler produce for operands of a combination? Is
20819 it left-to-right, right-to-left, or some other order? Where in the compiler is this order determined?
20820 Modify the compiler so that it produces some other order of evaluation. (See the discussion of order of
20821 evaluation for the explicit-control evaluator in section 5.4.1.) How does changing the order of operand
20822 evaluation affect the efficiency of the code that constructs the argument list?
20823 Exercise 5.37. One way to understand the compiler’s preserving mechanism for optimizing stack
20824 usage is to see what extra operations would be generated if we did not use this idea. Modify
20825 preserving so that it always generates the save and restore operations. Compile some simple
20826 expressions and identify the unnecessary stack operations that are generated. Compare the code to that
20827 generated with the preserving mechanism intact.
20828 Exercise 5.38. Our compiler is clever about avoiding unnecessary stack operations, but it is not clever
20829 at all when it comes to compiling calls to the primitive procedures of the language in terms of the
20830 primitive operations supplied by the machine. For example, consider how much code is compiled to
20831 compute (+ a 1): The code sets up an argument list in argl, puts the primitive addition procedure
20832 (which it finds by looking up the symbol + in the environment) into proc, and tests whether the
20833 procedure is primitive or compound. The compiler always generates code to perform the test, as well
20834 as code for primitive and compound branches (only one of which will be executed). We have not
20835 shown the part of the controller that implements primitives, but we presume that these instructions
20836 make use of primitive arithmetic operations in the machine’s data paths. Consider how much less code
20837 would be generated if the compiler could open-code primitives -- that is, if it could generate code to
20838 directly use these primitive machine operations. The expression (+ a 1) might be compiled into
20839 something as simple as 43
20840
20841 \f(assign val (op lookup-variable-value) (const a) (reg env))
20842 (assign val (op +) (reg val) (const 1))
20843 In this exercise we will extend our compiler to support open coding of selected primitives.
20844 Special-purpose code will be generated for calls to these primitive procedures instead of the general
20845 procedure-application code. In order to support this, we will augment our machine with special
20846 argument registers arg1 and arg2. The primitive arithmetic operations of the machine will take their
20847 inputs from arg1 and arg2. The results may be put into val, arg1, or arg2.
20848 The compiler must be able to recognize the application of an open-coded primitive in the source
20849 program. We will augment the dispatch in the compile procedure to recognize the names of these
20850 primitives in addition to the reserved words (the special forms) it currently recognizes. 44 For each
20851 special form our compiler has a code generator. In this exercise we will construct a family of code
20852 generators for the open-coded primitives.
20853 a. The open-coded primitives, unlike the special forms, all need their operands evaluated. Write a
20854 code generator spread-arguments for use by all the open-coding code generators.
20855 Spread-arguments should take an operand list and compile the given operands targeted to
20856 successive argument registers. Note that an operand may contain a call to an open-coded primitive, so
20857 argument registers will have to be preserved during operand evaluation.
20858 b. For each of the primitive procedures =, *, -, and +, write a code generator that takes a combination
20859 with that operator, together with a target and a linkage descriptor, and produces code to spread the
20860 arguments into the registers and then perform the operation targeted to the given target with the given
20861 linkage. You need only handle expressions with two operands. Make compile dispatch to these code
20862 generators.
20863 c. Try your new compiler on the factorial example. Compare the resulting code with the result
20864 produced without open coding.
20865 d. Extend your code generators for + and * so that they can handle expressions with arbitrary
20866 numbers of operands. An expression with more than two operands will have to be compiled into a
20867 sequence of operations, each with only two inputs.
20868
20869 5.5.6 Lexical Addressing
20870 One of the most common optimizations performed by compilers is the optimization of variable lookup.
20871 Our compiler, as we have implemented it so far, generates code that uses the
20872 lookup-variable-value operation of the evaluator machine. This searches for a variable by
20873 comparing it with each variable that is currently bound, working frame by frame outward through the
20874 run-time environment. This search can be expensive if the frames are deeply nested or if there are
20875 many variables. For example, consider the problem of looking up the value of x while evaluating the
20876 expression (* x y z) in an application of the procedure that is returned by
20877 (let ((x 3) (y 4))
20878 (lambda (a b c d e)
20879 (let ((y (* a b x))
20880 (z (+ c d x)))
20881 (* x y z))))
20882
20883 \fSince a let expression is just syntactic sugar for a lambda combination, this expression is
20884 equivalent to
20885 ((lambda (x y)
20886 (lambda (a b c d e)
20887 ((lambda (y z) (* x y z))
20888 (* a b x)
20889 (+ c d x))))
20890 3
20891 4)
20892 Each time lookup-variable-value searches for x, it must determine that the symbol x is not
20893 eq? to y or z (in the first frame), nor to a, b, c, d, or e (in the second frame). We will assume, for the
20894 moment, that our programs do not use define -- that variables are bound only with lambda.
20895 Because our language is lexically scoped, the run-time environment for any expression will have a
20896 structure that parallels the lexical structure of the program in which the expression appears. 45 Thus,
20897 the compiler can know, when it analyzes the above expression, that each time the procedure is applied
20898 the variable x in (* x y z) will be found two frames out from the current frame and will be the
20899 first variable in that frame.
20900 We can exploit this fact by inventing a new kind of variable-lookup operation,
20901 lexical-address-lookup, that takes as arguments an environment and a lexical address that
20902 consists of two numbers: a frame number, which specifies how many frames to pass over, and a
20903 displacement number, which specifies how many variables to pass over in that frame.
20904 Lexical-address-lookup will produce the value of the variable stored at that lexical address
20905 relative to the current environment. If we add the lexical-address-lookup operation to our
20906 machine, we can make the compiler generate code that references variables using this operation, rather
20907 than lookup-variable-value. Similarly, our compiled code can use a new
20908 lexical-address-set! operation instead of set-variable-value!.
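As a rough sketch of what such an operation might look like -- not the solution to exercise 5.39
below -- suppose a lexical address is represented as a two-element list and frames are represented
as in section 4.1.3, where frame-values selects a frame's list of values:
(define (lexical-address-lookup address env)
  (let ((frame (list-ref env (car address))))        ; pass over (car address) frames
    (list-ref (frame-values frame) (cadr address)))) ; pass over (cadr address) variables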
20909 In order to generate such code, the compiler must be able to determine the lexical address of a variable
20910 it is about to compile a reference to. The lexical address of a variable in a program depends on where
20911 one is in the code. For example, in the following program, the address of x in expression <e1> is (2,0)
20912 -- two frames back and the first variable in the frame. At that point y is at address (0,0) and c is at
20913 address (1,2). In expression <e2>, x is at (1,0), y is at (1,1), and c is at (0,2).
20914 ((lambda (x y)
20915 (lambda (a b c d e)
20916 ((lambda (y z) <e1>)
20917 <e2>
20918 (+ c d x))))
20919 3
20920 4)
20921 One way for the compiler to produce code that uses lexical addressing is to maintain a data structure
20922 called a compile-time environment. This keeps track of which variables will be at which positions in
20923 which frames in the run-time environment when a particular variable-access operation is executed. The
20924 compile-time environment is a list of frames, each containing a list of variables. (There will of course
20925 be no values bound to the variables, since values are not computed at compile time.) The compile-time
20926 environment becomes an additional argument to compile and is passed along to each code
20927 generator. The top-level call to compile uses an empty compile-time environment. When a lambda
20928
20929 \fbody is compiled, compile-lambda-body extends the compile-time environment by a frame
20930 containing the procedure’s parameters, so that the sequence making up the body is compiled with that
20931 extended environment. At each point in the compilation, compile-variable and
20932 compile-assignment use the compile-time environment in order to generate the appropriate
20933 lexical addresses.
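Since the compile-time environment is just a list of frames of names, the extension step performed
by compile-lambda-body can be as simple as the following sketch (extend-ct-environment is a
hypothetical helper name; exercise 5.40 asks you to work this into the compiler):
(define (extend-ct-environment formals ct-env)
  (cons formals ct-env))
(extend-ct-environment '(y z) '((a b c d e) (x y)))
((y z) (a b c d e) (x y))
The result is the compile-time environment in effect during the compilation of <e1> in the example
of exercise 5.41 below.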
20934 Exercises 5.39 through 5.43 describe how to complete this sketch of the lexical-addressing strategy in
20935 order to incorporate lexical lookup into the compiler. Exercise 5.44 describes another use for the
20936 compile-time environment.
20937 Exercise 5.39. Write a procedure lexical-address-lookup that implements the new lookup
20938 operation. It should take two arguments -- a lexical address and a run-time environment -- and return
20939 the value of the variable stored at the specified lexical address. Lexical-address-lookup
20940 should signal an error if the value of the variable is the symbol *unassigned*. 46 Also write a
20941 procedure lexical-address-set! that implements the operation that changes the value of the
20942 variable at a specified lexical address.
20943 Exercise 5.40. Modify the compiler to maintain the compile-time environment as described above.
20944 That is, add a compile-time-environment argument to compile and the various code generators, and
20945 extend it in compile-lambda-body.
20946 Exercise 5.41. Write a procedure find-variable that takes as arguments a variable and a
20947 compile-time environment and returns the lexical address of the variable with respect to that
20948 environment. For example, in the program fragment that is shown above, the compile-time
20949 environment during the compilation of expression <e1> is ((y z) (a b c d e) (x y)).
20950 Find-variable should produce
20951 (find-variable ’c ’((y z) (a b c d e) (x y)))
20952 (1 2)
20953 (find-variable ’x ’((y z) (a b c d e) (x y)))
20954 (2 0)
20955 (find-variable ’w ’((y z) (a b c d e) (x y)))
20956 not-found
20957 Exercise 5.42. Using find-variable from exercise 5.41, rewrite compile-variable and
20958 compile-assignment to output lexical-address instructions. In cases where find-variable
20959 returns not-found (that is, where the variable is not in the compile-time environment), you should
20960 have the code generators use the evaluator operations, as before, to search for the binding. (The only
20961 place a variable that is not found at compile time can be is in the global environment, which is part of
20962 the run-time environment but is not part of the compile-time environment. 47 Thus, if you wish, you
20963 may have the evaluator operations look directly in the global environment, which can be obtained with
20964 the operation (op get-global-environment), instead of having them search the whole
20965 run-time environment found in env.) Test the modified compiler on a few simple cases, such as the
20966 nested lambda combination at the beginning of this section.
20967 Exercise 5.43. We argued in section 4.1.6 that internal definitions for block structure should not be
20968 considered ‘‘real’’ defines. Rather, a procedure body should be interpreted as if the internal
20969 variables being defined were installed as ordinary lambda variables initialized to their correct values
20970 using set!. Section 4.1.6 and exercise 4.16 showed how to modify the metacircular interpreter to
20971 accomplish this by scanning out internal definitions. Modify the compiler to perform the same
20972 transformation before it compiles a procedure body.
20973
20974 \fExercise 5.44. In this section we have focused on the use of the compile-time environment to produce
20975 lexical addresses. But there are other uses for compile-time environments. For instance, in
20976 exercise 5.38 we increased the efficiency of compiled code by open-coding primitive procedures. Our
20977 implementation treated the names of open-coded procedures as reserved words. If a program were to
20978 rebind such a name, the mechanism described in exercise 5.38 would still open-code it as a primitive,
20979 ignoring the new binding. For example, consider the procedure
20980 (lambda (+ * a b x y)
20981 (+ (* a x) (* b y)))
20982 which computes a linear combination of x and y. We might call it with arguments +matrix,
20983 *matrix, and four matrices, but the open-coding compiler would still open-code the + and the * in
20984 (+ (* a x) (* b y)) as primitive + and *. Modify the open-coding compiler to consult the
20985 compile-time environment in order to compile the correct code for expressions involving the names of
20986 primitive procedures. (The code will work correctly as long as the program does not define or set!
20987 these names.)
20988
20989 5.5.7 Interfacing Compiled Code to the Evaluator
20990 We have not yet explained how to load compiled code into the evaluator machine or how to run it. We
20991 will assume that the explicit-control-evaluator machine has been defined as in section 5.4.4, with the
20992 additional operations specified in footnote 38. We will implement a procedure compile-and-go
20993 that compiles a Scheme expression, loads the resulting object code into the evaluator machine, and
20994 causes the machine to run the code in the evaluator global environment, print the result, and enter the
20995 evaluator’s driver loop. We will also modify the evaluator so that interpreted expressions can call
20996 compiled procedures as well as interpreted ones. We can then put a compiled procedure into the
20997 machine and use the evaluator to call it:
20998 (compile-and-go
20999 ’(define (factorial n)
21000 (if (= n 1)
21001 1
21002 (* (factorial (- n 1)) n))))
21003 ;;; EC-Eval value:
21004 ok
21005 ;;; EC-Eval input:
21006 (factorial 5)
21007 ;;; EC-Eval value:
21008 120
21009 To allow the evaluator to handle compiled procedures (for example, to evaluate the call to
21010 factorial above), we need to change the code at apply-dispatch (section 5.4.1) so that it
21011 recognizes compiled procedures (as distinct from compound or primitive procedures) and transfers
21012 control directly to the entry point of the compiled code: 48
21013 apply-dispatch
21014 (test (op primitive-procedure?) (reg proc))
21015 (branch (label primitive-apply))
21016 (test (op compound-procedure?) (reg proc))
21017 (branch (label compound-apply))
21018 (test (op compiled-procedure?) (reg proc))
21019
21020 \f(branch (label compiled-apply))
21021 (goto (label unknown-procedure-type))
21022 compiled-apply
21023 (restore continue)
21024 (assign val (op compiled-procedure-entry) (reg proc))
21025 (goto (reg val))
21026 Note the restore of continue at compiled-apply. Recall that the evaluator was arranged so that
21027 at apply-dispatch, the continuation would be at the top of the stack. The compiled code entry
21028 point, on the other hand, expects the continuation to be in continue, so continue must be
21029 restored before the compiled code is executed.
21030 To enable us to run some compiled code when we start the evaluator machine, we add a branch
21031 instruction at the beginning of the evaluator machine, which causes the machine to go to a new entry
21032 point if the flag register is set. 49
(branch (label external-entry))              ; branches if flag is set
read-eval-print-loop
(perform (op initialize-stack))
...
21040 External-entry assumes that the machine is started with val containing the location of an
21041 instruction sequence that puts a result into val and ends with (goto (reg continue)). Starting
21042 at this entry point jumps to the location designated by val, but first assigns continue so that
21043 execution will return to print-result, which prints the value in val and then goes to the
21044 beginning of the evaluator’s read-eval-print loop. 50
21045 external-entry
21046 (perform (op initialize-stack))
21047 (assign env (op get-global-environment))
21048 (assign continue (label print-result))
21049 (goto (reg val))
21050 Now we can use the following procedure to compile a procedure definition, execute the compiled
21051 code, and run the read-eval-print loop so we can try the procedure. Because we want the compiled
21052 code to return to the location in continue with its result in val, we compile the expression with a
21053 target of val and a linkage of return. In order to transform the object code produced by the
21054 compiler into executable instructions for the evaluator register machine, we use the procedure
21055 assemble from the register-machine simulator (section 5.2.2). We then initialize the val register to
21056 point to the list of instructions, set the flag so that the evaluator will go to external-entry, and
21057 start the evaluator.
21058 (define (compile-and-go expression)
21059 (let ((instructions
21060 (assemble (statements
21061 (compile expression ’val ’return))
21062 eceval)))
21063 (set! the-global-environment (setup-environment))
21064 (set-register-contents! eceval ’val instructions)
21065 (set-register-contents! eceval ’flag true)
21066 (start eceval)))
21067
21068 \fIf we have set up stack monitoring, as at the end of section 5.4.4, we can examine the stack usage of
21069 compiled code:
21070 (compile-and-go
21071 ’(define (factorial n)
21072 (if (= n 1)
21073 1
21074 (* (factorial (- n 1)) n))))
21075 (total-pushes = 0 maximum-depth = 0)
21076 ;;; EC-Eval value:
21077 ok
21078 ;;; EC-Eval input:
21079 (factorial 5)
21080 (total-pushes = 31 maximum-depth = 14)
21081 ;;; EC-Eval value:
21082 120
21083 Compare this example with the evaluation of (factorial 5) using the interpreted version of the
21084 same procedure, shown at the end of section 5.4.4. The interpreted version required 144 pushes and a
21085 maximum stack depth of 28. This illustrates the optimization that results from our compilation
21086 strategy.
21087
21088 Interpretation and compilation
21089 With the programs in this section, we can now experiment with the alternative execution strategies of
21090 interpretation and compilation. 51 An interpreter raises the machine to the level of the user program; a
21091 compiler lowers the user program to the level of the machine language. We can regard the Scheme
21092 language (or any programming language) as a coherent family of abstractions erected on the machine
21093 language. Interpreters are good for interactive program development and debugging because the steps
21094 of program execution are organized in terms of these abstractions, and are therefore more intelligible
21095 to the programmer. Compiled code can execute faster, because the steps of program execution are
21096 organized in terms of the machine language, and the compiler is free to make optimizations that cut
21097 across the higher-level abstractions. 52
21098 The alternatives of interpretation and compilation also lead to different strategies for porting languages
21099 to new computers. Suppose that we wish to implement Lisp for a new machine. One strategy is to
21100 begin with the explicit-control evaluator of section 5.4 and translate its instructions to instructions for
21101 the new machine. A different strategy is to begin with the compiler and change the code generators so
21102 that they generate code for the new machine. The second strategy allows us to run any Lisp program
21103 on the new machine by first compiling it with the compiler running on our original Lisp system, and
21104 linking it with a compiled version of the run-time library. 53 Better yet, we can compile the compiler
21105 itself, and run this on the new machine to compile other Lisp programs. 54 Or we can compile one of
21106 the interpreters of section 4.1 to produce an interpreter that runs on the new machine.
21107 Exercise 5.45. By comparing the stack operations used by compiled code to the stack operations used
21108 by the evaluator for the same computation, we can determine the extent to which the compiler
21109 optimizes use of the stack, both in speed (reducing the total number of stack operations) and in space
21110 (reducing the maximum stack depth). Comparing this optimized stack use to the performance of a
21111 special-purpose machine for the same computation gives some indication of the quality of the
21112 compiler.
21113
21114 \fa. Exercise 5.27 asked you to determine, as a function of n, the number of pushes and the maximum
21115 stack depth needed by the evaluator to compute n! using the recursive factorial procedure given above.
21116 Exercise 5.14 asked you to do the same measurements for the special-purpose factorial machine shown
21117 in figure 5.11. Now perform the same analysis using the compiled factorial procedure.
21118 Take the ratio of the number of pushes in the compiled version to the number of pushes in the
21119 interpreted version, and do the same for the maximum stack depth. Since the number of operations and
21120 the stack depth used to compute n! are linear in n, these ratios should approach constants as n becomes
21121 large. What are these constants? Similarly, find the ratios of the stack usage in the special-purpose
21122 machine to the usage in the interpreted version.
21123 Compare the ratios for special-purpose versus interpreted code to the ratios for compiled versus
21124 interpreted code. You should find that the special-purpose machine does much better than the
21125 compiled code, since the hand-tailored controller code should be much better than what is produced by
21126 our rudimentary general-purpose compiler.
21127 b. Can you suggest improvements to the compiler that would help it generate code that would come
21128 closer in performance to the hand-tailored version?
21129 Exercise 5.46. Carry out an analysis like the one in exercise 5.45 to determine the effectiveness of
21130 compiling the tree-recursive Fibonacci procedure
21131 (define (fib n)
21132 (if (< n 2)
21133 n
21134 (+ (fib (- n 1)) (fib (- n 2)))))
21135 compared to the effectiveness of using the special-purpose Fibonacci machine of figure 5.12. (For
21136 measurement of the interpreted performance, see exercise 5.29.) For Fibonacci, the time resource used
21137 is not linear in n; hence the ratios of stack operations will not approach a limiting value that is
21138 independent of n.
21139 Exercise 5.47. This section described how to modify the explicit-control evaluator so that interpreted
21140 code can call compiled procedures. Show how to modify the compiler so that compiled procedures can
21141 call not only primitive procedures and compiled procedures, but interpreted procedures as well. This
21142 requires modifying compile-procedure-call to handle the case of compound (interpreted)
21143 procedures. Be sure to handle all the same target and linkage combinations as in
21144 compile-proc-appl. To do the actual procedure application, the code needs to jump to the
21145 evaluator’s compound-apply entry point. This label cannot be directly referenced in object code
21146 (since the assembler requires that all labels referenced by the code it is assembling be defined there),
21147 so we will add a register called compapp to the evaluator machine to hold this entry point, and add an
21148 instruction to initialize it:
21149 (assign compapp (label compound-apply))
(branch (label external-entry))              ; branches if flag is set
21152 read-eval-print-loop
21153 ...
21154 To test your code, start by defining a procedure f that calls a procedure g. Use compile-and-go to
21155 compile the definition of f and start the evaluator. Now, typing at the evaluator, define g and try to
21156 call f.
21157
21158 \fExercise 5.48. The compile-and-go interface implemented in this section is awkward, since the
21159 compiler can be called only once (when the evaluator machine is started). Augment the
21160 compiler-interpreter interface by providing a compile-and-run primitive that can be called from
21161 within the explicit-control evaluator as follows:
21162 ;;; EC-Eval input:
21163 (compile-and-run
21164 ’(define (factorial n)
21165 (if (= n 1)
21166 1
21167 (* (factorial (- n 1)) n))))
21168 ;;; EC-Eval value:
21169 ok
21170 ;;; EC-Eval input:
21171 (factorial 5)
21172 ;;; EC-Eval value:
21173 120
21174 Exercise 5.49. As an alternative to using the explicit-control evaluator’s read-eval-print loop, design a
21175 register machine that performs a read-compile-execute-print loop. That is, the machine should run a
21176 loop that reads an expression, compiles it, assembles and executes the resulting code, and prints the
21177 result. This is easy to run in our simulated setup, since we can arrange to call the procedures
21178 compile and assemble as ‘‘register-machine operations.’’
21179 Exercise 5.50. Use the compiler to compile the metacircular evaluator of section 4.1 and run this
21180 program using the register-machine simulator. (To compile more than one definition at a time, you can
21181 package the definitions in a begin.) The resulting interpreter will run very slowly because of the
21182 multiple levels of interpretation, but getting all the details to work is an instructive exercise.
21183 Exercise 5.51. Develop a rudimentary implementation of Scheme in C (or some other low-level
21184 language of your choice) by translating the explicit-control evaluator of section 5.4 into C. In order to
21185 run this code you will need to also provide appropriate storage-allocation routines and other run-time
21186 support.
21187 Exercise 5.52. As a counterpoint to exercise 5.51, modify the compiler so that it compiles Scheme
21188 procedures into sequences of C instructions. Compile the metacircular evaluator of section 4.1 to
21189 produce a Scheme interpreter written in C.
21190 33 This is a theoretical statement. We are not claiming that the evaluator’s data paths are a particularly
21191
21192 convenient or efficient set of data paths for a general-purpose computer. For example, they are not
21193 very good for implementing high-performance floating-point calculations or calculations that
21194 intensively manipulate bit vectors.
21195 34 Actually, the machine that runs compiled code can be simpler than the interpreter machine, because
21196
21197 we won’t use the exp and unev registers. The interpreter used these to hold pieces of unevaluated
21198 expressions. With the compiler, however, these expressions get built into the compiled code that the
21199 register machine will run. For the same reason, we don’t need the machine operations that deal with
21200 expression syntax. But compiled code will use a few additional machine operations (to represent
21201 compiled procedure objects) that didn’t appear in the explicit-control evaluator machine.
21202
21203 \f35 Notice, however, that our compiler is a Scheme program, and the syntax procedures that it uses to
21204
21205 manipulate expressions are the actual Scheme procedures used with the metacircular evaluator. For the
21206 explicit-control evaluator, in contrast, we assumed that equivalent syntax operations were available as
21207 operations for the register machine. (Of course, when we simulated the register machine in Scheme,
21208 we used the actual Scheme procedures in our register machine simulation.)
21209 36 This procedure uses a feature of Lisp called backquote (or quasiquote) that is handy for
21210
21211 constructing lists. Preceding a list with a backquote symbol is much like quoting it, except that
21212 anything in the list that is flagged with a comma is evaluated.
21213 For example, if the value of linkage is the symbol branch25, then the expression ‘((goto
21214 (label ,linkage))) evaluates to the list ((goto (label branch25))). Similarly, if the
21215 value of x is the list (a b c), then ‘(1 2 ,(car x)) evaluates to the list (1 2 a).
21216 37 We can’t just use the labels true-branch, false-branch, and after-if as shown above,
21217
21218 because there might be more than one if in the program. The compiler uses the procedure
21219 make-label to generate labels. Make-label takes a symbol as argument and returns a new
21220 symbol that begins with the given symbol. For example, successive calls to (make-label ’a)
21221 would return a1, a2, and so on. Make-label can be implemented similarly to the generation of
21222 unique variable names in the query language, as follows:
21223 (define label-counter 0)
21224 (define (new-label-number)
21225 (set! label-counter (+ 1 label-counter))
21226 label-counter)
21227 (define (make-label name)
21228 (string->symbol
21229 (string-append (symbol->string name)
21230 (number->string (new-label-number)))))
21231 38 We need machine operations to implement a data structure for representing compiled procedures,
21232
21233 analogous to the structure for compound procedures described in section 4.1.3:
21234 (define (make-compiled-procedure entry env)
21235 (list ’compiled-procedure entry env))
21236 (define (compiled-procedure? proc)
21237 (tagged-list? proc ’compiled-procedure))
21238 (define (compiled-procedure-entry c-proc) (cadr c-proc))
21239 (define (compiled-procedure-env c-proc) (caddr c-proc))
21240 39 Actually, we signal an error when the target is not val and the linkage is return, since the only
21241
21242 place we request return linkages is in compiling procedures, and our convention is that procedures
21243 return their values in val.
21244 40 Making a compiler generate tail-recursive code might seem like a straightforward idea. But most
21245
21246 compilers for common languages, including C and Pascal, do not do this, and therefore these
21247 languages cannot represent iterative processes in terms of procedure call alone. The difficulty with tail
21248 recursion in these languages is that their implementations use the stack to store procedure arguments
21249 and local variables as well as return addresses. The Scheme implementations described in this book
21250 store arguments and variables in memory to be garbage-collected. The reason for using the stack for
21251 variables and arguments is that it avoids the need for garbage collection in languages that would not
21252 otherwise require it, and is generally believed to be more efficient. Sophisticated Lisp compilers can,
21253
21254 \fin fact, use the stack for arguments without destroying tail recursion. (See Hanson 1990 for a
21255 description.) There is also some debate about whether stack allocation is actually more efficient than
21256 garbage collection in the first place, but the details seem to hinge on fine points of computer
21257 architecture. (See Appel 1987 and Miller and Rozas 1994 for opposing views on this issue.)
21258 41 The variable all-regs is bound to the list of names of all the registers:
21259
21260 (define all-regs ’(env proc val argl continue))
21261 42 Note that preserving calls append with three arguments. Though the definition of append
21262
21263 shown in this book accepts only two arguments, Scheme standardly provides an append procedure
21264 that takes an arbitrary number of arguments.
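For example, (append '((save env)) '((assign val (const 1))) '((restore env))) evaluates to the
list ((save env) (assign val (const 1)) (restore env)).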
21265 43 We have used the same symbol + here to denote both the source-language procedure and the
21266
21267 machine operation. In general there will not be a one-to-one correspondence between primitives of the
21268 source language and primitives of the machine.
21269 44 Making the primitives into reserved words is in general a bad idea, since a user cannot then rebind
21270
21271 these names to different procedures. Moreover, if we add reserved words to a compiler that is in use,
21272 existing programs that define procedures with these names will stop working. See exercise 5.44 for
21273 ideas on how to avoid this problem.
21274 45 This is not true if we allow internal definitions, unless we scan them out. See exercise 5.43.
21275 46 This is the modification to variable lookup required if we implement the scanning method to
21276
21277 eliminate internal definitions (exercise 5.43). We will need to eliminate these definitions in order for
21278 lexical addressing to work.
21279 47 Lexical addresses cannot be used to access variables in the global environment, because these
21280
21281 names can be defined and redefined interactively at any time. With internal definitions scanned out, as
21282 in exercise 5.43, the only definitions the compiler sees are those at top level, which act on the global
21283 environment. Compilation of a definition does not cause the defined name to be entered in the
21284 compile-time environment.
21285 48 Of course, compiled procedures as well as interpreted procedures are compound (nonprimitive).
21286
21287 For compatibility with the terminology used in the explicit-control evaluator, in this section we will
21288 use ‘‘compound’’ to mean interpreted (as opposed to compiled).
21289 49 Now that the evaluator machine starts with a branch, we must always initialize the flag register
21290
21291 before starting the evaluator machine. To start the machine at its ordinary read-eval-print loop, we
21292 could use
21293 (define (start-eceval)
21294 (set! the-global-environment (setup-environment))
21295 (set-register-contents! eceval ’flag false)
21296 (start eceval))
21297 50 Since a compiled procedure is an object that the system may try to print, we also modify the system
21298
21299 print operation user-print (from section 4.1.4) so that it will not attempt to print the components
21300 of a compiled procedure:
21301
21302 \f(define (user-print object)
21303 (cond ((compound-procedure? object)
21304 (display (list ’compound-procedure
21305 (procedure-parameters object)
21306 (procedure-body object)
21307 ’<procedure-env>)))
21308 ((compiled-procedure? object)
21309 (display ’<compiled-procedure>))
21310 (else (display object))))
21311 51 We can do even better by extending the compiler to allow compiled code to call interpreted
21312
21313 procedures. See exercise 5.47.
21314 52 Independent of the strategy of execution, we incur significant overhead if we insist that errors
21315
21316 encountered in execution of a user program be detected and signaled, rather than being allowed to kill
21317 the system or produce wrong answers. For example, an out-of-bounds array reference can be detected
21318 by checking the validity of the reference before performing it. The overhead of checking, however, can
21319 be many times the cost of the array reference itself, and a programmer should weigh speed against
21320 safety in determining whether such a check is desirable. A good compiler should be able to produce
21321 code with such checks, should avoid redundant checks, and should allow programmers to control the
21322 extent and type of error checking in the compiled code.
21323 Compilers for popular languages, such as C and C++, put hardly any error-checking operations into
21324 running code, so as to make things run as fast as possible. As a result, it falls to programmers to
21325 explicitly provide error checking. Unfortunately, people often neglect to do this, even in critical
21326 applications where speed is not a constraint. Their programs lead fast and dangerous lives. For
example, the notorious ‘‘Worm’’ that paralyzed the Internet in 1988 exploited the UNIX™ operating
21328 system’s failure to check whether the input buffer has overflowed in the finger daemon. (See Spafford
21329 1989.)
21330 53 Of course, with either the interpretation or the compilation strategy we must also implement for the
21331
21332 new machine storage allocation, input and output, and all the various operations that we took as
21333 ‘‘primitive’’ in our discussion of the evaluator and compiler. One strategy for minimizing work here is
21334 to write as many of these operations as possible in Lisp and then compile them for the new machine.
21335 Ultimately, everything reduces to a small kernel (such as garbage collection and the mechanism for
21336 applying actual machine primitives) that is hand-coded for the new machine.
21337 54 This strategy leads to amusing tests of correctness of the compiler, such as checking whether the
21338
21339 compilation of a program on the new machine, using the compiled compiler, is identical with the
21340 compilation of the program on the original Lisp system. Tracking down the source of differences is
21341 fun but often frustrating, because the results are extremely sensitive to minuscule details.
\f
21345
21346 References
21347 Abelson, Harold, Andrew Berlin, Jacob Katzenelson, William McAllister, Guillermo Rozas, Gerald
21348 Jay Sussman, and Jack Wisdom. 1992. The Supercomputer Toolkit: A general framework for
21349 special-purpose computing. International Journal of High-Speed Electronics 3(3):337-361.
21350 Allen, John. 1978. Anatomy of Lisp. New York: McGraw-Hill.
21351 ANSI X3.226-1994. American National Standard for Information Systems -- Programming Language
21352 -- Common Lisp.
21353 Appel, Andrew W. 1987. Garbage collection can be faster than stack allocation. Information
21354 Processing Letters 25(4):275-279.
21355 Backus, John. 1978. Can programming be liberated from the von Neumann style? Communications of
21356 the ACM 21(8):613-641.
21357 Baker, Henry G., Jr. 1978. List processing in real time on a serial computer. Communications of the
21358 ACM 21(4):280-293.
21359 Batali, John, Neil Mayle, Howard Shrobe, Gerald Jay Sussman, and Daniel Weise. 1982. The
21360 Scheme-81 architecture -- System and chip. In Proceedings of the MIT Conference on Advanced
21361 Research in VLSI, edited by Paul Penfield, Jr. Dedham, MA: Artech House.
21362 Borning, Alan. 1977. ThingLab -- An object-oriented system for building simulations using
21363 constraints. In Proceedings of the 5th International Joint Conference on Artificial Intelligence.
21364 Borodin, Alan, and Ian Munro. 1975. The Computational Complexity of Algebraic and Numeric
21365 Problems. New York: American Elsevier.
21366 Chaitin, Gregory J. 1975. Randomness and mathematical proof. Scientific American 232(5):47-52.
21367 Church, Alonzo. 1941. The Calculi of Lambda-Conversion. Princeton, N.J.: Princeton University
21368 Press.
21369 Clark, Keith L. 1978. Negation as failure. In Logic and Data Bases. New York: Plenum Press, pp.
21370 293-322.
21371 Clinger, William. 1982. Nondeterministic call by need is neither lazy nor by name. In Proceedings of
21372 the ACM Symposium on Lisp and Functional Programming, pp. 226-234.
21373 Clinger, William, and Jonathan Rees. 1991. Macros that work. In Proceedings of the 1991 ACM
21374 Conference on Principles of Programming Languages, pp. 155-162.
21375 Colmerauer A., H. Kanoui, R. Pasero, and P. Roussel. 1973. Un système de communication
21376 homme-machine en français. Technical report, Groupe Intelligence Artificielle, Université d’Aix
21377 Marseille, Luminy.
21378
21379 \fCormen, Thomas, Charles Leiserson, and Ronald Rivest. 1990. Introduction to Algorithms.
21380 Cambridge, MA: MIT Press.
21381 Darlington, John, Peter Henderson, and David Turner. 1982. Functional Programming and Its
21382 Applications. New York: Cambridge University Press.
21383 Dijkstra, Edsger W. 1968a. The structure of the ‘‘THE’’ multiprogramming system. Communications
21384 of the ACM 11(5):341-346.
21385 Dijkstra, Edsger W. 1968b. Cooperating sequential processes. In Programming Languages, edited by
21386 F. Genuys. New York: Academic Press, pp. 43-112.
21387 Dinesman, Howard P. 1968. Superior Mathematical Puzzles. New York: Simon and Schuster.
21388 deKleer, Johan, Jon Doyle, Guy Steele, and Gerald J. Sussman. 1977. AMORD: Explicit control of
21389 reasoning. In Proceedings of the ACM Symposium on Artificial Intelligence and Programming
21390 Languages, pp. 116-125.
21391 Doyle, Jon. 1979. A truth maintenance system. Artificial Intelligence 12:231-272.
21392 Feeley, Marc. 1986. Deux approches à l’implantation du langage Scheme [Two approaches to the
21393 implementation of the Scheme language]. Master’s thesis, Université de Montréal.
21394 Feeley, Marc, and Guy Lapalme. 1987. Using closures for code generation. Journal of Computer
21395 Languages 12(1):47-66.
21396 Feigenbaum, Edward, and Howard Shrobe. 1993. The Japanese National Fifth Generation Project:
21397 Introduction, survey, and evaluation. In Future Generation Computer Systems, vol. 9, pp. 105-117.
21398 Feller, William. 1957. An Introduction to Probability Theory and Its Applications, volume 1. New
21399 York: John Wiley & Sons.
21400 Fenichel, R., and J. Yochelson. 1969. A Lisp garbage collector for virtual memory computer systems.
21401 Communications of the ACM 12(11):611-612.
21402 Floyd, Robert. 1967. Nondeterministic algorithms. Journal of the ACM 14(4):636-644.
21403 Forbus, Kenneth D., and Johan deKleer. 1993. Building Problem Solvers. Cambridge, MA: MIT Press.
21404 Friedman, Daniel P., and David S. Wise. 1976. CONS should not evaluate its arguments. In Automata,
21405 Languages, and Programming: Third International Colloquium, edited by S. Michaelson and R.
21406 Milner, pp. 257-284.
21407 Friedman, Daniel P., Mitchell Wand, and Christopher T. Haynes. 1992. Essentials of Programming
21408 Languages. Cambridge, MA: MIT Press/McGraw-Hill.
21409 Gabriel, Richard P. 1988. The Why of Y. Lisp Pointers 2(2):15-25.
21410 Goldberg, Adele, and David Robson. 1983. Smalltalk-80: The Language and Its Implementation.
21411 Reading, MA: Addison-Wesley.
21412
21413 Gordon, Michael, Robin Milner, and Christopher Wadsworth. 1979. Edinburgh LCF. Lecture Notes in
21414 Computer Science, volume 78. New York: Springer-Verlag.
21415 Gray, Jim, and Andreas Reuter. 1993. Transaction Processing: Concepts and Techniques. San Mateo,
21416 CA: Morgan Kaufmann.
21417 Green, Cordell. 1969. Application of theorem proving to problem solving. In Proceedings of the
21418 International Joint Conference on Artificial Intelligence, pp. 219-240.
21419 Green, Cordell, and Bertram Raphael. 1968. The use of theorem-proving techniques in
21420 question-answering systems. In Proceedings of the ACM National Conference, pp. 169-181.
21421 Griss, Martin L. 1981. Portable Standard Lisp, a brief overview. Utah Symbolic Computation Group
21422 Operating Note 58, University of Utah.
21423 Guttag, John V. 1977. Abstract data types and the development of data structures. Communications of
21424 the ACM 20(6):397-404.
21425 Hamming, Richard W. 1980. Coding and Information Theory. Englewood Cliffs, N.J.: Prentice-Hall.
21426 Hanson, Christopher P. 1990. Efficient stack allocation for tail-recursive languages. In Proceedings of
21427 ACM Conference on Lisp and Functional Programming, pp. 106-118.
21428 Hanson, Christopher P. 1991. A syntactic closures macro facility. Lisp Pointers, 4(3).
21429 Hardy, Godfrey H. 1921. Srinivasa Ramanujan. Proceedings of the London Mathematical Society
21430 XIX(2).
21431 Hardy, Godfrey H., and E. M. Wright. 1960. An Introduction to the Theory of Numbers. 4th edition.
21432 New York: Oxford University Press.
21433 Havender, J. 1968. Avoiding deadlocks in multi-tasking systems. IBM Systems Journal 7(2):74-84.
21434 Hearn, Anthony C. 1969. Standard Lisp. Technical report AIM-90, Artificial Intelligence Project,
21435 Stanford University.
21436 Henderson, Peter. 1980. Functional Programming: Application and Implementation. Englewood
21437 Cliffs, N.J.: Prentice-Hall.
21438 Henderson, Peter. 1982. Functional Geometry. In Conference Record of the 1982 ACM Symposium on
21439 Lisp and Functional Programming, pp. 179-187.
21440 Hewitt, Carl E. 1969. PLANNER: A language for proving theorems in robots. In Proceedings of the
21441 International Joint Conference on Artificial Intelligence, pp. 295-301.
21442 Hewitt, Carl E. 1977. Viewing control structures as patterns of passing messages. Artificial
21443 Intelligence 8(3):323-364.
21444 Hoare, C. A. R. 1972. Proof of correctness of data representations. Acta Informatica 1(1).
21445 Hodges, Andrew. 1983. Alan Turing: The Enigma. New York: Simon and Schuster.
21446
21447 Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic
21448 Books.
21449 Hughes, R. J. M. 1990. Why functional programming matters. In Research Topics in Functional
21450 Programming, edited by David Turner. Reading, MA: Addison-Wesley, pp. 17-42.
21451 IEEE Std 1178-1990. 1990. IEEE Standard for the Scheme Programming Language.
21452 Ingerman, Peter, Edgar Irons, Kirk Sattley, and Wallace Feurzeig; assisted by M. Lind, Herbert
21453 Kanner, and Robert Floyd. 1960. THUNKS: A way of compiling procedure statements, with some
21454 comments on procedure declarations. Unpublished manuscript. (Also, private communication from
21455 Wallace Feurzeig.)
21456 Kaldewaij, Anne. 1990. Programming: The Derivation of Algorithms. New York: Prentice-Hall.
21457 Kohlbecker, Eugene Edmund, Jr. 1986. Syntactic extensions in the programming language Lisp. Ph.D.
21458 thesis, Indiana University.
21459 Knuth, Donald E. 1973. Fundamental Algorithms. Volume 1 of The Art of Computer Programming.
21460 2nd edition. Reading, MA: Addison-Wesley.
21461 Knuth, Donald E. 1981. Seminumerical Algorithms. Volume 2 of The Art of Computer Programming.
21462 2nd edition. Reading, MA: Addison-Wesley.
21463 Konopasek, Milos, and Sundaresan Jayaraman. 1984. The TK!Solver Book: A Guide to
21464 Problem-Solving in Science, Engineering, Business, and Education. Berkeley, CA:
21465 Osborne/McGraw-Hill.
21466 Kowalski, Robert. 1973. Predicate logic as a programming language. Technical report 70, Department
21467 of Computational Logic, School of Artificial Intelligence, University of Edinburgh.
21468 Kowalski, Robert. 1979. Logic for Problem Solving. New York: North-Holland.
21469 Lamport, Leslie. 1978. Time, clocks, and the ordering of events in a distributed system.
21470 Communications of the ACM 21(7):558-565.
21471 Lampson, Butler, J. J. Horning, R. London, J. G. Mitchell, and G. K. Popek. 1981. Report on the
21472 programming language Euclid. Technical report, Computer Systems Research Group, University of
21473 Toronto.
21474 Landin, Peter. 1965. A correspondence between Algol 60 and Church’s lambda notation: Part I.
21475 Communications of the ACM 8(2):89-101.
21476 Lieberman, Henry, and Carl E. Hewitt. 1983. A real-time garbage collector based on the lifetimes of
21477 objects. Communications of the ACM 26(6):419-429.
21478 Liskov, Barbara H., and Stephen N. Zilles. 1975. Specification techniques for data abstractions. IEEE
21479 Transactions on Software Engineering 1(1):7-19.
21480 McAllester, David Allen. 1978. A three-valued truth-maintenance system. Memo 473, MIT Artificial
21481 Intelligence Laboratory.
21482
21483 McAllester, David Allen. 1980. An outlook on truth maintenance. Memo 551, MIT Artificial
21484 Intelligence Laboratory.
21485 McCarthy, John. 1960. Recursive functions of symbolic expressions and their computation by
21486 machine. Communications of the ACM 3(4):184-195.
21487 McCarthy, John. 1967. A basis for a mathematical theory of computation. In Computer Programming
21488 and Formal Systems, edited by P. Braffort and D. Hirschberg. North-Holland.
21489 McCarthy, John. 1978. The history of Lisp. In Proceedings of the ACM SIGPLAN Conference on the
21490 History of Programming Languages.
21491 McCarthy, John, P. W. Abrahams, D. J. Edwards, T. P. Hart, and M. I. Levin. 1965. Lisp 1.5
21492 Programmer’s Manual. 2nd edition. Cambridge, MA: MIT Press.
21493 McDermott, Drew, and Gerald Jay Sussman. 1972. Conniver reference manual. Memo 259, MIT
21494 Artificial Intelligence Laboratory.
21495 Miller, Gary L. 1976. Riemann’s Hypothesis and tests for primality. Journal of Computer and System
21496 Sciences 13(3):300-317.
21497 Miller, James S., and Guillermo J. Rozas. 1994. Garbage collection is fast, but a stack is faster. Memo
21498 1462, MIT Artificial Intelligence Laboratory.
21499 Moon, David. 1978. MacLisp reference manual, Version 0. Technical report, MIT Laboratory for
21500 Computer Science.
21501 Moon, David, and Daniel Weinreb. 1981. Lisp machine manual. Technical report, MIT Artificial
21502 Intelligence Laboratory.
21503 Morris, J. H., Eric Schmidt, and Philip Wadler. 1980. Experience with an applicative string processing
21504 language. In Proceedings of the 7th Annual ACM SIGACT/SIGPLAN Symposium on the Principles of
21505 Programming Languages.
21506 Phillips, Hubert. 1934. The Sphinx Problem Book. London: Faber and Faber.
21507 Pitman, Kent. 1983. The revised MacLisp Manual (Saturday evening edition). Technical report 295,
21508 MIT Laboratory for Computer Science.
21509 Rabin, Michael O. 1980. Probabilistic algorithm for testing primality. Journal of Number Theory
21510 12:128-138.
21511 Raymond, Eric. 1993. The New Hacker’s Dictionary. 2nd edition. Cambridge, MA: MIT Press.
21512 Raynal, Michel. 1986. Algorithms for Mutual Exclusion. Cambridge, MA: MIT Press.
21513 Rees, Jonathan A., and Norman I. Adams IV. 1982. T: A dialect of Lisp or, lambda: The ultimate
21514 software tool. In Conference Record of the 1982 ACM Symposium on Lisp and Functional
21515 Programming, pp. 114-122.
21516 Rees, Jonathan, and William Clinger (eds). 1991. The revised⁴ report on the algorithmic language
21517 Scheme. Lisp Pointers, 4(3).
21518
21519 Rivest, Ronald, Adi Shamir, and Leonard Adleman. 1977. A method for obtaining digital signatures
21520 and public-key cryptosystems. Technical memo LCS/TM82, MIT Laboratory for Computer Science.
21521 Robinson, J. A. 1965. A machine-oriented logic based on the resolution principle. Journal of the ACM
21522 12(1):23-41.
21523 Robinson, J. A. 1983. Logic programming -- Past, present, and future. New Generation Computing
21524 1:107-124.
21525 Spafford, Eugene H. 1989. The Internet Worm: Crisis and aftermath. Communications of the ACM
21526 32(6):678-688.
21527 Steele, Guy Lewis, Jr. 1977. Debunking the ‘‘expensive procedure call’’ myth. In Proceedings of the
21528 National Conference of the ACM, pp. 153-162.
21529 Steele, Guy Lewis, Jr. 1982. An overview of Common Lisp. In Proceedings of the ACM Symposium
21530 on Lisp and Functional Programming, pp. 98-107.
21531 Steele, Guy Lewis, Jr. 1990. Common Lisp: The Language. 2nd edition. Digital Press.
21532 Steele, Guy Lewis, Jr., and Gerald Jay Sussman. 1975. Scheme: An interpreter for the extended
21533 lambda calculus. Memo 349, MIT Artificial Intelligence Laboratory.
21534 Steele, Guy Lewis, Jr., Donald R. Woods, Raphael A. Finkel, Mark R. Crispin, Richard M. Stallman,
21535 and Geoffrey S. Goodfellow. 1983. The Hacker’s Dictionary. New York: Harper & Row.
21536 Stoy, Joseph E. 1977. Denotational Semantics. Cambridge, MA: MIT Press.
21537 Sussman, Gerald Jay, and Richard M. Stallman. 1975. Heuristic techniques in computer-aided circuit
21538 analysis. IEEE Transactions on Circuits and Systems CAS-22(11):857-865.
21539 Sussman, Gerald Jay, and Guy Lewis Steele Jr. 1980. Constraints -- A language for expressing
21540 almost-hierarchical descriptions. Artificial Intelligence 14:1-39.
21541 Sussman, Gerald Jay, and Jack Wisdom. 1992. Chaotic evolution of the solar system. Science
21542 257:256-262.
21543 Sussman, Gerald Jay, Terry Winograd, and Eugene Charniak. 1971. Microplanner reference manual.
21544 Memo 203A, MIT Artificial Intelligence Laboratory.
21545 Sutherland, Ivan E. 1963. SKETCHPAD: A man-machine graphical communication system. Technical
21546 report 296, MIT Lincoln Laboratory.
21547 Teitelman, Warren. 1974. Interlisp reference manual. Technical report, Xerox Palo Alto Research
21548 Center.
21549 Thatcher, James W., Eric G. Wagner, and Jesse B. Wright. 1978. Data type specification:
21550 Parameterization and the power of specification techniques. In Conference Record of the Tenth
21551 Annual ACM Symposium on Theory of Computing, pp. 119-132.
21552 Turner, David. 1981. The future of applicative languages. In Proceedings of the 3rd European
21553 Conference on Informatics, Lecture Notes in Computer Science, volume 123. New York: Springer-Verlag, pp. 334-348.
21554
21555 Wand, Mitchell. 1980. Continuation-based program transformation strategies. Journal of the ACM
21556 27(1):164-180.
21557 Waters, Richard C. 1979. A method for analyzing loop programs. IEEE Transactions on Software
21558 Engineering 5(3):237-247.
21559 Winograd, Terry. 1971. Procedures as a representation for data in a computer program for
21560 understanding natural language. Technical report AI TR-17, MIT Artificial Intelligence Laboratory.
21561 Winston, Patrick. 1992. Artificial Intelligence. 3rd edition. Reading, MA: Addison-Wesley.
21562 Zabih, Ramin, David McAllester, and David Chapman. 1987. Non-deterministic Lisp with
21563 dependency-directed backtracking. AAAI-87, pp. 59-64.
21564 Zippel, Richard. 1979. Probabilistic algorithms for sparse polynomials. Ph.D. dissertation, Department
21565 of Electrical Engineering and Computer Science, MIT.
21566 Zippel, Richard. 1993. Effective Polynomial Computation. Boston, MA: Kluwer Academic Publishers.
21568
21570
21571 List of Exercises
21572 1.1
21573 1.2
21574 1.3
21575 1.4
21576 1.5
21577 1.6
21578 1.7
21579 1.8
21580 1.9
21581 1.10
21582 1.11
21583 1.12
21584 1.13
21585 1.14
21586 1.15
21587 1.16
21588 1.17
21589 1.18
21590 1.19
21591 1.20
21592 1.21
21593 1.22
21594 1.23
21595 1.24
21596 1.25
21597 1.26
21598 1.27
21599 1.28
21600 1.29
21601 1.30
21602 1.31
21603 1.32
21604 1.33
21605 1.34
21606 1.35
21607 1.36
21608 1.37
21609 1.38
21610 1.39
21611 1.40
21612 1.41
21613 1.42
21614 1.43
21615
21616 1.44
21617 1.45
21618 1.46
21619 2.1
21620 2.2
21621 2.3
21622 2.4
21623 2.5
21624 2.6
21625 2.7
21626 2.8
21627 2.9
21628 2.10
21629 2.11
21630 2.12
21631 2.13
21632 2.14
21633 2.15
21634 2.16
21635 2.17
21636 2.18
21637 2.19
21638 2.20
21639 2.21
21640 2.22
21641 2.23
21642 2.24
21643 2.25
21644 2.26
21645 2.27
21646 2.28
21647 2.29
21648 2.30
21649 2.31
21650 2.32
21651 2.33
21652 2.34
21653 2.35
21654 2.36
21655 2.37
21656 2.38
21657 2.39
21658 2.40
21659 2.41
21660 2.42
21661 2.43
21662 2.44
21663 2.45
21664 2.46
21665 2.47
21666
21667 2.48
21668 2.49
21669 2.50
21670 2.51
21671 2.52
21672 2.53
21673 2.54
21674 2.55
21675 2.56
21676 2.57
21677 2.58
21678 2.59
21679 2.60
21680 2.61
21681 2.62
21682 2.63
21683 2.64
21684 2.65
21685 2.66
21686 2.67
21687 2.68
21688 2.69
21689 2.70
21690 2.71
21691 2.72
21692 2.73
21693 2.74
21694 2.75
21695 2.76
21696 2.77
21697 2.78
21698 2.79
21699 2.80
21700 2.81
21701 2.82
21702 2.83
21703 2.84
21704 2.85
21705 2.86
21706 2.87
21707 2.88
21708 2.89
21709 2.90
21710 2.91
21711 2.92
21712 2.93
21713 2.94
21714 2.95
21715 2.96
21716 2.97
21717
21718 3.1
21719 3.2
21720 3.3
21721 3.4
21722 3.5
21723 3.6
21724 3.7
21725 3.8
21726 3.9
21727 3.10
21728 3.11
21729 3.12
21730 3.13
21731 3.14
21732 3.15
21733 3.16
21734 3.17
21735 3.18
21736 3.19
21737 3.20
21738 3.21
21739 3.22
21740 3.23
21741 3.24
21742 3.25
21743 3.26
21744 3.27
21745 3.28
21746 3.29
21747 3.30
21748 3.31
21749 3.32
21750 3.33
21751 3.34
21752 3.35
21753 3.36
21754 3.37
21755 3.38
21756 3.39
21757 3.40
21758 3.41
21759 3.42
21760 3.43
21761 3.44
21762 3.45
21763 3.46
21764 3.47
21765 3.48
21766 3.49
21767 3.50
21768
21769 3.51
21770 3.52
21771 3.53
21772 3.54
21773 3.55
21774 3.56
21775 3.57
21776 3.58
21777 3.59
21778 3.60
21779 3.61
21780 3.62
21781 3.63
21782 3.64
21783 3.65
21784 3.66
21785 3.67
21786 3.68
21787 3.69
21788 3.70
21789 3.71
21790 3.72
21791 3.73
21792 3.74
21793 3.75
21794 3.76
21795 3.77
21796 3.78
21797 3.79
21798 3.80
21799 3.81
21800 3.82
21801 4.1
21802 4.2
21803 4.3
21804 4.4
21805 4.5
21806 4.6
21807 4.7
21808 4.8
21809 4.9
21810 4.10
21811 4.11
21812 4.12
21813 4.13
21814 4.14
21815 4.15
21816 4.16
21817 4.17
21818 4.18
21819
21820 4.19
21821 4.20
21822 4.21
21823 4.22
21824 4.23
21825 4.24
21826 4.25
21827 4.26
21828 4.27
21829 4.28
21830 4.29
21831 4.30
21832 4.31
21833 4.32
21834 4.33
21835 4.34
21836 4.35
21837 4.36
21838 4.37
21839 4.38
21840 4.39
21841 4.40
21842 4.41
21843 4.42
21844 4.43
21845 4.44
21846 4.45
21847 4.46
21848 4.47
21849 4.48
21850 4.49
21851 4.50
21852 4.51
21853 4.52
21854 4.53
21855 4.54
21856 4.55
21857 4.56
21858 4.57
21859 4.58
21860 4.59
21861 4.60
21862 4.61
21863 4.62
21864 4.63
21865 4.64
21866 4.65
21867 4.66
21868 4.67
21869 4.68
21870
21871 4.69
21872 4.70
21873 4.71
21874 4.72
21875 4.73
21876 4.74
21877 4.75
21878 4.76
21879 4.77
21880 4.78
21881 4.79
21882 5.1
21883 5.2
21884 5.3
21885 5.4
21886 5.5
21887 5.6
21888 5.7
21889 5.8
21890 5.9
21891 5.10
21892 5.11
21893 5.12
21894 5.13
21895 5.14
21896 5.15
21897 5.16
21898 5.17
21899 5.18
21900 5.19
21901 5.20
21902 5.21
21903 5.22
21904 5.23
21905 5.24
21906 5.25
21907 5.26
21908 5.27
21909 5.28
21910 5.29
21911 5.30
21912 5.31
21913 5.32
21914 5.33
21915 5.34
21916 5.35
21917 5.36
21918 5.37
21919 5.38
21920 5.39
21921
21922 5.40
21923 5.41
21924 5.42
21925 5.43
21926 5.44
21927 5.45
21928 5.46
21929 5.47
21930 5.48
21931 5.49
21932 5.50
21933 5.51
21934 5.52
21936
21938
21939 Index
21940 Any inaccuracies in this index may be explained by the
21941 fact that it has been prepared with the help of a computer.
21942 Donald E. Knuth, Fundamental Algorithms (Volume 1 of
21943 The Art of Computer Programming)
21944 ! in names
21945 " (double quote)
21946 λ calculus, see lambda calculus
21947 ↦ notation for mathematical function
21948 π, see pi
21949 Σ (sigma) notation
21950 Θ, see theta
21951 ’ (single quote)
21952 read and, [2]
21953 * (primitive multiplication procedure)
21954 + (primitive addition procedure)
21955 , (comma, used with backquote)
21956 - (primitive subtraction procedure)
21957 as negation
21958 / (primitive division procedure)
21959 < (primitive numeric comparison predicate)
21960 = (primitive numeric equality predicate)
21961 =number?
21962 =zero? (generic)
21963 for polynomials
21964 > (primitive numeric comparison predicate)
21965 >=, [2]
21966 ?, in predicate names
21967 #f
21968 #t
21969 ‘ (backquote)
21970 ;, see semicolon
21971 Abelson, Harold
21972 abs, [2], [3]
21973 absolute value
21974 abstract data, see also data abstraction
21975 abstract models for data
21976 abstract syntax
21977 in metacircular evaluator
21978 in query interpreter
21979 abstraction, see also means of abstraction; data abstraction; higher-order procedures
21980
21981 common pattern and
21982 metalinguistic
21983 procedural
21984 in register-machine design
21985 of search in nondeterministic programming
21986 abstraction barriers, [2], [3]
21987 in complex-number system
21988 in generic arithmetic system
21989 accelerated-sequence
21990 accumulate, [2]
21991 same as fold-right
21992 accumulate-n
21993 accumulator, [2]
21994 Áchárya, Bháscara
21995 Ackermann’s function
21996 acquire a mutex
21997 actions, in register machine
21998 actual-value
21999 Ada
22000 recursive procedures
22001 Adams, Norman I., IV
22002 add (generic)
22003 used for polynomial coefficients, [2]
22004 add-action!, [2]
22005 add-binding-to-frame!
22006 add-complex
22007 add-complex-to-schemenum
22008 add-interval
22009 add-lists
22010 add-poly
22011 add-rat
22012 add-rule-or-assertion!
22013 add-streams
22014 add-terms
22015 add-to-agenda!, [2]
22016 add-vect
22017 addend
22018 adder
22019 full
22020 half
22021 ripple-carry
22022 adder (primitive constraint)
22023 additivity, [2], [3], [4]
22024 address
22025 address arithmetic
22026 Adleman, Leonard
22027 adjoin-arg
22028 adjoin-set
22029 binary-tree representation
22030 ordered-list representation
22031
22032 unordered-list representation
22033 for weighted sets
22034 adjoin-term, [2]
22035 advance-pc
22036 after-delay, [2]
22037 agenda, see digital-circuit simulation
22038 A’h-mose
22039 algebra, symbolic, see symbolic algebra
22040 algebraic expression
22041 differentiating
22042 representing
22043 simplifying
22044 algebraic specification for data
22045 Algol
22046 block structure
22047 call-by-name argument passing, [2]
22048 thunks, [2]
22049 weakness in handling compound objects
22050 algorithm
22051 optimal
22052 probabilistic, [2]
22053 aliasing
22054 all-regs (compiler)
22055 Allen, John
22056 alternative of if
22057 always-true
22058 amb
22059 amb evaluator, see nondeterministic evaluator
22060 ambeval
22061 an-element-of
22062 an-integer-starting-from
22063 analog computer
22064 analyze
22065 metacircular
22066 nondeterministic
22067 analyze-...
22068 metacircular, [2]
22069 nondeterministic
22070 analyze-amb
22071 analyzing evaluator
22072 as basis for nondeterministic evaluator
22073 let
22074 and (query language)
22075 evaluation of, [2], [3]
22076 and (special form)
22077 evaluation of
22078 why a special form
22079 with no subexpressions
22080 and-gate
22081 and-gate
22082
22083 angle
22084 data-directed
22085 polar representation
22086 rectangular representation
22087 with tagged data
22088 angle-polar
22089 angle-rectangular
22090 announce-output
22091 APL
22092 Appel, Andrew W.
22093 append, [2], [3]
22094 as accumulation
22095 append! vs.
22096 with arbitrary number of arguments
22097 as register machine
22098 ‘‘what is’’ (rules) vs. ‘‘how to’’ (procedure)
22099 append!
22100 as register machine
22101 append-instruction-sequences, [2]
22102 append-to-form (rules)
22103 application?
22104 applicative-order evaluation
22105 in Lisp
22106 normal order vs., [2], [3]
22107 apply (lazy)
22108 apply (metacircular)
22109 primitive apply vs.
22110 apply (primitive procedure)
22111 apply-dispatch
22112 modified for compiled code
22113 apply-generic
22114 with coercion, [2]
22115 with coercion by raising
22116 with coercion of multiple arguments
22117 with coercion to simplify
22118 with message passing
22119 with tower of types
22120 apply-primitive-procedure, [2], [3]
22121 apply-rules
22122 arbiter
22123 arctangent
22124 argl register
22125 argument passing, see call-by-name argument passing; call-by-need argument passing
22126 argument(s)
22127 arbitrary number of, [2]
22128 delayed
22129 Aristotle’s De caelo (Buridan’s commentary on)
22130 arithmetic
22131 address arithmetic
22132 generic, see also generic arithmetic operations
22133
22134 on complex numbers
22135 on intervals
22136 on polynomials, see polynomial arithmetic
22137 on power series, [2]
22138 on rational numbers
22139 primitive procedures for
22140 articles
22141 ASCII code
22142 assemble, [2]
22143 assembler, [2]
22144 assert! (query interpreter)
22145 assertion
22146 implicit
22147 assign (in register machine)
22148 simulating
22149 storing label in register
22150 assign-reg-name
22151 assign-value-exp
22152 assignment, see also set!
22153 benefits of
22154 bugs associated with, [2]
22155 costs of
22156 assignment operator, see also set!
22157 assignment-value
22158 assignment-variable
22159 assignment?
22160 assoc
22161 atan (primitive procedure)
22162 atomic operations supported in hardware
22163 atomic requirement for test-and-set!
22164 attach-tag
22165 using Scheme data types
22166 augend
22167 automagically
22168 automatic search, see also search
22169 history of
22170 automatic storage allocation
22171 average
22172 average damping
22173 average-damp
22174 averager (constraint)
22175 B-tree
22176 backquote
22177 backtracking, see also nondeterministic computing
22178 Backus, John
22179 Baker, Henry G., Jr.
22180 balanced binary tree, see also binary tree
22181 balanced mobile
22182 bank account, [2]
22183 exchanging balances
22184
22185 joint, [2]
22186 joint, modeled with streams
22187 joint, with concurrent access
22188 password-protected
22189 serialized
22190 stream model
22191 transferring money
22192 barrier synchronization
22193 Barth, John
22194 Basic
22195 restrictions on compound data
22196 weakness in handling compound objects
22197 Batali, John Dean
22198 begin (special form)
22199 implicit in consequent of cond and in procedure body
22200 begin-actions
22201 begin?
22202 below, [2]
22203 Bertrand’s Hypothesis
22204 beside, [2]
22205 bignum
22206 binary numbers, addition of, see adder
22207 binary search
22208 binary tree
22209 balanced
22210 converting a list to a
22211 converting to a list
22212 for Huffman encoding
22213 represented with lists
22214 sets represented as
22215 table structured as
22216 bind
22217 binding
22218 deep
22219 binomial coefficients
22220 black box
22221 block structure, [2]
22222 in environment model
22223 in query language
22224 blocked process
22225 body of a procedure
22226 Bolt Beranek and Newman Inc.
22227 Borning, Alan
22228 Borodin, Alan
22229 bound variable
22230 box-and-pointer notation
22231 end-of-list marker
22232 branch (in register machine)
22233 simulating
22234 branch of a tree
22235
22236 branch-dest
22237 breakpoint
22238 broken heart
22239 bug
22240 capturing a free variable
22241 order of assignments
22242 side effect with aliasing
22243 bureaucracy
22244 Buridan, Jean
22245 busy-waiting
22246 C
22247 compiling Scheme into
22248 error handling, [2]
22249 recursive procedures
22250 restrictions on compound data
22251 Scheme interpreter written in, [2]
22252 ca...r
22253 cache-coherence protocols
22254 cadr
22255 calculator, fixed points with
22256 call-by-name argument passing, [2]
22257 call-by-need argument passing, [2]
22258 memoization and
22259 call-each
22260 cancer of the semicolon
22261 canonical form, for polynomials
22262 capturing a free variable
22263 car (primitive procedure)
22264 axiom for
22265 implemented with vectors
22266 as list operation
22267 origin of the name
22268 procedural implementation of, [2], [3], [4], [5]
22269 Carmichael numbers, [2]
22270 case analysis
22271 data-directed programming vs.
22272 general, see also cond
22273 with two cases (if)
22274 cd...r
22275 cdr (primitive procedure)
22276 axiom for
22277 implemented with vectors
22278 as list operation
22279 origin of the name
22280 procedural implementation of, [2], [3], [4], [5]
22281 cdr down a list
22282 cell, in serializer implementation
22283 celsius-fahrenheit-converter
22284 expression-oriented
22285 center
22286
22287 Cesàro, Ernesto
22288 cesaro-stream
22289 cesaro-test
22290 Chaitin, Gregory
22291 Chandah-sutra
22292 change and sameness
22293 meaning of
22294 shared data and
22295 changing money, see counting change
22296 chaos in the Solar System
22297 Chapman, David
22298 character strings
22299 primitive procedures for, [2]
22300 quotation of
22301 character, ASCII encoding
22302 Charniak, Eugene
22303 Chebyshev, Pafnutii L’vovich
22304 chess, eight-queens puzzle, [2]
22305 chip implementation of Scheme, [2]
22306 chronological backtracking
22307 Chu Shih-chieh
22308 Church numerals
22309 Church, Alonzo, [2]
22310 Church-Turing thesis
22311 circuit
22312 digital, see digital-circuit simulation
22313 modeled with streams, [2]
22314 Clark, Keith L.
22315 clause, of a cond
22316 additional syntax
22317 Clinger, William, [2]
22318 closed world assumption
22319 closure
22320 in abstract algebra
22321 closure property of cons
22322 closure property of picture-language operations, [2]
22323 lack of in many languages
22324 coal, bituminous
22325 code
22326 ASCII
22327 fixed-length
22328 Huffman, see Huffman code
22329 Morse
22330 prefix
22331 variable-length
22332 code generator
22333 arguments of
22334 value of
22335 coeff, [2]
22336 coercion
22337
22338 in algebraic manipulation
22339 in polynomial arithmetic
22340 procedure
22341 table
22342 Colmerauer, Alain
22343 combination
22344 combination as operator of
22345 compound expression as operator of
22346 evaluation of
22347 lambda expression as operator of
22348 as operator of combination
22349 as a tree
22350 combination, means of, see also closure
22351 comma, used with backquote
22352 comments in programs
22353 Common Lisp
22354 treatment of nil
22355 compacting garbage collector
22356 compilation, see compiler
22357 compile
22358 compile-and-go, [2]
22359 compile-and-run
22360 compile-application
22361 compile-assignment
22362 compile-definition
22363 compile-if
22364 compile-lambda
22365 compile-linkage
22366 compile-proc-appl
22367 compile-procedure-call
22368 compile-quoted
22369 compile-self-evaluating
22370 compile-sequence
22371 compile-time environment, [2], [3]
22372 open coding and
22373 compile-variable
22374 compiled-apply
22375 compiled-procedure-entry
22376 compiled-procedure-env
22377 compiled-procedure?
22378 compiler
22379 interpreter vs., [2]
22380 tail recursion, stack allocation, and garbage-collection
22381 compiler for Scheme, see also code generator; compile-time environment; instruction sequence;
22382 linkage descriptor; target register
22383 analyzing evaluator vs., [2]
22384 assignments
22385 code generators, see compile-...
22386 combinations
22387 conditionals
22388
22389 definitions
22390 efficiency
22391 example compilation
22392 explicit-control evaluator vs., [2], [3]
22393 expression-syntax procedures
22394 interfacing to evaluator
22395 label generation
22396 lambda expressions
22397 lexical addressing
22398 linkage code
22399 machine-operation use
22400 monitoring performance (stack use) of compiled code, [2], [3]
22401 open coding of primitives, [2]
22402 order of operand evaluation
22403 procedure applications
22404 quotations
22405 register use, [2], [3]
22406 running compiled code
22407 scanning out internal definitions, [2]
22408 self-evaluating expressions
22409 sequences of expressions
22410 stack usage, [2], [3]
22411 structure of
22412 tail-recursive code generated by
22413 variables
22414 complex package
22415 complex numbers
22416 polar representation
22417 rectangular representation
22418 rectangular vs. polar form
22419 represented as tagged data
22420 complex->complex
22421 complex-number arithmetic
22422 interfaced to generic arithmetic system
22423 structure of system
22424 composition of functions
22425 compound data, need for
22426 compound expression, see also combination; special form
22427 as operator of combination
22428 compound procedure, see also procedure
22429 used like primitive procedure
22430 compound query
22431 processing, [2], [3], [4], [5]
22432 compound-apply
22433 compound-procedure?
22434 computability, [2]
22435 computational process, see also process
22436 computer science, [2]
22437 mathematics vs., [2]
22438 concrete data representation
22439
22440 concurrency
22441 correctness of concurrent programs
22442 deadlock
22443 functional programming and
22444 mechanisms for controlling
22445 cond (special form)
22446 additional clause syntax
22447 clause
22448 evaluation of
22449 if vs.
22450 implicit begin in consequent
22451 cond->if
22452 cond-actions
22453 cond-clauses
22454 cond-else-clause?
22455 cond-predicate
22456 cond?
22457 conditional expression
22458 cond
22459 if
22460 congruent modulo n
22461 conjoin
22462 connect, [2]
22463 connector(s), in constraint system
22464 operations on
22465 representing
22466 Conniver
22467 cons (primitive procedure)
22468 axiom for
22469 closure property of
22470 implemented with mutators
22471 implemented with vectors
22472 as list operation
22473 meaning of the name
22474 procedural implementation of, [2], [3], [4], [5], [6]
22475 cons up a list
22476 cons-stream (special form), [2]
22477 lazy evaluation and
22478 why a special form
22479 consciousness, expansion of
22480 consequent
22481 of cond clause
22482 of if
22483 const (in register machine)
22484 simulating
22485 syntax of
22486 constant (primitive constraint)
22487 constant, specifying in register machine
22488 constant-exp
22489 constant-exp-value
22490
22491 constraint network
22492 constraint(s)
22493 primitive
22494 propagation of
22495 construct-arglist
22496 constructor
22497 as abstraction barrier
22498 contents
22499 using Scheme data types
22500 continuation
22501 in nondeterministic evaluator, [2], see also failure continuation; success continuation
22502 in register-machine simulator
22503 continue register
22504 in explicit-control evaluator
22505 recursion and
22506 continued fraction
22507 e as
22508 golden ratio as
22509 tangent as
22510 control structure
22511 controller for register machine
22512 controller diagram
22513 conventional interface
22514 sequence as
22515 Cormen, Thomas H.
22516 corner-split
22517 correctness of a program
22518 cos (primitive procedure)
22519 cosine
22520 fixed point of
22521 power series for
22522 cosmic radiation
22523 count-change
22524 count-leaves, [2]
22525 as accumulation
22526 as register machine
22527 count-pairs
22528 counting change, [2]
22529 credit-card accounts, international
22530 Cressey, David
22531 cross-type operations
22532 cryptography
22533 cube, [2], [3]
22534 cube root
22535 as fixed point
22536 by Newton’s method
22537 cube-root
22538 current time, for simulation agenda
22539 current-time, [2]
22540 cycle in list
22541
22542 detecting
22543 Darlington, John
22544 data, [2]
22545 abstract, see also data abstraction
22546 abstract models for
22547 algebraic specification for
22548 compound
22549 concrete representation of
22550 hierarchical, [2]
22551 list-structured
22552 meaning of
22553 mutable, see mutable data objects
22554 numerical
22555 procedural representation of
22556 as program
22557 shared
22558 symbolic
22559 tagged, [2]
22560 data abstraction, [2], [3], [4], [5], see also metacircular evaluator
22561 for queue
22562 data base
22563 data-directed programming and
22564 indexing, [2]
22565 Insatiable Enterprises personnel
22566 logic programming and
22567 Microshaft personnel
22568 as set of records
22569 data paths for register machine
22570 data-path diagram
22571 data types
22572 in Lisp
22573 in strongly typed languages
22574 data-directed programming, [2]
22575 case analysis vs.
22576 in metacircular evaluator
22577 in query interpreter
22578 data-directed recursion
22579 deadlock
22580 avoidance
22581 recovery
22582 debug
22583 decimal point in numbers
22584 declarative vs. imperative knowledge, [2]
22585 logic programming and, [2]
22586 nondeterministic computing and
22587 decode
22588 decomposition of program into parts
22589 deep binding
22590 deep-reverse
22591 deferred operations
22592
22593 define (special form)
22594 with dotted-tail notation
22595 environment model of
22596 lambda vs.
22597 for procedures, [2]
22598 syntactic sugar
22599 value of
22600 why a special form
22601 define (special form)
22602 internal, see internal definition
22603 define-variable!, [2]
22604 definite integral
22605 estimated with Monte Carlo simulation, [2]
22606 definition, see define; internal definition
22607 definition-value
22608 definition-variable
22609 definition?
22610 deKleer, Johan, [2]
22611 delay (special form)
22612 explicit
22613 explicit vs. automatic
22614 implementation using lambda
22615 lazy evaluation and
22616 memoized, [2]
22617 why a special form
22618 delay, in digital circuit
22619 delay-it
22620 delayed argument
22621 delayed evaluation, [2]
22622 assignment and
22623 explicit vs. automatic
22624 in lazy evaluator
22625 normal-order evaluation and
22626 printing and
22627 streams and
22628 delayed object
22629 delete-queue!, [2]
22630 denom, [2]
22631 axiom for
22632 reducing to lowest terms
22633 dense polynomial
22634 dependency-directed backtracking
22635 deposit , with external serializer
22636 deposit message for bank account
22637 depth-first search
22638 deque
22639 deriv (numerical)
22640 deriv (symbolic)
22641 data-directed
22642 derivative of a function
22643
22644 derived expressions in evaluator
22645 adding to explicit-control evaluator
22646 design, stratified
22647 differential equation, see also solve
22648 second-order, [2]
22649 differentiation
22650 numerical
22651 rules for, [2]
22652 symbolic, [2]
22653 diffusion, simulation of
22654 digital signal
22655 digital-circuit simulation
22656 agenda
22657 agenda implementation
22658 primitive function boxes
22659 representing wires
22660 sample simulation
22661 Dijkstra, Edsger Wybe
22662 Dinesman, Howard P.
22663 Diophantus’s Arithmetic, Fermat’s copy of
22664 disjoin
22665 dispatching
22666 comparing different styles
22667 on type, see also data-directed programming
22668 display (primitive procedure), [2]
22669 display-line
22670 display-stream
22671 distinct?
22672 div (generic)
22673 div-complex
22674 div-interval
22675 division by zero
22676 div-poly
22677 div-rat
22678 div-series
22679 div-terms
22680 divides?
22681 divisible?
22682 division of integers
22683 dog, perfectly rational, behavior of
22684 DOS/Windows
22685 dot-product
22686 dotted-tail notation
22687 for procedure parameters, [2]
22688 in query pattern, [2]
22689 in query-language rule
22690 read and
22691 Doyle, Jon
22692 draw-line
22693 driver loop
22694
22695 in explicit-control evaluator
22696 in lazy evaluator
22697 in metacircular evaluator
22698 in nondeterministic evaluator, [2]
22699 in query interpreter, [2]
22700 driver-loop
22701 for lazy evaluator
22702 for metacircular evaluator
22703 for nondeterministic evaluator
22704 e
22705 as continued fraction
22706 as solution to differential equation
22707 e^x, power series for
22708 Earth, measuring circumference of
22709 edge1-frame
22710 edge2-frame
22711 efficiency, see also order of growth
22712 of compilation
22713 of data-base access
22714 of evaluation
22715 of Lisp
22716 of query processing
22717 of tree-recursive process
22718 EIEIO
22719 eight-queens puzzle, [2]
22720 electrical circuits, modeled with streams, [2]
22721 element-of-set?
22722 binary-tree representation
22723 ordered-list representation
22724 unordered-list representation
22725 else (special symbol in cond)
22726 embedded language, language design using
22727 empty list
22728 denoted as ’()
22729 recognizing with null?
22730 empty stream
22731 empty-agenda?, [2]
22732 empty-arglist
22733 empty-instruction-sequence
22734 empty-queue?, [2]
22735 empty-termlist?, [2]
22736 encapsulated name
22737 enclosing environment
22738 enclosing-environment
22739 encode
22740 end-of-list marker
22741 end-segment, [2]
22742 end-with-linkage
22743 engineering vs. mathematics
22744 entry
22745
22746 enumerate-interval
22747 enumerate-tree
22748 enumerator
22749 env register
22750 environment, [2]
22751 compile-time, see compile-time environment
22752 as context for evaluation
22753 enclosing
22754 global, see global environment
22755 lexical scoping and
22756 in query interpreter
22757 renaming vs.
22758 environment model of evaluation, [2]
22759 environment structure
22760 internal definitions
22761 local state
22762 message passing
22763 metacircular evaluator and
22764 procedure-application example
22765 rules for evaluation
22766 tail recursion and
22767 eq? (primitive procedure)
22768 for arbitrary objects
22769 as equality of pointers, [2]
22770 implementation for symbols
22771 numerical equality and
22772 equ? (generic predicate)
22773 equal-rat?
22774 equal?
22775 equality
22776 in generic arithmetic system
22777 of lists
22778 of numbers, [2], [3]
22779 referential transparency and
22780 of symbols
22781 equation, solving, see half-interval method; Newton’s method; solve
22782 Eratosthenes
22783 error (primitive procedure)
22784 error handling
22785 in compiled code
22786 in explicit-control evaluator, [2]
22787 Escher, Maurits Cornelis
22788 estimate-integral
22789 estimate-pi, [2]
22790 Euclid’s Algorithm, [2], see also greatest common divisor
22791 order of growth
22792 for polynomials
22793 Euclid’s Elements
22794 Euclid’s proof of infinite number of primes
22795 Euclidean ring
22796
22797 Euler, Leonhard
22798 proof of Fermat’s Little Theorem
22799 series accelerator
22800 euler-transform
22801 ev-application
22802 ev-assignment
22803 ev-begin
22804 ev-definition
22805 ev-if
22806 ev-lambda
22807 ev-quoted
22808 ev-self-eval
22809 ev-sequence
22810 with tail recursion
22811 without tail recursion
22812 ev-variable
22813 eval (lazy)
22814 eval (metacircular), [2]
22815 analyzing version
22816 data-directed
22817 primitive eval vs.
22818 eval (primitive procedure)
22819 MIT Scheme
22820 used in query interpreter
22821 eval-assignment
22822 eval-definition
22823 eval-dispatch
22824 eval-if (lazy)
22825 eval-if (metacircular)
22826 eval-sequence
22827 evaluation
22828 applicative-order, see applicative-order evaluation
22829 delayed, see delayed evaluation
22830 environment model of, see environment model of evaluation
22831 models of
22832 normal-order, see normal-order evaluation
22833 of a combination
22834 of and
22835 of cond
22836 of if
22837 of or
22838 of primitive expressions
22839 of special forms
22840 order of subexpression evaluation, see order of evaluation
22841 substitution model of, see substitution model of procedure application
22842 evaluator, see also interpreter
22843 as abstract machine
22844 metacircular
22845 as universal machine
22846 evaluators, see metacircular evaluator; analyzing evaluator; lazy evaluator; nondeterministic evaluator;
22847
22848 query interpreter; explicit-control evaluator
22849 even-fibs, [2]
22850 even?
22851 evening star, see Venus
22852 event-driven simulation
22853 evlis tail recursion
22854 exact integer
22855 exchange
22856 exclamation point in names
22857 execute
22858 execute-application
22859 metacircular
22860 nondeterministic
22861 execution procedure
22862 in analyzing evaluator
22863 in nondeterministic evaluator, [2], [3]
22864 in register-machine simulator, [2]
22865 exp register
22866 expand-clauses
22867 explicit-control evaluator for Scheme
22868 assignments
22869 combinations
22870 compound procedures
22871 conditionals
22872 controller
22873 data paths
22874 definitions
22875 derived expressions
22876 driver loop
22877 error handling, [2]
22878 expressions with no subexpressions to evaluate
22879 as machine-language program
22880 machine model
22881 modified for compiled code
22882 monitoring performance (stack use)
22883 normal-order evaluation
22884 operand evaluation
22885 operations
22886 optimizations (additional)
22887 primitive procedures
22888 procedure application
22889 registers
22890 running
22891 sequences of expressions
22892 special forms (additional), [2]
22893 stack usage
22894 tail recursion, [2], [3]
22895 as universal machine
22896 expmod, [2], [3]
22897 exponential growth
22898
22899 of tree-recursive Fibonacci-number computation
22900 exponentiation
22901 modulo n
22902 expression, see also compound expression; primitive expression
22903 algebraic, see algebraic expressions
22904 self-evaluating
22905 symbolic, see also symbol(s)
22906 expression-oriented vs. imperative programming style
22907 expt
22908 linear iterative version
22909 linear recursive version
22910 register machine for
22911 extend-environment, [2]
22912 extend-if-consistent
22913 extend-if-possible
22914 external-entry
22915 extract-labels, [2]
22916 #f
22917 factorial, see also factorial (procedure)
22918 infinite stream
22919 with letrec
22920 without letrec or define
22921 factorial
22922 as an abstract machine
22923 compilation of, [2]
22924 environment structure in evaluating
22925 linear iterative version
22926 linear recursive version
22927 register machine for (iterative), [2]
22928 register machine for (recursive), [2]
22929 stack usage, compiled
22930 stack usage, interpreted, [2]
22931 stack usage, register machine
22932 with assignment
22933 with higher-order procedures
22934 failure continuation (nondeterministic evaluator), [2]
22935 constructed by amb
22936 constructed by assignment
22937 constructed by driver loop
22938 failure, in nondeterministic computation
22939 bug vs.
22940 searching and
22941 false
22942 false
22943 false?
22944 fast-expt
22945 fast-prime?
22946 feedback loop, modeled with streams
22947 Feeley, Marc
22948 Feigenbaum, Edward
22949
22950 Fenichel, Robert
22951 Fermat, Pierre de
22952 Fermat test for primality
22953 variant of
22954 Fermat’s Little Theorem
22955 alternate form
22956 proof
22957 fermat-test
22958 fetch-assertions
22959 fetch-rules
22960 fib
22961 linear iterative version
22962 logarithmic version
22963 register machine for (tree-recursive), [2]
22964 stack usage, compiled
22965 stack usage, interpreted
22966 tree-recursive version, [2]
22967 with memoization
22968 with named let
22969 Fibonacci numbers, see also fib
22970 Euclid’s GCD algorithm and
22971 infinite stream of, see fibs
22972 fibs (infinite stream)
22973 implicit definition
22974 FIFO buffer
22975 filter, [2]
22976 filter
22977 filtered-accumulate
22978 find-assertions
22979 find-divisor
22980 first-agenda-item, [2]
22981 first-class elements in language
22982 first-exp
22983 first-frame
22984 first-operand
22985 first-segment
22986 first-term, [2]
22987 fixed point
22988 computing with calculator
22989 of cosine
22990 cube root as
22991 fourth root as
22992 golden ratio as
22993 as iterative improvement
22994 in Newton’s method
22995 nth root as
22996 square root as, [2], [3]
22997 of transformed function
22998 unification and
22999 fixed-length code
23000
23001 fixed-point
23002 as iterative improvement
23003 fixed-point-of-transform
23004 flag register
23005 flatmap
23006 flatten-stream
23007 flip-horiz, [2]
23008 flip-vert, [2]
23009 flipped-pairs, [2], [3]
23010 Floyd, Robert
23011 fold-left
23012 fold-right
23013 for-each, [2]
23014 for-each-except
23015 Forbus, Kenneth D.
23016 force, [2]
23017 forcing a thunk vs.
23018 force a thunk
23019 force-it
23020 memoized version
23021 forget-value!, [2]
23022 formal parameters
23023 names of
23024 scope of
23025 formatting input expressions
23026 Fortran, [2]
23027 inventor of
23028 restrictions on compound data
23029 forwarding address
23030 fourth root, as fixed point
23031 fraction, see rational number(s)
23032 frame (environment model)
23033 as repository of local state
23034 global
23035 frame (picture language), [2]
23036 coordinate map
23037 frame (query interpreter), see also pattern matching; unification
23038 representation
23039 frame-coord-map
23040 frame-values
23041 frame-variables
23042 framed-stack discipline
23043 Franz Lisp
23044 free register, [2]
23045 free list
23046 free variable
23047 capturing
23048 in internal definition
23049 Friedman, Daniel P., [2]
23050 fringe
23051
23052 as a tree enumeration
23053 front-ptr
23054 front-queue, [2]
23055 full-adder
23056 full-adder
23057 function (mathematical)
23058 notation for
23059 Ackermann’s
23060 composition of
23061 derivative of
23062 fixed point of
23063 procedure vs.
23064 rational
23065 repeated application of
23066 smoothing of
23067 function box, in digital circuit
23068 functional programming, [2]
23069 concurrency and
23070 functional programming languages
23071 time and
23072 Gabriel, Richard P.
23073 garbage collection
23074 memoization and
23075 mutation and
23076 tail recursion and
23077 garbage collector
23078 compacting
23079 mark-sweep
23080 stop-and-copy
23081 GCD, see greatest common divisor
23082 gcd
23083 register machine for, [2]
23084 gcd-terms
23085 general-purpose computer, as universal machine
23086 generate-huffman-tree
23087 generating sentences
23088 generic arithmetic operations
23089 structure of system
23090 generic operation
23091 generic procedure, [2]
23092 generic selector, [2]
23093 Genesis
23094 get, [2]
23095 get-contents
23096 get-global-environment
23097 get-register
23098 get-register-contents, [2]
23099 get-signal, [2]
23100 get-value, [2]
23101 glitch
23102
23103 global environment, [2]
23104 in metacircular evaluator
23105 global frame
23106 Goguen, Joseph
23107 golden ratio
23108 as continued fraction
23109 as fixed point
23110 Gordon, Michael
23111 goto (in register machine)
23112 label as destination
23113 simulating
23114 goto-dest
23115 grammar
23116 graphics, see picture language
23117 Gray, Jim
23118 greatest common divisor, see also gcd
23119 generic
23120 of polynomials
23121 used to estimate π
23122 used in rational-number arithmetic
23123 Green, Cordell
23124 Griss, Martin Lewis
23125 Guttag, John Vogel
23126 half-adder
23127 half-adder
23128 simulation of
23129 half-interval method
23130 half-interval-method
23131 Newton’s method vs.
23132 halting problem
23133 Halting Theorem
23134 Hamming, Richard Wesley, [2]
23135 Hanson, Christopher P., [2]
23136 Hardy, Godfrey Harold, [2]
23137 has-value?, [2]
23138 Hassle
23139 Havender, J.
23140 Haynes, Christopher T.
23141 headed list, [2]
23142 Hearn, Anthony C.
23143 Henderson, Peter, [2], [3]
23144 Henderson diagram
23145 Heraclitus
23146 Heron of Alexandria
23147 Hewitt, Carl Eddie, [2], [3], [4]
23148 hiding principle
23149 hierarchical data structures, [2]
23150 hierarchy of types
23151 in symbolic algebra
23152 inadequacy of
23153
23154 high-level language, machine language vs.
23155 higher-order procedures
23156 in metacircular evaluator
23157 procedure as argument
23158 procedure as general method
23159 procedure as returned value
23160 strong typing and
23161 Hilfinger, Paul
23162 Hoare, Charles Antony Richard
23163 Hodges, Andrew
23164 Hofstadter, Douglas R.
23165 Horner, W. G.
23166 Horner’s rule
23167 ‘‘how to’’ vs. ‘‘what is’’ description, see imperative vs. declarative knowledge
23168 Huffman code
23169 optimality of
23170 order of growth of encoding
23171 Huffman, David
23172 Hughes, R. J. M.
23173 IBM 704
23174 identity
23175 if (special form)
23176 cond vs.
23177 evaluation of
23178 normal-order evaluation of
23179 one-armed (without alternative)
23180 predicate, consequent, and alternative of
23181 why a special form
23182 if-alternative
23183 if-consequent
23184 if-predicate
23185 if?
23186 imag-part
23187 data-directed
23188 polar representation
23189 rectangular representation
23190 with tagged data
23191 imag-part-polar
23192 imag-part-rectangular
23193 imperative programming
23194 imperative vs. declarative knowledge, [2]
23195 logic programming and, [2]
23196 nondeterministic computing and
23197 imperative vs. expression-oriented programming style
23198 implementation dependencies, see also unspecified values
23199 numbers
23200 order of subexpression evaluation
23201 inc
23202 incremental development of programs
23203 indeterminate of a polynomial
23204
23205 indexing a data base, [2]
23206 inference, method of
23207 infinite series
23208 infinite stream(s)
23209 merging, [2], [3], [4]
23210 merging as a relation
23211 of factorials
23212 of Fibonacci numbers, see fibs
23213 of integers, see integers
23214 of pairs
23215 of prime numbers, see primes
23216 of random numbers
23217 representing power series
23218 to model signals
23219 to sum a series
23220 infix notation, prefix notation vs.
23221 inform-about-no-value
23222 inform-about-value
23223 information retrieval, see data base
23224 Ingerman, Peter
23225 initialize-stack operation in register machine, [2]
23226 insert!
23227 in one-dimensional table
23228 in two-dimensional table
23229 insert-queue!, [2]
23230 install-complex-package
23231 install-polar-package
23232 install-polynomial-package
23233 install-rational-package
23234 install-rectangular-package
23235 install-scheme-number-package
23236 instantiate
23237 instantiate a pattern
23238 instruction counting
23239 instruction execution procedure
23240 instruction sequence, [2]
23241 instruction tracing
23242 instruction-execution-proc
23243 instruction-text
23244 integer(s)
23245 dividing
23246 exact
23247 integerizing factor
23248 integers (infinite stream)
23249 implicit definition
23250 lazy-list version
23251 integers-starting-from
23252 integral, see also definite integral; Monte Carlo integration
23253 of a power series
23254 integral, [2], [3]
23255
23256 with delayed argument
23257 with lambda
23258 lazy-list version
23259 need for delayed evaluation
23260 integrate-series
23261 integrated-circuit implementation of Scheme, [2]
23262 integrator, for signals
23263 interleave
23264 interleave-delayed
23265 Interlisp
23266 internal definition
23267 in environment model
23268 free variable in
23269 let vs.
23270 in nondeterministic evaluator
23271 position of
23272 restrictions on
23273 scanning out
23274 scope of name
23275 Internet ‘‘Worm’’
23276 interning symbols
23277 interpreter, see also evaluator
23278 compiler vs., [2]
23279 read-eval-print loop
23280 intersection-set
23281 binary-tree representation
23282 ordered-list representation
23283 unordered-list representation
23284 interval arithmetic
23285 invariant quantity of an iterative process
23286 inverter
23287 inverter
23288 iteration constructs, see looping constructs
23289 iterative improvement
23290 iterative process
23291 as a stream process
23292 design of algorithm
23293 implemented by procedure call, [2], [3], see also tail recursion
23294 linear, [2]
23295 recursive process vs., [2], [3], [4]
23296 register machine for
23297 Jayaraman, Sundaresan
23298 Kaldewaij, Anne
23299 Karr, Alphonse
23300 Kepler, Johannes
23301 key
23302 key of a record
23303 in a data base
23304 in a table
23305
23306 testing equality of
23307 Khayyam, Omar
23308 Knuth, Donald E., [2], [3], [4], [5], [6], [7]
23309 Kohlbecker, Eugene Edmund, Jr.
23310 Kolmogorov, A. N.
23311 Konopasek, Milos
23312 Kowalski, Robert
23313 KRC, [2]
23314 label (in register machine)
23315 simulating
23316 label-exp
23317 label-exp-label
23318 Lagrange interpolation formula
23319 λ calculus (lambda calculus)
23320 lambda (special form)
23321 define vs.
23322 with dotted-tail notation
23323 lambda expression
23324 as operator of combination
23325 value of
23326 lambda-body
23327 lambda-parameters
23328 lambda?
23329 Lambert, J. H.
23330 Lamé, Gabriel
23331 Lamé’s Theorem
23332 Lamport, Leslie
23333 Lampson, Butler
23334 Landin, Peter, [2]
23335 language, see natural language; programming language
23336 Lapalme, Guy
23337 last-exp?
23338 last-operand?
23339 last-pair, [2]
23340 rules
23341 lazy evaluation
23342 lazy evaluator
23343 lazy list
23344 lazy pair
23345 lazy tree
23346 leaf?
23347 least commitment, principle of
23348 lecture, something to do during
23349 left-branch, [2]
23350 Leibniz, Baron Gottfried Wilhelm von
23351 proof of Fermat’s Little Theorem
23352 series for π, [2]
23353 Leiserson, Charles E., [2]
23354 length
23355 as accumulation
23356
23357 iterative version
23358 recursive version
23359 let (special form)
23360 evaluation model
23361 internal definition vs.
23362 named
23363 scope of variables
23364 as syntactic sugar, [2]
23365 let* (special form)
23366 letrec (special form)
23367 lexical addressing
23368 lexical address
23369 lexical scoping
23370 environment structure and
23371 lexical-address-lookup, [2]
23372 lexical-address-set!, [2]
23373 Lieberman, Henry
23374 LIFO buffer, see stack
23375 line segment
23376 represented as pair of points
23377 represented as pair of vectors
23378 linear growth, [2]
23379 linear iterative process
23380 order of growth
23381 linear recursive process
23382 order of growth
23383 linkage descriptor
23384 Liskov, Barbara Huberman
23385 Lisp
23386 acronym for LISt Processing
23387 applicative-order evaluation in
23388 on DEC PDP-1
23389 efficiency of, [2]
23390 first-class procedures in
23391 Fortran vs.
23392 history of
23393 internal type system
23394 original implementation on IBM 704
23395 Pascal vs.
23396 suitability for writing evaluators
23397 unique features of
23398 Lisp dialects
23399 Common Lisp
23400 Franz Lisp
23401 Interlisp
23402 MacLisp
23403 MDL
23404 Portable Standard Lisp
23405 Scheme
23406 Zetalisp
23407
23408 lisp-value (query interpreter)
23409 lisp-value (query language), [2]
23410 evaluation of, [2], [3]
23411 list (primitive procedure)
23412 list structure
23413 list vs.
23414 mutable
23415 represented using vectors
23416 list(s)
23417 backquote with
23418 cdring down
23419 combining with append
23420 consing up
23421 converting a binary tree to a
23422 converting to a binary tree
23423 empty, see empty list
23424 equality of
23425 headed, [2]
23426 last pair of
23427 lazy
23428 length of
23429 list structure vs.
23430 manipulation with car, cdr, and cons
23431 mapping over
23432 nth element of
23433 operations on
23434 printed representation of
23435 quotation of
23436 reversing
23437 techniques for manipulating
23438 list->tree
23439 list-difference
23440 list-of-arg-values
23441 list-of-delayed-args
23442 list-of-values
23443 list-ref, [2]
23444 list-structured memory
23445 list-union
23446 lives-near (rule), [2]
23447 local evolution of a process
23448 local name, [2]
23449 local state
23450 maintained in frames
23451 local state variable
23452 local variable
23453 location
23454 Locke, John
23455 log (primitive procedure)
23456 logarithm, approximating ln 2
23457 logarithmic growth, [2], [3]
23459 logic programming, see also query language; query interpreter
23460 computers for
23461 history of, [2]
23462 in Japan
23463 logic programming languages
23464 mathematical logic vs.
23465 logic puzzles
23466 logical and
23467 logical or
23468 logical-not
23469 lookup
23470 in one-dimensional table
23471 in set of records
23472 in two-dimensional table
23473 lookup-label
23474 lookup-prim
23475 lookup-variable-value, [2]
23476 for scanned-out definitions
23477 looping constructs, [2]
23478 implementing in metacircular evaluator
23479 lower-bound
23480 machine language
23481 high-level language vs.
23482 Macintosh
23483 MacLisp
23484 macro, see also reader macro character
23485 magician, see numerical analyst
23486 magnitude
23487 data-directed
23488 polar representation
23489 rectangular representation
23490 with tagged data
23491 magnitude-polar
23492 magnitude-rectangular
23493 make-account
23494 in environment model
23495 with serialization, [2], [3]
23496 make-account-and-serializer
23497 make-accumulator
23498 make-agenda, [2]
23499 make-assign
23500 make-begin
23501 make-branch
23502 make-center-percent
23503 make-center-width
23504 make-code-tree
23505 make-compiled-procedure
23506 make-complex-from-mag-ang
23507 make-complex-from-real-imag
23508 make-connector
23510 make-cycle
23511 make-decrementer
23512 make-execution-procedure
23513 make-frame, [2], [3]
23514 make-from-mag-ang, [2]
23515 message-passing
23516 polar representation
23517 rectangular representation
23518 make-from-mag-ang-polar
23519 make-from-mag-ang-rectangular
23520 make-from-real-imag, [2]
23521 message-passing
23522 polar representation
23523 rectangular representation
23524 make-from-real-imag-polar
23525 make-from-real-imag-rectangular
23526 make-goto
23527 make-if
23528 make-instruction
23529 make-instruction-sequence
23530 make-interval, [2]
23531 make-joint
23532 make-label
23533 make-label-entry
23534 make-lambda
23535 make-leaf
23536 make-leaf-set
23537 make-machine, [2]
23538 make-monitored
23539 make-mutex
23540 make-new-machine
23541 make-operation-exp
23542 make-perform
23543 make-point
23544 make-poly
23545 make-polynomial
23546 make-primitive-exp
23547 make-procedure
23548 make-product, [2]
23549 make-queue, [2]
23550 make-rat, [2], [3]
23551 axiom for
23552 reducing to lowest terms
23553 make-rational
23554 make-register
23555 make-restore
23556 make-save
23557 make-scheme-number
23558 make-segment, [2]
23559 make-serializer
23561 make-simplified-withdraw, [2]
23562 make-stack
23563 with monitored stack
23564 make-sum, [2]
23565 make-table
23566 message-passing implementation
23567 one-dimensional table
23568 make-tableau
23569 make-term, [2]
23570 make-test
23571 make-time-segment
23572 make-tree
23573 make-vect
23574 make-wire, [2], [3]
23575 make-withdraw
23576 in environment model
23577 using let
23578 making change, see counting change
23579 map, [2]
23580 as accumulation
23581 with multiple arguments
23582 map-over-symbols
23583 map-successive-pairs
23584 mapping
23585 over lists
23586 nested, [2]
23587 as a transducer
23588 over trees
23589 mark-sweep garbage collector
23590 mathematical function, see function (mathematical)
23591 mathematics
23592 computer science vs., [2]
23593 engineering vs.
23594 matrix, represented as sequence
23595 matrix-*-matrix
23596 matrix-*-vector
23597 max (primitive procedure)
23598 McAllester, David Allen, [2]
23599 McCarthy, John, [2], [3], [4]
23600 McDermott, Drew
23601 MDL
23602 means of abstraction
23603 define
23604 means of combination, see also closure
23605 measure in a Euclidean ring
23606 member
23607 memo-fib
23608 memo-proc
23609 memoization, [2]
23610 call-by-need and
23612 by delay
23613 garbage collection and
23614 of thunks
23615 memoize
23616 memory
23617 in 1964
23618 list-structured
23619 memq
23620 merge
23621 merge-weighted
23622 merging infinite streams, see infinite stream(s)
23623 message passing, [2]
23624 environment model and
23625 in bank account
23626 in digital-circuit simulation
23627 tail recursion and
23628 metacircular evaluator
23629 metacircular evaluator for Scheme
23630 analyzing version
23631 combinations (procedure applications)
23632 compilation of, [2]
23633 data abstraction in, [2], [3], [4]
23634 data-directed eval
23635 derived expressions
23636 driver loop
23637 efficiency of
23638 environment model of evaluation in
23639 environment operations
23640 eval and apply
23641 eval-apply cycle, [2]
23642 expression representation, [2]
23643 global environment
23644 higher-order procedures in
23645 implemented language vs. implementation language
23646 job of
23647 order of operand evaluation
23648 primitive procedures
23649 representation of environments
23650 representation of procedures
23651 representation of true and false
23652 running
23653 special forms (additional), [2], [3], [4], [5], [6]
23654 special forms as derived expressions
23655 symbolic differentiation and
23656 syntax of evaluated language, [2], [3]
23657 tail recursiveness unspecified in
23658 true and false
23659 metalinguistic abstraction
23660 MicroPlanner
23661 Microshaft
23663 midpoint-segment
23664 Miller, Gary L.
23665 Miller, James S.
23666 Miller-Rabin test for primality
23667 Milner, Robin
23668 min (primitive procedure)
23669 Minsky, Marvin Lee, [2]
23670 Miranda
23671 MIT
23672 Artificial Intelligence Laboratory
23673 early history of
23674 Project MAC
23675 Research Laboratory of Electronics, [2]
23676 MIT Scheme
23677 the empty stream
23678 eval
23679 internal definitions
23680 numbers
23681 random
23682 user-initial-environment
23683 without-interrupts
23684 ML
23685 mobile
23686 modeling
23687 as a design strategy
23688 in science and engineering
23689 models of evaluation
23690 modified registers, see instruction sequence
23691 modifies-register?
23692 modularity, [2]
23693 along object boundaries
23694 functional programs vs. objects
23695 hiding principle
23696 streams and
23697 through dispatching on type
23698 through infinite streams
23699 through modeling with objects
23700 modulo n
23701 modus ponens
23702 money, changing, see counting change
23703 monitored procedure
23704 Monte Carlo integration
23705 stream formulation
23706 Monte Carlo simulation
23707 stream formulation
23708 monte-carlo
23709 infinite stream
23710 Moon, David A., [2]
23711 morning star, see evening star
23712 Morris, J. H.
23714 Morse code
23715 Mouse, Minnie and Mickey
23716 mul (generic)
23717 used for polynomial coefficients
23718 mul-complex
23719 mul-interval
23720 more efficient version
23721 mul-poly
23722 mul-rat
23723 mul-series
23724 mul-streams
23725 mul-terms
23726 Multics time-sharing system
23727 multiple-dwelling
23728 multiplicand
23729 multiplication by Russian peasant method
23730 multiplier
23731 primitive constraint
23732 selector
23733 Munro, Ian
23734 mutable data objects, see also queue; table
23735 implemented with assignment
23736 list structure
23737 pairs
23738 procedural representation of
23739 shared data
23740 mutator
23741 mutex
23742 mutual exclusion
23743 mystery
23744 name, see also local name; variable; local variable
23745 encapsulated
23746 of a formal parameter
23747 of a procedure
23748 named let (special form)
23749 naming
23750 of computational objects
23751 of procedures
23752 naming conventions
23753 ! for assignment and mutation
23754 ? for predicates
23755 native language of machine
23756 natural language
23757 parsing, see parsing natural language
23758 quotation in
23759 needed registers, see instruction sequence
23760 needs-register?
23761 negate
23762 nested applications of car and cdr
23763 nested combinations
23765 nested definitions, see internal definition
23766 nested mappings, see mapping
23767 new register
23768 new-cars register
23769 new-cdrs register
23770 new-withdraw
23771 newline (primitive procedure), [2]
23772 Newton’s method
23773 for cube roots
23774 for differentiable functions
23775 half-interval method vs.
23776 for square roots, [2], [3]
23777 newton-transform
23778 newtons-method
23779 next (linkage descriptor)
23780 next-to (rules)
23781 nil
23782 dispensing with
23783 as empty list
23784 as end-of-list marker
23785 as ordinary variable in Scheme
23786 no-more-exps?
23787 no-operands?
23788 node of a tree
23789 non-computable
23790 non-strict
23791 nondeterminism, in behavior of concurrent programs, [2]
23792 nondeterministic choice point
23793 nondeterministic computing
23794 nondeterministic evaluator
23795 order of operand evaluation
23796 nondeterministic programming vs. Scheme programming, [2], [3], [4]
23797 nondeterministic programs
23798 logic puzzles
23799 pairs with prime sums
23800 parsing natural language
23801 Pythagorean triples, [2], [3]
23802 normal-order evaluation
23803 applicative order vs., [2], [3]
23804 delayed evaluation and
23805 in explicit-control evaluator
23806 of if
23807 normal-order evaluator, see lazy evaluator
23808 not (primitive procedure)
23809 not (query language), [2]
23810 evaluation of, [2], [3]
23811 notation in this book
23812 italic symbols in expression syntax
23813 slanted characters for interpreter response
23814 nouns
23816 nth root, as fixed point
23817 null? (primitive procedure)
23818 implemented with typed pointers
23819 number theory
23820 number(s)
23821 comparison of
23822 decimal point in
23823 equality of, [2], [3]
23824 in generic arithmetic system
23825 implementation dependencies
23826 integer vs. real number
23827 integer, exact
23828 in Lisp
23829 rational number
23830 number? (primitive procedure)
23831 data types and
23832 implemented with typed pointers
23833 numer, [2]
23834 axiom for
23835 reducing to lowest terms
23836 numerical analysis
23837 numerical analyst
23838 numerical data
23839 obarray
23840 object program
23841 object(s)
23842 benefits of modeling with
23843 with time-varying state
23844 object-oriented programming languages
23845 old register
23846 oldcr register
23847 ones (infinite stream)
23848 lazy-list version
23849 op (in register machine)
23850 simulating
23851 open coding of primitives, [2]
23852 operands
23853 operands of a combination
23854 operation
23855 cross-type
23856 generic
23857 in register machine
23858 operation-and-type table
23859 assignment needed for
23860 implementing
23861 operation-exp
23862 operation-exp-op
23863 operation-exp-operands
23864 operator
23865 operator of a combination
23867 combination as
23868 compound expression as
23869 lambda expression as
23870 optimality
23871 of Horner’s rule
23872 of Huffman code
23873 or (query language)
23874 evaluation of, [2]
23875 or (special form)
23876 evaluation of
23877 why a special form
23878 with no subexpressions
23879 or-gate
23880 or-gate, [2]
23881 order, [2]
23882 order notation
23883 order of evaluation
23884 assignment and
23885 implementation-dependent
23886 in compiler
23887 in explicit-control evaluator
23888 in metacircular evaluator
23889 in Scheme
23890 order of events
23891 decoupling apparent from actual
23892 indeterminacy in concurrent systems
23893 order of growth
23894 linear iterative process
23895 linear recursive process
23896 logarithmic
23897 tree-recursive process
23898 order of subexpression evaluation, see order of evaluation
23899 ordered-list representation of sets
23900 ordinary numbers (in generic arithmetic system)
23901 origin-frame
23902 Ostrowski, A. M.
23903 outranked-by (rule), [2]
23904 P operation on semaphore
23905 package
23906 complex-number
23907 polar representation
23908 polynomial
23909 rational-number
23910 rectangular representation
23911 Scheme-number
23912 painter(s)
23913 higher-order operations
23914 operations
23915 represented as procedures
23916 transforming and combining
23918 pair(s)
23919 axiomatic definition of
23920 box-and-pointer notation for
23921 infinite stream of
23922 lazy
23923 mutable
23924 procedural representation of, [2], [3]
23925 represented using vectors
23926 used to represent sequence
23927 used to represent tree
23928 pair? (primitive procedure)
23929 implemented with typed pointers
23930 pairs
23931 Pan, V. Y.
23932 parallel-execute
23933 parallel-instruction-sequences
23934 parallelism, see concurrency
23935 parameter, see formal parameters
23936 parameter passing, see call-by-name argument passing; call-by-need argument passing
23937 parentheses
23938 delimiting combination
23939 delimiting cond clauses
23940 in procedure definition
23941 parse
23942 parse-...
23943 parsing natural language
23944 real language understanding vs. toy parser
23945 partial-sums
23946 Pascal
23947 lack of higher-order procedures
23948 recursive procedures
23949 restrictions on compound data
23950 weakness in handling compound objects
23951 Pascal, Blaise
23952 Pascal’s triangle
23953 password-protected bank account
23954 pattern
23955 pattern matching
23956 implementation
23957 unification vs., [2]
23958 pattern variable
23959 representation of, [2]
23960 pattern-match
23961 pc register
23962 perform (in register machine)
23963 simulating
23964 perform-action
23965 Perlis, Alan J., [2]
23966 quips, [2]
23967 permutations of a set
23969 permutations
23970 Phillips, Hubert
23971 π (pi)
23972 approximation with half-interval method
23973 approximation with Monte Carlo integration, [2]
23974 Cesàro estimate for, [2]
23975 Leibniz’s series for, [2]
23976 stream of approximations
23977 Wallis’s formula for
23978 pi-stream
23979 pi-sum
23980 with higher-order procedures
23981 with lambda
23982 picture language
23983 Pingala, Áchárya
23984 pipelining
23985 Pitman, Kent M.
23986 Planner
23987 point, represented as a pair
23988 pointer
23989 in box-and-pointer notation
23990 typed
23991 polar package
23992 polar?
23993 poly
23994 polynomial package
23995 polynomial arithmetic
23996 addition
23997 division
23998 Euclid’s Algorithm
23999 greatest common divisor, [2]
24000 interfaced to generic arithmetic system
24001 multiplication
24002 probabilistic algorithm for GCD
24003 rational functions
24004 subtraction
24005 polynomial(s)
24006 canonical form
24007 dense
24008 evaluating with Horner’s rule
24009 hierarchy of types
24010 indeterminate of
24011 sparse
24012 univariate
24013 pop
24014 Portable Standard Lisp
24015 porting a language
24016 power series, as stream
24017 adding
24018 dividing
24020 integrating
24021 multiplying
24022 PowerPC
24023 predicate
24024 of cond clause
24025 of if
24026 naming convention for
24027 prefix code
24028 prefix notation
24029 infix notation vs.
24030 prepositions
24031 preserving, [2], [3], [4]
24032 pretty-printing
24033 prime number(s)
24034 cryptography and
24035 Eratosthenes’s sieve for
24036 Fermat test for
24037 infinite stream of, see primes
24038 Miller-Rabin test for
24039 testing for
24040 prime-sum-pair
24041 prime-sum-pairs
24042 infinite stream
24043 prime?, [2]
24044 primes (infinite stream)
24045 implicit definition
24046 primitive constraints
24047 primitive expression
24048 evaluation of
24049 name of primitive procedure
24050 name of variable
24051 number
24052 primitive procedures (those marked ns are not in the IEEE Scheme standard)
24053 *
24054 +
24055 -, [2]
24056 /
24057 <
24058 =
24059 >
24060 apply
24061 atan
24062 car
24063 cdr
24064 cons
24065 cos
24066 display
24067 eq?
24068 error (ns)
24069 eval (ns)
24071 list
24072 log
24073 max
24074 min
24075 newline
24076 not
24077 null?
24078 number?
24079 pair?
24080 quotient
24081 random (ns), [2]
24082 read
24083 remainder
24084 round
24085 runtime (ns)
24086 set-car!
24087 set-cdr!
24088 sin
24089 symbol?
24090 vector-ref
24091 vector-set!
24092 primitive query, see simple query
24093 primitive-apply
24094 primitive-implementation
24095 primitive-procedure-names
24096 primitive-procedure-objects
24097 primitive-procedure?, [2]
24098 principle of least commitment
24099 print operation in register machine
24100 print-point
24101 print-queue
24102 print-rat
24103 print-result
24104 monitored-stack version
24105 print-stack-statistics operation in register machine
24106 printing, primitives for
24107 probabilistic algorithm, [2], [3]
24108 probe
24109 in constraint system
24110 in digital-circuit simulator
24111 proc register
24112 procedural abstraction
24113 procedural representation of data
24114 mutable data
24115 procedure, [2]
24116 anonymous
24117 arbitrary number of arguments, [2]
24118 as argument
24119 as black box
24120 body of
24122 compound
24123 creating with define
24124 creating with lambda, [2], [3]
24125 as data
24126 definition of
24127 first-class in Lisp
24128 formal parameters of
24129 as general method
24130 generic, [2]
24131 higher-order, see higher-order procedure
24132 implicit begin in body of
24133 mathematical function vs.
24134 memoized
24135 monitored
24136 name of
24137 naming (with define)
24138 as pattern for local evolution of a process
24139 as returned value
24140 returning multiple values
24141 scope of formal parameters
24142 special form vs., [2]
24143 procedure application
24144 combination denoting
24145 environment model of
24146 substitution model of, see substitution model of procedure application
24147 procedure-body
24148 procedure-environment
24149 procedure-parameters
24150 process
24151 iterative
24152 linear iterative
24153 linear recursive
24154 local evolution of
24155 order of growth of
24156 recursive
24157 resources required by
24158 shape of
24159 tree-recursive
24160 product
24161 as accumulation
24162 product?
24163 program
24164 as abstract machine
24165 comments in
24166 as data
24167 incremental development of
24168 structure of, [2], [3], see also abstraction barriers
24169 structured with subroutines
24170 program counter
24171 programming
24173 data-directed, see data-directed programming
24174 demand-driven
24175 elements of
24176 functional, see functional programming
24177 imperative
24178 odious style
24179 programming language
24180 design of
24181 functional
24182 logic
24183 object-oriented
24184 strongly typed
24185 very high-level
24186 Prolog, [2]
24187 prompt-for-input
24188 prompts
24189 explicit-control evaluator
24190 lazy evaluator
24191 metacircular evaluator
24192 nondeterministic evaluator
24193 query interpreter
24194 propagate
24195 propagation of constraints
24196 proving programs correct
24197 pseudo-random sequence
24198 pseudodivision of polynomials
24199 pseudoremainder of polynomials
24200 push
24201 put, [2]
24202 puzzles
24203 eight-queens puzzle, [2]
24204 logic puzzles
24205 Pythagorean triples
24206 with nondeterministic programs, [2], [3]
24207 with streams
24208 qeval, [2]
24209 quantum mechanics
24210 quasiquote
24211 queens
24212 query, see also simple query; compound query
24213 query interpreter
24214 adding rule or assertion
24215 compound query, see compound query
24216 data base
24217 driver loop, [2]
24218 environment structure in
24219 frame, [2]
24220 improvements to, [2], [3]
24221 infinite loops, [2]
24222 instantiation
24224 Lisp interpreter vs., [2], [3]
24225 overview
24226 pattern matching, [2]
24227 pattern-variable representation, [2]
24228 problems with not and lisp-value, [2]
24229 query evaluator, [2]
24230 rule, see rule
24231 simple query, see simple query
24232 stream operations
24233 streams of frames, [2]
24234 syntax of query language
24235 unification, [2]
24236 query language, [2]
24237 abstraction in
24238 compound query, see compound query
24239 data base
24240 equality testing in
24241 extensions to, [2]
24242 logical deductions
24243 mathematical logic vs.
24244 rule, see rule
24245 simple query, see simple query
24246 query-driver-loop
24247 question mark, in predicate names
24248 queue
24249 double-ended
24250 front of
24251 operations on
24252 procedural implementation of
24253 rear of
24254 in simulation agenda
24255 quotation
24256 of character strings
24257 of Lisp data objects
24258 in natural language
24259 quotation mark, single vs. double
24260 quote (special form)
24261 read and, [2]
24262 quoted?
24263 quotient (primitive procedure)
24264 Rabin, Michael O.
24265 radicand
24266 Ramanujan numbers
24267 Ramanujan, Srinivasa
24268 rand
24269 with reset
24270 random (primitive procedure)
24271 assignment needed for
24272 MIT Scheme
24273 random-in-range
24275 random-number generator, [2]
24276 in Monte Carlo simulation
24277 in primality testing
24278 with reset
24279 with reset, stream version
24280 random-numbers (infinite stream)
24281 Raphael, Bertram
24282 rational package
24283 rational function
24284 reducing to lowest terms
24285 rational number(s)
24286 arithmetic operations on
24287 in MIT Scheme
24288 printing
24289 reducing to lowest terms, [2]
24290 represented as pairs
24291 rational-number arithmetic
24292 interfaced to generic arithmetic system
24293 need for compound data
24294 Raymond, Eric, [2]
24295 RC circuit
24296 read (primitive procedure)
24297 dotted-tail notation handling by
24298 macro characters
24299 read operation in register machine
24300 read-eval-print loop, see also driver loop
24301 read-eval-print-loop
24302 reader macro character
24303 real number
24304 real-part
24305 data-directed
24306 polar representation
24307 rectangular representation
24308 with tagged data
24309 real-part-polar
24310 real-part-rectangular
24311 rear-ptr
24312 receive procedure
24313 record, in a data base
24314 rectangle, representing
24315 rectangular package
24316 rectangular?
24317 recursion
24318 data-directed
24319 expressing complicated process
24320 in rules
24321 in working with trees
24322 recursion equations
24323 recursion theory
24324 recursive procedure
24326 recursive procedure definition
24327 recursive process vs.
24328 specifying without define
24329 recursive process
24330 iterative process vs., [2], [3], [4]
24331 linear, [2]
24332 recursive procedure vs.
24333 register machine for
24334 tree, [2]
24335 red-black tree
24336 reducing to lowest terms, [2], [3]
24337 Rees, Jonathan A., [2]
24338 referential transparency
24339 reg (in register machine)
24340 simulating
24341 register machine
24342 actions
24343 controller
24344 controller diagram
24345 data paths
24346 data-path diagram
24347 design of
24348 language for describing
24349 monitoring performance
24350 simulator
24351 stack
24352 subroutine
24353 test operation
24354 register table, in simulator
24355 register(s)
24356 representing
24357 tracing
24358 register-exp
24359 register-exp-reg
24360 register-machine language
24361 assign, [2]
24362 branch, [2]
24363 const, [2], [3]
24364 entry point
24365 goto, [2]
24366 instructions, [2]
24367 label
24368 label, [2]
24369 op, [2]
24370 perform, [2]
24371 reg, [2]
24372 restore, [2]
24373 save, [2]
24374 test, [2]
24375 register-machine simulator
24377 registers-modified
24378 registers-needed
24379 relations, computing in terms of, [2]
24380 relatively prime
24381 relativity, theory of
24382 release a mutex
24383 remainder (primitive procedure)
24384 remainder modulo n
24385 remainder-terms
24386 remove
24387 remove-first-agenda-item!, [2]
24388 require
24389 as a special form
24390 reserved words, [2]
24391 resistance
24392 formula for parallel resistors, [2]
24393 tolerance of resistors
24394 resolution principle
24395 resolution, Horn-clause
24396 rest-exps
24397 rest-operands
24398 rest-segments
24399 rest-terms, [2]
24400 restore (in register machine), [2]
24401 implementing
24402 simulating
24403 return (linkage descriptor)
24404 returning multiple values
24405 Reuter, Andreas
24406 reverse
24407 as folding
24408 rules
24409 Rhind Papyrus
24410 right-branch, [2]
24411 right-split
24412 ripple-carry adder
24413 Rivest, Ronald L., [2]
24414 RLC circuit
24415 Robinson, J. A.
24416 robustness
24417 rock songs, 1950s
24418 Rogers, William Barton
24419 root register
24420 roots of equation, see half-interval method; Newton’s method
24421 rotate90
24422 round (primitive procedure)
24423 roundoff error, [2]
24424 Rozas, Guillermo Juan
24425 RSA algorithm
24426 rule (query language)
24428 applying, [2], [3]
24429 without body, [2], [3]
24430 Runkle, John Daniel
24431 runtime (primitive procedure)
24432 Russian peasant method of multiplication
24433 same (rule)
24434 same-variable?, [2]
24435 sameness and change
24436 meaning of
24437 shared data and
24438 satisfy a compound query
24439 satisfy a pattern (simple query)
24440 save (in register machine), [2]
24441 implementing
24442 simulating
24443 scale-list, [2], [3]
24444 scale-stream
24445 scale-tree, [2]
24446 scale-vect
24447 scan register
24448 scan-out-defines
24449 scanning out internal definitions
24450 in compiler, [2]
24451 Scheme
24452 history of
24453 Scheme chip, [2]
24454 scheme-number package
24455 scheme-number->complex
24456 scheme-number->scheme-number
24457 Schmidt, Eric
24458 scope of a variable, see also lexical scoping
24459 internal define
24460 in let
24461 procedure’s formal parameters
24462 search
24463 of binary tree
24464 depth-first
24465 systematic
24466 search
24467 secretary, importance of
24468 segment-queue
24469 segment-time
24470 segments
24471 segments->painter
24472 selector
24473 as abstraction barrier
24474 generic, [2]
24475 self-evaluating expression
24476 self-evaluating?
24477 semaphore
24479 of size n
24480 semicolon
24481 comment introduced by
24482 separator code
24483 sequence accelerator
24484 sequence of expressions
24485 in consequent of cond
24486 in procedure body
24487 sequence(s)
24488 as conventional interface
24489 as source of modularity
24490 operations on
24491 represented by pairs
24492 sequence->exp
24493 serialized-exchange
24494 with deadlock avoidance
24495 serializer
24496 implementing
24497 with multiple shared resources
24498 series, summation of
24499 accelerating sequence of approximations
24500 with streams
24501 set
23502 set! (special form), see also assignment
24504 data base as
24505 operations on
24506 permutations of
24507 represented as binary tree
24508 represented as ordered list
24509 represented as unordered list
24510 subsets of
24511 set! (special form)
24512 environment model of
24513 value of
24514 set-car! (primitive procedure)
24515 implemented with vectors
24516 procedural implementation of
24517 value of
24518 set-cdr! (primitive procedure)
24519 implemented with vectors
24520 procedural implementation of
24521 value of
24522 set-contents!
24523 set-current-time!
24524 set-front-ptr!
24525 set-instruction-execution-proc!
24526 set-rear-ptr!
24527 set-register-contents!, [2]
24528 set-segments!
24530 set-signal!, [2]
24531 set-value!, [2]
24532 set-variable-value!, [2]
24533 setup-environment
24534 shadow a binding
24535 Shamir, Adi
24536 shape of a process
24537 shared data
24538 shared resources
24539 shared state
24540 shrink-to-upper-right
24541 Shrobe, Howard E.
24542 side-effect bug
24543 sieve of Eratosthenes
24544 sieve
24545 Σ (sigma) notation
24546 signal processing
24547 smoothing a function
24548 smoothing a signal, [2]
24549 stream model of
24550 zero crossings of a signal, [2], [3]
24551 signal, digital
24552 signal-error
24553 signal-flow diagram, [2]
24554 signal-processing view of computation
24555 simple query
24556 processing, [2], [3], [4]
24557 simple-query
24558 simplification of algebraic expressions
24559 Simpson’s Rule for numerical integration
24560 simulation
24561 of digital circuit, see digital-circuit simulation
24562 event-driven
24563 as machine-design tool
24564 for monitoring performance of register machine
24565 Monte Carlo, see Monte Carlo simulation
24566 of register machine, see register-machine simulator
24567 sin (primitive procedure)
24568 sine
24569 approximation for small angle
24570 power series for
24571 singleton-stream
24572 SKETCHPAD
24573 smallest-divisor
24574 more efficient version
24575 Smalltalk
24576 smoothing a function
24577 smoothing a signal, [2]
24578 snarf
24579 Solar System’s chaotic dynamics
24581 Solomonoff, Ray
24582 solve differential equation, [2]
24583 lazy-list version
24584 with scanned-out definitions
24585 solving equation, see half-interval method; Newton’s method; solve
24586 source language
24587 source program
24588 Spafford, Eugene H.
24589 sparse polynomial
24590 special form
24591 as derived expression in evaluator
24592 need for
24593 procedure vs., [2]
24594 special forms (those marked ns are not in the IEEE Scheme standard)
24595 and
24596 begin
24597 cond
24598 cons-stream (ns)
24599 define, [2]
24600 delay (ns)
24601 if
24602 lambda
24603 let
24604 let*
24605 letrec
24606 named let
24607 or
24608 quote
24609 set!
24610 split
24611 sqrt
24612 block structured
24613 in environment model
24614 as fixed point, [2], [3], [4]
24615 as iterative improvement
24616 with Newton’s method, [2]
24617 register machine for
24618 as stream limit
24619 sqrt-stream
24620 square
24621 in environment model
24622 square root, see also sqrt
24623 stream of approximations
24624 square-limit, [2]
24625 square-of-four
24626 squarer (constraint), [2]
24627 squash-inwards
24628 stack
24629 framed
24630 for recursion in register machine
24632 representing, [2]
24633 stack allocation and tail recursion
24634 stack-inst-reg-name
24635 Stallman, Richard M., [2]
24636 start register machine, [2]
24637 start-eceval
24638 start-segment, [2]
24639 state
24640 local, see local state
24641 shared
24642 vanishes in stream formulation
24643 state variable, [2]
24644 local
24645 statements, see instruction sequence
24646 statements
24647 Steele, Guy Lewis Jr., [2], [3], [4], [5], [6]
24648 stop-and-copy garbage collector
24649 Stoy, Joseph E., [2], [3]
24650 Strachey, Christopher
24651 stratified design
24652 stream(s), [2]
24653 delayed evaluation and
24654 empty
24655 implemented as delayed lists
24656 implemented as lazy lists
24657 implicit definition
24658 infinite, see infinite streams
24659 used in query interpreter, [2]
24660 stream-append
24661 stream-append-delayed
24662 stream-car, [2]
24663 stream-cdr, [2]
24664 stream-enumerate-interval
24665 stream-filter
24666 stream-flatmap, [2]
24667 stream-for-each
24668 stream-limit
24669 stream-map
24670 with multiple arguments
24671 stream-null?
24672 in MIT Scheme
24673 stream-ref
24674 stream-withdraw
24675 strict
24676 string, see character string
24677 strongly typed language
24678 sub (generic)
24679 sub-complex
24680 sub-interval
24681 sub-rat
24683 sub-vect
24684 subroutine in register machine
24685 subsets of a set
24686 substitution model of procedure application, [2]
24687 inadequacy of
24688 shape of process
24689 subtype
24690 multiple
24691 success continuation (nondeterministic evaluator), [2]
24692 successive squaring
24693 sum
24694 as accumulation
24695 iterative version
24696 sum-cubes
24697 with higher-order procedures
24698 sum-integers
24699 with higher-order procedures
24700 sum-odd-squares, [2]
24701 sum-of-squares
24702 in environment model
24703 sum-primes, [2]
24704 sum?
24705 summation of a series
24706 with streams
24707 supertype
24708 multiple
24709 Sussman, Gerald Jay, [2], [3], [4], [5], [6], [7]
24710 Sussman, Julie Esther Mazel, nieces of
24711 Sutherland, Ivan
24712 symbol(s)
24713 equality of
24714 interning
24715 quotation of
24716 representation of
24717 uniqueness of
24718 symbol-leaf
24719 symbol? (primitive procedure)
24720 data types and
24721 implemented with typed pointers
24722 symbolic algebra
24723 symbolic differentiation, [2]
24724 symbolic expression, see also symbol(s)
24725 symbols
24726 SYNC
24727 synchronization, see concurrency
24728 syntactic analysis, separated from execution
24729 in metacircular evaluator
24730 in register-machine simulator, [2]
24731 syntactic sugar
24732 define
24734 let as
24735 looping constructs as
24736 procedure vs. data as
24737 syntax, see also special forms
24738 abstract, see abstract syntax
24739 of expressions, describing
24740 of a programming language
24741 syntax interface
24742 systematic search
24743 #t
24744 table
24745 backbone of
24746 for coercion
24747 for data-directed programming
24748 local
24749 n-dimensional
24750 one-dimensional
24751 operation-and-type, see operation-and-type table
24752 represented as binary tree vs. unordered list
24753 testing equality of keys
24754 two-dimensional
24755 used in simulation agenda
24756 used to store computed values
24757 tableau
24758 tabulation, [2]
24759 tack-on-instruction-sequence
24760 tagged architecture
24761 tagged data, [2]
24762 tagged-list?
24763 tail recursion
24764 compiler and
24765 environment model of evaluation and
24766 explicit-control evaluator and, [2], [3]
24767 garbage collection and
24768 metacircular evaluator and
24769 in Scheme
24770 tail-recursive evaluator
24771 tangent
24772 as continued fraction
24773 power series for
24774 target register
24775 Technological University of Eindhoven
24776 Teitelman, Warren
24777 term list of polynomial
24778 representing
24779 term-list
24780 terminal node of a tree
24781 test (in register machine)
24782 simulating
24783 test operation in register machine
24785 test-and-set!, [2]
24786 test-condition
24787 text-of-quotation
24788 Thatcher, James W.
24789 THE Multiprogramming System
24790 the-cars
24791 register, [2]
24792 vector
24793 the-cdrs
24794 register, [2]
24795 vector
24796 the-empty-stream
24797 in MIT Scheme
24798 the-empty-termlist, [2]
24799 the-global-environment, [2]
24800 theorem proving (automatic)
24801 Θ(f(n)) (theta of f(n))
24802 thunk
24803 call-by-name
24804 call-by-need
24805 forcing
24806 implementation of
24807 origin of name
24808 time
24809 assignment and
24810 communication and
24811 in concurrent systems
24812 functional programming and
24813 in nondeterministic computing, [2]
24814 purpose of
24815 time segment, in agenda
24816 time slicing
24817 timed-prime-test
24818 timing diagram
24819 TK!Solver
24820 tower of types
24821 tracing
24822 instruction execution
24823 register assignment
24824 transform-painter
24825 transparency, referential
24826 transpose a matrix
24827 tree
24828 B-tree
24829 binary, see also binary tree
24830 combination viewed as
24831 counting leaves of
24832 enumerating leaves of
24833 fringe of
24834 Huffman
24836 lazy
24837 mapping over
24838 red-black
24839 represented as pairs
24840 reversing at all levels
24841 tree accumulation
24842 tree->list...
24843 tree-map
24844 tree-recursive process
24845 order of growth
24846 trigonometric relations
24847 true
24848 true
24849 true?
24850 truncation error
24851 truth maintenance
24852 try-again
24853 Turing machine
24854 Turing, Alan M., [2]
24855 Turner, David, [2], [3]
24856 type field
24857 type tag, [2]
24858 two-level
24859 type(s)
24860 cross-type operations
24861 dispatching on
24862 hierarchy in symbolic algebra
24863 hierarchy of
24864 lowering, [2]
24865 multiple subtype and supertype
24866 raising, [2]
24867 subtype
24868 supertype
24869 tower of
24870 type-inferencing mechanism
24871 type-tag
24872 using Scheme data types
24873 typed pointer
24874 typing input expressions
24875 unbound variable
24876 unev register
24877 unification
24878 discovery of algorithm
24879 implementation
24880 pattern matching vs., [2]
24881 unify-match
24882 union-set
24883 binary-tree representation
24884 ordered-list representation
24885 unordered-list representation
24887 unique (query language)
24888 unique-pairs
24889 unit square
24890 univariate polynomial
24891 universal machine
24892 explicit-control evaluator as
24893 general-purpose computer as
24894 University of California at Berkeley
24895 University of Edinburgh
24896 University of Marseille
24897 UNIX, [2]
24898 unknown-expression-type
24899 unknown-procedure-type
24900 unordered-list representation of sets
24901 unspecified values
24902 define
24903 display
24904 if without alternative
24905 newline
24906 set!
24907 set-car!
24908 set-cdr!
24909 up-split
24910 update-insts!
24911 upper-bound
24912 upward compatibility
24913 user-initial-environment (MIT Scheme)
24914 user-print
24915 modified for compiled code
24916 V operation on semaphore
24917 val register
24918 value
24919 of a combination
24920 of an expression, see also unspecified values
24921 value-proc
24922 variable, see also local variable
24923 bound
24924 free
24925 scope of, see also scope of a variable
24926 unbound
24927 value of, [2]
24928 variable
24929 variable-length code
24930 variable?, [2]
24931 vector (data structure)
24932 vector (mathematical)
24933 operations on, [2]
24934 in picture-language frame
24935 represented as pair
24936 represented as sequence
24938 vector-ref (primitive procedure)
24939 vector-set! (primitive procedure)
24940 Venus
24941 verbs
24942 very high-level language
24943 Wadler, Philip
24944 Wadsworth, Christopher
24945 Wagner, Eric G.
24946 Walker, Francis Amasa
24947 Wallis, John
24948 Wand, Mitchell, [2]
24949 Waters, Richard C.
24950 weight
24951 weight-leaf
24952 Weyl, Hermann
24953 ‘‘what is’’ vs. ‘‘how to’’ description, see declarative vs. imperative knowledge
24954 wheel (rule), [2]
24955 width
24956 width of an interval
24957 Wilde, Oscar (Perlis’s paraphrase of)
24958 Wiles, Andrew
24959 Winograd, Terry
24960 Winston, Patrick Henry, [2]
24961 wire, in digital circuit
24962 Wisdom, Jack
24963 Wise, David S.
24964 wishful thinking, [2]
24965 withdraw
24966 problems in concurrent system
24967 without-interrupts
24968 world line of a particle, [2]
24969 Wright, E. M.
24970 Wright, Jesse B.
24971 xcor-vect
24972 Xerox Palo Alto Research Center, [2]
24973 Y operator
24974 ycor-vect
24975 Yochelson, Jerome C.
24976 Zabih, Ramin
24977 zero crossings of a signal, [2], [3]
24978 zero test (generic)
24979 for polynomials
24980 Zetalisp
24981 Zilles, Stephen N.
24982 Zippel, Richard E.