source-academy / backend / 7e642952d0555adb4cd923cf9682728662eb8fbb-PR-1259

14 May 2025 04:25PM UTC coverage: 93.607%. Remained the same.

Pull Request #1259: Upgrade CI runner version and README
github / RichDom2185 / Update CD workflow

3104 of 3316 relevant lines covered (93.61%)
1064.61 hits per line
Source File: /lib/cadet/chatbot/sicp_notes.ex (coverage: 0.0%)

defmodule Cadet.Chatbot.SicpNotes do
  @moduledoc """
  Module to store SICP notes.
  """
  @summary_1 """
  1. Building Abstractions with Functions
  1. **Introduction to Programming Concepts:**
  - Discusses John Locke's ideas on mental processes, emphasizing abstraction as a key concept in forming general ideas.
  - Introduces the concept of computational processes, likening them to abstract beings that manipulate data according to program rules.
  2. **Programming Language Selection:**
  - Chooses JavaScript as the programming language for expressing procedural thoughts.
  - Traces the development of JavaScript from its origins in controlling web browsers to its current status as a general-purpose programming language.
  3. **JavaScript Characteristics and Standardization:**
  - Highlights JavaScript's core features inherited from the Scheme and Self languages.
  - Notes the standardization efforts, leading to ECMAScript, and its evolution, with ECMAScript 2015 as a significant edition.
  - Discusses JavaScript's initial interpretation in web browsers and its subsequent efficient execution using techniques like JIT compilation.
  4. **Practical Application of JavaScript:**
  - Emphasizes the practicality of embedding JavaScript in web pages and its role in web browser interactions.
  - Recognizes JavaScript's expanding role as a general-purpose programming language, especially with the advent of systems like Node.js.
  - Points out JavaScript's suitability for an online version of a book on computer programs due to its execution capabilities in web browsers.
  """

  @summary_1_1 """
  1.1: The Elements of Programming
  1. **Programming Language Components:**
  - A powerful programming language involves more than instructing a computer; it's a framework for organizing ideas about processes.
  - Focuses on three mechanisms: primitive expressions, means of combination, and means of abstraction.
  2. **Elements in Programming:**
  - Programming deals with two key elements: functions and data.
  - Defines data as manipulable 'stuff' and functions as rules for manipulating data.
  - Emphasizes the importance of a language describing primitive data and functions and combining/abstracting them.
  3. **Chapter Scope:**
  - Chapter focuses on simple numerical data to explore rules for building functions.
  - Acknowledges the complexity of handling numbers in programming languages, deferring detailed exploration to later chapters.
  4. **Numerical Considerations:**
  - Raises issues in dealing with numbers, such as distinctions between integers and real numbers.
  - Acknowledges challenges like arithmetic operations, representation limits, and roundoff behavior.
  - Declares the book's focus on large-scale program design, deferring detailed numerical analysis.
  """

  @summary_1_1_1 """
  1.1.1  Expressions
  1. **JavaScript Interpreter Interaction:**
  - Introduction to programming via interactions with a JavaScript interpreter.
  - Statements involve typing expressions, and the interpreter responds by displaying the evaluated results.
  2. **Expression Statements:**
  - Expression statements consist of an expression followed by a semicolon.
  - Primitive expressions include numbers; evaluation involves clicking, displaying the interpreter, and running the statement.
  3. **Compound Expressions:**
  - Expressions combining numbers with operators form compound expressions.
  - Examples of operator combinations with arithmetic operators and infix notation are provided.
  4. **Read-Evaluate-Print Loop:**
  - JavaScript interpreter operates in a read-evaluate-print loop.
  - Complex expressions are handled, and the interpreter reads, evaluates, and prints results in a cycle.
  """

  @summary_1_1_2 """
  1.1.2  Naming and the Environment
  1. **Constants and Declarations:**
  - JavaScript uses constant declarations (e.g., const size = 2;) to associate names with values (constants).
  - Names like size can then be used in expressions, providing a means of abstraction for simple values.
  2. **Abstraction with Constants:**
  - Constant declaration is a simple form of abstraction, allowing the use of names for results of compound operations.
  - Examples include using constants like pi and radius in calculations for circumference.
  3. **Incremental Program Development:**
  - JavaScript's incremental development involves step-by-step construction of computational objects using name-object associations.
  - The interpreter supports this process by allowing incremental creation of associations in successive interactions.
  4. **Program Environment:**
  - The interpreter maintains a memory called the program environment, tracking name-object pairs.
  - This environment is crucial for understanding interpreter operation and implementing interpreters in later chapters.
  """

  @summary_1_1_3 """
  1.1.3: Evaluating Operator Combinations
  1. **Evaluation of Operator Combinations:**
  - The interpreter follows a procedure to evaluate operator combinations.
  - Recursive evaluation involves assessing operand expressions and applying the operator's function.
  - Recursive nature simplifies the understanding of complex, nested combinations in a hierarchical, tree-like structure.
  2. **Recursion in Evaluation:**
  - Recursion efficiently handles deeply nested combinations.
  - A tree representation illustrates the percolation of operand values upward during evaluation.
  - General process type known as 'tree accumulation.'
  3. **Handling Primitive Expressions:**
  - Primitive cases involve evaluating numerals and names.
  - Numerals represent the numbers they name.
  - Names derive values from the environment where associations are stored.
  4. **Role of Environment in Evaluation:**
  - The environment is crucial for determining name meanings in expressions.
  - In JavaScript, a name's value depends on the environment, especially in interactive contexts.
  - Declarations, like `const x = 3;`, associate names with values and aren't handled by the evaluation rule.
  """

  @summary_1_1_4 """
  1.1.4 Compound Functions
  1. **Compound Functions in JavaScript:**
  - Function declarations offer a powerful abstraction, allowing compound operations to be named.
  - Declaring a function involves specifying parameters, a return expression, and associating it with a name.
  - Function applications, like `square(21)`, execute the named function with specified arguments, yielding a result.
  2. **Function Application in JavaScript:**
  - To evaluate a function application, subexpressions (function and arguments) are evaluated, and the function is applied to the arguments.
  - Nested function applications, such as `square(square(3))`, demonstrate the versatility of this approach.
  3. **Building Functions with Compound Functions:**
  - Functions like `sum_of_squares` can be defined using previously declared functions (e.g., `square`) as building blocks.
  - Primitive functions provided by the JavaScript environment, like `math_log`, are used similarly to compound functions.
  4. **Syntax and Naming Conventions:**
  - Function declaration syntax involves naming, specifying parameters, and defining the return expression.
  - Common JavaScript conventions, like camel case or snake case, affect the readability of multi-part function names (e.g., `sum_of_squares`).
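
  A minimal sketch of the declarations described above (plain JavaScript; names follow this note, not the book's exact text):
  ```javascript
  function square(x) {
      return x * x;
  }
  function sum_of_squares(x, y) {
      return square(x) + square(y);
  }
  sum_of_squares(3, 4); // evaluates to 25
  ```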
  """

  @summary_1_1_5 """
  1.1.5 The Substitution Model for Function Application
  1. **Substitution Model for Function Application:**
  - The interpreter follows a substitution model when evaluating function applications in JavaScript.
  - For compound functions, it involves replacing parameters with corresponding arguments in the return expression.
  - This model helps conceptualize function application but differs from the actual interpreter's workings.
  2. **Applicative-Order vs. Normal-Order Evaluation:**
  - Applicative-order evaluation, used by JavaScript, evaluates arguments before function application.
  - Normal-order evaluation substitutes arguments for parameters until only operators and primitive functions remain, then evaluates.
  - Both methods yield the same result for functions modeled using substitution, but normal order is more complex.
  3. **Implications of Evaluation Models:**
  - The substitution model serves as a starting point for thinking formally about evaluation.
  - Over the course of the book, more refined models will replace the substitution model, especially when dealing with 'mutable data.'
  - JavaScript uses applicative-order evaluation for efficiency, while normal-order evaluation has its own implications explored later.
  4. **Challenges in Substitution Process:**
  - The substitution process, despite its simplicity, poses challenges in giving a rigorous mathematical definition.
  - Issues arise from potential confusion between parameter names and identical names in expressions to which a function is applied.
  - Future chapters will explore variations, including normal-order evaluation and its use in handling infinite data structures.
  """

  @summary_1_1_6 """
  1.1.6 Conditional Expressions and Predicates
  1. **Conditional Expressions and Predicates:**
  - JavaScript's conditional expressions involve a predicate, a consequent expression, and an alternative expression.
  - The interpreter evaluates the predicate; if true, it returns the consequent expression, else the alternative expression.
  - Predicates include boolean operators (&&, ||) and logical negation (!), aiding in conditional logic.
  2. **Handling Multiple Cases:**
  - Nested conditional expressions handle multiple cases, enabling complex case analyses.
  - The structure uses clauses with predicates and consequent expressions, ending with a final alternative expression.
  - Logical composition operations like && and || assist in constructing compound predicates.
  3. **Examples and Applications:**
  - Functions, like absolute value (abs), can be defined using conditional expressions.
  - Logical operations (&&, ||, !) and comparison operators enhance the expressiveness of conditional expressions.
  - Exercises demonstrate practical applications, such as evaluating sequences of statements and translating expressions into JavaScript.
  4. **Evaluation Models:**
  - Applicative-order evaluation (JavaScript's approach) evaluates arguments before function application.
  - Normal-order evaluation fully expands and then reduces expressions, leading to potential multiple evaluations.
  - Substitution models are foundational for understanding function application but become inadequate in detailed analyses.
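
  A small sketch of an absolute-value function written with a conditional expression, as mentioned above (plain JavaScript):
  ```javascript
  // predicate x >= 0, consequent x, alternative -x
  function abs(x) {
      return x >= 0 ? x : -x;
  }
  abs(-7); // evaluates to 7
  ```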
  """

  @summary_1_1_7 """
  1.1.7 Example: Square Roots by Newton's Method
  1. **Newton's Method for Square Roots:**
  - Mathematical and computer functions differ; computer functions must be effective.
  - Newton's method, an iterative approach, is used to compute square roots.
  - The process involves successive approximations, improving guesses through simple manipulations.
  2. **Functional Approach to Square Roots:**
  - Functions like `sqrt_iter`, `improve`, `average`, and `is_good_enough` formalize the iterative square-root computation.
  - The basic strategy is expressed through recursion without explicit iterative constructs.
  - The example demonstrates that a simple functional language can handle numerical programs efficiently.
  3. **Declarative vs. Imperative Knowledge:**
  - The distinction between mathematical and computer functions reflects declarative (what is) vs. imperative (how to) knowledge.
  - Computer science deals with imperative descriptions, focusing on how to perform tasks.
  - Newton's method for square roots exemplifies the transition from declarative to imperative knowledge in programming.
  4. **Exercises and Challenges:**
  - Exercises involve evaluating the effectiveness of conditional expressions and exploring improvements to the square-root program.
  - Newton's method is extended to cube roots, showcasing the general applicability of the approach.
  - Considerations for precision and handling small/large numbers in square-root computation are discussed.
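
  A compact sketch of the square-root functions named above (plain JavaScript; the 0.001 tolerance is an arbitrary choice):
  ```javascript
  function square(x) { return x * x; }
  function average(x, y) { return (x + y) / 2; }
  function improve(guess, x) { return average(guess, x / guess); }
  function is_good_enough(guess, x) {
      return Math.abs(square(guess) - x) < 0.001;
  }
  // keep improving the guess until it is good enough
  function sqrt_iter(guess, x) {
      return is_good_enough(guess, x)
             ? guess
             : sqrt_iter(improve(guess, x), x);
  }
  function sqrt(x) { return sqrt_iter(1, x); }
  sqrt(9); // roughly 3
  ```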
  """

  @summary_1_1_8 """
  1.1.8 Functions as Black-Box Abstractions
  1. **Function Decomposition:**
  - The square root program illustrates a cluster of functions decomposing the problem into subproblems.
  - Functions like `is_good_enough` and `improve` operate as modules, contributing to the overall process.
  - Decomposition is crucial for readability and modularity, enabling the use of functions as black-box abstractions.
  2. **Functional Abstraction:**
  - Functions should act as black boxes, allowing users to focus on the result, not implementation details.
  - Parameter names, being bound, don't affect function behavior, promoting functional abstraction.
  - The significance of local names and the independence of function meaning from parameter names are emphasized.
  3. **Lexical Scoping:**
  - Lexical scoping allows functions to have internal declarations, localizing subfunctions.
  - Block structure and lexical scoping enhance the organization of large programs.
  - Free names in internal declarations derive their values from the enclosing function's arguments.
  4. **Simplification and Organization:**
  - Internalizing declarations simplifies auxiliary functions in a block structure.
  - Lexical scoping eliminates the need to pass certain arguments explicitly, enhancing clarity.
  - The combination of block structure and lexical scoping aids in the organization of complex programs.
  """

  @summary_1_2 """
  1.2 Functions and the Processes They Generate
  1. **Programming Expertise Analogy:**
  - Programming is likened to chess, where knowing piece movements isn't enough without strategic understanding.
  - Similar to a novice chess player, knowing primitive operations isn't sufficient without understanding common programming patterns.
  2. **Importance of Process Visualization:**
  - Expert programmers visualize consequences and patterns of actions, akin to a photographer planning exposure for desired effects.
  - Understanding the local evolution of computational processes is crucial for constructing programs with desired behaviors.
  3. **Function as Process Pattern:**
  - A function serves as a pattern for the local evolution of a computational process.
  - Describing global behavior based on local evolution is challenging but understanding typical process patterns is essential.
  4. **Analysis of Process Shapes:**
  - Examining common shapes of processes generated by simple functions.
  - Investigating how these processes consume computational resources like time and space.
  """

  @summary_1_2_1 """
  1.2.1 Linear Recursion and Iteration
  1. **Factorial Computation:**
  - Two methods for computing factorial: recursive (linear recursive process) and iterative (linear iterative process).
  - Recursive process involves a chain of deferred operations, while iterative process maintains fixed state variables.
  2. **Recursive vs. Iterative:**
  - Recursive process builds a chain of deferred operations, resulting in linear growth of information.
  - Iterative process maintains fixed state variables, described as a linear iterative process with constant space.
  3. **Tail-Recursion and Implementation:**
  - Tail-recursive implementations execute iterative processes in constant space.
  - Common languages may consume memory with recursive functions; JavaScript (ECMAScript 2015) supports tail recursion.
  4. **Exercise: Ackermann's Function:**
  - Illustration of Ackermann's function.
  - Definition of functions f, g, and h in terms of Ackermann's function.
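
  A minimal sketch of the two factorial processes contrasted above (plain JavaScript):
  ```javascript
  // linear recursive process: a growing chain of deferred multiplications
  function factorial(n) {
      return n === 1 ? 1 : n * factorial(n - 1);
  }
  // linear iterative process: fixed state variables (product, counter)
  function factorial_iter(n) {
      function iter(product, counter) {
          return counter > n
                 ? product
                 : iter(counter * product, counter + 1);
      }
      return iter(1, 1);
  }
  ```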
  """

  @summary_1_2_2 """
  1.2.2 Tree Recursion
  1. **Tree Recursion:**
  - Tree recursion is illustrated using the Fibonacci sequence computation.
  - Recursive function `fib` exhibits a tree-recursive process with exponential growth in redundant computations.
  2. **Iterative Fibonacci:**
  - An alternative linear iterative process for Fibonacci computation is introduced.
  - Contrast between the exponential growth of tree recursion and linear growth of the iterative process is highlighted.
  3. **Smart Compilation and Efficiency:**
  - Tree-recursive processes, while inefficient, are often easy to understand.
  - A 'smart compiler' is proposed to transform tree-recursive functions into more efficient forms.
  4. **Example: Counting Change:**
  - The problem of counting change for a given amount is introduced.
  - A recursive solution is presented, demonstrating tree recursion with a clear reduction rule.
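
  A short sketch of the two Fibonacci processes described above (plain JavaScript):
  ```javascript
  // tree-recursive process: the same subproblems are recomputed many times
  function fib(n) {
      return n === 0 ? 0
           : n === 1 ? 1
           : fib(n - 1) + fib(n - 2);
  }
  // linear iterative process: two state variables carry the computation
  function fib_iter(n) {
      function iter(a, b, count) {
          return count === 0 ? b : iter(a + b, a, count - 1);
      }
      return iter(1, 0, n);
  }
  ```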
  """

  @summary_1_2_3 """
  1.2.3 Orders of Growth
  1. **Orders of Growth:**
  - Processes exhibit varying resource consumption rates, described by the order of growth.
  - Represented as Θ(f(n)), indicating resource usage between k₁f(n) and k₂f(n) for large n.
  2. **Examples of Order of Growth:**
  - Linear recursive factorial process has Θ(n) steps and space.
  - Iterative factorial has Θ(n) steps but Θ(1) space.
  - Tree-recursive Fibonacci has Θ(ϕⁿ) steps and Θ(n) space, where ϕ is the golden ratio.
  3. **Crude Description:**
  - Orders of growth offer a basic overview, e.g., Θ(n²) for quadratic processes.
  - Useful for anticipating behavior changes with problem size variations.
  4. **Upcoming Analysis:**
  - Future exploration includes algorithms with logarithmic order of growth.
  - Expected behavior changes, such as doubling problem size's impact on resource utilization.
  """

  @summary_1_2_4 """
  1.2.4 Exponentiation
  1. **Exponentiation Process:**
  - Recursive process for exponentiation: bⁿ = b * bⁿ⁻¹.
  - Linear recursive process: Θ(n) steps and Θ(n) space.
  - Improved iterative version: Θ(n) steps but Θ(1) space.
  2. **Successive Squaring:**
  - Successive squaring reduces steps for exponentiation.
  - The `fast_expt` function exhibits logarithmic growth: Θ(log n) steps and space.
  3. **Multiplication Algorithms:**
  - Design multiplication with logarithmic steps using successive doubling and halving.
  - Utilize observation from exponentiation for efficient iterative multiplication.
  4. **Fibonacci Numbers:**
  - Clever algorithm for Fibonacci in logarithmic steps.
  - Transformation T and Tⁿ for Fibonacci computation using successive squaring.
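
  A minimal sketch of exponentiation by successive squaring, as summarized above (plain JavaScript):
  ```javascript
  function square(x) { return x * x; }
  function is_even(n) { return n % 2 === 0; }
  // Θ(log n) steps: bⁿ = (b^(n/2))² when n is even, b * bⁿ⁻¹ otherwise
  function fast_expt(b, n) {
      return n === 0
             ? 1
             : is_even(n)
             ? square(fast_expt(b, n / 2))
             : b * fast_expt(b, n - 1);
  }
  fast_expt(2, 10); // 1024
  ```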
  """

  @summary_1_2_5 """
  1.2.5 Greatest Common Divisors
  1. **Greatest Common Divisors (GCD):**
  - GCD of a and b is the largest integer dividing both with no remainder.
  - Euclid's Algorithm efficiently computes GCD using recursive reduction.
  - Algorithm based on the observation: GCD(a, b) = GCD(b, a % b).
  2. **Algorithm Complexity:**
  - Euclid's Algorithm has logarithmic growth.
  - Lamé's Theorem relates Euclid's steps to Fibonacci numbers.
  - Order of growth: Θ(log n).
  3. **Euclid's Algorithm Function:**
  - Express Euclid's Algorithm as a function: `gcd(a, b)`.
  - Iterative process with logarithmic growth in steps.
  4. **Exercise:**
  - Normal-order evaluation impacts the process generated by the gcd function.
  - Lamé's Theorem applied to estimate the order of growth for Euclid's Algorithm.
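
  A one-function sketch of Euclid's Algorithm as described above (plain JavaScript):
  ```javascript
  // gcd(a, b) = gcd(b, a % b), terminating when the remainder is 0
  function gcd(a, b) {
      return b === 0 ? a : gcd(b, a % b);
  }
  gcd(206, 40); // 2
  ```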
  """

  @summary_1_2_6 """
  1.2.6 Example: Testing for Primality
  1. **Primality Testing Methods:**
  - Methods for checking primality: order Θ(√n) divisor search and probabilistic method with Θ(log n).
  - Finding divisors: Program to find the smallest integral divisor of a given number.
  - Fermat's Little Theorem: Θ(log n) primality test based on number theory.
  - Fermat test and Miller–Rabin test as probabilistic algorithms.
  2. **Fermat's Little Theorem:**
  - If n is prime, a^(n-1) ≡ 1 (mod n) for a < n.
  - Fermat test: Randomly choosing a and checking congruence.
  - Probabilistic nature: Result is probably correct, with rare chances of error.
  3. **Algorithm Implementation:**
  - Implementation of Fermat test using expmod function.
  - Miller–Rabin test: Squaring step checks for nontrivial square roots of 1.
  - Probabilistic algorithms and their reliability in practical applications.
  4. **Exercises:**
  - Exercise 1.21: Finding the smallest divisor using the smallest_divisor function.
  - Exercise 1.22: Timed prime tests for different ranges, comparing Θ(√n) and Θ(log n) methods.
  - Exercise 1.23: Optimizing smallest_divisor for efficiency.
  - Exercise 1.24: Testing primes using the Fermat method (Θ(log n)).
  - Exercise 1.25: Comparing expmod and fast_expt for primality testing.
  - Exercise 1.26: Identifying algorithmic transformation affecting efficiency.
  - Exercise 1.27: Testing Carmichael numbers that fool the Fermat test.
  - Exercise 1.28: Implementing the Miller–Rabin test and testing its reliability.
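
  A sketch of an `expmod`-based Fermat test in the spirit of the summary above (plain JavaScript; adequate for small n, where intermediate products stay within safe integer range):
  ```javascript
  function square(x) { return x * x; }
  // a^exp mod m by successive squaring, reducing mod m at every step
  function expmod(base, exp, m) {
      return exp === 0
             ? 1
             : exp % 2 === 0
             ? square(expmod(base, exp / 2, m)) % m
             : (base * expmod(base, exp - 1, m)) % m;
  }
  // one round of the Fermat test with a random a in [1, n - 1]
  function fermat_test(n) {
      const a = 1 + Math.floor(Math.random() * (n - 1));
      return expmod(a, n, n) === a;
  }
  fermat_test(97); // true (97 is prime)
  ```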
  """

  @summary_1_3 """
  1.3 Formulating Abstractions with Higher-Order Functions
  1. **Higher-Order Functions:**
  - Functions as abstractions for compound operations on numbers.
  - Declaring functions allows expressing concepts like cubing, enhancing language expressiveness.
  - Importance of building abstractions using function names.
  - Introduction of higher-order functions that accept or return functions, increasing expressive power.
  2. **Abstraction in Programming:**
  - Programming languages should allow building abstractions through named common patterns.
  - Functions enable working with higher-level operations beyond primitive language functions.
  - Limitations without abstractions force work at the level of primitive operations.
  - Higher-order functions extend the ability to create abstractions in programming languages.
  """

  @summary_1_3_1 """
  1.3.1 Functions as Arguments
  1. **Common Pattern in Functions:**
  - Three functions share a common pattern for summing series.
  - Functions differ in name, term computation, and next value.
  - Identification of the summation abstraction in mathematical series.
  - Introduction of a common template for expressing summation patterns.
  2. **Higher-Order Function for Summation:**
  - Introduction of a higher-order function for summation, named 'sum.'
  - 'sum' takes a term, lower and upper bounds, and next function as parameters.
  - Examples of using 'sum' to compute sum_cubes, sum_integers, and pi_sum.
  - Application of 'sum' in numerical integration and approximation of π.
  3. **Iterative Formulation:**
  - Transformation of summation function into an iterative process.
  - Example of an iterative summation function using Simpson's Rule.
  - Extension to a more general notion called 'accumulate' for combining terms.
  4. **Filtered Accumulation:**
  - Introduction of filtered accumulation using a predicate for term selection.
  - Examples of filtered accumulation: sum of squares of prime numbers and product of relatively prime integers.
  - Acknowledgment of the expressive power attained through appropriate abstractions.
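
  A sketch of the 'sum' abstraction and one of its uses, as described above (plain JavaScript):
  ```javascript
  // sums term(a) + term(next(a)) + ... up to b
  function sum(term, a, next, b) {
      return a > b
             ? 0
             : term(a) + sum(term, next(a), next, b);
  }
  function inc(n) { return n + 1; }
  function cube(x) { return x * x * x; }
  function sum_cubes(a, b) {
      return sum(cube, a, inc, b);
  }
  sum_cubes(1, 10); // 3025
  ```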
  """

  @summary_1_3_2 """
  1.3.2 Constructing Functions using Lambda Expressions
  1. **Lambda Expressions for Function Creation:**
  - Introduction of lambda expressions for concise function creation.
  - Lambda expressions used to directly specify functions without declaration.
  - Elimination of the need for auxiliary functions like pi_term and pi_next.
  - Examples of pi_sum and integral functions using lambda expressions.
  2. **Lambda Expression Syntax:**
  - Lambda expressions written as `(parameters) => expression`.
  - Equivalent functionality to function declarations but without a specified name.
  - Readability and equivalence demonstrated with examples.
  - Usage of lambda expressions in various contexts, such as function application.
  3. **Local Names Using Lambda Expressions:**
  - Lambda expressions employed to create anonymous functions for local names.
  - Example of computing a function with intermediate quantities like 'a' and 'b'.
  - Comparison with alternative approaches, including using auxiliary functions.
  - Utilization of constant declarations within function bodies for local names.
  4. **Conditional Statements in JavaScript:**
  - Introduction of conditional statements using `if-else` syntax.
  - Example of applying conditional statements in the 'expmod' function.
  - Scope considerations for constant declarations within conditional statements.
  - Efficient use of conditional statements to improve function performance.
  5. **Exercise 1.34:**
  - A function `f` that takes a function `g` and applies it to the value 2.
  - Demonstrations with `square` and a lambda expression.
  - A hypothetical scenario of evaluating `f(f)` and its explanation as an exercise.
  - Illustration of function composition and its outcome.
  """

  @summary_1_3_3 """
  1.3.3 Functions as General Methods
  1. **Introduction to General Methods:**
  - Compound functions and higher-order functions for abstracting numerical operations.
  - Higher-order functions express general methods of computation.
  - Examples of general methods for finding zeros and fixed points of functions.
  2. **Half-Interval Method for Finding Roots:**
  - A strategy for finding roots of continuous functions using the half-interval method.
  - Implementation of the method in JavaScript with the `search` function.
  - Use of the method to approximate roots, e.g., finding π and solving a cubic equation.
  3. **Fixed Points of Functions:**
  - Definition of a fixed point of a function and methods to locate it.
  - Introduction of the `fixed_point` function for finding fixed points with a given tolerance.
  - Examples using cosine and solving equations involving trigonometric functions.
  4. **Square Root Computation and Averaging:**
  - Attempt to compute square roots using fixed-point search and the challenge with convergence.
  - Introduction of average damping to control oscillations and improve convergence.
  - Illustration of square root computation using average damping in the `sqrt` function.
  5. **Exercises and Further Exploration:**
  - Exercise 1.35: Golden ratio as a fixed point.
  - Exercise 1.36: Modifying `fixed_point` and solving equations.
  - Exercise 1.37: Continued fraction representation and approximating values.
  - Exercise 1.38: Approximating Euler's number using continued fractions.
  - Exercise 1.39: Lambert's continued fraction for the tangent function.
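
  A minimal sketch of a `fixed_point` search like the one described above (plain JavaScript; tolerance chosen arbitrarily):
  ```javascript
  const tolerance = 0.00001;
  // repeatedly apply f until successive guesses are within the tolerance
  function fixed_point(f, first_guess) {
      function close_enough(x, y) {
          return Math.abs(x - y) < tolerance;
      }
      function try_with(guess) {
          const next = f(guess);
          return close_enough(guess, next) ? next : try_with(next);
      }
      return try_with(first_guess);
  }
  fixed_point(Math.cos, 1); // roughly 0.739
  ```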
  """

  @summary_1_3_4 """
  1.3.4 Functions as Returned Values
  1. **Programming Concepts:**
  - Demonstrates the use of functions as first-class citizens in JavaScript.
  - Highlights the application of higher-order functions in expressing general methods.
  - Shows how to create abstractions and build upon them for more powerful functionalities.
  - Discusses the significance of first-class functions in JavaScript and their expressive power.
  2. **Specific Programming Techniques:**
  - Introduces and applies average damping and fixed-point methods in function computations.
  - Explores Newton's method and expresses it as a fixed-point process.
  - Provides examples of implementing functions for square roots, cube roots, and nth roots.
  - Discusses iterative improvement as a general computational strategy.
  3. **Exercises and Problem Solving:**
  - Includes exercises like implementing functions for cubic equations, function composition, and iterative improvement.
  - Addresses challenges in computing nth roots using repeated average damping.
  4. **General Programming Advice:**
  - Emphasizes the importance of identifying and building upon underlying abstractions in programming.
  - Encourages programmers to think in terms of abstractions and choose appropriate levels of abstraction for tasks.
  - Discusses the benefits and challenges of first-class functions in programming languages.
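
  A small sketch of a function that returns a function, in the spirit of the average-damping technique above (plain JavaScript, self-contained):
  ```javascript
  function average(x, y) { return (x + y) / 2; }
  function fixed_point(f, guess) {
      const next = f(guess);
      return Math.abs(next - guess) < 0.00001 ? next : fixed_point(f, next);
  }
  // average_damp returns a new, "damped" version of f
  function average_damp(f) {
      return x => average(x, f(x));
  }
  // square root as a fixed point of the damped map y => x / y
  function sqrt(x) {
      return fixed_point(average_damp(y => x / y), 1);
  }
  sqrt(9); // roughly 3
  ```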
  """

  @summary_2 """
  2 Building Abstractions with Data
  1. **Focus on Compound Data:** The chapter discusses the importance of compound data in programming languages to model complex phenomena and improve design modularity.
  2. **Data Abstraction:** Introduces the concept of data abstraction, emphasizing how it simplifies program design by separating the representation and usage of data objects.
  3. **Expressive Power:** Compound data enhances the expressive power of programming languages, allowing the manipulation of different data types without detailed knowledge of their representations.
  4. **Symbolic Expressions and Generic Operations:** Explores symbolic expressions, alternatives for representing sets, and the need for generic operations in handling differently represented data, illustrated with polynomial arithmetic.
  """

  @summary_2_1 """
  2.1 Introduction to Data Abstraction
  1. **Data Abstraction Definition:** Data abstraction is a methodology separating how compound data is used from its construction details using selectors and constructors.
  2. **Functional Abstraction Analogy:** Similar to functional abstraction, data abstraction allows replacing details of data implementation while preserving overall behavior.
  3. **Program Structuring:** Programs should operate on "abstract data" without unnecessary assumptions, with a defined interface using selectors and constructors for concrete data representation.
  4. **Illustration with Rational Numbers:** The concept is illustrated by designing functions for manipulating rational numbers through data abstraction techniques.
  """

  @summary_2_1_1 """
  2.1.1 Example: Arithmetic Operations for Rational Numbers
  1. **Rational Number Operations:** Describes arithmetic operations for rational numbers: add, subtract, multiply, divide, and equality tests.
  2. **Synthetic Strategy:** Utilizes "wishful thinking" synthesis, assuming constructor and selectors for rational numbers without defining their implementation details.
  3. **Pairs and Glue:** Introduces pairs as the glue for implementing concrete data abstraction and list-structured data, illustrating their use in constructing complex data structures.
  4. **Rational Number Representation:** Represents rational numbers as pairs of integers (numerator and denominator) and implements operations using pairs as building blocks. Also addresses reducing rational numbers to lowest terms.
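
  A sketch of the constructor, selectors, and one operation described above (plain JavaScript; `pair`, `head`, and `tail` are plain-array stand-ins for the text's pair primitives):
  ```javascript
  function pair(x, y) { return [x, y]; }
  function head(p) { return p[0]; }
  function tail(p) { return p[1]; }
  function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

  // constructor reduces to lowest terms; selectors expose the two parts
  function make_rat(n, d) {
      const g = gcd(n, d);
      return pair(n / g, d / g);
  }
  function numer(x) { return head(x); }
  function denom(x) { return tail(x); }
  function add_rat(x, y) {
      return make_rat(numer(x) * denom(y) + numer(y) * denom(x),
                      denom(x) * denom(y));
  }
  ```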
  """

  @summary_2_1_2 """
  2.1.2 Abstraction Barriers
  1. **Abstraction Barriers:** Discusses the concept of abstraction barriers, separating program levels using interfaces for data manipulation.
  2. **Advantages of Data Abstraction:** Simplifies program maintenance and modification by confining data structure representation changes to a few modules.
  3. **Flexibility in Implementation:** Illustrates the flexibility of choosing when to compute certain values, such as gcd, based on use patterns without modifying higher-level functions.
  4. **Exercise Examples:** Presents exercises on representing line segments and rectangles, highlighting the application of abstraction barriers and flexibility in design.
  """

  @summary_2_1_3 """
  2.1.3 What Is Meant by Data?
  1. **Defining Data:** Discusses the concept of data, emphasizing the need for specific conditions that selectors and constructors must fulfill.
  2. **Data as Collections of Functions:** Demonstrates the functional representation of pairs, illustrating that functions can serve as data structures fulfilling necessary conditions.
  3. **Functional Pairs Implementation:** Presents an alternative functional representation of pairs and verifies its correctness in terms of head and tail functions.
  4. **Church Numerals:** Introduces Church numerals, representing numbers through functions, and provides exercises to define one, two, and addition in this system.
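
  A sketch of a purely functional pair, in the spirit of the representation described above (plain JavaScript):
  ```javascript
  // the pair is represented by a function that remembers x and y
  function pair(x, y) {
      return m => (m === 0 ? x : y);
  }
  function head(p) { return p(0); }
  function tail(p) { return p(1); }

  head(pair(1, 2)); // 1
  tail(pair(1, 2)); // 2
  ```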
  """

  @summary_2_1_4 """
  2.1.4 Extended Exercise: Interval Arithmetic
  1. **Interval Arithmetic Concept:** Alyssa P. Hacker is designing a system for interval arithmetic to handle inexact quantities with known precision.
  2. **Interval Operations:** Alyssa defines operations like addition, multiplication, and division for intervals based on their lower and upper bounds.
  3. **Interval Constructors and Selectors:** The text introduces an interval constructor and selectors, and there are exercises to complete the implementation and explore related concepts.
  4. **User Issues:** The user, Lem E. Tweakit, encounters discrepancies in computing parallel resistors using different algebraic expressions in Alyssa's system.
  """

  @summary_2_2 """
  2.2 Hierarchical Data and the Closure Property
  1. **Pair Representation:** Pairs, represented using box-and-pointer notation, serve as a primitive "glue" to create compound data objects.
  2. **Universal Building Block:** Pairs, capable of combining numbers and other pairs, act as a universal building block for constructing diverse data structures.
  3. **Closure Property:** The closure property of pairs enables the creation of hierarchical structures, facilitating the combination of elements with the same operation.
  4. **Importance in Programming:** Closure is crucial in programming, allowing the construction of complex structures made up of parts, leading to powerful combinations.
  """

  @summary_2_2_1 """
  2.2.1 Representing Sequences
  1. **Sequence Representation:** Pairs are used to represent sequences, visualized as chains of pairs, forming a list structure in box-and-pointer notation.
  2. **List Operations:** Lists, constructed using pairs, support operations like head and tail for element extraction, length for counting, and append for combining.
  3. **Mapping with Higher-Order Function:** The higher-order function map abstracts list transformations, allowing the application of a function to each element, enhancing abstraction in list processing.
  4. **For-Each Operation:** The for_each function applies a given function to each element in a list, useful for actions like printing, with the option to return an arbitrary value.
  """

  @summary_2_2_2 """
  2.2.2 Hierarchical Structures
  1. **Hierarchical Sequences:** Sequences of sequences are represented as hierarchical structures, extending the list structure to form trees.
  2. **Tree Operations:** Recursion is used for tree operations, such as counting leaves and length, demonstrating natural tree processing with recursive functions.
  3. **Mobile Representation:** Binary mobiles, consisting of branches and weights, are represented using compound data structures, with operations to check balance and calculate total weight.
  4. **Mapping Over Trees:** Operations like scale_tree demonstrate mapping over trees, combining sequence operations and recursion for efficient tree manipulation.
  """

  @summary_2_2_3 """
  2.2.3 Sequences as Conventional Interfaces
  1. **Sequence Operations:**
  - Use signals flowing through stages to design programs, enhancing conceptual clarity.
  - Represent signals as lists, enabling modular program design with standard components.
  2. **Operations on Sequences:**
  - Implement mapping, filtering, and accumulation operations for sequence processing.
  - Examples: map, filter, accumulate functions for various computations, providing modularity.
  3. **Signal-Flow Structure:**
  - Organize programs to manifest signal-flow structure for clarity.
  - Utilize sequence operations like map, filter, and accumulate to express program designs.
  4. **Exercises and Solutions:**
  - Includes exercises involving list-manipulation operations and matrix operations.
  - Demonstrates nested mappings for problem-solving, like permutations and the eight-queens puzzle.
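
  A small sketch of the signal-flow style described above (plain JavaScript; built-in array methods stand in for the text's list versions of map, filter, and accumulate):
  ```javascript
  function is_odd(n) { return n % 2 === 1; }
  function square(x) { return x * x; }
  // enumerate -> filter -> map -> accumulate, as one pipeline
  function sum_odd_squares(xs) {
      return xs.filter(is_odd)
               .map(square)
               .reduce((acc, x) => acc + x, 0);
  }
  sum_odd_squares([1, 2, 3, 4, 5]); // 1 + 9 + 25 = 35
  ```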
  """

  @summary_2_2_4 """
  2.2.4 Example: A Picture Language
  1. **Picture Language Overview:**
  - Utilizes a simple language for drawing pictures, showcasing data abstraction, closure, and higher-order functions.
  - Painters, representing images, draw within designated frames, enabling easy experimentation with patterns.
  - Operations like flip, rotate, and squash transform painters, while combinations like beside and below create compound painters.
  2. **Painter Operations:**
  - `transform_painter` is a key operation, transforming painters based on specified frame points.
  - Operations like flip_vert, rotate90, and squash_inwards leverage `transform_painter` to achieve specific effects.
  - `beside` and `below` combine painters, each transformed to draw in specific regions of the frame.
  3. **Stratified Design Principles:**
  - Embraces stratified design, structuring complexity through levels and languages.
  - Primitives like primitive painters are combined at lower levels, forming components for higher-level operations.
  - Enables robust design, allowing changes at different levels with minimal impact.
  4. **Examples and Exercises:**
  - Illustrates examples like square_limit, flipped_pairs, and square_of_four.
  - Exercises involve modifying patterns, defining new transformations, and demonstrating the versatility of the picture language.
  """

  @summary_2_3 """
  2.3 Symbolic Data
  1. **Compound Data Objects:**
  - Constructed from numbers in previous sections.
  - Introduction of working with strings as data.
  2. **Representation Extension:**
  - Enhances language capabilities.
  - Adds versatility to data representation.
  """

  @summary_2_3_1 """
  2.3.1 Strings
  1. **String Usage:**
  - Strings used for messages.
  - Compound data with strings in lists.
  2. **String Representation:**
  - Strings in double quotes.
  - Distinction from names in code.
  3. **Comparison Operations:**
  - Introduction of === and !== for strings.
  - Example function using ===: `member(item, x)`.
  4. **Exercises:**
  - Evaluation exercises with lists and strings.
  - Implementation exercise: `equal` function.
  """

  @summary_2_3_2 """
  2.3.2 Example: Symbolic Differentiation
  1. **Symbolic Differentiation:**
  - Purpose: Deriving algebraic expressions symbolically.
  - Historical Significance: Influential in Lisp development and symbolic mathematical systems.
  2. **Differentiation Algorithm:**
  - Abstract algorithm for sums, products, and variables.
  - Recursive reduction rules for symbolic expressions.
  3. **Expression Representation:**
  - Use of prefix notation for mathematical structure.
  - Variables represented as strings; sums and products as lists.
  4. **Algorithm Implementation:**
  - `deriv` function for symbolic differentiation.
  - Examples and the need for expression simplification.
  """

  @summary_2_3_3 """
  2.3.3 Example: Representing Sets
  1. **Set Representation:**
  - Informal definition: a collection of distinct objects.
  - Defined using data abstraction with operations: union_set, intersection_set, is_element_of_set, adjoin_set.
  - Various representations: unordered lists, ordered lists, binary trees.
  2. **Sets as Unordered Lists:**
  - Represented as a list with no duplicate elements.
  - Operations: is_element_of_set, adjoin_set, intersection_set.
  - Efficiency concerns: is_element_of_set may require Θ(n) steps.
  3. **Sets as Ordered Lists:**
  - Elements listed in increasing order for efficiency.
  - Operations like is_element_of_set benefit from ordered representation.
  - intersection_set exhibits significant speedup (Θ(n) instead of Θ(n²)).
  4. **Sets as Binary Trees:**
  - Further speedup using a tree structure.
  - Each node holds an entry and links to left and right subtrees.
  - Operations: is_element_of_set, adjoin_set with Θ(log n) complexity.
  - Balancing strategies needed to maintain efficiency.
  Note: Code snippets and exercises provide implementation details for each representation.
  """

  @summary_2_3_4 """
  2.3.4 Example: Huffman Encoding Trees
  1. **Huffman Encoding Basics:**
  - Describes the concept of encoding data using sequences of 0s and 1s (bits).
  - Introduces fixed-length and variable-length codes for symbols.
  - Illustrates an example of a fixed-length code and a variable-length code for a set of symbols.
  2. **Variable-Length Codes:**
  - Explains the concept of variable-length codes, where different symbols may have different bit lengths.
  - Highlights the efficiency of variable-length codes in comparison to fixed-length codes.
  - Introduces the idea of prefix codes, ensuring no code is a prefix of another.
  3. **Huffman Encoding Method:**
  - Presents the Huffman encoding method, a variable-length prefix code.
  - Describes how Huffman codes are represented as binary trees.
  - Explains the construction of Huffman trees based on symbol frequencies.
  4. **Decoding with Huffman Trees:**
  - Outlines the process of decoding a bit sequence using a Huffman tree.
  - Describes the algorithm to traverse the tree and decode symbols.
  - Provides functions for constructing, representing, and decoding Huffman trees in JavaScript.
  """

  @summary_2_4 """
  2.4 Multiple Representations for Abstract Data
  1. **Data Abstraction:**
  - Introduces data abstraction as a methodology for structuring systems.
  - Explains the use of abstraction barriers to separate design from implementation for rational numbers.
  2. **Need for Multiple Representations:**
  - Recognizes the limitation of a single underlying representation for data objects.
  - Discusses the importance of accommodating multiple representations for flexibility.
  3. **Generic Functions:**
  - Highlights the concept of generic functions that operate on data with multiple representations.
  - Introduces type tags and data-directed style for building generic functions.
  4. **Complex-Number Example:**
  - Illustrates the implementation of complex numbers with both rectangular and polar representations.
  - Emphasizes the role of abstraction barriers in managing different design choices.
  """

  @summary_2_4_1 """
  2.4.1 Representations for Complex Numbers
  1. **Complex Number Representations:**
  - Discusses two representations for complex numbers: rectangular form (real and imaginary parts) and polar form (magnitude and angle).
  - Emphasizes the need for generic operations that work with both representations.
  2. **Operations on Complex Numbers:**
  - Describes arithmetic operations on complex numbers, highlighting differences in representation for addition, subtraction, multiplication, and division.
  - Illustrates the use of selectors and constructors for implementing these operations.
  3. **Programming Choices:**
  - Introduces two programmers, Ben and Alyssa, independently choosing different representations for complex numbers.
  - Presents the implementations of selectors and constructors for both rectangular and polar forms.
  4. **Data Abstraction Discipline:**
  - Ensures that the same generic operations work seamlessly with different representations.
  - Acknowledges the example's simplification for clarity, noting the preference for rectangular form in practical computational systems.
  """

  @summary_2_4_2 """
  2.4.2 Tagged data
  1. **Principle of Least Commitment:**
  - Data abstraction follows the principle of least commitment, allowing flexibility in choosing representations at the last possible moment.
  - Maintains maximum design flexibility by deferring the choice of concrete representation for data objects.
  2. **Tagged Data Implementation:**
  - Introduces type tags to distinguish between different representations of complex numbers (rectangular or polar).
  - Utilizes functions like `attach_tag`, `type_tag`, and `contents` to manage type information.
  3. **Coexistence of Representations:**
  - Shows how Ben and Alyssa can modify their representations to coexist in the same system using type tags.
  - Ensures that functions do not conflict by appending "rectangular" or "polar" to their names.
  4. **Generic Complex-Arithmetic System:**
  - Implements generic complex-number arithmetic operations that work seamlessly with both rectangular and polar representations.
  - The resulting system is decomposed into three parts: complex-number-arithmetic operations, polar implementation, and rectangular implementation.
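
  A minimal sketch of the tag helpers and of dispatching on a tag, in the spirit of the functions named above (plain JavaScript; arrays stand in for pairs):
  ```javascript
  function attach_tag(tag, c) { return [tag, c]; }
  function type_tag(datum) { return datum[0]; }
  function contents(datum) { return datum[1]; }

  // a generic selector that checks the tag to pick the representation
  function real_part(z) {
      return type_tag(z) === "rectangular"
             ? contents(z)[0]                             // stored directly
             : contents(z)[0] * Math.cos(contents(z)[1]); // polar: r * cos(angle)
  }
  real_part(attach_tag("rectangular", [3, 4])); // 3
  ```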
  """

  @summary_2_4_3 """
  2.4.3 Data-Directed Programming and Additivity
  1. **Dispatching on Type:**
  - Dispatching on type involves checking the type of a datum and calling an appropriate function.
  - Provides modularity but has weaknesses, such as the need for generic functions to know about all representations.
  2. **Data-Directed Programming:**
  - Data-directed programming modularizes system design further.
  - Uses an operation-and-type table, allowing easy addition of new representations without modifying existing functions.
  3. **Implementation with Tables:**
  - Uses functions like `put` and `get` for manipulating the operation-and-type table.
  - Ben and Alyssa implement their packages by adding entries to the table, facilitating easy integration.
  4. **Message Passing:**
  - Message passing represents data objects as functions that dispatch on operation names.
  - Provides an alternative to data-directed programming, where the data object receives operation names as "messages."
  """

  @summary_2_5 """
  2.5 Systems with Generic Operations
  1. **Generic Operations Design:**
  - Systems designed to represent data objects in multiple ways through generic interface functions.
  - These generic functions link various representations, providing flexibility and modularity.
  2. **Data-Directed Techniques:**
  - Extend the idea of generic operations to define operations generic over different argument types.
  - Utilizes data-directed techniques for constructing a unified arithmetic package from various existing arithmetic packages.
  3. **Unified Arithmetic System:**
  - Figure 2.23 illustrates the structure of a generic arithmetic system.
  - Abstraction barriers allow uniform access to ordinary, rational, and complex arithmetic packages through a single generic interface.
  4. **Additive Structure:**
  - Individual arithmetic packages (ordinary, rational, complex) designed separately.
  - Additive structure allows combination to produce a comprehensive generic arithmetic system.
  """

  @summary_2_5_1 """
  2.5.1 Generic Arithmetic Operations
  1. **Generic Arithmetic Operations:**
  - Designing generic arithmetic operations similar to complex-number operations.
  - Generic functions (add, sub, mul, div) dispatch to appropriate packages based on argument types.
  2. **Package for Ordinary Numbers:**
  - Install package for primitive (JavaScript) numbers tagged as "javascript_number."
  - Arithmetic operations defined using primitive functions.
  3. **Extension to Rational Numbers:**
  - Add package for rational arithmetic with internal functions from section 2.1.1.
  - Utilize additivity for seamless integration with the existing generic arithmetic system.
  4. **Complex Number Package:**
  - Implement a package for complex numbers using the tag "complex."
  - Use existing functions (add_complex, sub_complex) from rectangular and polar packages.
  5. **Two-Level Tag System:**
  - Complex numbers have an outer tag ("complex") directing to the complex package.
  - Inner tag ("rectangular" or "polar") further directs within the complex package.
  6. **Error Resolution:**
  - Resolve an error in magnitude(z) by defining complex selectors for "complex" numbers.
  - Add real_part, imag_part, magnitude, and angle functions to the complex package.
  7. **Internal Functions Simplification:**
  - Internal arithmetic functions in packages (add_rat, add_complex) can have the same names.
  - Naming simplification is possible once declarations are internal to different installation functions.
  """

  @summary_2_5_2 """
  2.5.2 Combining Data of Different Types
  1. **Cross-Type Operations:**
  - Consideration of operations crossing type boundaries, like adding a complex number to an ordinary number.
  - Current approach involves designing separate functions for each valid combination, which is cumbersome.
  2. **Coercion Technique:**
  - Introduction of coercion to handle operations between different types.
  - Coercion functions transform objects of one type into an equivalent object of another type.
  3. **Apply_Generic Modification:**
  - Modify the apply_generic function to include coercion.
  - Check if the operation is defined for the arguments' types; if not, attempt coercion.
  4. **Hierarchy of Types:**
  - Introduction of a hierarchical structure (tower) to simplify coercion.
  - Types arranged as subtypes and supertypes, enabling a systematic approach to adding new types.
  """

  @summary_2_5_3 """
  2.5.3 Example: Symbolic Algebra
  1. **Symbolic Algebra Overview:**
  - Symbolic algebra involves manipulating expressions with variables and operators.
  - Expressions are hierarchical structures, often viewed as trees of operators and operands.
  - Abstractions like linear combination, polynomial, and trigonometric function are common in symbolic algebra.
  2. **Polynomial Arithmetic:**
  - Polynomials are represented as a sum of terms, each comprising a coefficient and a power of an indeterminate.
  - Designing a system involves abstracting data using a "poly" data structure with addition and multiplication operations.
  - Generic operations are applied to manipulate terms and term lists for addition and multiplication of polynomials.
  3. **Data Abstraction and Generic Operations:**
  - Data abstraction principles, including type tags, are used for polynomial representation and manipulation.
  - Generic operations like add and multiply enable flexibility in handling various coefficient types.
  4. **Challenges and Extensions:**
  - Challenges include defining polynomials with different variables and addressing coercion issues.
  - Exercises involve extending the system for subtraction, handling dense and sparse polynomials, and implementing rational functions.
  5. **Hierarchies and GCD Computation:**
  - Symbolic algebra illustrates complex type hierarchies where polynomials may have coefficients as polynomials.
  - Greatest Common Divisor (GCD) computation is crucial for operations on rational functions but presents challenges.
  6. **Reducing Rational Functions:**
  - Rational functions are reduced to lowest terms using GCD computation and an integerizing factor.
  - The process involves multiplying by the GCD's leading coefficient's power and reducing coefficients to their greatest common divisor.
  7. **Implementation Exercises:**
  - Exercises cover pseudodivision, modifying GCD computation, and implementing a system for reducing rational functions to lowest terms.
  - The challenges include efficiently computing polynomial GCDs, a crucial aspect of algebraic-manipulation systems.
  """

  @summary_3 """
  3 Modularity, Objects, and State
  1. **Organizational Strategies:**
  - Programs designed for modeling physical systems can benefit from mirroring the system's structure.
  - Two main strategies: object-based (objects with changing behaviors) and stream-processing (focus on information flow).
  2. **Linguistic Challenges:**
  - Object-based approach deals with identity maintenance amid changes, moving away from the substitution model.
  - Stream-processing requires decoupling simulated time, using delayed evaluation for optimal exploitation.
  3. **Program Organization:**
  - Successful system organization allows easy addition of new features without strategic program changes.
  - Large program structure is influenced by the perception of the system being modeled.
  4. **Computational Models:**
  - Object-based models involve computational objects mirroring real-world objects.
  - Stream-processing involves viewing systems as information flows, decoupling simulated time for effective evaluation.
  """

  @summary_3_1 """
  3.1 Assignment and Local State
  1. **Object State:**
  - Objects in a system have states influenced by their history, crucial for behavior determination.
  - State variables, like a bank account's balance, capture enough information for current behavior.
  2. **Interconnected Objects:**
  - In systems, objects rarely act independently; interactions couple state variables, influencing each other.
  - Modular computational models mirror actual system objects, each with its local state variables.
  3. **Time-Dependent Behavior:**
  - Computational models must change over time to mirror evolving system states.
  - Assignment operations in programming languages are vital for updating state variables during program execution.
  """
795

796
  @summary_3_1_1 """
797
  3.1.1 Local State Variables
798
  1. **Time-Varying State:**
799
  - Illustrates time-varying state in computational objects using the example of withdrawing from a bank account.
800
  - Function `withdraw` exhibits changing behavior with each call, influenced by the account's history.
801
  2. **Variable Declarations and Assignment:**
802
  - Introduces variable declarations (`let`) and assignment operations for mutable state, enabling dynamic changes.
803
  - Demonstrates the use of `balance` as a mutable variable, updating its value based on withdrawal operations.
804
  3. **Encapsulation and Local State:**
805
  - Addresses the issue of unrestricted access to `balance` by making it internal to `withdraw`.
806
  - `make_withdraw_balance_100` encapsulates `balance` within a local environment, enhancing modularity.
807
  4. **Creating Independent Objects:**
808
  - Shows the creation of independent objects using functions like `make_withdraw` and `make_account`.
809
  - Each object maintains its local state, demonstrating modularity and independence of objects.
  """

  @summary_3_1_2 """
  3.1.2 The Benefits of Introducing Assignment
  1. **Random Number Generation:**
  - Demonstrates the use of assignment in implementing a random number generator (`rand`) with time-varying state.
  - Utilizes `rand_update` function to generate sequences with desired statistical properties.
  2. **Monte Carlo Simulation:**
  - Applies the concept of local state to implement a Monte Carlo simulation for approximating π (see the sketch below).
  - Shows how assignment enhances modularity by encapsulating the random-number generator's state.
  3. **Modularity with Assignment:**
  - Compares the modular design of Monte Carlo simulation using `rand` with the non-modular version without local state.
  - Assignment encapsulates the state within `rand`, simplifying the expression of the Monte Carlo method.
  4. **Challenges and Complexity:**
  - Acknowledges the conceptual challenges introduced by assignment in programming languages.
  - Highlights the complexity of handling time-varying local state and the trade-offs in achieving modularity.
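  A rough JavaScript sketch of the Monte Carlo idea, using `Math.random` in place of the text's `rand`/`rand_update` machinery and a loop-based `monte_carlo`:
  ```javascript
  // Estimate pi from the fact that two random integers have GCD 1
  // with probability 6 / pi^2 (the Cesaro experiment used in the text).
  function gcd(a, b) {
      return b === 0 ? a : gcd(b, a % b);
  }
  function random_int() {
      return Math.floor(Math.random() * 1000000) + 1;
  }
  function monte_carlo(trials, experiment) {
      let passed = 0;
      for (let i = 0; i < trials; i = i + 1) {
          if (experiment()) {
              passed = passed + 1;
          }
      }
      return passed / trials;
  }
  const pi_estimate =
      Math.sqrt(6 / monte_carlo(100000, () => gcd(random_int(), random_int()) === 1));
  ```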
  """

  @summary_3_1_3 """
  3.1.3 The Costs of Introducing Assignment
  1. **Substitution Model Challenge:**
  - Assignment disrupts the substitution model, hindering the interpretation of functions and altering the predictability of outcomes (see the sketch below).
  2. **Functional vs. Imperative Programming:**
  - Describes functional programming as assignment-free, ensuring consistency in results with identical inputs.
  - Imperative programming, with assignment, complicates reasoning and introduces bugs due to order-sensitive assignments.
  3. **Identity and Change:**
  - Discusses the profound issue of identity and change in computational models when assignments are introduced.
  - Examines challenges in determining "sameness" and "change" with evolving objects and the breakdown of referential transparency.
  4. **Pitfalls of Imperative Programming:**
  - Highlights potential traps in imperative programming, emphasizing the importance of careful consideration of assignment order.
  - Notes the increased complexity in concurrent execution scenarios and sets the stage for exploring computational models with assignments.
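  A short sketch of the contrast, assuming the text's `make_decrementer`/`make_simplified_withdraw` comparison:
  ```javascript
  // A pure function: identical calls always return the same value,
  // so the substitution model still applies.
  const make_decrementer = balance => amount => balance - amount;

  // A stateful variant: identical calls return different values,
  // so an expression can no longer be replaced by its value.
  function make_simplified_withdraw(balance) {
      return amount => {
          balance = balance - amount;
          return balance;
      };
  }

  const D = make_decrementer(25);
  D(5);   // 20
  D(5);   // 20 -- referentially transparent

  const W = make_simplified_withdraw(25);
  W(5);   // 20
  W(5);   // 15 -- the "same" expression now yields a different value
  ```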
  """

  @summary_3_2 """
  3.2 The Environment Model of Evaluation
  1. **Assignment and Function Application:**
  - Substitution model insufficient with assignment.
  - Introduces environment model: frames, bindings, pointers, and the concept of "place."
  2. **Environment Structure:**
  - Environments are sequences of frames, each with bindings associating names with values.
  - Illustrates a simple environment structure (Figure 3.1) with frames, pointers, and shadowing.
  3. **Value Determination:**
  - Value of a name determined by the first frame in the environment with a binding for that name.
  - Shadowing explained: inner frame bindings take precedence, influencing value determination.
  4. **Contextual Meaning:**
  - Expressions acquire meaning in an environment.
  - Global environment introduced, consisting of a single frame with primitive function names.
  - Programs extend global environment with a program frame for top-level declarations.
  """

  @summary_3_2_1 """
  3.2.1 The Rules for Evaluation
  1. **Function Application in the Environment Model:**
  - Environment model replaces substitution model for function application.
  - Functions are pairs of code and an environment pointer, created by evaluating lambda expressions.
  2. **Function Creation:**
  - Functions created only by evaluating lambda expressions.
  - Function code from lambda expression text, environment from evaluation environment.
  3. **Applying Functions:**
  - Create a new environment, bind parameters to argument values.
  - Enclosing environment of the new frame is the specified function environment.
  - Evaluate the function body in the new environment.
  4. **Assignment Behavior:**
  - Expression "name = value" in an environment locates the binding for the name.
  - If variable binding, change to reflect the new value; if constant, signal an error.
  - If the name is unbound, signal a "variable undeclared" error.

  Evaluation rules, while more complex than substitution, provide an accurate description of interpreter behavior.
  """

  @summary_3_2_2 """
  3.2.2 Applying Simple Functions
  1. **Environment Model for Function Calls:**
  - Illustrates function application using the environment model.
  - Analyzes function calls for `f(5)` using the functions `square`, `sum_of_squares`, and `f` (written out in the sketch below).
  2. **Environment Structures:**
  - Functions create new environments for each call.
  - Different frames keep local variables separate; each call to `square` generates a new environment.
  3. **Evaluation Process:**
  - Evaluates subexpressions of return expressions.
  - Calls to functions create new environments.
  - Focus on environment structures, details of value passing discussed later.
  4. **Exercise 3.9:**
  - Analyze environment structures for recursive and iterative factorial functions.
  - Environment model won't clarify space efficiency claims; tail recursion discussed later.
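  The functions analyzed here, written out so the environment structures have concrete code to refer to:
  ```javascript
  function square(x) {
      return x * x;
  }
  function sum_of_squares(x, y) {
      return square(x) + square(y);
  }
  function f(a) {
      return sum_of_squares(a + 1, a * 2);
  }

  // Evaluating f(5) creates a frame binding a = 5, then a frame for
  // sum_of_squares binding x = 6 and y = 10, and a separate fresh frame
  // for each of the two calls to square.
  f(5);   // 136
  ```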
  """

  @summary_3_2_3 """
  3.2.3 Frames as the Repository of Local State
  1. **Object with Local State:**
  - Illustrates using functions and assignment to represent objects with local state.
  - Example: "withdrawal processor" function, `make_withdraw(balance)`, is evaluated.
  2. **Environment Structures:**
  - Function application creates frames with local state.
  - Examines environment structures for `make_withdraw(100)` and subsequent call `W1(50)`.
  3. **Local State Handling:**
  - Frame enclosing environment holds local state (e.g., balance).
  - Different objects (e.g., `W1` and `W2`) have independent local state, preventing interference.
  4. **Alternate Version - Exercise 3.10:**
  - Analyzes an alternate version of `make_withdraw` using an immediately invoked lambda expression.
  - Compares environment structures for objects created with both versions.
  """

  @summary_3_2_4 """
  3.2.4 Internal Declarations
  1. **Block Scoping:**
  - Examines evaluation of blocks (e.g., function bodies) with declarations, introducing block scope.
  - Each block creates a new scope for declared names, preventing interference with external names.
  2. **Example: Square Root Function:**
  - Demonstrates internal declarations within the `sqrt` function for square roots (see the sketch below).
  - Uses the environment model to explain the behavior of internal functions.
  3. **Properties of Internal Declarations:**
  - Names of local functions don't interfere with external names.
  - Internal functions can access enclosing function's arguments due to nested environments.
  4. **Exercise 3.11: Bank Account Function:**
  - Analyzes the environment structure for a bank account function with internal declarations.
  - Explores how local states for multiple accounts are kept distinct in the environment model.
  5. **Mutual Recursion:**
  - Explains how mutual recursion works with the environment model.
  - Illustrates with a recursive example checking if a nonnegative integer is even or odd.
  6. **Top-Level Declarations:**
  - Revisits top-level name declarations.
  - Explains that the whole program is treated as an implicit block evaluated in the global environment.
  - Describes how locally declared names are handled within blocks.
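  A sketch of `sqrt` with internal declarations (using `Math.abs` in place of the text's `abs`, and a fixed tolerance):
  ```javascript
  function sqrt(x) {
      // The helper functions are local to sqrt: they can refer to x directly
      // and their names do not interfere with declarations outside sqrt.
      function is_good_enough(guess) {
          return Math.abs(guess * guess - x) < 0.001;
      }
      function improve(guess) {
          return (guess + x / guess) / 2;
      }
      function sqrt_iter(guess) {
          return is_good_enough(guess) ? guess : sqrt_iter(improve(guess));
      }
      return sqrt_iter(1);
  }

  sqrt(2);   // roughly 1.4142
  ```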
  """

  @summary_3_3 """
  3.3 Modeling with Mutable Data
  1. **Introduction to Mutable Data:**
  - Addresses the need to model systems with changing states, requiring modifications to compound data objects.
  2. **Data Abstractions Extension:**
  - Extends data abstractions with mutators, alongside constructors and selectors.
  - Demonstrates the necessity of modifying compound data objects for modeling dynamic systems.
  3. **Example: Banking System:**
  - Illustrates the concept of mutators using a banking system example.
  - Describes an operation `set_balance(account, new_value)` to change the balance of a designated account.
  4. **Pairs as Building Blocks:**
  - Enhances pairs with basic mutators, expanding their representational power beyond sequences and trees.
  - Introduces the concept of mutable data objects and their importance in modeling complex systems.
  """

  @summary_3_3_1 """
  3.3.1 Mutable List Structure
  1. **Limitations of Basic Operations:**
  - Pair operations (pair, head, tail) and list operations (append, list) cannot modify list structures.
  - Introduces the new mutators `set_head` and `set_tail` for modifying pairs in list structures (see the sketch below).
  2. **Set_Head Operation:**
  - Modifies the head pointer of a pair, demonstrated with an example.
  - Illustrates the impact on the structure, showing detached pairs and modified list.
  3. **Set_Tail Operation:**
  - Similar to set_head but replaces the tail pointer of a pair.
  - Demonstrates the effect on the list structure, highlighting changes in pointers.
  4. **Pair Construction vs. Mutators:**
  - Describes the difference between constructing new list structures with pair and modifying existing ones with mutators.
  - Presents a function pair implementation using mutators set_head and set_tail.
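  A minimal sketch of the mutators; `pair`, `head` and `tail` are primitives in the text's language, so simple stand-ins are defined here:
  ```javascript
  const pair = (h, t) => ({ head: h, tail: t });
  const head = p => p.head;
  const tail = p => p.tail;
  const set_head = (p, v) => { p.head = v; };
  const set_tail = (p, v) => { p.tail = v; };

  const x = pair(pair("a", "b"), pair("c", "d"));
  const y = pair("e", "f");

  set_head(x, y);           // x now begins with the ("e", "f") pair; ("a", "b") is detached
  set_tail(head(x), "g");   // also changes y, since head(x) and y are the same pair
  tail(y);                  // "g" -- sharing means the change is visible through y as well
  ```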
  """

  @summary_3_3_2 """
  3.3.2 Representing Queues
  1. **Queue Definition:**
  - Queues are sequences with insertions at the rear and deletions at the front, known as FIFO (first in, first out) buffers.
  - Operations: make_queue, is_empty_queue, front_queue, insert_queue, delete_queue.
  2. **Efficient Queue Representation:**
  - Efficiently represent queues using pairs with front_ptr and rear_ptr, reducing insertion time from Θ(n) to Θ(1) (see the sketch below).
  - Queue is a pair (front_ptr, rear_ptr) where the front_ptr points to the first item, and rear_ptr points to the last item.
  3. **Queue Operations:**
  - Define operations using functions like front_ptr, rear_ptr, set_front_ptr, and set_rear_ptr.
  - Efficiently implement is_empty_queue, make_queue, front_queue, insert_queue, and delete_queue.
  4. **Implementation Insight:**
  - Overcoming inefficiencies of standard list representation for queues by maintaining pointers to both ends.
  - Explanation of how the modification enables constant-time insertions and deletions.

  **Note:** The text also includes exercises related to queue implementation and representation, involving debugging and alternative representations.
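  A sketch of the pointer-pair representation (not the text's full code; a plain object stands in for a pair):
  ```javascript
  // A queue is a pair of pointers into an ordinary list of items.
  const pair = (h, t) => ({ head: h, tail: t });

  const make_queue = () => pair(null, null);        // front_ptr and rear_ptr, both empty
  const front_ptr = q => q.head;
  const rear_ptr = q => q.tail;
  const set_front_ptr = (q, item) => { q.head = item; };
  const set_rear_ptr = (q, item) => { q.tail = item; };
  const is_empty_queue = q => front_ptr(q) === null;
  const front_queue = q => front_ptr(q).head;

  function insert_queue(q, item) {                  // constant time: no traversal needed
      const new_pair = pair(item, null);
      if (is_empty_queue(q)) {
          set_front_ptr(q, new_pair);
      } else {
          rear_ptr(q).tail = new_pair;              // splice onto the old last pair
      }
      set_rear_ptr(q, new_pair);
      return q;
  }

  function delete_queue(q) {                        // constant time: drop the front pair
      set_front_ptr(q, front_ptr(q).tail);
      return q;
  }

  const q = make_queue();
  insert_queue(q, "a");
  insert_queue(q, "b");
  front_queue(q);   // "a"
  delete_queue(q);
  front_queue(q);   // "b"
  ```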
  """

  @summary_3_3_3 """
  3.3.3 Representing Tables
  1. **One-Dimensional Table:**
  - Table represented as a list of records (key, value pairs) with a special "backbone" pair.
  - Lookup function retrieves values by key, insert function adds or updates key-value pairs.
  2. **Two-Dimensional Table:**
  - Extends one-dimensional table concept to handle two keys, creating subtables.
  - Lookup and insert functions adapted for two keys, providing efficient indexing.
  3. **Local Tables and Procedural Representation:**
  - Procedural representation using a table object with internal state.
  - Functions (lookup, insert) encapsulated within the object for multiple table access.
  4. **Memoization with Tables:**
  - Memoization technique enhances function performance by storing previously computed values.
  - Example: memoized Fibonacci function using a local table to store computed results (see the sketch below).

  **Note:** The text also includes exercises related to table construction, key testing, generalizing tables, binary tree organization, and memoization.
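  A sketch of memoization in the style described above, with a JavaScript `Map` standing in for the local table:
  ```javascript
  function memoize(f) {
      const table = new Map();           // local table, private to the returned function
      return x => {
          if (!table.has(x)) {
              table.set(x, f(x));        // store the computed value under key x
          }
          return table.get(x);
      };
  }

  const memo_fib = memoize(n => n === 0 ? 0
                              : n === 1 ? 1
                              : memo_fib(n - 1) + memo_fib(n - 2));

  memo_fib(60);   // fast: each value from 0 to 60 is computed only once
  ```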
  """

  @summary_3_3_4 """
  3.3.4 A Simulator for Digital Circuits
  1. **Digital Circuit Simulation:**
  - Digital systems engineers use computer simulation to design and analyze complex circuits.
  - Event-driven simulation triggers actions based on events, creating a sequence of interconnected events.
  2. **Computational Model of Circuits:**
  - Circuits composed of wires and primitive function boxes (and-gate, or-gate, inverter).
  - Signals propagate with delays, affecting circuit behavior.
  3. **Simulation Program Design:**
  - Program constructs computational objects for wires and function boxes.
  - Simulation driven by an agenda, scheduling actions at specific times.
  4. **Circuit Construction with Functions:**
  - Functions (e.g., `half_adder`, `full_adder`) defined to wire primitive function boxes into complex circuits.
  - Memoization enhances function performance using local tables.

  **Note:** The text delves into detailed examples and exercises for building a digital circuit simulator, including functions for wires, primitive functions, and agenda-based simulation.
  """

  @summary_3_3_5 """
  3.3.5 Propagation of Constraints
  1. **Introduction to Constraint Modeling:**
  - Traditional programs follow one-directional computations, while systems modeling often involves relations among quantities.
  - Constraints express relationships between quantities; for example, in a mechanical model, the deflection of a rod relates to force, length, area, and modulus.
  2. **Constraint-Based Language Design:**
  - Language primitives include constraints like adder(a, b, c), multiplier(x, y, z), and constant(3.14, x).
  - Constraint networks combine constraints using connectors to express complex relations, allowing bidirectional computation.
  3. **Constraint System Implementation:**
  - Procedural objects represent connectors with local state for value, informant, and constraints.
  - Functions like set_value, forget_value, and connect enable connectors to interact with constraints.
  4. **Example: Celsius-Fahrenheit Converter:**
  - The language is applied to a converter using connectors C and F, demonstrating bidirectional computation.
  - Probes monitor and report changes in connector values, showcasing the flexibility of constraint-based systems.

  **Key Concepts:**
  - Constraint propagation involves notifying constraints of value changes and handling bidirectional computations.
  - Constraint networks utilize connectors and primitives to model complex relationships in a constraint-based language.
  """

  @summary_3_4 """
  3.4 Concurrency: Time Is of the Essence
  1. **Introduction to Time and State:**
  - Computational objects with local state sacrifice referential transparency, introducing time complexities.
  - Assignment introduces time into models, challenging the substitution model and demanding an environment model.
  2. **Modeling Time with State:**
  - Stateful computations lead to time-dependent results, as illustrated by bank account withdrawals yielding different balances.
  - Assignment execution delineates moments in time, and expressions' values depend on timing.
  3. **Concurrent Computation:**
  - Modeling systems concurrently as collections of threads (sequential processes) reflects real-world objects' simultaneous actions.
  - Concurrent programming enhances modularity and can provide speed advantages by leveraging multiple processors.
  4. **Challenges of Concurrency and Assignment:**
  - Assignment complexities intensify in concurrent execution, demanding a nuanced understanding of time.
  - Concurrent computation introduces additional time-related complexities, emphasizing shared memory threads.

  **Key Concepts:**
  - Local state introduces time considerations, altering the deterministic nature of timeless expressions.
  - Concurrent computation enhances modularity and potential speed advantages but adds complexity to understanding time.
  """

  @summary_3_4_1 """
  3.4.1 The Nature of Time in Concurrent Systems
  1. **Time Ordering in Events:**
  - Time orders events as preceding, simultaneous, or following, illustrated with bank account balance changes.
  - Sequential assignments model changing balances, but complex scenarios emerge in distributed systems.
  2. **Challenges in Concurrent Systems:**
  - Concurrent threads sharing state variables pose indeterminacy in event order, leading to potential bugs.
  - Example: Shared variable assignments in withdrawals, when interleaved, may violate system integrity.
  3. **Concurrency and Shared State:**
  - Concurrent programs face complexities due to shared state variables, especially in managing simultaneous changes.
  - Concurrency restrictions may be needed to ensure correct behavior, posing challenges in designing efficient and effective systems.
  4. **Requirements for Correct Execution:**
  - Stringent concurrency restrictions may hinder efficiency, prompting exploration of less strict requirements.
  - Correct execution may demand producing the same result as a sequential run, allowing some flexibility in outcomes.
  """

  @summary_3_4_2 """
  3.4.2 Mechanisms for Controlling Concurrency
  1. **Concurrency Challenges:**
  - Concurrency challenges arise from interleaving events in different threads.
  - Managing the order of events becomes complex with increasing threads and events.
  2. **Serialization for Shared State:**
  - Serialization ensures that certain functions cannot be executed concurrently.
  - Mechanisms like serializers create sets of functions, allowing only one execution at a time.
  3. **Serializer Implementation:**
  - Serializer creates serialized functions, controlling access to shared variables.
  - Example: make_serializer function uses a mutex for atomic test_and_set operations (see the sketch below).
  4. **Deadlock and Deadlock Avoidance:**
  - Deadlock occurs when threads are stuck waiting for each other in a circular dependency.
  - Deadlock avoidance involves numbering shared resources and acquiring them in order to prevent circular dependencies.
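  JavaScript has no shared-memory threads in ordinary code, so the sketch below serializes async functions by chaining promises; it mirrors the intent of `make_serializer` rather than the text's mutex/`test_and_set` implementation:
  ```javascript
  // Functions obtained from the same serializer never run their bodies
  // concurrently: each call waits for the previous one to settle.
  function make_serializer() {
      let last = Promise.resolve();          // plays the role of the mutex
      return f => (...args) => {
          const run = last.then(() => f(...args));
          last = run.catch(() => undefined); // release even if f throws
          return run;
      };
  }

  // Usage: protect a shared balance against interleaved async withdrawals.
  let balance = 100;
  const s = make_serializer();
  const withdraw = s(async amount => {
      const b = balance;                          // read
      await new Promise(r => setTimeout(r, 10));  // simulated delay
      balance = b - amount;                       // write based on the earlier read
  });
  Promise.all([withdraw(10), withdraw(20)]).then(() => console.log(balance));  // 70
  ```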
  """

  @summary_3_5 """
  3.5 Streams
  1. **Modeling State with Streams:**
  - Streams serve as an alternative to assignments for modeling state in computational objects.
  - Stream processing represents time histories of systems without mutable data, using delayed evaluation.
  2. **Sequence Representation:**
  - Streams are viewed as sequences, offering a way to model time-varying behavior without direct assignment.
  - Time functions are represented as (potentially infinite) sequences, introducing the concept of world lines.
  3. **Delayed Evaluation Technique:**
  - Implementation of streams involves delayed evaluation to handle large or infinite sequences effectively.
  - This technique allows representing extensive sequences without fully realizing them, addressing practical challenges.
  4. **Challenges and Implications:**
  - Stream processing avoids assignment-related drawbacks but introduces its own difficulties.
  - The choice between modeling techniques for modular and maintainable systems remains an open question.
  """

  @summary_3_5_1 """
  **3.5.1 Streams Are Delayed Lists**
  1. **Efficiency Challenges with Lists:**
  - Representing sequences as lists introduces inefficiencies in time and space for certain computations.
  - Example: Summing prime numbers in an interval, comparing iterative and sequence operation approaches.
  2. **Stream Introduction and Structure:**
  - Streams offer a solution to list inefficiencies by allowing demand-driven programming.
  - Stream pairs, consisting of head and a promise (delayed evaluation), represent elements and future construction (see the sketch below).
  3. **Stream Operations and Functions:**
  - Stream analogs of list operations are defined, including stream_ref, stream_map, and stream_for_each.
  - Delayed evaluation in streams allows elegant formulations, separating apparent structure from actual computation.
  4. **Memoization for Efficiency:**
  - Memoization optimizes stream implementation by avoiding repeated evaluation of delayed objects.
  - Function memo is introduced to memoize stream construction, enhancing efficiency in recursive programs.
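  A sketch of delayed-tail streams with memoization; array-based pairs stand in for the text's primitives:
  ```javascript
  // A stream is a pair whose tail is a nullary function ("promise") that is
  // forced only on demand; memo caches the forced result.
  const pair = (h, t) => [h, t];
  const head = s => s[0];
  const stream_tail = s => s[1]();              // force the delayed tail

  function memo(fun) {
      let run = false;
      let result;
      return () => {
          if (!run) {
              result = fun();
              run = true;
          }
          return result;
      };
  }

  function stream_enumerate_interval(low, high) {
      return low > high
             ? null
             : pair(low, memo(() => stream_enumerate_interval(low + 1, high)));
  }

  function stream_ref(s, n) {
      return n === 0 ? head(s) : stream_ref(stream_tail(s), n - 1);
  }

  stream_ref(stream_enumerate_interval(1, 1000000), 3);   // 4 -- only four pairs are built
  ```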
  """

  @summary_3_5_2 """
  **3.5.2 Infinite Streams**
  1. **Representation of Infinite Streams:**
  - Infinite streams efficiently represent sequences, even infinite ones.
  - Streams allow for delayed evaluation, computing only as much as needed.
  2. **Examples of Infinite Streams:**
  - Positive integers, non-divisible by 7, Fibonacci, and prime numbers as infinite streams.
  - Streams created using generating functions and filtering mechanisms.
  3. **Implicit Definition of Streams:**
  - Streams can be implicitly defined, taking advantage of delayed evaluation (see the sketch below).
  - Operations like `add_streams` and `scale_stream` manipulate and generate streams.
  4. **Signal Processing with Sieve:**
  - The prime sieve is presented as a signal-processing system.
  - Sieving process efficiently generates an infinite stream of prime numbers.
  **Note:** Specific programming details and exercises are omitted.
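  Even so, a rough sketch of infinite streams with thunk-based tails, including an implicit definition in the style of `add_streams`:
  ```javascript
  const pair = (h, t) => [h, t];
  const head = s => s[0];
  const stream_tail = s => s[1]();

  // The integers, defined by a generating function.
  function integers_from(n) {
      return pair(n, () => integers_from(n + 1));
  }
  const integers = integers_from(1);

  // Elementwise sum of two infinite streams.
  function add_streams(s1, s2) {
      return pair(head(s1) + head(s2), () => add_streams(stream_tail(s1), stream_tail(s2)));
  }

  // Implicit definitions: ones is 1 followed by ones, and the integers are
  // 1 followed by ones + integers.
  const ones = pair(1, () => ones);
  const integers2 = pair(1, () => add_streams(ones, integers2));

  function stream_ref(s, n) {
      return n === 0 ? head(s) : stream_ref(stream_tail(s), n - 1);
  }
  stream_ref(integers2, 4);   // 5
  ```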
  """

  @summary_3_5_3 """
  **3.5.3 Exploiting the Stream Paradigm**
  1. **Stream Processing Paradigm:**
  - Streams with delayed evaluation model signal-processing systems.
  - Allows modeling systems with different module boundaries, emphasizing time series.
  2. **Formulating Iterations as Streams:**
  - Iterative processes represented as streams with infinite sequences.
  - Examples include sqrt_stream for square roots and pi_stream for approximating π (see the sketch below).
  3. **Advanced Techniques:**
  - Acceleration techniques like Euler's transform enhance convergence.
  - Super-acceleration achieved by recursively applying the acceleration process.
  4. **Memoization and Optimization:**
  - Memoization used for efficient repeated evaluation of streams.
  - Elegant stream formulation facilitates manipulation with uniform operations.
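  A sketch of the π sequence and Euler's transform (here `scale_stream` is written as a `stream_map`; the values in the comments are approximate):
  ```javascript
  // Partial sums of 4 * (1 - 1/3 + 1/5 - ...) converge slowly to pi;
  // Euler's transform accelerates the same sequence dramatically.
  const pair = (h, t) => [h, t];
  const head = s => s[0];
  const stream_tail = s => s[1]();
  const stream_map = (f, s) => pair(f(head(s)), () => stream_map(f, stream_tail(s)));
  const stream_ref = (s, n) => n === 0 ? head(s) : stream_ref(stream_tail(s), n - 1);

  function pi_summands(n) {
      return pair(1 / n, () => stream_map(x => -x, pi_summands(n + 2)));
  }
  function partial_sums(s) {
      return pair(head(s), () => stream_map(x => x + head(s), partial_sums(stream_tail(s))));
  }
  const pi_stream = stream_map(x => 4 * x, partial_sums(pi_summands(1)));

  // Euler's transform: S(n+1) - (S(n+1) - S(n))^2 / (S(n-1) - 2 S(n) + S(n+1))
  function euler_transform(s) {
      const s0 = head(s);
      const s1 = head(stream_tail(s));
      const s2 = head(stream_tail(stream_tail(s)));
      return pair(s2 - (s2 - s1) * (s2 - s1) / (s0 - 2 * s1 + s2),
                  () => euler_transform(stream_tail(s)));
  }

  stream_ref(pi_stream, 10);                    // about 3.232
  stream_ref(euler_transform(pi_stream), 10);   // about 3.1417 -- much closer
  ```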
  """

  @summary_3_5_4 """
  **3.5.4 Streams and Delayed Evaluation**
  1. **Signal Processing with Feedback Loops:**
  - Streams used to model systems with feedback loops in signal processing.
  - Integral function's internal stream is defined recursively, demonstrating feedback loop modeling.
  2. **Implicit Definition Challenges:**
  - Implicit definitions in signal processing systems pose challenges without delayed evaluation.
  - Streams may require delays beyond typical stream programming patterns for effective modeling.
  3. **Redefined Integral Function:**
  - Integral function modified to expect a delayed integrand argument (see the sketch below).
  - Delayed evaluation crucial for generating streams without fully knowing the input.
  4. **Example: Solving Differential Equations:**
  - Solve function demonstrated for approximating e using a delayed approach.
  - Caller must delay the integrand argument for integral, ensuring proper evaluation sequence.
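  A sketch of `integral` with a delayed integrand and `solve`, assuming memoized tails so the feedback loop stays efficient:
  ```javascript
  function memo(fun) {
      let run = false, result;
      return () => { if (!run) { result = fun(); run = true; } return result; };
  }
  const stream_pair = (h, t) => [h, memo(t)];
  const head = s => s[0];
  const stream_tail = s => s[1]();

  const stream_map = (f, s) => stream_pair(f(head(s)), () => stream_map(f, stream_tail(s)));
  const scale_stream = (s, k) => stream_map(x => x * k, s);
  const add_streams = (s1, s2) =>
      stream_pair(head(s1) + head(s2), () => add_streams(stream_tail(s1), stream_tail(s2)));

  function integral(delayed_integrand, initial_value, dt) {
      const integ = stream_pair(initial_value,
                                () => add_streams(scale_stream(delayed_integrand(), dt), integ));
      return integ;
  }

  function solve(f, y0, dt) {
      const y = integral(() => dy, y0, dt);   // dy is declared on the next line;
      const dy = stream_map(f, y);            // the thunk is only forced later
      return y;
  }

  function stream_ref(s, n) {
      return n === 0 ? head(s) : stream_ref(stream_tail(s), n - 1);
  }
  stream_ref(solve(y => y, 1, 0.001), 1000);  // about 2.7169 -- approximates e
  ```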
  """

  @summary_3_5_5 """
  **3.5.5 Modularity of Functional Programs and Modularity of Objects**
  1. **Stream-Based Modularity:**
  - Streams provide modularity without assignment, encapsulating state evolution.
  - Illustration using Monte Carlo estimation of π, stream-processing version.
  2. **Stream Withdrawal Processor:**
  - Comparison of withdrawal processor as a computational object and as a stream function.
  - Stream representation has no assignment or local state, emphasizing temporal stream behavior.
  3. **Functional Time Modeling:**
  - Streams explicitly represent time in modeling changing quantities.
  - Functional approach allows state representation with well-defined mathematical functions.
  4. **Challenges in Functional Models:**
  - Challenges arise in functional models, especially in interactive systems with independent entities.
  - Merging streams introduces time-related problems, reminiscent of synchronization issues in object-oriented models.
  """

  @notes %{
    "1" => @summary_1,
    "1.1" => @summary_1_1,
    "1.1.1" => @summary_1_1_1,
    "1.1.2" => @summary_1_1_2,
    "1.1.3" => @summary_1_1_3,
    "1.1.4" => @summary_1_1_4,
    "1.1.5" => @summary_1_1_5,
    "1.1.6" => @summary_1_1_6,
    "1.1.7" => @summary_1_1_7,
    "1.1.8" => @summary_1_1_8,
    "1.2" => @summary_1_2,
    "1.2.1" => @summary_1_2_1,
    "1.2.2" => @summary_1_2_2,
    "1.2.3" => @summary_1_2_3,
    "1.2.4" => @summary_1_2_4,
    "1.2.5" => @summary_1_2_5,
    "1.2.6" => @summary_1_2_6,
    "1.3" => @summary_1_3,
    "1.3.1" => @summary_1_3_1,
    "1.3.2" => @summary_1_3_2,
    "1.3.3" => @summary_1_3_3,
    "1.3.4" => @summary_1_3_4,
    "2" => @summary_2,
    "2.1" => @summary_2_1,
    "2.1.1" => @summary_2_1_1,
    "2.1.2" => @summary_2_1_2,
    "2.1.3" => @summary_2_1_3,
    "2.1.4" => @summary_2_1_4,
    "2.2" => @summary_2_2,
    "2.2.1" => @summary_2_2_1,
    "2.2.2" => @summary_2_2_2,
    "2.2.3" => @summary_2_2_3,
    "2.2.4" => @summary_2_2_4,
    "2.3" => @summary_2_3,
    "2.3.1" => @summary_2_3_1,
    "2.3.2" => @summary_2_3_2,
    "2.3.3" => @summary_2_3_3,
    "2.3.4" => @summary_2_3_4,
    "2.4" => @summary_2_4,
    "2.4.1" => @summary_2_4_1,
    "2.4.2" => @summary_2_4_2,
    "2.4.3" => @summary_2_4_3,
    "2.5" => @summary_2_5,
    "2.5.1" => @summary_2_5_1,
    "2.5.2" => @summary_2_5_2,
    "2.5.3" => @summary_2_5_3,
    "3" => @summary_3,
    "3.1" => @summary_3_1,
    "3.1.1" => @summary_3_1_1,
    "3.1.2" => @summary_3_1_2,
    "3.1.3" => @summary_3_1_3,
    "3.2" => @summary_3_2,
    "3.2.1" => @summary_3_2_1,
    "3.2.2" => @summary_3_2_2,
    "3.2.3" => @summary_3_2_3,
    "3.2.4" => @summary_3_2_4,
    "3.3" => @summary_3_3,
    "3.3.1" => @summary_3_3_1,
    "3.3.2" => @summary_3_3_2,
    "3.3.3" => @summary_3_3_3,
    "3.3.4" => @summary_3_3_4,
    "3.3.5" => @summary_3_3_5,
    "3.4" => @summary_3_4,
    "3.4.1" => @summary_3_4_1,
    "3.4.2" => @summary_3_4_2,
    "3.5" => @summary_3_5,
    "3.5.1" => @summary_3_5_1,
    "3.5.2" => @summary_3_5_2,
    "3.5.3" => @summary_3_5_3,
    "3.5.4" => @summary_3_5_4,
    "3.5.5" => @summary_3_5_5
    # ...
  }

  @doc """
  Returns the stored summary for the given SICP JS section number
  (for example, "3.1.1"), or nil if no summary is available.
  """
  @spec get_summary(String.t()) :: String.t() | nil
  def get_summary(section) do
    Map.get(@notes, section)
  end
end