We introduced compound procedures in section 1.1.4 as a mechanism for
abstracting patterns of numerical operations so as to make them
independent of the particular numbers involved. With higher-order
procedures, such as the `integral` procedure of section 1.3.1, we began to
see a more powerful kind of abstraction: procedures used to express
general methods of computation, independent of the particular
functions involved. In this section we discuss two more elaborate
examples, general methods for finding zeros and fixed points of
functions, and show how these methods can be expressed directly
as procedures.

**Finding roots of equations by the half-interval method**

The *half-interval method* is a simple but powerful
technique for finding roots of an equation *f*(*x*) = 0, where
*f* is a continuous function. The idea is that, if we are
given points *a* and *b* such that *f*(*a*) < 0 <
*f*(*b*), then *f* must have at least one zero between
*a* and *b*. To locate a zero, let *x* be the
average of *a* and *b* and compute *f*(*x*). If
*f*(*x*) > 0, then *f* must have a zero between *a*
and *x*. If *f*(*x*) < 0, then *f* must have a zero
between *x* and *b*. Continuing in this way, we can
identify smaller and smaller intervals on which *f* must have a
zero. When we reach a point where the interval is small enough, the
process stops. Since the interval of uncertainty is reduced by half
at each step of the process, the number of steps required grows as
Θ(log(*L*/*T*)), where *L* is the
length of the original interval and *T* is the error tolerance
(that is, the size of the interval we will consider "small enough").
Here is a procedure that implements this strategy:

```
(define (search f neg-point pos-point)
  (let ((midpoint (average neg-point pos-point)))
    (if (close-enough? neg-point pos-point)
        midpoint
        (let ((test-value (f midpoint)))
          (cond ((positive? test-value)
                 (search f neg-point midpoint))
                ((negative? test-value)
                 (search f midpoint pos-point))
                (else midpoint))))))
```

We assume that we are initially given the function `f` together with
points at which its values are negative and positive.

To test whether the endpoints are "close enough" we can use a
procedure similar to the one used in section 1.1.7 for computing square roots:

```
(define (close-enough? x y)
  (< (abs (- x y)) 0.001))
```

Because `search` gives a wrong answer if we accidentally hand it points
at which *f* does not have the required signs, we use it via the
following procedure, which checks which endpoint has a negative
function value and which has a positive value, and calls `search`
accordingly:

```
(define (half-interval-method f a b)
  (let ((a-value (f a))
        (b-value (f b)))
    (cond ((and (negative? a-value) (positive? b-value))
           (search f a b))
          ((and (negative? b-value) (positive? a-value))
           (search f b a))
          (else
           (error "Values are not of opposite sign" a b)))))
```

The following example uses the half-interval method to approximate π
as the root between 2 and 4 of sin *x* = 0:

```
(half-interval-method sin 2.0 4.0)
3.14111328125
```
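For readers who want to experiment outside Scheme, here is a rough Python sketch of the same bisection strategy (the names are ours, not the book's). It also counts the halvings, illustrating the Θ(log(*L*/*T*)) growth: with *L* = 2 and *T* = 0.001, the search takes ⌈log₂(2/0.001)⌉ = 11 steps.

```python
import math

def search(f, neg_point, pos_point, tolerance=0.001):
    """Bisection search; assumes f(neg_point) < 0 < f(pos_point).
    Returns (root, number of halvings performed)."""
    steps = 0
    while abs(pos_point - neg_point) >= tolerance:
        midpoint = (neg_point + pos_point) / 2
        steps += 1
        test_value = f(midpoint)
        if test_value > 0:
            pos_point = midpoint   # zero lies between neg_point and midpoint
        elif test_value < 0:
            neg_point = midpoint   # zero lies between midpoint and pos_point
        else:
            return midpoint, steps
    return (neg_point + pos_point) / 2, steps

def half_interval_method(f, a, b):
    """Check the endpoint signs, then call search with the points in order."""
    a_value, b_value = f(a), f(b)
    if a_value < 0 < b_value:
        return search(f, a, b)
    if b_value < 0 < a_value:
        return search(f, b, a)
    raise ValueError("Values are not of opposite sign", a, b)

root, steps = half_interval_method(math.sin, 2.0, 4.0)
print(root, steps)  # about 3.14111, after 11 halvings of the original interval
```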

Here is another example, using the half-interval method to search for
a root of the equation *x*^{3} - 2*x* - 3 = 0
between 1 and 2:

```
(half-interval-method (lambda (x) (- (* x x x) (* 2 x) 3))
                      1.0
                      2.0)
1.89306640625
```

A number *x* is called a *fixed point* of a function
*f* if *x* satisfies the equation *f (x) = x*.
For some functions *f* we can locate a fixed point by beginning
with an initial guess and applying *f* repeatedly,

*f*(*x*), *f*(*f*(*x*)), *f*(*f*(*f*(*x*))), ...

until the value does not change very much. Using this idea, we can
devise a procedure `fixed-point` that takes as inputs a
function and an initial guess and produces an approximation to a fixed
point of the function. We apply the function repeatedly until we find
two successive values whose difference is less than some prescribed
tolerance:

```
(define tolerance 0.00001)

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))
```

For example, we can use this method to approximate the fixed point of
the cosine function, starting with 1 as an initial approximation:

```
(fixed-point cos 1.0)
.7390822985224023
```
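As an illustration (ours, not the book's), the same iteration translates directly into Python:

```python
import math

TOLERANCE = 0.00001

def fixed_point(f, first_guess):
    """Apply f repeatedly until successive guesses differ by less than TOLERANCE."""
    guess = first_guess
    while True:
        next_guess = f(guess)
        if abs(guess - next_guess) < TOLERANCE:
            return next_guess
        guess = next_guess

print(fixed_point(math.cos, 1.0))  # about 0.739085, the fixed point of cosine
```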

Similarly, we can find a solution to the equation *y* = sin
*y* + cos *y*:

```
(fixed-point (lambda (y) (+ (sin y) (cos y)))
             1.0)
1.2587315962971173
```

The fixed-point process is reminiscent of the process we used for
finding square roots in section 1.1.7. Both are based on the idea of
repeatedly improving a guess until the result satisfies some
criterion. In fact, we can readily formulate the square-root
computation as a fixed-point search. Computing the square root of
some number *x* requires finding a *y* such that *y*^{2} = *x*.
Putting this equation in the equivalent form *y* = *x*/*y*, we
recognize that we are looking for a fixed point of the function
*y* ↦ *x*/*y*, and we can therefore try to compute square roots as:

```
(define (sqrt x)
  (fixed-point (lambda (y) (/ x y))
               1.0))
```

Unfortunately, this fixed-point search does not converge. Consider an
initial guess *y*_{1}. The next guess is *y*_{2} = *x*/*y*_{1}, and
the next guess after that is *y*_{3} = *x*/*y*_{2} =
*x*/(*x*/*y*_{1}) = *y*_{1}. The guesses oscillate between *y*_{1}
and *y*_{2} forever, circling the answer without approaching it.
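The oscillation is easy to observe numerically. Here is a small Python sketch of ours tracing the undamped map *y* ↦ *x*/*y* with *x* = 2:

```python
# Undamped square-root search: the next guess after y is x / y.
x = 2.0
y = 1.0
trace = []
for _ in range(6):
    trace.append(y)
    y = x / y

print(trace)  # [1.0, 2.0, 1.0, 2.0, 1.0, 2.0] -- the guesses never settle down
```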

One way to control such oscillations is to prevent the guesses from
changing so much. Since the answer is always between our guess
*y* and *x*/*y*, we can make a new guess that is not as
far from *y* as *x*/*y* is, by averaging *y* with
*x*/*y*, so that the next guess after *y* is
(1/2)(*y* + *x*/*y*)
instead of *x*/*y*. The process of making such a sequence of
guesses is simply the process of looking for a fixed point of
*y* ↦ (1/2)(*y* + *x*/*y*):

```
(define (sqrt x)
  (fixed-point (lambda (y) (average y (/ x y)))
               1.0))
```

(Note that *y* = (1/2)(*y* + *x*/*y*) is a simple transformation of
the equation *y* = *x*/*y*; to derive it, add *y* to both sides of
the equation and divide by 2.)

With this modification, the square-root procedure works. In fact, if
we unravel the definitions, we can see that the sequence of
approximations to the square root generated here is precisely the same
as the one generated by our original square-root procedure of section
1.1.7. This approach of averaging
successive approximations to a solution, a technique that we call
*average damping*, often aids the convergence
of fixed-point searches.
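As a quick illustration of ours (not the book's code), the damped map converges rapidly when transcribed into Python:

```python
TOLERANCE = 0.00001

def fixed_point(f, first_guess):
    """Apply f repeatedly until successive guesses differ by less than TOLERANCE."""
    guess = first_guess
    while True:
        next_guess = f(guess)
        if abs(guess - next_guess) < TOLERANCE:
            return next_guess
        guess = next_guess

def sqrt(x):
    # Average damping: search for a fixed point of y -> (y + x/y) / 2
    # instead of the oscillating map y -> x/y.
    return fixed_point(lambda y: (y + x / y) / 2, 1.0)

print(sqrt(2.0))  # about 1.4142135, reached in a handful of iterations
```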

**Exercise 1.35**

Show that the golden ratio φ (section
1.2.2) is a fixed point of the transformation *x* ↦ 1 + 1/*x*, and use this fact to compute
φ by means of the `fixed-point`
procedure.

**Exercise 1.36**

Modify `fixed-point` so that it prints the sequence of
approximations it generates, using the `newline` and
`display` primitives shown in exercise 1.22. Then find a
solution to *x*^{x} = 1000 by finding a fixed point of
*x* ↦ log(1000)/log(*x*). (Use Scheme's primitive `log` procedure,
which computes natural logarithms.)

**Exercise 1.37**

a. An infinite *continued fraction* is an expression of the form

*f* = *N*_{1} / (*D*_{1} + *N*_{2} / (*D*_{2} + *N*_{3} / (*D*_{3} + ···)))

As an example, one can show that the infinite continued fraction
expansion with the *N*_{i} and the *D*_{i} all equal to 1 produces
1/φ, where φ is the golden ratio. One way to approximate an
infinite continued fraction is to truncate the expansion after a given
number of terms. Such a truncation, a so-called *k-term finite
continued fraction*, has the form

*N*_{1} / (*D*_{1} + *N*_{2} / (··· + *N*_{k} / *D*_{k}))

Suppose that `n` and `d` are procedures of one argument
(the term index *i*) that return the *N*_{i} and *D*_{i} of the
terms of the continued fraction. Define a procedure `cont-frac` such
that evaluating `(cont-frac n d k)` computes the value of the
*k*-term finite continued fraction. Check your procedure by
approximating 1/φ using

```
(cont-frac (lambda (i) 1.0)
           (lambda (i) 1.0)
           k)
```

for successive values of `k`. How small must you make `k` in order
to get an approximation that is accurate to 4 decimal places?

b. If your `cont-frac` procedure generates a recursive process,
write one that generates an iterative process. If it generates an
iterative process, write one that generates a recursive process.

**Exercise 1.38**

In 1737, the Swiss mathematician Leonhard Euler published a memoir
*De Fractionibus Continuis*, which included a continued
fraction expansion for *e* - 2, where *e* is the base of
the natural logarithms. In this fraction, the *N*_{i}
are all 1, and the *D*_{i} are successively 1, 2, 1, 1, 4, 1, 1, 6,
1, 1, 8, .... Write a program that uses your `cont-frac` procedure
from exercise 1.37 to approximate *e*, based on Euler's expansion.

**Exercise 1.39**

A continued fraction representation of the tangent function was published in 1770 by the German mathematician J. H. Lambert:

tan *x* = *x* / (1 - *x*^{2} / (3 - *x*^{2} / (5 - ···)))

where *x* is in radians. Define a procedure `(tan-cf x
k)` that computes an approximation to the tangent function based
on Lambert's formula. `k` specifies the number of terms to
compute, as in exercise 1.37.