SICP Exercise 1.7

Question

The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers. An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a square-root procedure that uses this kind of end test. Does this work better for small and large numbers?

Answer

These are two different questions with two rather different answers.

Let us begin with the more obvious one - very small numbers. The good-enough? test is inadequate in this scenario because the tolerance is hardcoded as 0.001 - a small number, to be sure, but not a truly tiny one. If we try to compute the square root of a number comparable to or smaller than 0.001, that tolerance amounts to a very large margin of error, and the result can be wildly inaccurate.
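For reference, here is the square-root procedure from section 1.1.7 of the book that the exercise refers to (square is defined earlier in the book; it is included here so the snippet runs on its own):

(define (square x) (* x x))

(define (average x y)
  (/ (+ x y) 2))

(define (improve guess x)
  (average guess (/ x guess)))

(define (good-enough? guess x)
  ; the fixed tolerance of 0.001 is the culprit in this exercise
  (< (abs (- (square guess) x)) 0.001))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt x)
  (sqrt-iter 1.0 x))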

Let us try calculating the square root of 0.0005 as an example. The expected value for this would be approximately 0.022.

(sqrt 0.0005)
; returns 0.03640532954316447

So we receive a result of approximately 0.036. As you can see, this is not a good approximation of the true square root of 0.0005 at all.
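To see why such a poor guess is accepted, run the test by hand: good-enough? measures the error of the square, not of the root, and that error sits comfortably inside the fixed 0.001 tolerance even though the root itself is off by more than 60 percent.

(square 0.03640532954316447)
; returns approximately 0.0013253

(abs (- (square 0.03640532954316447) 0.0005))
; returns approximately 0.0008253 - less than 0.001, so the test passes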

How about very large numbers?

Understanding why the test fails for very large numbers requires an understanding of how floating-point numbers work. A detailed explanation would be beyond the scope of this answer; Tom Scott's Computerphile video on floating-point numbers is a good primer.

What it boils down to is this: the larger floating-point numbers become, the lower the absolute precision with which they can be represented. Past a certain magnitude, the gap between two adjacent representable numbers grows beyond 0.001, and no intermediate value can be represented. Once x is that large, (square guess) can never land within 0.001 of the target value, and the program will keep running indefinitely.
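A quick experiment makes this concrete. Assuming standard IEEE double precision, the gap between adjacent representable numbers near 10^14 is about 0.016, so an increment of 0.001 simply vanishes:

(= 1e14 (+ 1e14 0.001))
; returns #t - at this magnitude, adding 0.001 does not change the number

As a consequence, a call like (sqrt 1e13) - where the spacing between adjacent values is already about 0.002 - typically never satisfies the test and simply never returns.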

So how can these problems be solved?

One possibility would be to remove the arbitrary tolerance of 0.001 and keep iterating until guess is equal to the previous value of guess. This takes more iterations and more computing resources, but it solves both problems explained above and returns the most accurate result the floating-point representation allows.
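Here is a minimal sketch of that idea (average and improve are repeated from above so the snippet runs on its own): sqrt-iter now carries the previous guess along and stops as soon as an iteration leaves the guess unchanged.

(define (average x y)
  (/ (+ x y) 2))

(define (improve guess x)
  (average guess (/ x guess)))

(define (sqrt-iter guess prev x)
  (if (= guess prev)          ; stop once the guess stops changing
      guess
      (sqrt-iter (improve guess x) guess x)))

(define (sqrt x)
  ; 0.0 can never equal the starting guess of 1.0,
  ; so the first iteration always runs
  (sqrt-iter 1.0 0.0 x))

With this end test, (sqrt 0.0005) should come out at roughly 0.0224, and very large inputs terminate as well. Note that the exercise itself suggests a slightly more defensive variant - stopping when the change is a tiny fraction of the guess - which also covers the corner case where the iteration oscillates between two adjacent floating-point values instead of settling on one.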