Weekly commentary: MAT335 - Chaos, Fractals and Dynamics



March 3 - Chaos

Random vs Deterministic Processes

Scientists are often concerned with the evolution (change over time) of something. The evolution may happen in steps (discretely) or continuously. An example of the former is the population of a species in each generation; an example of the latter is the motion of the planets. The scientist looks for the underlying 'rules' or process that drives the evolution, and tries to say something about the future state of the process given information about the present state.

Historically, the dominant view was that the evolution of a process was governed by laws, and so the main objective of science was to determine those laws. It was inconceivable that a process could evolve 'randomly', i.e., for 'no reason'. However, scientists realized that sometimes it is not possible to know the laws governing a process exactly, and so they studied 'random' processes as models of such cases. It's important to note here that even in these cases of random processes the evolution was still assumed to be governed by laws; it's just that the scientists didn't know the laws completely.

An evolution that is determined by laws is called a deterministic process, and an evolution that is random (i.e., not governed by laws) is called a random process. How can we decide whether a process is deterministic or random? Since the laws are not supposed to change, if we repeat a deterministic process starting with the same initial state, then we will see exactly the same result. If we repeat a random process starting with the same initial state, the resulting evolution may be different from what we saw before. Furthermore, we cannot say exactly what the future evolution of a random process will be even if we know the initial state exactly (what we can say is with what probability the future evolution will assume a certain state).

Beginning around 1900 with the work of Poincaré in celestial mechanics, it was realized that some deterministic systems behave like random ones in the sense that it is impossible to predict the future of the system exactly. Like many of the topics we cover in this course, it wasn't until the advent of the computer age that scientists were able to really understand what was going on. Another milestone in this story was Edward Lorenz's discovery of 'chaos' in simple models of the weather. He concluded, in 1969, that it may be impossible to make accurate predictions about the weather even though the model describing the weather is perfectly deterministic. This was the beginning of what has come to be called "Chaos Theory" (or "Deterministic Chaos").

Chaos Theory

One of the goals, and to a large extent one of the great successes, of chaos theory was to understand how deterministic systems can behave randomly (in the sense that they are unpredictable). We will study a few very simple models of deterministic evolution (such as the logistic equation and the Hénon map) that show the essential features of how deterministic chaos arises. Still, these simple models are not completely understood; there remain mathematical questions that are unanswered. But what scientists have learned about these simple models has helped them understand much more complicated systems like the weather. We will see that it is the same sort of infinite complexity we see in fractals that causes chaos in deterministic systems.

It is because so many important (deterministic) systems in science have turned out to be chaotic that chaos theory is so important today (otherwise it would have remained just a curiosity). What should we do with chaotic systems? To answer that, one first has to understand how a deterministic system can behave like a random one. Then one can begin to address the problem of how to make predictions about chaotic systems (though we may need to change our notion of 'prediction') and how to 'control' a chaotic system. Many biological systems are chaotic (certain states of fibrillation of the heart, for example), and you can see why one would want to control those states of the heart.

For more background and historical remarks about chaos theory, see the preface and foreword of our text, Sections 1.4 and 1.5, and the introduction to Chapter 10.


What is Chaos?

One hallmark of chaos is extreme sensitivity to errors in initial data (see Section 10.1). That is, slightly changing the initial state leads to very large changes in the future evolution. This can make predicting such systems impossible in practice, and so in practice these systems behave like random ones. Now, errors are always present; that is not something new. But historically, scientists tacitly assumed that errors could be made as small as desired simply by making more accurate measurements, so errors were something that could always be overcome.

When discussing error propagation, what's important is not the absolute size of the error but the relative size of the error, that is, the size of the error compared to the size of the signal. For linear systems, errors grow, but they grow at the same rate as the signal, so the relative error does not grow. For nonlinear systems this may not be the case; relative errors may grow, and may grow exponentially fast. For example, for the logistic function with a = 4, errors roughly double at each iteration. Since the signal here is always between 0 and 1, no matter how small the initial error is, it will eventually dominate the signal (see the handout, 3 March).
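
To see this error doubling concretely, here is a minimal Python sketch (the starting value 0.2 and the perturbation 1e-10 are arbitrary choices, just for illustration):

    # Iterate the logistic map f(x) = 4x(1-x) from two nearby starting
    # points and watch the separation between the two orbits grow.
    def f(x):
        return 4.0 * x * (1.0 - x)

    x, y = 0.2, 0.2 + 1e-10      # two initial states differing by 1e-10
    for n in range(1, 61):
        x, y = f(x), f(y)
        if n % 10 == 0:
            print(n, abs(x - y))
    # The separation roughly doubles each step until it is as large as
    # the signal itself (order 1); after that the two orbits are unrelated.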

Another feature of chaos is mixing (see Section 10.2). Suppose the system we are considering starts with initial data in some set S and that all subsequent states of the system are in S too. That is, if the system starts in some state that is in S, then it remains in the set S forever. As an example, we have the logistic function for 0 < a ≤ 4: here if x_0 is in [0,1] then x_n is in [0,1] for all n. Mixing means that any small subset of S is eventually spread out evenly over all of S. A consequence of this is that no matter what the initial state is, the future evolution will "go through all possibilities" available to the system. For example, for the logistic function when a = 4, the values of the sequence {x_n} fill out essentially all of [0,1]. Compare this to the case a < 3.57: there a typical orbit is attracted to a periodic cycle, so in the long run it comes close to only finitely many points of [0,1], no matter what x_0 is.
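
A quick numerical illustration of mixing, as a sketch (the interval [0.3, 0.3001] and the 50 iterations are arbitrary choices): a tiny cluster of initial points spreads over essentially all of [0,1].

    # Pack 1000 points into the tiny interval [0.3, 0.3001], iterate the
    # logistic map f(x) = 4x(1-x), and check how much of [0,1] they cover.
    def f(x):
        return 4.0 * x * (1.0 - x)

    points = [0.3 + 0.0001 * k / 1000 for k in range(1000)]
    for _ in range(50):
        points = [f(x) for x in points]

    # Which of the ten equal subintervals of [0,1] are now occupied?
    occupied = sorted(set(min(int(10 * x), 9) for x in points))
    print(occupied)   # typically all of 0,1,...,9: the cluster has spread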

One way a dynamical system achieves sensitive dependence on initial conditions and mixing is through the so-called stretch-and-fold paradigm (see Section 10.4). For the logistic function when a = 4 this is easy to see by first approximating the logistic function by a piecewise linear function (the "tent transformation"; see page 540). Figure 10.29 should convince you that repeated iteration of stretch-and-fold makes initial errors grow and produces mixing. Although we have only looked at the logistic function, in other chaotic systems there is often the same stretching and folding going on as the system evolves. As with fractals, it is a simple rule that, applied over and over again, produces extreme complexity.
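
Here is a small Python sketch of the stretch-and-fold mechanism using the tent transformation T(x) = 2x for x ≤ 1/2 and T(x) = 2 - 2x for x > 1/2 (the starting interval [0.40, 0.41] is an arbitrary choice):

    # Follow a small interval under the tent map: each step stretches it
    # by a factor of 2, and once it crosses 1/2 it gets folded back.
    def tent(x):
        y = 2.0 * x                          # 'stretch': double
        return y if y <= 1.0 else 2.0 - y    # 'fold': reflect back into [0,1]

    lo, hi = 0.40, 0.41
    for n in range(1, 7):
        sample = [lo + (hi - lo) * k / 200 for k in range(201)]
        image = [tent(x) for x in sample]
        lo, hi = min(image), max(image)      # tent is continuous, so the
        print(n, round(hi - lo, 4))          # image is again an interval
    # The length doubles each step (0.02, 0.04, 0.08, ...) until the
    # folding keeps it inside [0,1].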


Distributions of Sequences: Deterministic vs Random Sequences (Refer to the 6 March handout)

Given an infinite sequence {x_0, x_1, ...} of numbers between 0 and 1, we can look at its distribution v(x). This is a function defined on the interval [0,1] that is the limit of the histograms of the finite sequences {x_0, x_1, ..., x_m} as m tends to infinity and as the partition of [0,1] becomes finer and finer. The distribution function v(x) tells us how the numbers in the sequence are distributed. In fact, v(x) is the probability distribution of the numbers in the sequence: if J is any small interval in [0,1], the probability P(x_i in J) that a number x_i from the sequence lies in J is the integral of v(x) over the interval J. So, for instance, if v(x) is zero on some interval J, then the probability of a number from the sequence being in that interval is zero.
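
As a sketch of how such a distribution is estimated in practice, the following Python snippet builds the normalized histogram of a finite sequence, here the orbit of the logistic function f(x) = 4x(1-x), which reappears below (the starting value 0.2 and the bin count 20 are arbitrary choices):

    # Estimate v(x) by a normalized histogram: the fraction of terms in
    # each of nbins equal subintervals of [0,1], divided by the bin width.
    def histogram_density(seq, nbins=20):
        counts = [0] * nbins
        for x in seq:
            counts[min(int(nbins * x), nbins - 1)] += 1
        width = 1.0 / nbins
        return [c / (len(seq) * width) for c in counts]

    seq, x = [], 0.2
    for _ in range(100000):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    print(histogram_density(seq))
    # The estimated density is largest near 0 and 1 and smallest near 1/2,
    # matching the v(x) given below for the logistic function.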

A random sequence with distribution v(x) is a sequence of numbers {x_0, x_1, ...} where each x_i is a random variable with distribution v(x). This means that the numbers are chosen randomly according to the probability distribution v(x) (and so have no relation to the numbers chosen before). It's a fact that the distribution of the random sequence {x_0, x_1, ...} in the sense of the preceding paragraph (considering the sequence as just any sequence of numbers) is this same v(x). Now, we can calculate many statistical quantities about the random sequence using the distribution v(x). For example, we can calculate the probability that three consecutive terms in the sequence are increasing. There are infinitely many such "order statistics" we could calculate.

The question now is: given a sequence {x_0, x_1, ...}, how can you tell whether it is deterministic or random? Start by computing the distribution v(x) of the sequence. As we have seen, if the sequence is a random sequence (of identically distributed random variables), then the v(x) we calculate from the histograms is the probability distribution of those random variables. So we can calculate all the order statistics of a random sequence of variables with this distribution, and then compare these results to the actual sequence we were given. If there is good agreement, then we gain confidence that the sequence is indeed random, while if there is no agreement, then we can conclude that the sequence is not a random sequence of variables with that distribution.
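
Here is a sketch of one such test in Python, using the "three consecutive increasing terms" statistic mentioned above. For independent, identically distributed numbers this probability is 1/6, since all 3! = 6 orderings of three values are equally likely. (The sample sizes and the starting value 0.2 are arbitrary choices.)

    import random

    # Fraction of positions where three consecutive terms increase.
    def increasing_triples(seq):
        hits = sum(1 for i in range(len(seq) - 2)
                   if seq[i] < seq[i + 1] < seq[i + 2])
        return hits / (len(seq) - 2)

    random.seed(1)
    iid = [random.random() for _ in range(100000)]
    print(increasing_triples(iid))      # close to 1/6 = 0.1667

    orbit, x = [], 0.2                  # orbit of f(x) = 4x(1-x)
    for _ in range(100000):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    print(increasing_triples(orbit))    # empirically about 1/3, not 1/6:
                                        # the logistic orbit fails this test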

For example, the sequence {1,2,3,1,2,3,1,2,3, ...} is certainly not a random sequence of 1's, 2's, and 3's (each term in the sequence is determined by its predecessor). Here the distribution is p_1 = 1/3, p_2 = 1/3, and p_3 = 1/3. But a random sequence of 1's, 2's, and 3's with this probability distribution would look very different from the one above. For example, all possible finite strings of 1's, 2's, and 3's would occur with some frequency in the random sequence (because each such finite string has a non-zero chance of occurring). Clearly, the sequence {1,2,3,1,2,3, ...} fails many statistical tests for randomness.
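
One such test, sketched in Python below, simply counts which pairs of consecutive symbols occur (the sequence lengths are arbitrary choices):

    import random
    from collections import Counter

    # Count each pair of consecutive symbols in a sequence.
    def pair_counts(seq):
        return Counter(zip(seq, seq[1:]))

    periodic = [1, 2, 3] * 1000
    print(sorted(pair_counts(periodic)))   # only (1,2), (2,3), (3,1) occur

    random.seed(1)
    rand = [random.choice([1, 2, 3]) for _ in range(3000)]
    print(sorted(pair_counts(rand)))       # all nine pairs occur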

Now consider the sequence {x_0, x_1, x_2, ...} generated by the logistic function f(x) = 4x(1-x) (so x_n = f^n(x_0)). The distribution of this sequence is v(x) = 1/[pi*sqrt(x(1-x))] (see pages 526, 527). In some sense this sequence looks random, since the numbers visit every neighborhood of every point in the whole interval [0,1]. What happens when we use this sequence in the chaos game? From this sequence we can make a sequence of 1's, 2's, and 3's by dividing the interval [0,1] into three pieces, say [0,1/3), [1/3,2/3), and [2/3,1]. Whenever a number x_n from the sequence generated by the logistic equation falls into the interval [0,1/3) we pick a 1, if it falls in the interval [1/3,2/3) we pick a 2, and if it falls in the interval [2/3,1] we pick a 3. Is this sequence of 1's, 2's, and 3's random? Well, if we play the chaos game for the Sierpinski triangle with this sequence, the answer is a resounding NO! In fact, hardly any of the Sierpinski triangle is drawn; see Figures 6.26 and 6.27.

This is not surprising when you look at how the logistic function generates those numbers. For instance, if x_n is in the middle interval (so you choose a 2 in the chaos game), then f(x_n) = 4x_n(1-x_n) lies between 8/9 and 1, so the next point x_{n+1} is always in the third interval. That is, a 2 is always followed by a 3, and so the pairs 2,1 and 2,2 never occur in the sequence of 1's, 2's, and 3's we use to draw the Sierpinski triangle (see Figure 6.30). Consequently, points will never land in the corresponding subtriangles, such as the one with address 22 (see Figure 6.29). There are infinitely many subtriangles that no point will land in. This example suggests a way to test a sequence for randomness: play the chaos game with that sequence and see if the fractal is drawn. If there are gaps in the fractal, then the strings corresponding to those addresses do not occur in the sequence, and so the sequence is not random.
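
A small Python sketch can confirm this forbidden-pair structure empirically (the starting value 0.2 and the orbit length are arbitrary choices):

    from collections import Counter

    # Convert a logistic orbit into symbols 1, 2, 3 by thirds of [0,1],
    # then count which consecutive symbol pairs actually occur.
    def symbol(x):
        return 1 if x < 1/3 else (2 if x < 2/3 else 3)

    x, symbols = 0.2, []
    for _ in range(100000):
        x = 4.0 * x * (1.0 - x)
        symbols.append(symbol(x))

    print(sorted(Counter(zip(symbols, symbols[1:])).items()))
    # The pairs (2,1) and (2,2) never appear: a 2 is always followed by
    # a 3, so the corresponding subtriangles are never visited.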

For a discussion of how to resolve the seeming paradox between determinism and randomness using complexity theory, see J. Ford's article in Physics Today (1983), vol 36 (4), page 40. For a rather complete discussion of what a random sequence is and how to test a sequence for randomness, see The Art of Computer Programming, Volume 2, by Donald Knuth, Chapter 3.

