Topics: MAT335 Winter 2001

(See also the Commentary from last year)




  • 12 Jan. The Cantor set (pp 67-72, 75-77).


  • 15 Jan. The Sierpinski triangle; pages 78-79. Self-similarity; read sections 3.1 and 3.2 (see also the comments section of last year for 21 Jan.).


  • 19 Jan. The von Koch curve; section 2.4, pages 89-93. Measuring the lengths of curves; section 4.2 pages 192-201. (see also the comments section of last year for 14 Jan).


  • 22 Jan. Fractal Dimension; Section 4.3, pages 202-211. (see also the comments section of last year for 21 Jan).

  • 26 Jan. We watched the first 20 minutes of the video Fractals: An Animated Discussion (this video is available in the Audio Visual Library at Gerstein. Call number 002948). You can find the topics mentioned in the video in our text. For the pendulum experiment, see section 12.8 (and colour plates 27, 28). For Lorenz's numerical experiment, see section 12.4 (and colour plate 23 for the "Lorenz attractor").
    In the lecture we continued with the discussion of fractal dimension: fractal dimensions of the (middle-thirds) Cantor set and the middle-fifths Cantor set, the Sierpinski triangle, and the Peano space-filling curve (read pages 94-101 for a description of the Peano curve, and page 225 for a discussion of its fractal dimension).


  • 29 Jan. Box counting method for calculating fractal dimension; section 4.4 (pages 212-216).
    Drawing fractals: Chapter 5. The Multiple Reduction Copy Machine (MRCM); section 5.1 (pages 231-233). The blueprint of an Iterated Function System (IFS); pages 238-243. Variations of the Sierpinski triangle; section 5.3 (pages 244-248).
    See also the 28 January comments from last year.


  • 2 Feb.: Iterated Function Systems (IFS). We derived the IFS for the Cantor set and the von Koch curve. In general, to derive the IFS of a fractal (this is the problem of encoding a fractal), first look for a way to decompose the fractal into self-similar pieces, and then find which affine transformations w_i (i.e., w_i(x) = A_i x + v_i, where A_i is a 2 by 2 matrix and v_i is a vector in R^2 (the 'shift')) map the fractal F onto these pieces, i.e., w_i(F) = F_i. See the 28 January comments from last year and page 239 in the text.
    For a review of matrices as transformations, read pages 234-236 in the text.
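    As a small illustration (a sketch in Python with numpy, not one of the course programs), here is an IFS stored as a list of pairs (A_i, v_i) and applied one step at a time to a set of points, using the two maps we derived for the Cantor set:

```python
import numpy as np

def apply_ifs(maps, points):
    """One step of the IFS: W(S) = w_1(S) U ... U w_k(S)."""
    return np.vstack([points @ A.T + v for A, v in maps])

# The IFS for the Cantor set (drawn in R^2 along the x-axis):
# w_1 shrinks [0,1] onto [0,1/3], w_2 shrinks it onto [2/3,1].
cantor = [
    (np.array([[1/3, 0], [0, 1/3]]), np.array([0.0, 0.0])),
    (np.array([[1/3, 0], [0, 1/3]]), np.array([2/3, 0.0])),
]

S = np.array([[0.0, 0.0], [1.0, 0.0]])  # the endpoints of I = [0,1]
for _ in range(5):
    S = apply_ifs(cantor, S)            # S now holds the endpoints of C_5
print(sorted(S[:, 0]))                  # the 64 interval endpoints of C_5
```

    Replacing the two maps by the three 'halve and shift' maps of the Sierpinski blueprint draws the corresponding approximations of the Sierpinski triangle instead.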

    We did some experiments with the Fractal Pattern VB program and the Fractal Movie applet. For example, using the Fractal Pattern program we saw what happens if one of the transformations that make up an IFS is not a 'contraction': the image 'explodes' (it keeps changing as the IFS iterates). For this I modified the Sierpinski IFS to make the third transformation (the one that produces the square on the top in the blueprint) a dilation by 1.5 (so the entries in the matrix are a = 1.5, b = 0, c = 0, d = 1.5, and the shift was e = 0, f = 0).
    You can see for yourself the 'Sierpinski variations' that are displayed on pages 246-248 of the text by using this program. In addition, you can make up your own fractal (just make sure the 'p' parameter is set to 1/k if there are k transformations in your IFS; this is for the 'Chaos Game' command ('Chaos Game' button) that will draw the fractal). Remember, the 'Chaos Game' is much more efficient at drawing the fractal than iterating the IFS (which is the 'Fractal' command button).

    With the applet Fractal Movie, we saw how sliding the top transformation of the Sierpinski IFS from one side to the other affects the resulting fractal. Here, the initial w_3 transformation was (a,b,c,d,e,f) = (0.5, 0, 0, 0.5, 0, 0.5) and the final w_3 transformation was (0.5, 0, 0, 0.5, 0.5, 0.5). Note that you can check the blueprints of these transformations (as well as a bigger picture of the resulting fractals) using the VB program Fractal Pattern (input the above parameter values for w_3 of the Sierpinski data).

    Using the applet Fractal Movie we also saw how rotating the top transformation of the Sierpinski IFS changes the fractal (here, the initial w_3 transformation was (0.5, 0, 0, 0.5, 0.25, 0.5) and the final w_3 transformation was (0, -0.5, 0.5, 0, 0.75, 0.5)). You may want to look at these initial and final blueprints using the VB program Fractal Pattern just to see what they are. The final w_3 transformation is the initial one rotated by 90 degrees counterclockwise (see page 235 to remind yourself what the matrix of a rotation is). Can you identify which of the Sierpinski variations the final fractal is? (Look at Figure 5.21.)

    With the applet Fractal Movie we looked at the transformation of the von Koch curve into the Fern (like Figure 5.36); here we took the initial pattern to be the von Koch curve and the final pattern to be the Fern.



  • 5 Feb.: The Contraction Mapping Principle (Section 5.6, pages 263, 266-269). Contractive functions on R^2 with respect to the Euclidean norm, and contractive functions on the space K of bounded subsets of R^2 (i.e., images) with respect to the Hausdorff metric. If the w_i of an IFS are contractions on R^2 with respect to the Euclidean metric, then the IFS W is a contraction on K and thus has a unique fixed point F (the fractal); W(F) = F. Furthermore, if S is any subset of R^2, then W^n(S) --> F as n tends to infinity (the convergence being with respect to the Hausdorff metric). Note that if two sets are close together with respect to the Hausdorff metric, then they look similar as images. However, two sets A and B may be a distance zero apart with respect to the Hausdorff metric (i.e., h(A,B) = 0 ) even though they are not equal. See also the 28 January comments from last year.
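    To make the Hausdorff metric concrete, here is a minimal sketch (Python with numpy) that computes h(A, B) for finite point sets, which is how one would compare two digitized images:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^2."""
    # D[i, j] = Euclidean distance from A[i] to B[j]
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # h(A,B) = larger of the two one-sided distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
print(hausdorff(A, B))  # 0.5: the point (0.5, 0) of B is distance 0.5 from A
```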


  • 9 Feb.: Hausdorff distance, infimum (greatest lower bound) of a set, Hutchinson's Theorem (again). We showed that if W is the IFS for the Cantor set C, then h(W^n(I), C) --> 0 as n --> infinity, where I = [0,1]. We also showed that h(W^n(p), C) --> 0 as n --> infinity, where p = (0,0). This agrees with Hutchinson's Theorem: if F is the fixed point of an IFS W (so W(F) = F), then W^n(S) --> F as n tends to infinity for any bounded set S (convergence in the sense of Hausdorff distance). Note that W^n(I) = C_n, the nth step in the geometric construction of the Cantor set (which we did in class weeks ago), so of course we expect W^n(I) to converge to C. But note too that W^n(p) --> E, the set of end points in the Cantor set construction. We know that the Cantor set is much more than just the end points E. However, because C is the set of accumulation points of E, the Hausdorff distance between C and E is zero.

    Encoding and decoding images; see the Feb 4 comments from last year.
    Decoding an IFS (see pages 259-260) means drawing the fractal that is its fixed point. We estimated how long it would take to draw the fractal for the Sierpinski triangle and the Barnsley fern. For the Sierpinski triangle, it should take our computer just a few seconds, but for the fern it would take billions of years! We will see a much more efficient way to draw fractals using the Chaos Game (Chapter 6).
    Encoding an image A into an IFS (see pages 261, 278-281) means trying to find an IFS W such that the image is the fixed point of this IFS; W(A) = A. See the example in the text with the leaf (Figure 5.49).


  • 12 Feb.: The Chaos Game (Chapter 6). Chaos game for Sierpinski; pages 297-299. We looked at several other 'simple' chaos games that produced fractals. I exhibited these games with the Chaos Game Applet (which has many other chaos games too).
    To understand how the chaos game works we looked at the mathematical formulation: starting with an IFS W = w_1 U w_2 U ... U w_k, pick a number s_i randomly from the set {1,2,...,k} and apply w_(s_i) to the current game point to obtain the next game point. We proved two facts: (1) if the starting point is on the fractal, then all the game points lie on the fractal, and (2) the sequence {z_0, z_1, z_2, ...} of game points fills out the fractal 'almost surely' (see page 302). To prove these facts we defined addresses of the fractal; see pages 309-314 (see also pages 305 and 306). (A small sketch of the game appears below.)
    See also the 21 Feb comments from last year.
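    The sketch promised above: a minimal Python chaos game for the Sierpinski triangle, where each w_i is 'move halfway towards a corner' (the corner coordinates here are just a convenient choice):

```python
import random

# Chaos game for the Sierpinski triangle: at each step pick one of the
# three corner points at random and move halfway towards it.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n=10000):
    x, y = 0.5, 0.5                         # any starting point works 'almost surely'
    points = []
    for _ in range(n):
        cx, cy = random.choice(corners)     # s_i chosen randomly from {1,2,3}
        x, y = (x + cx) / 2, (y + cy) / 2   # apply w_(s_i)
        points.append((x, y))
    return points

pts = chaos_game()  # plot these (e.g., with matplotlib) to see the triangle emerge
```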


  • 16 Feb. : The Chaos Game (continued). Finding the 'rules' of the chaos game: we derived the formula
    (*) z_(k+1) = q_i + A_i(z_k - q_i)
    where z_k is the current game point and z_(k+1) is the next game point, obtained by applying the transformation w_i to z_k (so the number i was chosen randomly at that iteration of the game). Here, w_i = A_i + v_i, where A_i is a matrix and v_i is the shift vector, and q_i is the fixed point of w_i (we obtained formula (*) by solving the equation w_i(q_i) = q_i for v_i, and then substituting that into the equation z_(k+1) = w_i(z_k)). From this we can find the rules of the game in 'plain English'. For example, if the IFS is for the Sierpinski triangle, then each A_i is the matrix (a,b,c,d) = (0.5, 0, 0, 0.5), and equation (*) reads "from z_k move one-half the distance towards the point q_i". If the IFS is for the Sierpinski variation, where w_1 and w_2 are as for the Sierpinski IFS, but w_3 is dilation by 0.5 followed by rotation by 90 degrees counterclockwise, then if 3 is chosen at that iteration of the game, (*) reads "from z_k move one-half the distance towards the point q_3, and then rotate by 90 degrees counterclockwise about the point q_3". The rules for moving the game point when 1 or 2 are chosen are as for the Sierpinski game.

    Adjusting the probabilities (Section 6.3; pages 321-327). Note that a run of the chaos game for an IFS with k transformations w_1,...,w_k is determined by the sequence {s_i} of random numbers from {1,2,...,k} and the initial game point z_0 (since the game points are {z_0, z_1 = w_s1(z_0), z_2 = w_s2 w_s1(z_0), ...}). Let's consider the Sierpinski IFS (so {1,...,k} = {1,2,3} and the random sequence {s_i} is a sequence of 1's, 2's and 3's). Whenever a 1 appears in the sequence {s_i}, the transformation w_1 is applied to the current game point to obtain the next game point. If the initial game point z_0 is on the fractal, then all subsequent game points will be on the fractal (in particular, z_k is on the fractal), so the next game point z_(k+1) = w_1(z_k) will land in the triangle D_1 with address 1 (see page 317). More generally, if the string t_1...t_m of 1's, 2's, and 3's appears in the sequence {s_i}, then a game point will land in the small triangle with address t_m...t_1. So to determine how many game points will lie in some small triangle with address t_m...t_1, you need to know how many times the string t_1...t_m appears in the sequence {s_i}. This can be calculated from the probabilities p_1, p_2 and p_3: since the s_i are chosen independently, the probability that the string t_1...t_m occurs at a given spot is p_(t_1)p_(t_2)...p_(t_m) (the product of the probabilities; if p_1 = p_2 = p_3 = 1/3, then this product is (1/3)^m). Call this number p. Then if the chaos game is played with 10,000 game points (so the sequence of random numbers {s_i} is of length 10,000), we expect about (10,000)(p) points to land in the triangle with address t_m...t_1 (see pages 325, 326).
    Now we can understand what effect changing the probabilities p_1, p_2, p_3 has on the image obtained with the chaos game. If the product p = p_(t_1)p_(t_2)...p_(t_m) is small, then few points will lie in the small triangle with that address, while if the product p is large then a lot of points will lie in that small triangle. See Figure 6.21. Applying these ideas to the Barnsley Fern fractal, we saw how the chaos game with equal probabilities p_i = 1/4 cannot draw the complete fern (see Figure 6.4). However, after adjusting the probabilities we are able to draw the fractal (page 326). To understand this, we need to assign addresses to the fern like we did with the Sierpinski triangle; see Figures 5.48 and 6.24.

    We did some 'experiments' with the Fractal Movie applet where we adjusted the probabilities p of the blue print. The examples shown in class were;

    See also the 21 Feb comments from last year.

  • 26 Feb: Adjusting the probabilities in the chaos game
    For an IFS whose transformations all have the same rate of contraction, we saw that the fractal is drawn most efficiently when the probabilities are all equal (compared with non-equal probabilities). However, if the transformations have differing rates of contraction, then we gave heuristic arguments to show that, at least in some approximation, the fractal is drawn most efficiently by choosing the probabilities p_i such that the ratios p_1/c_1, p_2/c_2, ... are all equal. Here c_i = |det A_i|, the absolute value of the determinant of the matrix A_i of the transformation w_i, is the amount by which w_i changes areas. The argument was to make the densities of points the same in the parts of the fractal with addresses of equal length. You can try this with the Fern IFS. See pages 327-329 and Section 6.5 of the text.
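    A short sketch of this heuristic in Python, assuming the standard published coefficients of the Barnsley fern (substitute your own matrices A_i if your IFS differs; the small floor value given to the zero-determinant stem map is also an assumption of this sketch):

```python
import numpy as np

# Matrices A_i of the four fern transformations (standard published values).
ferns = [
    np.array([[0.00,  0.00], [0.00, 0.16]]),   # stem (det = 0)
    np.array([[0.85,  0.04], [-0.04, 0.85]]),  # main frond
    np.array([[0.20, -0.26], [0.23, 0.22]]),   # left leaflet
    np.array([[-0.15, 0.28], [0.26, 0.24]]),   # right leaflet
]

c = np.array([abs(np.linalg.det(A)) for A in ferns])
c[c == 0] = 0.01       # give a zero determinant a small floor value, else p_i = 0
p = c / c.sum()        # choose p_i proportional to c_i = |det A_i|
print(np.round(p, 3))  # roughly (0.01, 0.76, 0.11, 0.11): close to the adjusted
                       # probabilities that draw the complete fern
```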

    Playing the chaos game with sequences that are not random
    Recall that the only property needed of the sequence {s_i}, in order that the fractal gets drawn when using this sequence in the chaos game, was that it contain every possible string of the integers {1,...,k} (where the IFS has k transformations). Actually, if your computer screen can only resolve detail down to level 'm' (i.e., addresses of length m are the smallest areas you can see; they are about one pixel in size), then you only need a sequence that contains all possible strings of length m (then a point will land in each of these addresses when the chaos game is played with that sequence). One can write down a sequence that contains all strings of length m; the 'naive' sequence will be of length m*k^m. However, there will be a lot of redundancy in using such a string in the chaos game (for example, suppose we want to draw the Sierpinski triangle to level 3; then the naive sequence has 3*3^3 = 81 numbers, from which we will obtain 81 points. However, there are only 27 addresses of length 3, so we only need 27 points). There are 'best' sequences that have no redundancies. Such sequences have length k^m + m-2. Note this is much smaller than the length of the 'naive' sequences.
    How well do random sequences do? We saw in class, for the cases of the 'full triangle' IFS and the Sierpinski IFS, that random sequences do an intermediate job, i.e., they are not as 'efficient' as the 'best' sequences, but they are not as bad as the 'naive' sequences.
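    You can repeat a version of this experiment yourself. The following Python sketch measures how long a random sequence of 1's, 2's and 3's must run before a game point has landed in every one of the 3^m addresses of length m (each game point's address being the last m numbers used, as above); compare the result with the 'naive' and 'best' lengths quoted above:

```python
import random

def coverage_length(k=3, m=3, seed=None):
    """Length of a random sequence needed to hit all k^m addresses of length m."""
    rng = random.Random(seed)
    seen, s = set(), []
    while len(seen) < k ** m:
        s.append(rng.randint(1, k))
        if len(s) >= m:
            seen.add(tuple(s[-m:]))  # the address hit by the current game point
    return len(s)

trials = [coverage_length() for _ in range(100)]
print(sum(trials) / len(trials))  # compare with the 'best' and 'naive' lengths
```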

    Some references about these topics;


  • 5 March: Discrete dynamical systems, graphical iteration. A one-dimensional discrete dynamical system is defined by a function f(x). The dynamics of the system are the orbits produced by iterating the function f(x) for various initial points x_0; the orbit of x_0 is the sequence {x_0, x_1, x_2, ...} where x_n = f^n(x_0). Note that the notation f^n(x) denotes the n-fold composition of f, not the product. The basic question to answer about a dynamical system is how its orbits behave. We discussed fixed points p (f(p) = p), which generate orbits that are constant: {p, p, p, ...}. The next simplest orbits are the periodic orbits. These orbits cycle through a finite sequence of points and then repeat: {x_0, x_1, ..., x_(k-1), x_0, x_1, ...}. The length of the cycle is the period of the point x_0 (or of any one of the points in the cycle). The stability of a periodic point, or periodic orbit, refers to the behavior of orbits that start near x_0. If such orbits converge to the periodic orbit, we say the periodic orbit is stable. If they move away from the periodic orbit, we say the periodic orbit is unstable. Note that only stable periodic orbits are "observed". See the comments for March 3 from last year.
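    As a small illustration (a Python sketch; the parameter value a = 2.8 is chosen only for the example), here is an orbit computation together with the standard derivative test |f'(p)| < 1 for stability of a fixed point:

```python
def orbit(f, x0, n):
    """The orbit {x_0, x_1, ..., x_n} of x0 under iteration by f."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

a = 2.8
f = lambda x: a * x * (1 - x)       # the logistic function
p = 1 - 1 / a                       # its nonzero fixed point: f(p) = p
fprime = lambda x: a * (1 - 2 * x)  # derivative of f
print(abs(fprime(p)))               # 0.8 < 1, so p is stable for a = 2.8
print(orbit(f, 0.2, 20)[-3:])       # the orbit indeed settles near p = 0.6428...
```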

    I used the Graphical Iteration applet to demonstrate stable and unstable orbits of the logistic function f(x) = ax(1-x): when a is between 1 and 3, all orbits converge to the fixed point (so it is stable); when a is between 3 and 3.45 the fixed point is unstable (so orbits will move away from it) while a stable periodic orbit of period 2 appears (and all orbits converge to it). We also used the applet to study the orbits of f^2; the fixed points of f^2 are the period 2 points of f (together with the fixed points of f), and we saw how the fixed points that correspond to the stable period two orbit of f are stable fixed points of f^2 (so orbits converged to them). (These values of a are stated on page 1 of the handout "Distribution of Sequences/Random Sequences".)

    We discussed the time series and histograms of orbits and saw how the periodic behavior was very apparent in the histograms (I handed out some time series and histograms of the logistic function which we will discuss again in Chapter 11). In contrast, the so-called ergodic orbits have histograms that are continuous (see Figure 10.17). Our immediate goal is to describe these ergodic orbits for the logistic function f(x) = 4x(1-x), and also its periodic orbits. This is an example of a chaotic system (for an introduction to what a chaotic system is, see the introduction to Chapter 10, pp 507-508, and Section 1.4, and also the comments for 3 March of last year).
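    A sketch of the histogram experiment for the ergodic case (Python with numpy; the initial point 0.2 is an arbitrary choice):

```python
import numpy as np

# Histogram of an orbit of f(x) = 4x(1-x) over 100 bins. For an ergodic orbit
# the histogram follows a continuous profile, peaked at the two ends of [0,1]
# (as in Figure 10.17), rather than showing a few isolated spikes.
x = 0.2
xs = []
for _ in range(100000):
    x = 4 * x * (1 - x)
    xs.append(x)

hist, edges = np.histogram(xs, bins=100, range=(0, 1))
print(hist[:5], hist[47:53], hist[-5:])  # large counts at the ends, small in the middle
```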

    References for this material; in the text, Figures 10.3, 10.4, 10.16, 10.17, 11.8, 11.9, 11.10, 11.11, 11.16, pages 525-527, 594-595. From the handouts; pages 1-4 of "Graphical Analysis" and page 1 of "Distribution of Sequences/Random Sequences".

    I also played a bit of Cynthia Church's fractal music based on the Sierpinski triangle. For more info about her composition and relevant links, go to the Simmers website.


  • 9 March: Symbolic dynamics of the tent transformation
    We first observed, by looking at the graph of f^k and y=x on the applet Graphical Iteration, that the logistic equation f(x) = 4x(1-x) has periodic points of all periods (the graph of f^k intersects the diagonal y=x in many points), that they are all unstable (the slope of the graph of f^k at the fixed points is greater than 1 in absolute value), and that the periodic points are 'everywhere' in the interval [0,1]. To obtain the exact values of these periodic points, and also to discuss 'ergodic' points (next week), we introduced symbolic dynamics.

    We have three maps: T~, T, and f(x). T~ is the shift transformation on the space of binary sequences; it acts on a sequence via 'shift left' or 'shift left then conjugate'. T is the tent transformation on [0,1], and f(x) = 4x(1-x) is the logistic transformation. T~ and T are related by the function (which I called S) that takes a binary sequence a to the number in [0,1] that has a as its binary expansion. T and f(x) are related by the function h(x) = sin^2(pi*x/2). Our goal is to study the orbits of f(x), and because of the above relations we can accomplish this by studying the orbits of T~ and then 'translating' them first into orbits of T via S, and then into orbits of f(x) via h.

    See Figures 10.42 and 10.44 which show how h relates T to f(x).
    The function h matches orbits of T with orbits of f(x) and vice versa; see pages 560-564 (by 'orbits of T' we mean the orbits of points under iteration by T).

    We saw that it is easy to find periodic points of T~ by solving the equation T~^k(a) = a for the sequence(s) a. We found period 1 and period 2 points this way. Then we translated these points to periodic points for T and then f(x) and confirmed that they were indeed periodic points for these functions.
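    For example, here is a numerical check of one such translation (a Python sketch; {2/5, 4/5} is a period 2 orbit of T, and h carries it to the period 2 orbit of f):

```python
import math

T = lambda x: 2 * x if x <= 0.5 else 2 - 2 * x  # the tent transformation
f = lambda x: 4 * x * (1 - x)                   # the logistic transformation
h = lambda x: math.sin(math.pi * x / 2) ** 2    # the function relating T to f

x = 2 / 5
print(x, T(x), T(T(x)))  # 0.4 -> 0.8 -> 0.4 (up to rounding): period 2 for T
y = h(x)
print(y, f(y), f(f(y)))  # 0.3455 -> 0.9045 -> 0.3455: a period 2 orbit of f
```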

    The symbolic dynamics of T, i.e., the transformation T~, is described on pages 556-557. See also the 10 March comments from last year.

  • 12 March: Symbolic dynamics, ergodic orbits, shadowing lemma
    Using the map T~ we proved that periodic points of the tent map T are dense in the interval [0,1]. Next we discussed the existence of so-called ergodic orbits of T. These are orbits that 'fill out' the whole interval [0,1]; in stark contrast to periodic orbits which assume just finitely many points (see Figure 10.17 for a histogram of an ergodic orbit). Again, to prove this we looked at the map T~ and proved it first for sequences. We saw that what we needed was a sequence of 0's and 1's that contains every possible string of 0's and 1's. One such sequence would be any random sequence of 0's and 1's where the probabilities of choosing 0 or 1 are both positive (but could otherwise be arbitrary). Or we could use the 'naive' sequence described when we discussed the chaos game. Then the number x with this as its binary expansion would be an ergodic point of T (that is, the orbit starting at x would be ergodic).
    See pages 552,553 for the density of periodic points, and pages 554,555 for a discussion of ergodic behavior of orbits.

    Using the function h(x) = sin^2(pi*x/2), we can translate all these results (the density of periodic points, the existence of ergodic orbits, and the fact that orbits starting close together eventually separate) to the logistic function f(x) = 4x(1-x) (see pages 562-566). The last point is referred to as sensitive dependence on initial conditions and is a hallmark of chaos. Thus, a chaotic system is hard, if not impossible, to predict, since there are always errors present in measurements or calculations, and these errors grow to the point where they 'overwhelm' the underlying dynamics (see pages 509-515).
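    A quick numerical illustration of sensitive dependence (a Python sketch; the initial separation of 10^-10 is an arbitrary choice):

```python
# Two orbits of f(x) = 4x(1-x) that start 10^-10 apart become macroscopically
# different within about 35 iterations: the separation roughly doubles each
# step until it saturates at the size of the interval.
f = lambda x: 4 * x * (1 - x)
x, y = 0.3, 0.3 + 1e-10
for n in range(1, 51):
    x, y = f(x), f(y)
    if n % 10 == 0:
        print(n, abs(x - y))
```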

    We should note that it was a fairly straightforward thing to translate results about orbits of T~ to results about the orbits of T, because the map S which converts a binary sequence into a number (and whose inverse converts a number into its binary sequence) 'preserves distances', i.e., if two sequences are 'close' together (meaning that their first n terms agree), then the corresponding numbers (via S) are also close, and vice versa. However, it is not so obvious that results like the existence of an ergodic orbit of T, or that periodic orbits of T are dense in [0,1], should translate to orbits of f(x) = 4x(1-x). We have to verify that h(x) also 'preserves distances'. It can be seen from the graph of h(x) (see Figure 10.44) that it does indeed preserve distances, so that, for example, if an orbit {x_0, x_1, x_2, ...} of T passes close to the point y, then the orbit {h(x_0), h(x_1), h(x_2), ...} of f(x) passes close (but maybe not as close) to h(y). This is what we need to establish that periodic points of f(x) are dense in [0,1], and the existence and denseness of ergodic points of f(x) (as stated in the previous paragraph).

    Now, if the orbits of f(x) are very sensitive to errors, then since the computer is always introducing errors in the calculation by truncating numbers, what relation, if any, is there between the computed orbit and the exact (or true) orbit? It turns out that even though the computed orbit is not an exact orbit, there is an exact orbit that shadows the computed orbit, so that the computed orbit does tell us something about the exact orbits. This is the content of the Shadowing Lemma (page 577 and Figure 10.45).

    See also the 10 March comments from last year.


  • 16 March: Bifurcation, final state diagram for the logistic equation
    A family of functions parameterized by a is a collection of functions f_a(x). For each fixed value of a, we study the qualitative behavior of the orbits of f_a(x). As a varies, we expect the orbits of f_a(x) to change, as well as the periodic points of f_a(x) (see the handout "Experiments with the Logistic Equation"). We say that the family f_a(x) undergoes a bifurcation at parameter value a = a_0 if the stability of a periodic point changes at a_0, or if a periodic point appears or disappears at a = a_0. A bifurcation diagram for the family f_a(x) is a plot of the periodic points p vs a (see page 6 of the handout "Graphical Analysis" and Figure 11.17 in the text). This shows us graphically how the periodic points change as a changes.
    Related to the bifurcation diagram is the final state diagram. A final state diagram is a plot of the "final state" of an orbit (any orbit) vs a (see Figures 11.1 and 11.2 in the text). The "final state" of an orbit is the "tail end" of the orbit, e.g., the points {x_100, x_101, x_102, ...}. If there is a stable periodic orbit, and if its stable set is all of [0,1] (in the case of the logistic equation, say), then the final state of an orbit will be this periodic orbit, and so the points of this periodic orbit will be plotted on the final state diagram. But these points are precisely what are plotted on the bifurcation diagram (because each point of the periodic orbit is a periodic point of f_a(x)). However, if there is an unstable periodic orbit, then it will not be the final state of (essentially) any orbit, so it will not appear on the final state diagram even though it will appear on the bifurcation diagram (compare Figure 11.6 with Figure 11.17). The bifurcation diagram for the logistic equation is calculated on pages 604-605.
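    A sketch of how a final state diagram is computed (Python with numpy; the transient length of 500 steps and the sample a-values are arbitrary choices):

```python
import numpy as np

# For each a: iterate an orbit, discard a transient, then record the
# remaining points -- the "final states" plotted above the value a.
for a in np.arange(2.8, 4.01, 0.3):
    x = 0.5
    for _ in range(500):          # transient: iterate away from the initial point
        x = a * x * (1 - x)
    final = set()
    for _ in range(200):          # record the final states, rounded for display
        x = a * x * (1 - x)
        final.add(round(x, 4))
    print(round(a, 1), sorted(final)[:8])
# a = 2.8 gives one final state, a = 3.1 and 3.4 give two (a stable period 2
# orbit), and a = 3.7 and 4.0 give many points spread over bands.
```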

    The particular type of bifurcation we see with the logistic equation, where the period one point bifurcates into a period 2 orbit at a = 3, and then the period two orbit bifurcates into a period 4 orbit at a = 3.449, etc., is called a period-doubling bifurcation. We observe a series of period-doubling bifurcations at a-values b_1, b_2, b_3, ... that accumulate at the number a_infinity = 3.5699... See pages 607-612. At a = 3.5699..., there is a periodic orbit of "infinite period" (i.e., it's not really periodic at all). For a greater than 3.5699..., the orbits of the logistic equation behave in a very different way than for a less than 3.5699... For example, there are ergodic orbits (the antithesis of periodic orbits). We will study why the logistic equation goes through period-doubling bifurcations, and look more closely at the structure of the final state diagram for a between 3.5699... and 4, next week.

    See pages 4-6 of the handout, "Graphical Analysis" for a discussion of bifurcations, and the comments for 17 March of last year. Read pages 585-595, 603-605, 607-612 in the text.


  • 19 March: Explaining the period-doubling bifurcations
    (I handed out 3, two-sided pages of notes about explaining the final state diagram of the logistic equation.)
    We observe a sequence of period-doublings (of the periodic points) as the parameter a increases from 3 to 3.57 (see the handout, "Experiments with the Logistic Function"). If you look at the graph of f_a^2(x) as a varies from 2.8 to 3.2 (use the applet "Bifurcations of the Logistic Equation"), you will see how at a = 3 the graph of f^2_a(x) is tangent to the diagonal y = x, and for a greater than 3 the graph of f^2_a(x) intersects the diagonal y = x at 3 points. The middle point is the fixed point of f_a(x), while the two points on either side are the (new) period 2 points (see Figure 11.20). Thus, the period-doubling bifurcation is a consequence of the shape of the graph of the logistic function. Now look at f^4_a(x) near a = 3.45. You will see again that the graph of f^4_a(x) becomes tangent to the diagonal y = x at a = 3.45, and that two fixed points of f^4_a(x) appear on either side of each of the period 2 fixed points as a increases. This is just what happened at the period 2 bifurcation. In fact, all the period-doubling bifurcations can be explained in this way. See pages 607-610.

    Now, the fact that the logistic equation goes through a series of period-doubling bifurcations is just a consequence of the shape of the graph of f(x) = x(1-x); it has one maximum on the interval [0,1]. Defining f_a(x) = a*f(x), as a varies the height of the peak changes. So if you start with any function g(x) that has one maximum on the interval [0,1], and define the one-parameter family of functions g_a(x) = a*g(x), then as a varies, we expect to see a series of period-doubling bifurcations for the same reason we saw it for the logistic family f_a(x) = ax(1-x). One such other function is g(x) = x^2 sin(pi*x) for a between 1.5 and 2.3265; see pages 616-617. The final state diagram for g_a(x) looks very similar to the final state diagram of the logistic function (compare Figures 11.23 with 11.2).

    So the period-doubling scenario is not so surprising for functions that have the same general shape as the logistic function. What is surprising, and remarkable, is the fact that the rate at which the period-doublings occur is the same for all these (different) families of functions. Explaining this phenomenon (using the Renormalization Group theory of physics) was a major accomplishment of Michael Feigenbaum in the late 1970s. The result has come to be known as Universality. Many different types of dynamical systems go through a period-doubling route to chaos, and the rate of bifurcations is just as with the logistic equation; see Table 11.24.

    Now we want to explain some of the structure we see in the final state diagram of the logistic function for parameter values a between 3.57 (where the period-doubling cascade ends) and 4 (where we have purely ergodic behavior on the whole interval [0,1]). We can see clear periodic 'windows' with period-doubling bifurcations within them. We also see many 'shadow lines' curving through the ergodic regions. The period-doubling bifurcations in the periodic windows can be explained once again by looking at the shape of the graph of f^k_a(x) for appropriate k. For example, the period-doubling scenario in the period 3 window can be explained by looking at f^3_a(x); see Figures 11.41 and 11.42. See also Figures 11.35 and 11.37, which explain why the two bands from a = 3.57 to 3.68 look like miniature copies of the ergodic behavior of f_4(x). The 'shadow lines' can be explained by realizing that points near 0.5 get squeezed together under iteration by f_a(x), leading to enhanced densities of points in the histograms near the value v_a = f_a(0.5) and its subsequent iterates; see Figure 11.38. These regions of enhanced density show up as spikes in the histograms (see Figure 11.36 for example, and check out the applet "Logistic Movie"). By drawing the curves v_a, f_a(v_a), f^2_a(v_a), etc., one can explain all the 'shadow lines' (see Figure 11.39).

    See also the comments for 17 March of last year.


  • 23 March: Charkovsky's Theorem. Pages 638-639 in the text. See also the comments for 17 March of last year.


  • 26 March: Two Dimensional Dynamical Systems and the Henon Map
    (I handed out 2, two-sided pages of notes on two-dimensional dynamical systems, and an introduction to complex numbers. Also, 2, two-sided pages of time-series of the Henon map.)
    A two-dimensional discrete dynamical system is defined by a function F(x,y) of two variables x and y. An orbit of a point z = (x,y) is the sequence {z, F(z), F^2(z), ...}. We want to describe the behavior of orbits of F. There may be fixed points p (F(p) = p) and periodic points (a point p is periodic of period k if F^k(p) = p). Periodic points (and their orbits) can be stable (i.e., attract nearby orbits) or unstable (i.e., repel nearby orbits). The derivative of F (the Jacobian matrix), or of its iterate F^k, at the periodic point can tell us the stability of the periodic point, just as in the one-dimensional case f(x).

    Some examples of two-dimensional dynamical systems are the linear ones: F(x,y) = A(x,y) where A is a 2 by 2 matrix. For any two-dimensional dynamical system, the phase portrait is a plot of the orbits of F in the xy-plane (so, for example, the orbit of a fixed point would just be the single point p, and the orbit of a periodic point would just be k points, but in general an orbit would be an infinite sequence of points spread out over the plane).

    The Henon map is defined by H(x,y) = (y + 1 - ax^2, bx) where a and b are parameters. The behavior of the orbits of H depends on the parameters a, b. H undergoes a sequence of period-doubling bifurcations as the parameters vary. Drawing the final state diagram (for the x coordinate of points in the orbit) leads to a diagram that looks very similar to the final state diagram of the logistic equation (see Figure 12.15). And if you calculate the rate at which the period doublings occur, you will obtain the same constant 4.669... as for the logistic equation! (Universality again.)
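    A minimal sketch of iterating the Henon map (Python; a = 1.4, b = 0.3 are the classical parameter values for the strange attractor):

```python
# Iterate the Henon map H(x,y) = (y + 1 - a*x^2, b*x) and collect the orbit
# after a transient; the collected points lie (essentially) on the attractor.
a, b = 1.4, 0.3
H = lambda x, y: (y + 1 - a * x * x, b * x)

x, y = 0.0, 0.0
points = []
for n in range(10000):
    x, y = H(x, y)
    if n > 100:                # discard the transient
        points.append((x, y))
print(points[:3])              # plot 'points' to see the strange attractor
```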

    The Henon map has a strange attractor for certain parameter values. The attractor is a set A that is invariant under H (H(A) = A) and that attracts nearby orbits. It is 'strange' because it has a fractal-like structure (see Figure 12.12). The orbits on the attractor move ergodically; they fill out the entire attractor (see the histograms of orbits given in the handout). It is the complicated geometry of the attractor which makes the dynamics of H on A chaotic.

    You can use the VB program Henon Map to draw the (two-dimensional) time series of orbits of the Henon map as well as the histograms (of the x and y coordinates). The applet Henon Movie plots an orbit on the strange attractor. Here you can see the 'chaotic' motion of orbits on the attractor.

    See pages 659-671 and 674-675 in the text, and also the comments for 21 March of last year.


  • 29 March: Julia Sets
    (I handed out four, 2-sided pages of figures of Julia sets and the Mandelbrot set)
    We defined the prisoner set and the escape set of complex functions f(z). In the case when f(z) is a polynomial, we defined the Julia set to be the boundary (i.e., 'edge') of the prisoner (or escape) set. The prisoner set is sometimes referred to as the filled Julia set (as in the figures from Devaney's book). Note that for other types of complex functions, such as rational functions (quotients of polynomials) or transcendental functions (like exp(z), sin(z), etc.), the definition of the Julia set is not the same as for polynomials (see the comments from last year, 28 March, for a discussion of the definition of Julia sets for these functions). For polynomials, the Julia set always has zero area (so it is a curve or a set of points). But for other types of functions, the Julia set can have positive area (as in the examples I handed out).

    We use the derivative f '(z) of the (complex) function f(z) to study the stability of fixed points and periodic points of f, just as we did for real functions f(x). If p is a fixed point of f, then if |f '(p)| is less than 1, p is stable, and if |f '(p)| is greater than 1, p is unstable. If |f '(p)| = 1 then we cannot tell from the derivative whether p is stable or unstable (either can happen, or neither). The same idea applies to periodic points: if p is a periodic point of period k, then if |(f^k) '(p)| is less than one or greater than one, p (and its orbit) is stable or unstable respectively. In the case of the quadratic family q_c(z) = z^2 + c, q_c '(z) = 2z, so if a fixed point is within the disc of radius 1/2 in the complex plane it will be a stable fixed point, and if it lies outside that disc it will be an unstable fixed point. We looked at the particular case of the quadratic map when c = -1/2 + 1/2 i.
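    Here is that computation as a sketch (Python; the fixed points of q_c solve z^2 - z + c = 0, so the quadratic formula gives them directly):

```python
import cmath

# Stability of the fixed points of q_c(z) = z^2 + c at c = -1/2 + 1/2 i:
# the derivative criterion is |q_c'(z)| = |2z| vs 1.
c = complex(-0.5, 0.5)
z1 = (1 + cmath.sqrt(1 - 4 * c)) / 2
z2 = (1 - cmath.sqrt(1 - 4 * c)) / 2
for z in (z1, z2):
    print(z, abs(2 * z), "stable" if abs(2 * z) < 1 else "unstable")
# z2 lies (just) inside the disc of radius 1/2, so it is the stable fixed point.
```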

    I also talked about the prisoner set of the logistic equation f_a(x) = ax(1-x). Here it is easy to see that if a is between 1 and 4, including 4, the prisoner set is [0,1] (any orbit starting outside of this interval will tend to minus infinity). If a is greater than 4, then the peak of the graph of f_a(x), which occurs at 1/2, will be outside the unit square, and so any point that lands there (or nearby) under iteration by f_a(x) will enter the escape set (and go to minus infinity). If a is greater than 4, then by backwards iteration (see Figure 13.42) you will see that the prisoner set is a Cantor-like set (it is obtained by removing infinitely many open intervals from [0,1]). So we see a remarkable difference in the 'topology' of the prisoner sets for a less than 4 (when the prisoner set is connected, i.e., an interval) and for a greater than 4 (when the prisoner set is totally disconnected, i.e., dust); there are no other types of prisoner sets. Note that whether the prisoner set (and hence the 'Julia' set) of the logistic equation is one piece ('connected') or dust ('totally disconnected') depends on whether the critical point of the logistic equation (i.e., the point x where f '(x) = 0, which is always 1/2) is in the escape set or not: if the critical point is not in the escape set (so it is in the prisoner set), the prisoner set is connected; if the critical point is in the escape set, the prisoner set is dust. We will see that the same criterion holds for the Julia sets of the complex quadratic map z^2 + c.

    The logistic function ax(1-x) is related to the (real) quadratic function x^2 + c; just turn over the parabola x^2 + c and move it over so that its peak occurs at 1/2 (see Figure 13.43). In the case of x^2 + c, instead of considering the unit square [0,1] x [0,1], one considers the essential square (see page 830). The essential square for x^2 + c plays the same role as the unit square for ax(1-x). Thus, if the orbit of the critical point of x^2 + c (which is 0) leaves the essential square, the prisoner set of x^2 + c will be dust, while if it stays inside the essential square the prisoner set will be an interval. The former occurs precisely if c is greater than 1/4 or c is less than -2 (so if c is in [-2, 1/4] the prisoner set is connected, and if c is not in [-2, 1/4] the prisoner set is totally disconnected; see page 832).

    See pages 826-832 for a discussion of the prisoner sets of ax(1-x) and x^2 + c, and the relation between the two functions.

    For the complex quadratic map z^2 + c, see Section 13.4 in the text as well as the comments for 28 March of last year.

  • 2 April: Drawing prisoner sets and Julia sets
    How big can the prisoner set P_c be? We showed that there is a threshold radius r(c) = max(|c|, 2) such that if |z| is greater than r(c), then z is in the escape set E_c (see page 794). Thus, the Julia set J_c is contained in the disc of radius r(c). This gives us a way of drawing (an approximation to) the prisoner set P_c of q_c(z) using encirclements (see pages 795-798). Let Q_c^(-n) be the set of z's such that |z|, |q_c(z)|, |q_c^2(z)|, ..., |q_c^n(z)| are all less than r(c). Then Q_c^(-n) is an approximation of P_c: P_c is a subset of Q_c^(-n), and Q_c^(-n) --> P_c as n --> infinity (see Figures 13.17, 13.18 and 13.19).
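    A sketch of the encirclement algorithm (Python with numpy; the grid resolution and the cutoff n = 50 are arbitrary choices):

```python
import numpy as np

# A grid point belongs to Q_c^(-n) if its first n iterates under
# q_c(z) = z^2 + c all stay within the threshold radius r(c).
c = complex(-0.5, 0.5)
r = max(abs(c), 2.0)
n = 50

side = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(side, side)
Z = X + 1j * Y
prisoner = np.ones(Z.shape, dtype=bool)
for _ in range(n):
    Z = np.where(prisoner, Z, 0) ** 2 + c   # iterate only points still inside
    prisoner &= np.abs(Z) <= r
print(prisoner.sum(), "grid points in Q_c^(-n)")  # plot 'prisoner' to see P_c
```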

    Some facts about Julia sets that are useful to know.

    We now describe another algorithm for drawing J_c (see Section 13.7). Suppose w = q_c(z). Then q_c^(-1)(w) = z. However, q_c^(-1) involves taking the square root so we expect two solutions to q_c^(-1)(w) (the only time there is only one solution to q_c^(-1)(w) is when w = c; then the only solution is 0). If W_1(w) = sqrt(w-c) and if W_2(w) = - sqrt(w-c) = -W_1, then q_c^(-1)(w) = W_1(w) and q_c^(-1)(w) = W_2(w). So, if z_1 = W_1(w), z_2 = W_2(w), then q_c(z_1) = w and q_c(z_2) = w. We see that z_1 and z_2 are the backward images or preimages of w under q_c.

    So we can define an "IFS" by W = W_1 U W_2, like we did before for drawing fractals. This time the transformations W_1 and W_2 are nonlinear transformations instead of affine ones like we used before. If W_1 and W_2 were contractions on R^2 (with which we identify the complex numbers), then we could appeal to the contraction mapping principle to conclude that there is a unique fixed point of W, and that if A is any (bounded) subset of R^2, then W^n(A) tends toward this fixed point (in the Hausdorff metric, for example). Now what is the fixed point of W? By fact (1) above, the Julia set J_c is. Thus, we can iterate this IFS to obtain an approximation of J_c (the only caveat is that W is not exactly an IFS; it may not be contracting everywhere in R^2, but if A is a subset far enough away from the origin, then W^n(A) will tend towards J_c; see Figure 13.35). W_1 and W_2 also give us a way to define the 'self-similarity' of J_c (see Figure 13.37) and to define addresses on J_c.

    However, just as we saw with fractals, it may be more efficient to draw the Julia set using the chaos game instead of iterating the IFS. To play the chaos game with W, we use a random sequence of 1's and 2's and apply W_1 or W_2 to the current game point. We could start the game with any z in E_c (because we know this point moves towards J_c under backward iteration of q_c), but we know, by fact (4) above, that the repelling fixed point of q_c is in J_c, so it is best to start the game with this point. Then the game points will fill out J_c (see Figure 13.36). But again, just as we saw with fractals, we may have to adjust the probabilities; this will make the chaos game even more efficient at drawing the Julia set (see Figure 13.36).
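    A sketch of this chaos game (Python; it starts from the repelling fixed point of q_c, computed by the quadratic formula as in the 29 March entry):

```python
import cmath
import random

# Chaos game for J_c: backward-iterate q_c(z) = z^2 + c by applying
# W_1(w) = sqrt(w - c) or W_2(w) = -sqrt(w - c), chosen at random.
c = complex(-0.5, 0.5)
z = (1 + cmath.sqrt(1 - 4 * c)) / 2   # the fixed point with |2z| > 1 (repelling)
points = []
for _ in range(20000):
    z = random.choice([1, -1]) * cmath.sqrt(z - c)  # pick W_1 or W_2 at random
    points.append(z)
print(points[:3])   # plot 'points' in the complex plane to see J_c
```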

  • 6 April: The Mandelbrot set (Chapter 14)
    Recall that the Julia sets J_c of the quadratic function q_c(z) = z^2 + c are either connected (one piece) or dust (totally disconnected). The Mandelbrot set M is the set of c-values for which J_c is connected. An equivalent definition is that M is the set of complex numbers c such that the orbit of 0 under q_c(z) is bounded. This latter definition uses a theorem of Julia and Fatou which states that the Julia set is connected if 0 is in the prisoner set, and is disconnected if 0 is in the escape set. Thus we immediately conclude that M is contained inside the disc of radius 2 centered at the origin in the complex plane (because if |c| is greater than 2, then the orbit of c goes off to infinity; see page 794). We also know, by considering the real quadratic function x^2 + c where x and c are both real numbers, that if c is between -2 and 1/4 the prisoner set P_c is connected. Thus, M contains the interval [-2, 1/4].
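    The second definition translates directly into a test (a Python sketch; the iteration cap n = 100 is an arbitrary cutoff, so points very near the boundary of M may be misclassified):

```python
# c is in M if the orbit of 0 under q_c(z) = z^2 + c stays bounded;
# once |z| > 2 the orbit is guaranteed to escape to infinity.
def in_mandelbrot(c, n=100):
    z = 0
    for _ in range(n):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(-1))                  # True: q_(-1) has a stable period 2 orbit
print(in_mandelbrot(0.3))                 # False: 0.3 lies outside [-2, 1/4]
print(in_mandelbrot(complex(-0.5, 0.5)))  # True: the c-value used above
```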

    Since the prisoner set P_c (and hence the Julia set J_c) is connected when q_c has a stable periodic orbit, we know that c is in the Mandelbrot set M whenever q_c has a stable periodic orbit. This gives us a way to compute the general shape of the Mandelbrot set. For example, we can determine those c-values for which q_c has a stable fixed point. This produces the main heart-like (cardioid) region of M (see pages 857-858). The c-values for which q_c has a stable period 2 orbit produce the circular 'bulb' centered at -1 with radius 1/4 (see page 865). Continuing with higher periods we would produce bulbs of various sizes, each attached to either the main cardioid or to other bulbs; see Figure 14.19. What remains are the "hair-like" tentacles that protrude from the bulbs and the cardioid. These are c-values for which the derivative of q^k_c at a period k point has absolute value 1. The Julia sets corresponding to these c-values have a variety of shapes (see Figures 14.20, 14.22, 14.23).

    Read pages 843-846, 857-858, 862-867 in the text, and see the comments for April 7 of last year.

  • 9 April: The Mandelbrot set (continued)
    We can understand some (many) of the features of Julia sets by looking at the dynamics of the orbits of q_c(z). Here are some results:

    The Mandelbrot set is an extremely complicated set. However, we were able to explain "most" of its shape (i.e., 100% of its area!) by calculating those c-values for which q_c has a stable periodic orbit (see last week's lectures). In this way we explained the main cardioid region and all the bulbs along its perimeter and all the bulbs on those bulbs, etc. (see Figure 14.19). The rest of the Mandelbrot set, the "hair", has no area. For these c-values q_c has all unstable periodic orbits or has a neutral periodic orbit. The latter are the most difficult Julia sets to understand (see pages 866-871).

    We see that the Mandelbrot set has some kind of self-similarity. In fact, there are many, many smaller copies of the Mandelbrot set within the Mandelbrot set; see Figures 14.25, 14.26, 14.32, 14.41, and 14.44.

    Another remarkable feature is that some regions of the Mandelbrot set look just like some parts of the Julia sets nearby (for those c-values). See Figures 14.28, 14.29, 14.30, and 14.31.

    I demonstrated the VB program Julia Set and Mandelbrot set written by Dmitry Brant which you can find a link to at the bottom of the resources page. Here you can see for yourself how J_c changes as you change c, and also the fractal-like nature of J_c (by zooming in on J_c). Using the Mandelbrot program, you can see the self-similarity of the Mandelbrot set, for example, by zooming in on one of the many "miniature" versions of the Mandelbrot set (for example, along the main "antenna" that runs along the real axis in the negative direction).

    We watched the last half of the video, "Fractals: An Animated Discussion", which you can find in the Audio-Visual Library of Gerstein (one floor down), call # 002948. We watched Julia sets changing as c changed, and the self-similarity of the Mandelbrot set, all done with spectacular supercomputer graphics.

    See the comments from April 7 of last year.