Weekly commentary: MAT335 - Chaos, Fractals and Dynamics


Top , Previous (Iterated function systems), Next (The chaos game)

February 4

Encoding and Decoding an IFS (Sections 5.5 and 5.8)

The problem of encoding an image into an IFS is the problem of finding an IFS that generates the given image (i.e., whose fixed point is the image). So far we have considered encoding only fractals; the problem is to find the IFS that generates the fractal. We have done this in class for the Cantor set, the Sierpinski triangle, and the Koch curve. There we started by decomposing the fractal D into self-similar pieces, i.e., pieces that reproduce the whole fractal after suitable affine transformations (e.g., expansions and translations); D = D_1 U D_2 U ... U D_k (here, U denotes union). Suppose that w_1, ..., w_k are the affine contractions that take D onto these pieces: w_1(D) = D_1, w_2(D) = D_2, ..., w_k(D) = D_k. Then if W = w_1 U w_2 U ... U w_k, we have W(D) = w_1(D) U ... U w_k(D) = D_1 U ... U D_k = D, so the fixed point of this IFS is indeed the fractal. (All this was described in last week's comments; 28 January.)
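To make the construction concrete, here is a minimal Python sketch of the Sierpinski IFS. The three maps each scale by 1/2 toward one corner of the triangle (the particular corner coordinates are my choice for illustration; any triangle works):

```python
# The three affine contractions w_1, w_2, w_3 of the Sierpinski IFS,
# each scaling by 1/2 toward one vertex of the triangle
# (vertices (0,0), (1,0), (0.5,1) chosen for illustration).
def w1(p): return (0.5 * p[0],        0.5 * p[1])
def w2(p): return (0.5 * p[0] + 0.5,  0.5 * p[1])
def w3(p): return (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5)

def W(points):
    """The operator W(D) = w1(D) U w2(D) U w3(D)."""
    return {w(p) for p in points for w in (w1, w2, w3)}

# Starting from any set, iterating W approaches the fixed point of the IFS.
D = {(0.0, 0.0)}
for _ in range(6):
    D = W(D)
print(len(D))  # 3^6 = 729 points approximating the Sierpinski triangle
```

The set D after a few iterations is a point-cloud approximation of the fractal; applying W once more changes it very little, which is what "fixed point" means here.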

For the Cantor set, the Sierpinski triangle, and the Koch curve, it was rather easy to pick out the self-similar pieces (and hence to find the w_i's that make up the IFS). For many other fractals this may not be so easy. For example, it may not be at all obvious what the self-similar pieces of the Barnsley fern are (and keep in mind that there are many ways to partition a fractal into self-similar pieces); see Figure 5.34 (look also at the fractals in Figures 5.21 - 5.23). Figure 5.48 picks out the self-similar images that compose the fern. Once you have recognized these, you can build the IFS that generates the fern by finding the w_i's that map the initial image onto the self-similar pieces (these transformations are given in Table 5.33).
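For concreteness, the fern IFS can be written down as four affine maps w(x, y) = (ax + by + e, cx + dy + f). The coefficients below are the widely quoted Barnsley values; they may differ slightly from those printed in Table 5.33:

```python
# The four affine maps of the Barnsley fern, stored as (a, b, c, d, e, f),
# meaning w(x, y) = (a*x + b*y + e, c*x + d*y + f).
# These are the standard published Barnsley coefficients (an assumption;
# the text's Table 5.33 may use slightly different values).
FERN = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60),  # successively smaller copies
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60),  # left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44),  # right leaflet
]

def apply_map(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return (a * x + b * y + e, c * x + d * y + f)

print(apply_map(FERN[1], 0.0, 0.0))  # -> (0.0, 1.6)
```

Note how little data this is: four maps of six numbers each completely determine the fern.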

Encoding a fractal into an IFS results in a very high compression ratio. For example, a 500 by 500 grey-level digital image (at 8 bits per pixel) contains about 2 Mb (megabits) of data. The best compression schemes have a compression ratio of about 16:1, so the 2 Mb image can be compressed to about 125 Kb of data before being transmitted over the internet. Now consider the Sierpinski triangle (assume it occupies about 500 by 500 pixels). The image of the Sierpinski triangle contains about 125 Kb of (compressed) data, but the Sierpinski triangle can be encoded into an IFS, which we have seen can be written using only three affine contractions. Each affine contraction requires 6 parameters, so 18 numbers suffice to specify the IFS for the Sierpinski triangle. So it's much more efficient just to send the IFS instead of the image of the fractal! To completely specify the image, you would also send the initial image (say, a square or triangle) and the number of iterations needed to produce the fractal. Still, the compression ratio is on the order of 1000:1! (You computer scientists or electrical engineers may correct me on these figures.)
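The arithmetic can be checked in a few lines. Beyond the figures in the text, I assume each of the 18 IFS parameters is stored as a 32-bit float:

```python
# Back-of-envelope check of the compression figures.
width, height, bits_per_pixel = 500, 500, 8
raw_bits = width * height * bits_per_pixel   # 2,000,000 bits = 2 Mb
conventional_bits = raw_bits // 16           # 16:1 compression -> 125 Kb

# IFS encoding of the Sierpinski triangle: 3 maps x 6 parameters each,
# assuming each parameter is a 32-bit float (the text only says "18 numbers").
ifs_bits = 3 * 6 * 32                        # 576 bits

print(raw_bits // ifs_bits)           # ratio versus the raw image: 3472
print(conventional_bits // ifs_bits)  # ratio versus the compressed image: 217
```

So even against a conventionally compressed image the IFS wins by a factor of a couple of hundred, and against the raw image by a few thousand, consistent with the rough 1000:1 figure above.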

Encoding images that are not fractals, i.e., which do not have an exact self-similarity, is more complicated. Imagine, though, an object like a leaf that has an approximate self-similarity (as many objects in Nature do). Figure 5.49 illustrates a procedure that encodes the image of a (real) leaf into an IFS. Here one produces a collage of images, each one being an image of the outline of the original leaf under an affine contraction (the type that makes up an IFS). Then the IFS made up of these affine contractions will produce, under iteration, an image that has the outline of the original leaf, but with exact self-similarity. Nevertheless, the (fractal) image looks remarkably like the original leaf. There are many ways to cover the leaf's outline by a collage of images of itself, so one would experiment with several such collages to see which one produces the best-looking leaf. (This would be a highly interactive software package, where the user specifies parameters for an affine transformation and the program immediately produces the image of the leaf under this transformation. The user would then place the image somewhere on the leaf and look for the next transformation, etc.)

Since an IFS is such an efficient way to encode an image, it's natural to think about an image compression scheme for ordinary images that uses IFS's to at least compress part of the image. This is the topic of Appendix A in the text ("Fractal Image Compression"). In a nutshell, the idea is to look for self-similarity in an image and encode it into an IFS. The rest of the image would be compressed in conventional ways. Apparently, this technique is as efficient as the best compression schemes that are in use today (for example, via wavelets).


The problem of decoding an IFS is the problem of determining the fixed point of an IFS (the fixed point being the final image). Assuming that the w_i's that compose the IFS are contractions (so the IFS is a contraction and so has a unique fixed point), one way to determine the fixed point is to show that W(D) = D for some image D. This is difficult to do in general, though. Another way, and perhaps the more obvious thing to do, is simply to iterate the IFS and see what image is produced. Since the IFS is a contraction, the resulting image will be an approximation of the fixed point of the IFS, and the more one iterates the IFS the closer the resulting image becomes to the fixed point of the IFS (because the Hausdorff distance h(W^n(A),D) between the image W^n(A) and the fixed point D tends to zero as n tends to infinity, and sets whose Hausdorff distance is small look similar).
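A small experiment illustrates this convergence. Below is a sketch that iterates the Sierpinski IFS (the usual three half-scale maps, with triangle corners chosen for illustration) from two very different starting sets and measures the Hausdorff distance between the iterates; since both iterates approach the same fixed point, the distance shrinks toward zero:

```python
def w_all(p):
    """The three Sierpinski contractions applied to one point."""
    x, y = p
    return [(0.5*x, 0.5*y), (0.5*x + 0.5, 0.5*y), (0.5*x + 0.25, 0.5*y + 0.5)]

def W(points):
    """W(A) = w1(A) U w2(A) U w3(A)."""
    return {q for p in points for q in w_all(p)}

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (brute force)."""
    def d(p, q):
        return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

A, B = {(0.0, 0.0)}, {(1.0, 1.0)}   # two very different starting "images"
dists = []
for _ in range(5):
    A, B = W(A), W(B)
    dists.append(hausdorff(A, B))
print(dists)  # shrinks roughly by the contraction factor 1/2 each step
```

Because every map contracts by 1/2, the distance between the two iterates is at least halved each step, which is exactly the h(W^n(A), D) -> 0 statement above with the fixed point D replaced by a second iterate.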

However, this is not always going to work. The problem is that it may take a very long time (many iterations) before the image gets close to the fixed point. One can estimate how many iterations are required before the form of the fixed point is clearly discernible by estimating how many iterations the w_i's that make up the IFS need to shrink the original image A to the size of one pixel (so that no further change will be seen under subsequent iterations). Suppose the initial image A is a square of size 500 by 500 pixels. If the contraction rate of w_i is c (that is, ||w_i(x) - w_i(y)|| < c ||x - y||; see for example the solution to Homework problem 13a), then after N iterations of w_i the size of the square would be c^N * 500. Setting this quantity equal to 1 gives N = -log(500)/log(c). For the Sierpinski IFS, c = 1/2, so N = 9; after 9 iterations of the IFS for the Sierpinski triangle you would see no further changes in the resulting image. For the Barnsley fern, where w_1 has a contraction rate of only c = 0.85, you need 39 iterations of the IFS (see Figure 5.38, which shows the result of 5 and 10 iterations of the fern IFS; for the Sierpinski triangle, 10 iterations are enough to show all discernible detail of the final fractal). Now, each iteration of the IFS for the Barnsley fern produces a factor of 4 more images of the original square compared to the previous iteration (there are 4 'lenses' in the IFS for the fern). If we start the iteration with a square, then the total number of squares the computer must draw in N iterations of the fern IFS is 1 + 4 + 4^2 + ... + 4^N = [4^(N+1) - 1]/3 (see page 260). For N = 39 this is a HUGE number, about 10^23. Supposing your computer can draw 1 million squares per second (a VERY fast computer!), it would still take about 10^10 years before the smallest detail of the image was the size of one pixel. In other words, you cannot draw the Barnsley fern by iterating its IFS.
So in this case it is not clear how to determine the image produced by the IFS. We will see a more efficient way of drawing the image of an IFS in Chapter 6 (the Chaos Game); this is in fact how the images of the fern in the text were drawn. (See also Homework problem 14, where we estimate the time taken to draw the Koch curves.)
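The estimates above are easy to reproduce; this sketch uses the text's figures (500-pixel image, 10^6 squares drawn per second):

```python
import math

def iterations_to_one_pixel(c, size=500):
    """Smallest N with c^N * size <= 1, i.e. N = -log(size)/log(c), rounded up."""
    return math.ceil(-math.log(size) / math.log(c))

N_sierpinski = iterations_to_one_pixel(0.5)   # -> 9
N_fern = iterations_to_one_pixel(0.85)        # -> 39

# Total squares drawn in N iterations of the 4-map fern IFS:
# 1 + 4 + 4^2 + ... + 4^N = (4^(N+1) - 1) / 3
squares = (4 ** (N_fern + 1) - 1) // 3        # about 4 x 10^23
seconds = squares / 1e6                       # at 10^6 squares per second
years = seconds / (3600 * 24 * 365)
print(N_sierpinski, N_fern)
print(f"{squares:.2e} squares, about {years:.1e} years")
```

The contraction rate c enters through a logarithm, which is why the fern's seemingly modest c = 0.85 is so costly: four extra "lenses" raised to the 39th power.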

